| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/accelerate | 2,474 | how to turn off fp16 auto_cast? | I notice that the deepspeed config always sets `auto_cast=True`; this is my config:
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_multinode_launcher: standard
gradient_clipping: 1.0
offload_optimizer_device: cpu
offload_param_device: cpu
zero3_offload_param_pin_memory: true
... | https://github.com/huggingface/accelerate/issues/2474 | closed | [] | 2024-02-21T11:54:51Z | 2025-02-18T08:53:20Z | null | haorannlp |
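For readers hitting the same issue: in `accelerate`'s config file, fp16 behaviour is normally driven by the top-level `mixed_precision` key rather than by the generated DeepSpeed `auto_cast` flag. A minimal config sketch (field names assume a recent `accelerate` version; verify against your generated config):

```yaml
# Hypothetical minimal accelerate config sketch; setting mixed_precision
# to "no" disables fp16 casting (assumes a recent accelerate version).
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
mixed_precision: "no"
deepspeed_config:
  gradient_clipping: 1.0
```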
huggingface/chat-ui | 852 | what is the difference between "chat-ui-db" docker image and "chat-ui" docker image? | I found there are two packages in the chat-ui repository: one is chat-ui and the other is chat-ui-db. What is the difference between the "chat-ui-db" and "chat-ui" Docker images?
I've pulled two images from the mirror site: huggingface/text-generation-inference:1.4 and mongo:latest.
I hope to use the two i... | https://github.com/huggingface/chat-ui/issues/852 | closed | [] | 2024-02-21T09:31:07Z | 2024-02-23T02:58:03Z | null | majestichou |
huggingface/instruction-tuned-sd | 22 | How to use a custom image for validation | Hello,
I tried using a custom image for validation since I'm training on a custom style. I uploaded my validation image to the Hub as mountain.png, but it always gives me an "unidentified image" error. Also, for mountain.png it shows a validation summary on wandb, but for my validation image it shows nothing.
Do i need to change something s... | https://github.com/huggingface/instruction-tuned-sd/issues/22 | closed | [] | 2024-02-21T08:15:30Z | 2024-02-22T05:49:11Z | null | roshan2024nar |
huggingface/gsplat.js | 67 | How to set the background color of the scene | Hi:
I want to know how to set the background color of the scene; currently it is black | https://github.com/huggingface/gsplat.js/issues/67 | open | [] | 2024-02-21T05:49:33Z | 2024-02-26T09:32:25Z | null | jamess922 |
huggingface/gsplat.js | 66 | How to adjust the axis of rotation? | When the model's z-axis is not perpendicular to the ground plane, the rotation effect may feel unnatural, as is the case with this model: testmodel.splat.
[testmodel.zip](https://github.com/huggingface/gsplat.js/files/14353919/testmodel.zip)
I would like to rotate the model along an axis that is perpendicular ... | https://github.com/huggingface/gsplat.js/issues/66 | closed | [] | 2024-02-21T04:13:01Z | 2024-02-23T02:37:59Z | null | gotoeasy |
huggingface/sentence-transformers | 2,494 | How to get embedding vector when input is tokenized already | First, thank you so much for sentence-transformer.
How to get embedding vector when input is tokenized already?
I guess sentence-transformers can `.encode(original_text)`.
But I want to know whether there is a way like `.encode(token_ids)` or `.encode(token_ids, attention_masks)`.
This is my background b... | https://github.com/huggingface/sentence-transformers/issues/2494 | open | [] | 2024-02-20T22:38:18Z | 2024-02-23T10:01:07Z | null | sogmgm |
huggingface/optimum | 1,703 | How can I export onnx-model for Qwen/Qwen-7B? | ### Feature request
I need to export the model named qwen to accelerate.
```optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code```
### Motivation
I want to export the model qwen to use onnxruntime
### Your contribution
I can give the input and output. | https://github.com/huggingface/optimum/issues/1703 | open | [
"onnx"
] | 2024-02-20T13:22:08Z | 2024-02-26T13:19:19Z | 1 | smile2game |
huggingface/accelerate | 2,463 | How to initialize Accelerator twice but with different setup within the same code ? | ### System Info
```Shell
Hello, I want to initialize accelerate once for training and another time for inference.
It looks like it does not work, and the error message is not clear. Is there a way to reset the previously initialized accelerate and then initialize it with the inference setup?
For training I am doi... | https://github.com/huggingface/accelerate/issues/2463 | closed | [] | 2024-02-20T13:17:26Z | 2024-03-30T15:06:15Z | null | soneyahossain |
pytorch/TensorRT | 2,648 | ❓ Debugger deactivate | ## ❓ Question
How can I deactivate the debugger?
## What you have already tried
When I run any executable that uses Torch-TensorRT, I get a lot of debugger messages:
```log
...
DEBUG: [Torch-TensorRT - Debug Build] - Attempting to run engine (ID: __torch___torchvision_models_resnet_ResNet_trt_engine_)
IN... | https://github.com/pytorch/TensorRT/issues/2648 | closed | [
"question"
] | 2024-02-20T05:56:41Z | 2024-02-20T06:15:13Z | null | AndreasKaratzas |
huggingface/chat-ui | 840 | LLama.cpp error - String must contain at least 1 character(s)" | I keep getting this error after adding LLAMA-CPP inference endpoint locally. Adding this line causes this error.
```
"endpoints": [
{
"url": "http://localhost:8080",
"type": "llamacpp"
}
]
```
Not sure how to fix it.
```
[
{
"code": "too_small",
"min... | https://github.com/huggingface/chat-ui/issues/840 | open | [
"bug",
"models"
] | 2024-02-19T13:33:24Z | 2024-02-22T14:51:48Z | 2 | szymonrucinski |
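The `too_small` / "String must contain at least 1 character(s)" error comes from schema validation of the model config: some required string field ended up empty. The sketch below is a plain-Python stand-in for that check (it is not chat-ui's actual zod schema), just to illustrate the constraint:

```python
# Stand-in for chat-ui's schema validation (illustrative only): every
# required string field in an endpoint entry must be non-empty.
def validate_endpoint(endpoint: dict) -> list:
    errors = []
    for field in ("url", "type"):
        value = endpoint.get(field, "")
        if not isinstance(value, str) or len(value) < 1:
            errors.append(f"{field}: String must contain at least 1 character(s)")
    return errors

good = {"url": "http://localhost:8080", "type": "llamacpp"}
bad = {"url": "", "type": "llamacpp"}
print(validate_endpoint(good))  # no errors
print(validate_endpoint(bad))   # one error for "url"
```

In practice, this means checking the `.env.local` model entry for any empty string values around the endpoint definition.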
huggingface/datatrove | 93 | Tokenization for Non English data | Hi HF team
I want to thank you for this incredible work.
I have a question: I want to apply the deduplication pipeline to Arabic data.
I think I need to change the tokenizer for this. If so, is there a tip for it?
Should I just edit the tokenizer here:
`class SentenceDedupFilter(PipelineStep):
... | https://github.com/huggingface/datatrove/issues/93 | closed | [
"question"
] | 2024-02-19T11:02:04Z | 2024-04-11T12:47:24Z | null | Manel-Hik |
pytorch/pytorch | 120,194 | model loaded with torch._export.aot_load does not report what file is not found during inference and Cuda driver error. | ### 🐛 Describe the bug
when I load a pt2 model exported with torch._export in one Docker container from the image `ghcr.io/pytorch/pytorch-nightly:2.3.0.dev20240211-cuda12.1-cudnn8-devel` I get a working inference.
But when I run it in another container derived from the same base image, I get a CUDA driver erro... | https://github.com/pytorch/pytorch/issues/120194 | closed | [
"triaged",
"oncall: pt2",
"module: aotinductor"
] | 2024-02-19T07:12:30Z | 2025-02-07T08:44:15Z | null | rbavery |
huggingface/safetensors | 443 | Efficient key-wise streaming | ### Feature request
I'm interested in streaming the tensors in a model key by key without having to hold all keys at the same time in memory. Something like this:
```python
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
for key in f.keys():
tensor = f.get_tensor(stream=True... | https://github.com/huggingface/safetensors/issues/443 | closed | [
"Stale"
] | 2024-02-18T23:22:09Z | 2024-04-17T01:47:28Z | 4 | ljleb |
huggingface/community-events | 200 | How to prepare audio dataset for whisper fine-tuning with timestamps? | I am trying to prepare a dataset for whisper fine-tuning, and I have a lot of small segment clips, most of them less than 6 seconds. I read the paper, but didn't understand this paragraph:
“ When a final transcript segment is only partially included in the current 30- second audio chunk, we predict only its start t... | https://github.com/huggingface/community-events/issues/200 | open | [] | 2024-02-18T19:50:33Z | 2024-02-18T19:55:06Z | null | omarabb315 |
huggingface/diffusers | 7,010 | How to set export HF_HOME on Kaggle? | Kaggle temporary disk is slow once again and I want models to be downloaded into the working directory.
I have used the command below, but it didn't work. Which command do I need?
`!export HF_HOME="/kaggle/working"`
| https://github.com/huggingface/diffusers/issues/7010 | closed | [
"bug"
] | 2024-02-18T11:15:21Z | 2024-02-18T14:39:08Z | null | FurkanGozukara |
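For anyone hitting the same problem: `!export VAR=...` in a notebook runs in a throwaway subshell, so the variable never reaches the Python process. Setting it from Python before importing the libraries does work — a minimal sketch (the path is the one from the issue):

```python
import os

# Set HF_HOME from inside the notebook process, *before* importing
# transformers/diffusers, so the cache location is picked up.
os.environ["HF_HOME"] = "/kaggle/working"

print(os.environ["HF_HOME"])  # /kaggle/working
```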
huggingface/optimum-benchmark | 126 | How to obtain the data from the 'forward' and 'generate' stages? | I used the same configuration file to test the model, but the results obtained are different from those of a month ago. In the result files from a month ago, data from both the forward and generate stages were included; however, the current generated result files only contain information from the prefill and decode sta... | https://github.com/huggingface/optimum-benchmark/issues/126 | closed | [] | 2024-02-18T09:48:44Z | 2024-02-19T16:06:24Z | null | WCSY-YG |
huggingface/chat-ui | 838 | Explore the possibility for chat-ui to use OpenAI assistants API structure. | Hi @nsarrazin , I wanted to explore how we could collaborate in making chat-ui more work with OpenAI standards to make it more less opinionated over hosted inference provider. I need it as I am part of a team open-sourcing the GPTs platform https://github.com/OpenGPTs-platform and we will be leveraging chat-ui as the c... | https://github.com/huggingface/chat-ui/issues/838 | open | [
"enhancement",
"good first issue",
"back"
] | 2024-02-17T21:39:49Z | 2024-12-26T05:55:47Z | 4 | CakeCrusher |
huggingface/candle | 1,720 | How to define custom ops with arbitrary number of tensors? | I dug into the issues and the repo on this subject because I want to call CUDA kernels for 3D Gaussian splatting, and the way to invoke those kernels seems to be custom ops. But right now, we only have
```
CustomOp1(Tensor, std::sync::Arc<Box<dyn CustomOp1 + Send + Sync>>),
CustomOp2(
... | https://github.com/huggingface/candle/issues/1720 | open | [] | 2024-02-16T21:38:16Z | 2024-03-13T13:44:17Z | null | jeanfelixM |
huggingface/chat-ui | 837 | Cannot find assistants UI in the repo | Hi @nsarrazin I recently cloned the chat-ui and I noticed that the new assistants ui is missing, at the very least from the main branch.
Is the assistants UI in the repo somewhere?
If not, are there any plans to make it open-source?
If so when? | https://github.com/huggingface/chat-ui/issues/837 | closed | [] | 2024-02-16T20:13:39Z | 2024-02-17T21:29:08Z | 4 | CakeCrusher |
pytorch/pytorch | 120,079 | Use sys.settrace or torch function mode to compute how much of a model was not covered by Dynamo | ### 🐛 Describe the bug
Suppose you have a model with a bunch of graph breaks / WON'T CONVERT. How much of the model have you managed to capture versus not capture? There are two metrics you could use to figure this out:
* When you run the model in eager mode, it will have run some number calls to torch functions.... | https://github.com/pytorch/pytorch/issues/120079 | open | [
"feature",
"low priority",
"module: logging",
"triaged",
"oncall: pt2"
] | 2024-02-16T14:54:04Z | 2025-07-11T18:03:17Z | null | ezyang |
huggingface/dataset-viewer | 2,456 | Link to the endpoint doc page in case of error? | eg. https://datasets-server.huggingface.co/parquet
could return
```json
{"error":"Parameter 'dataset' is required. Read the docs at https://huggingface.co/docs/datasets-server/parquet"}
```
or
```json
{"error":"Parameter 'dataset' is required.", "docs": "https://huggingface.co/docs/datasets-server/parqu... | https://github.com/huggingface/dataset-viewer/issues/2456 | open | [
"documentation",
"question",
"api",
"P2"
] | 2024-02-15T11:11:44Z | 2024-02-15T11:12:12Z | null | severo |
pytorch/text | 2,230 | how to install libtorchtext for cpp project use? please give some instructions, thanks | ## 🐛 Bug
**Describe the bug** A clear and concise description of what the bug is.
**To Reproduce** Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior** A clear and concise description of what you expected to happen.
**Screensho... | https://github.com/pytorch/text/issues/2230 | open | [] | 2024-02-15T04:01:32Z | 2024-02-15T04:01:32Z | null | mullerhai |
pytorch/audio | 3,746 | how to install libtorchaudio for cpp project ? | ### 🐛 Describe the bug
Hi, I git cloned the audio project, then added the libtorch path to the audio CMakeLists.txt and tried make && make install. Everything finishes, but I cannot find a libtorchaudio.dylib file on my macOS (Intel), only libtorchaudio.so and libtorchaudio_sox.so in /usr/local/torchaudio
### Versions
latest | https://github.com/pytorch/audio/issues/3746 | open | [] | 2024-02-15T02:28:30Z | 2024-02-15T02:28:30Z | null | mullerhai |
pytorch/torchx | 824 | Determine scheduler from component level | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
<!-- your question here -->
Is it possible to tell or fill in at runtime whic... | https://github.com/meta-pytorch/torchx/issues/824 | open | [] | 2024-02-14T23:01:27Z | 2024-02-16T01:56:46Z | 1 | ryxli |
huggingface/gsplat.js | 64 | How to render from a set of camera positions? | Hi, I am trying to render the scene from a set of camera positions/rotations that I load from a JSON file.
I think the right way is first to disable the "orbitControls" (engine.orbitControls.enabled = false;) and then set the camera position/rotation manually like this: 'camera.data.update(position, rotation);'. Am I... | https://github.com/huggingface/gsplat.js/issues/64 | closed | [] | 2024-02-14T16:11:28Z | 2024-02-19T18:13:38Z | null | vahidEtt |
huggingface/chat-ui | 824 | what port is used by the websearch? | i put the chat in a container in a cluster with my mongodb.
the web search stopped working, i think it might be related to me not opening a port for the web search to access the web and could not find a doc that describes how the web search works.
would love to know what port/s i should open and bit more details in ... | https://github.com/huggingface/chat-ui/issues/824 | open | [
"support",
"websearch"
] | 2024-02-14T11:15:22Z | 2024-02-14T12:52:25Z | null | kaplanyaniv |
huggingface/transformers.js | 586 | Does `WEBGPU` Truly Enhance Inference Time Acceleration? | ### Question
Recently, I've been extensively utilizing transformers.js to load transformer models, and Kudos to the team for this wonderful library ...
Specifically, I've been experimenting with version 2.15.0 of transformers.js.
Despite the fact that the model runs on the `web-assembly backend`, I've noticed ... | https://github.com/huggingface/transformers.js/issues/586 | closed | [
"question"
] | 2024-02-14T09:23:52Z | 2024-10-18T13:30:13Z | null | kishorekaruppusamy |
huggingface/chat-ui | 823 | WebSearch uses the default model instead of current model selected | I have multiple models in my .env.local and it seems the WebSearch uses the default model to perform its search content extraction instead of the currently selected model (the one that I'm asking the question to...) Is it possible to add a config option to use same model for everything? | https://github.com/huggingface/chat-ui/issues/823 | open | [
"enhancement",
"back",
"models"
] | 2024-02-14T07:52:59Z | 2024-02-14T13:07:20Z | 4 | ihubanov |
huggingface/trl | 1,327 | how to save/load model? | I've tried save model via:
ppo_trainer.save_pretrained("./model_after_rl")
and load the model via:
model = AutoModelForCausalLMWithValueHead.from_pretrained("./model_after_rl")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("./model_after_rl")
But the performance is same to without any reinf... | https://github.com/huggingface/trl/issues/1327 | closed | [] | 2024-02-14T06:56:07Z | 2024-04-24T15:05:14Z | null | ADoublLEN |
huggingface/accelerate | 2,440 | How to properly gather results of PartialState for inference on 4xGPUs | ### System Info
```Shell
torch==2.2.0
transformers==4.37.2
accelerate==0.27.0
```
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `example... | https://github.com/huggingface/accelerate/issues/2440 | closed | [] | 2024-02-13T14:00:13Z | 2024-03-23T15:07:26Z | null | ZeusFSX |
huggingface/chat-ui | 818 | Settings Page Freezes | When I go to settings to change model (after I ran a convo with a model), the UI settings page can't be closed. It freezes. Right now I have to keep reloading the page to use it | https://github.com/huggingface/chat-ui/issues/818 | closed | [
"question",
"support"
] | 2024-02-13T13:30:01Z | 2024-02-16T09:41:23Z | null | lordsoffallen |
huggingface/candle | 1,701 | How to train my own YOLOv8 model? | Candle provides an example of YOLOv8, which is very useful.
But I don't know how to train it on my own dataset. Can candle directly load a model trained by PyTorch? | https://github.com/huggingface/candle/issues/1701 | open | [] | 2024-02-13T01:56:49Z | 2024-03-18T13:45:07Z | null | mzdk100 |
huggingface/transformers.js | 585 | Using a server backend to generate masks - doublelotus | ### Question
Hi there, just continuing on from my question on - https://huggingface.co/posts/Xenova/240458016943176#65ca9d9c8e0d94e48742fad7.
I've just been reading through your response; initially I was trying it using a Python backend and attempted to mimic the worker.js code like so:
```py
from transfo... | https://github.com/huggingface/transformers.js/issues/585 | open | [
"question"
] | 2024-02-13T00:06:20Z | 2024-02-28T19:29:26Z | null | jeremiahmark |
huggingface/chat-ui | 817 | Question: Can someone explain "public app data sharing with model authors" please? | I am struggling to understand in which way data can or is actually shared with whom when the setting `shareConversationsWithModelAuthors` is activated (which it is by default)?
```javascript
{#if PUBLIC_APP_DATA_SHARING === "1"}
<!-- svelte-ignore a11y-label-has-associated-control -->
<label class="flex items-cen... | https://github.com/huggingface/chat-ui/issues/817 | closed | [
"question"
] | 2024-02-12T19:18:03Z | 2024-02-16T14:32:18Z | null | TomTom101 |
pytorch/pytorch | 119,604 | How to deal with mypy checking fx_node.args[i].meta? | # Issue
It's common in Inductor FX passes to do something like this
```
node: torch.fx.Node = ...
arg1: torch.fx.Argument = node.args[0]
arg2: torch.fx.Argument = node.args[1]
a, b = arg1.meta, arg2.meta
# do something with a & b
```
However, mypy will call this out ([see](https://mypy.readthedocs.io/en/stab... | https://github.com/pytorch/pytorch/issues/119604 | closed | [] | 2024-02-09T22:42:44Z | 2024-02-10T00:01:10Z | null | ColinPeppler |
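One common way around this — not necessarily what Inductor settled on — is to narrow the `Argument` union with an `isinstance` check (or an `assert`) before touching `.meta`, which mypy understands. A self-contained sketch with a stand-in `Node` class (in real code this would be `torch.fx.Node`):

```python
# Stand-in Node class so the sketch is self-contained; in real code this
# would be torch.fx.Node, and fx Arguments form a union that also includes
# non-Node values such as ints and strings.
class Node:
    def __init__(self, meta: dict) -> None:
        self.meta = meta

def get_meta(arg: object) -> dict:
    # isinstance narrows the union, so mypy knows `.meta` exists here.
    if not isinstance(arg, Node):
        raise TypeError(f"expected a Node, got {type(arg).__name__}")
    return arg.meta

node = Node(meta={"val": 42})
print(get_meta(node)["val"])  # 42
```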
pytorch/pytorch | 119,590 | Decide whether / how to ban SAC + inplace ops in eager | SAC exists as an API today (see [code](https://github.com/pytorch/pytorch/blob/main/torch/utils/checkpoint.py#L1256)), but:
(1) it "context" fn has a pt2-specific name
(1) We have a warning in the docs that it should only be used with `torch.compile`
(2) We have no warning or error that gets emitted at runtime if ... | https://github.com/pytorch/pytorch/issues/119590 | closed | [
"module: activation checkpointing",
"module: autograd",
"triaged",
"needs design"
] | 2024-02-09T20:33:05Z | 2024-06-27T20:13:20Z | null | bdhirsh |
huggingface/transformers.js | 581 | How can we use the sam-vit-huge in the production? | ### Question
The size of ONNX files for sam-vit-huge is around 600MB. If I am using the implementation mentioned in the documentation, it downloads these files first before performing the image segmentation. Is there a better way to avoid downloading these files and reduce the time it takes? Additionally, the model is... | https://github.com/huggingface/transformers.js/issues/581 | open | [
"question"
] | 2024-02-09T17:54:43Z | 2024-02-09T17:54:43Z | null | moneyhotspring |
huggingface/dataset-viewer | 2,434 | Create a new step: `config-features`? | See https://github.com/huggingface/datasets-server/issues/2215: the `features` part can be heavy, and on the Hub, when we call /rows, /filter or /search, the features content does not change; there is no need to create / serialize / transfer / parse it.
We could:
- add a new /features endpoint
- or add a `features... | https://github.com/huggingface/dataset-viewer/issues/2434 | open | [
"question",
"refactoring / architecture",
"P2"
] | 2024-02-09T14:13:10Z | 2024-02-15T10:26:35Z | null | severo |
huggingface/diffusers | 6,920 | How to merge a lot of embedding into a single file | I create a lot of embeddings through textual inversion, but I couldn't find a way to merge these checkpoints.
| https://github.com/huggingface/diffusers/issues/6920 | open | [
"stale"
] | 2024-02-09T08:18:42Z | 2024-03-13T15:02:51Z | null | Eggwardhan |
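Conceptually, textual-inversion embedding files map placeholder tokens to tensors, so "merging" is combining the dicts and saving one file. A stand-in sketch with plain Python dicts (in real use these would be torch tensors loaded via `torch.load` or safetensors, and the token names here are made up):

```python
# Stand-in: each "embedding file" maps a placeholder token to its vector.
# In practice the values would be torch tensors, not lists.
emb_a = {"<style-one>": [0.1, 0.2]}
emb_b = {"<style-two>": [0.3, 0.4]}

def merge_embeddings(*files: dict) -> dict:
    merged: dict = {}
    for f in files:
        for token, vector in f.items():
            if token in merged:
                raise ValueError(f"duplicate placeholder token: {token}")
            merged[token] = vector
    return merged

merged = merge_embeddings(emb_a, emb_b)
print(sorted(merged))  # ['<style-one>', '<style-two>']
```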
pytorch/pytorch | 119,479 | torch._constrain_as_value and related APIs accept Tensor, but this is typically not what you want | ### 🐛 Describe the bug
Internal xref: https://fb.workplace.com/groups/6829516587176185/posts/6829896033804907/
Because we are willing to call item() on scalar Tensor, these APIs will "work" but they will keep generating fresh unbacked symbols, so the value range ends up not getting used by anything. Would be good ... | https://github.com/pytorch/pytorch/issues/119479 | closed | [
"triaged",
"oncall: pt2",
"module: dynamic shapes"
] | 2024-02-08T20:13:23Z | 2024-09-13T03:10:12Z | null | ezyang |
pytorch/pytorch | 119,473 | Document how to override autocast rules properly | Since autocast is implemented as a dispatcher feature, and each rule is a relatively simple kernel being registered on the right key for the right kernel.
Overriding these rules can be done today by replacing the kernel registered by default with a custom one that does the appropriate casting before redispatching do... | https://github.com/pytorch/pytorch/issues/119473 | open | [
"triaged",
"module: amp (automated mixed precision)"
] | 2024-02-08T19:02:00Z | 2024-02-08T20:43:22Z | null | albanD |
pytorch/serve | 2,933 | https://github.com/pytorch/serve/issues/2870 - New Release Required for this Fix | ### 🐛 Describe the bug
Team,
It seems the worker auto-recovery fix is in this PR. Can we create a patch release so that we can proceed with the production update?
Thanks
Regards,
Deepak Kumar A
### Error logs
NA
### Installation instructions
NA
### Model Packaing
NA
### config.properties
_No response_
### Ver... | https://github.com/pytorch/serve/issues/2933 | closed | [] | 2024-02-08T14:23:49Z | 2024-03-20T21:51:41Z | 2 | DeepakkumarArumugam |
huggingface/transformers | 28,924 | How to disable log history from getting printed every logging_steps | I'm writing a custom ProgressCallback that modifies the original ProgressCallback transformers implementation and adds some additional information/data to the tqdm progress bar. Here's what I have so far, and it works nicely and as intended.
```python
class ProgressCallback(TrainerCallback):
"""A [`TrainerCall... | https://github.com/huggingface/transformers/issues/28924 | closed | [] | 2024-02-08T10:23:28Z | 2024-02-08T17:26:02Z | null | arnavgarg1 |
huggingface/alignment-handbook | 120 | (QLoRA) DPO without previous SFT | Because of the following LLM-Leaderboard measurements, I want to perform QLoRA DPO without previous QLoRA SFT:
```
alignment-handbook/zephyr-7b-dpo-qlora: +Average: 63.51; +ARC 63.65; +HSwag 85.35; -+MMLU 63.82; ++TQA: 47.14; (+)Win 79.01; +GSM8K 42.08;
alignment-handbook/zephyr-7b-sft-qlora: -Averag... | https://github.com/huggingface/alignment-handbook/issues/120 | open | [] | 2024-02-08T09:56:50Z | 2024-02-09T22:15:10Z | 1 | DavidFarago |
huggingface/transformers.js | 577 | Getting 'fs is not defined' when trying the latest "background removal" functionality in the browser? | ### Question
I copied the code from https://github.com/xenova/transformers.js/blob/main/examples/remove-background-client/main.js to here, but I'm getting this error with v2.15.0 of @xenova/transformers.js:
```
Uncaught ReferenceError: fs is not defined
at env.js:36:31
at [project]/node_modules/.pnpm/@... | https://github.com/huggingface/transformers.js/issues/577 | open | [
"question"
] | 2024-02-08T04:34:59Z | 2024-11-26T05:20:22Z | null | lancejpollard |
pytorch/serve | 2,930 | How would you deploy a new model on a torch server running within a container? | I am looking for options to use torchserve to deploy multiple models at once. However, in the documentation and guides I cannot find examples where it is done. The examples usually describe a scenario of starting a torchserve container for a given model.
My question is if I have a torchserve container running, is th... | https://github.com/pytorch/serve/issues/2930 | closed | [] | 2024-02-07T14:51:06Z | 2024-02-07T16:33:20Z | 1 | mihailyanchev |
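TorchServe's management API (port 8081 by default) lets you register a new model with a running container without restarting it. The sketch below only builds the request so it can run offline; the model archive URL is illustrative:

```python
import urllib.parse
import urllib.request

# Register a new model with a running TorchServe instance via its
# management API (default port 8081). The .mar URL here is made up.
mar_url = "https://example.com/models/my_model.mar"
query = urllib.parse.urlencode({"url": mar_url, "initial_workers": 1})
request = urllib.request.Request(
    f"http://localhost:8081/models?{query}", method="POST"
)

# urllib.request.urlopen(request) would send it; here we just inspect it.
print(request.get_method())  # POST
print(request.full_url)
```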
huggingface/transformers.js | 575 | Can GPU acceleration be used when using this library in a node.js environment? | ### Question
Hello, I have looked into the GPU support related issue, but all mentioned content is related to webGPU. May I ask if GPU acceleration in the node.js environment is already supported? Refer: https://github.com/microsoft/onnxruntime/tree/main/js/node | https://github.com/huggingface/transformers.js/issues/575 | closed | [
"question"
] | 2024-02-07T03:37:50Z | 2025-01-20T15:05:00Z | null | SchneeHertz |
pytorch/vision | 8,259 | support for convnextv2 | ### 🚀 The feature
Is there any plan to add ConvNeXt-V2?
### Motivation, pitch
ConvNeXt-V2 introduces FCMAE self-supervised pretraining and gains 0.5~1.5% top-1 accuracy.
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/vision/issues/8259 | open | [] | 2024-02-07T01:45:29Z | 2024-02-07T01:45:29Z | 0 | chaoer |
huggingface/dataset-viewer | 2,408 | Add task tags in /hub-cache? | On the same model as https://github.com/huggingface/datasets-server/pull/2386, detect and associate tags to a dataset to describe the tasks it can be used for.
Previously discussed at https://github.com/huggingface/datasets-server/issues/561#issuecomment-1250029425 | https://github.com/huggingface/dataset-viewer/issues/2408 | closed | [
"question",
"feature request",
"P2"
] | 2024-02-06T11:17:19Z | 2024-06-19T15:43:15Z | null | severo |
huggingface/dataset-viewer | 2,407 | Remove env var HF_ENDPOINT? | Is it still required to set HF_ENDPOINT as an environment variable?
https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/resources.py#L41-L45
| https://github.com/huggingface/dataset-viewer/issues/2407 | closed | [
"duplicate",
"question",
"refactoring / architecture",
"P2"
] | 2024-02-06T11:11:24Z | 2024-02-06T14:53:12Z | null | severo |
huggingface/chat-ui | 786 | Can't get Mixtral to work with web-search | I have been following this project for a while and recently tried setting up oobabooga Mixtral-8x7b
I used the official prompt template used in huggingface.co :
```
<s> {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifA... | https://github.com/huggingface/chat-ui/issues/786 | open | [] | 2024-02-06T07:14:08Z | 2024-02-16T10:45:40Z | 2 | iChristGit |
pytorch/kineto | 864 | Question about how to run "make test" correctly? | Hi guys,
Follow the steps in [README.md](https://github.com/pytorch/kineto/tree/main/libkineto), I have succeed to build Libkineto. Then, I start to run the tests with the command "make test", but it doesn't change anything. In this [CMakeLists.txt](https://github.com/pytorch/kineto/blob/main/libkineto/CMakeLists... | https://github.com/pytorch/kineto/issues/864 | open | [
"bug"
] | 2024-02-06T06:01:11Z | 2024-04-23T15:45:46Z | null | PriscillaJCorn |
huggingface/dataset-viewer | 2,402 | Reduce resources for /filter and /search? | They have nearly 0 traffic. https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-6h&to=now
Should we reduce the number of pods? How to configure the right level? | https://github.com/huggingface/dataset-viewer/issues/2402 | closed | [
"question",
"infra",
"P2",
"prod"
] | 2024-02-05T21:44:56Z | 2024-02-28T17:55:50Z | null | severo |
pytorch/examples | 1,229 | If I am training on a SINGLE GPU, should this "--dist-backend 'gloo'" argument be added to the command? | @Jaiaid
Should **"--dist-backend 'gloo'"** be included in the terminal command when using a **SINGLE GPU**, i.e. having just one GPU on the machine?
Is the following example command correct for SINGLE GPU?
python main.py **--dist-backend 'gloo'** -a resnet18 [imagenet-folder with train and val folders]
Is that... | https://github.com/pytorch/examples/issues/1229 | closed | [] | 2024-02-05T17:11:50Z | 2024-02-07T08:01:12Z | 10 | HassanBinHaroon |
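The usual pattern — a sketch of the general launcher logic, not the example script's exact flags — is that a distributed backend only matters when more than one process is initialized; with a single GPU and no multiprocessing it can be skipped entirely:

```python
from typing import Optional

# Sketch of the usual launcher logic: only pick a distributed backend
# (nccl for GPUs, gloo otherwise) when there is more than one process.
def pick_backend(world_size: int, use_gpu: bool) -> Optional[str]:
    if world_size <= 1:
        return None  # single process: no --dist-backend needed
    return "nccl" if use_gpu else "gloo"

print(pick_backend(1, True))   # None
print(pick_backend(4, True))   # nccl
```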
huggingface/dataset-viewer | 2,390 | Store the repo visibility (public/private) to filter webhooks | See https://github.com/huggingface/datasets-server/pull/2389#pullrequestreview-1862425050
Not sure if we want to do it, or wait for the Hub to provide more finely scoped webhooks. See also #2208, where we wanted to store metadata about the datasets. | https://github.com/huggingface/dataset-viewer/issues/2390 | closed | [
"question",
"P2"
] | 2024-02-05T12:37:30Z | 2024-06-19T15:37:36Z | null | severo |
huggingface/transformers.js | 567 | Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order. | ### Question
Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order. | https://github.com/huggingface/transformers.js/issues/567 | open | [
"question"
] | 2024-02-05T11:12:34Z | 2024-02-05T11:12:34Z | null | a414166402 |
huggingface/transformers.js | 565 | How can i use this Model for image matting? | ### Question
https://github.com/ZHKKKe/MODNet?tab=readme-ov-file
They have ONNX file and the python cli usage looks simple, but I can't find how to use with transformers.js.
```
!python -m demo.image_matting.colab.inference \
--input-path demo/image_matting/colab/input \
--output-path demo/image... | https://github.com/huggingface/transformers.js/issues/565 | closed | [
"question"
] | 2024-02-05T09:28:28Z | 2024-02-07T11:33:26Z | null | cyio |
huggingface/transformers.js | 564 | Can models from user disks load and run in my HF space? | ### Question
I'm fiddling around with the react-translator template.
What I have accomplished so far:
- Run local (on disk in public folder) model in localhost webapp.
- Run hosted (on HF) model in localhost webapp.
- Run hosted (on HF) model in HF Space webapp.
What I want to accomplish but can't figure out:
... | https://github.com/huggingface/transformers.js/issues/564 | closed | [
"question"
] | 2024-02-05T08:00:55Z | 2024-06-07T01:17:24Z | null | saferugdev |
huggingface/transformers | 28,860 | Question: How do LLMs learn to be "Generative", as we often describe them? | (Please forgive me and let me know if I'm not allowed to ask this kind of question here. I'm so sorry if I'm bothering everyone.)
AFAIK to be called "generative", a model should have the ability to learn the joint probability over the training data. In the case of LLMs, we apply the chain rule of Bayes' formula to a... | https://github.com/huggingface/transformers/issues/28860 | closed | [] | 2024-02-05T07:10:23Z | 2024-02-05T12:22:27Z | null | metalwhale |
huggingface/sentence-transformers | 2,470 | BGE Reranker / BERT Crossencoder Onnx model latency issue | I am using the Int8 quantized version of BGE-reranker-base model converted to the Onnx model. I am processing the inputs in batches. Now the scenario is that I am experiencing a latency of 20-30 secs with the original model. With the int8 quantized and onnx optimized model, the latency was reduced to 8-15 secs keeping ... | https://github.com/huggingface/sentence-transformers/issues/2470 | open | [
"question"
] | 2024-02-05T05:54:18Z | 2024-02-09T06:59:51Z | null | ojasDM |
pytorch/xla | 6,464 | How to benchmark PyTorch XLA code properly | ## ❓ Questions and Help
Hi! I'm trying to benchmark some PyTorch XLA code and can't find how to do it correctly.
For simplicity, what I'm benchmarking is `torch.matmul(a, b)`. First I created the most straightforward benchmark, inspired by CUDA & Triton benchmarking code:
```
# create tens... | https://github.com/pytorch/xla/issues/6464 | closed | ["question"] | 2024-02-05T00:55:57Z | 2025-04-21T13:15:33Z | null | ttim |
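For reference, the benchmarking advice for lazily-executed backends boils down to a framework-agnostic pattern: warm up to trigger compilation, then synchronize before and after the timed loop. With torch_xla the `sync` hook would be something like `xm.wait_device_ops()` — treat that placement as an assumption, it is not verified here.

```python
import time

def benchmark(fn, sync=lambda: None, warmup=3, iters=10):
    """Average wall-clock time of fn(), with warm-up runs and a sync hook
    so asynchronously-executed work finishes before the clock is read."""
    for _ in range(warmup):   # warm-up: trigger tracing/compilation caches
        fn()
    sync()                    # drain any pending async work
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    sync()                    # wait for the timed work to actually finish
    return (time.perf_counter() - start) / iters

avg_seconds = benchmark(lambda: sum(range(10_000)))
```

Without the final `sync()`, a lazy backend can return before the device work has run, making the measured time meaningless.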
huggingface/chat-ui | 774 | Where are the image and pdf upload features when running on locally using this repo? | I see there are issues and features being talked about and added for the image upload and parsing PDFs as markdown etc. However, I don't see these features when I clone this repo and start chat-ui using "npm run dev" locally.
Am I missing something?
#641 are the features I am talking about. | https://github.com/huggingface/chat-ui/issues/774 | closed | [] | 2024-02-05T00:41:05Z | 2024-02-05T08:48:29Z | 1 | zubu007 |
huggingface/chat-ui | 771 | using openai api key for coporate | Hi
We are working with an OpenAI key for our corporation (it has a corporate endpoint).
this is how we added the model to .env.local
```
MODELS=`[
{
"name": "Corporate local instance of GPT 3.5 Model",
"endpoints": [{
"type": "openai",
"url": "corporate url"
}],
"userMessageTo... | https://github.com/huggingface/chat-ui/issues/771 | open | ["models"] | 2024-02-04T11:23:59Z | 2024-02-06T15:01:50Z | 1 | RachelShalom |
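For background, a sketch of the wire format involved (the generic chat-completions request shape, not chat-ui internals): the `"type": "openai"` endpoint sends a standard request body to the configured URL, so a corporate gateway only needs to accept something like the following. All field values here are hypothetical placeholders.

```python
import json

# Hypothetical payload; values are placeholders, not chat-ui's exact output.
payload = {
    "model": "gpt-3.5-turbo",   # whatever the corporate gateway expects here
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
    "stream": True,             # chat-ui streams tokens back to the browser
}
body = json.dumps(payload)
```

If the gateway rejects requests, comparing its expected schema against this shape (field names, streaming flag, auth header) is a reasonable first debugging step.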
huggingface/optimum-neuron | 460 | [QUESTION] What is the difference between optimum-neuron and transformers-neuronx? | I would like to understand the differences between this optimum-neuron and [transformers-neuronx](https://github.com/aws-neuron/transformers-neuronx). | https://github.com/huggingface/optimum-neuron/issues/460 | closed | [] | 2024-02-02T18:27:46Z | 2024-03-27T11:04:52Z | null | leoribeiro |
pytorch/tensordict | 656 | [Feature Request] Docs don't mention how to install tensordict / that it's a seperate package from torch | ## Motivation
As a user, the first thing I'd want to see when looking at the docs for a package is something like:
```
pip install <package>
```
Or
```
conda install <package>
```
This seems like it's currently missing from the docs [here](https://pytorch.org/tensordict). It is included in the Github readme... | https://github.com/pytorch/tensordict/issues/656 | closed | ["enhancement"] | 2024-02-02T17:50:58Z | 2024-02-05T13:49:01Z | null | sradc |
pytorch/tutorials | 2,858 | Better specify `torch.compile behaviour` on nested function/module | ### 📚 The doc issue
Can we better specify the behavior, and eventually the best practices, when decorating a function or compiling a module, and the effect on nested modules and nested function calls?
https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html
### Suggest a potential alternative/fix
_... | https://github.com/pytorch/tutorials/issues/2858 | closed | ["medium", "docathon-h1-2024"] | 2024-02-02T12:22:05Z | 2024-08-30T21:40:03Z | 10 | bhack |
huggingface/dataset-viewer | 2,376 | Should we increment "failed_runs" when error is "ResponseAlreadyComputedError"? | Related to https://github.com/huggingface/datasets-server/issues/1464: is it really an error? | https://github.com/huggingface/dataset-viewer/issues/2376 | closed | ["question", "P2"] | 2024-02-02T12:08:31Z | 2024-02-22T21:16:12Z | null | severo |
huggingface/autotrain-advanced | 484 | How to ask question AutoTrained LLM , If I ask question dosn't return any answer | Hi,
LLM training was successful, but when I ask any question from my trained context it is not answered. How do I ask a proper question?
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "bert-base-uncased_finetuning"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoMode... | https://github.com/huggingface/autotrain-advanced/issues/484 | closed | ["stale"] | 2024-02-02T09:29:07Z | 2024-03-04T15:01:36Z | null | charles-123456 |
huggingface/chat-ui | 761 | Does chat-ui support offline deployment? I have downloaded the weights to my local computer. | I have downloaded the weights to my local computer. Due to network issues, I am unable to interact with the huggingface website. Can I do offline deployment based on chat-ui and downloaded weights from huggingface? Do I not need to set HF_TOKEN=<your access token>? Does that mean I don't need to set HF_TOKEN=<your acce... | https://github.com/huggingface/chat-ui/issues/761 | closed | ["support"] | 2024-02-02T07:57:19Z | 2024-02-04T03:23:25Z | 2 | majestichou |
huggingface/transformers.js | 557 | how to cast types? | ### Question
I have the following code:
```
const pipe = await pipeline('embeddings');
const output = await pipe([
'The quick brown fox jumps over the lazy dog',
]);
const embedding = output[0][0];
```
`output[0][0]` causes a typescript error:
<img width="748" alt="CleanShot 2024... | https://github.com/huggingface/transformers.js/issues/557 | open | ["question"] | 2024-02-02T04:38:20Z | 2024-02-08T19:01:06Z | null | pthieu |
huggingface/diffusers | 6,819 | How to let diffusers use local code for pipelineinstead of download it online everytime We use it? | I tried to use the instaflowpipeline from example/community to.run my test However, even after i git cloned the repository to my environment it still Keep trying to Download the latest object of the instaflow pipeline code Unfortunately in my area is hard for the environment to download it directly from rawgithub. ... | https://github.com/huggingface/diffusers/issues/6819 | closed | [] | 2024-02-02T02:53:48Z | 2024-11-28T05:44:10Z | null | Kevin-shihello-world |
huggingface/diffusers | 6,817 | How to use class_labels in the Unet2DConditionalModel or Unet2DModel when forward? | Hi, I want to know what the shape or format of "class" is if I want to add the class condition to the unet? Just set the **classe_labels** 0, 1, 2, 3?
Unet2DModel: **class_labels** (torch.FloatTensor, optional, defaults to None) — Optional class labels for conditioning. Their embeddings will be summed with the times... | https://github.com/huggingface/diffusers/issues/6817 | closed | [] | 2024-02-02T02:17:40Z | 2024-02-07T07:31:35Z | null | boqian-li |
huggingface/sentence-transformers | 2,465 | How to load lora model to sentencetransformer model? | Dear UKPlab team,
My team and I are working on a RAG project, and right now we are fine-tuning a retrieval model using the peft library. The issue is that once we have the model fine-tuned, we couldn't load the local config and checkpoints using `sentencetransformer`.
Here is our hierarchy of the local path of the peft... | https://github.com/huggingface/sentence-transformers/issues/2465 | closed | [] | 2024-02-02T00:18:04Z | 2024-11-08T12:32:36Z | null | Shengyun-Si |
huggingface/amused | 3 | How to generate multiple images? | Thank you for your amazing work! Could you kindly explain how to generate multiple images at a time? Thankyou | https://github.com/huggingface/amused/issues/3 | closed | [] | 2024-02-01T18:03:30Z | 2024-02-02T10:36:09Z | null | aishu194 |
huggingface/alignment-handbook | 110 | DPO loss on different datasets | In parallel with #38, tho i am relating to full training instead of lora.
When i use a different set of prefs (ie chosen and rejected) but still same instructions (ultrafeedback), i get extremely low eval/train loss, where it drops sharply in the beginning. In contrast to training on the original prefs as in the cas... | https://github.com/huggingface/alignment-handbook/issues/110 | open | [] | 2024-02-01T15:49:29Z | 2024-02-01T15:49:29Z | 0 | wj210 |
huggingface/chat-ui | 757 | Which (temperature) configurations for Zephyr chat interface? | Hi, I apologise for what is maybe an obvious question but where can I find the exact configurations for the model offered on the HF Zephyr Chat interface on https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat for Zephyr 7B beta? I'm especially interested to see the temperature settings and wasn't able to find this ... | https://github.com/huggingface/chat-ui/issues/757 | closed | [
"support"
] | 2024-02-01T14:27:12Z | 2024-02-01T14:47:13Z | 3 | AylaRT |
huggingface/diffusers | 6,804 | How to only offload some parts but not whole model into cpu? | Using enable_cpu_offload() will offload the whole model into cpu, which can occupy a large part of cpu memory. How can I just offload a part of model into cpu? | https://github.com/huggingface/diffusers/issues/6804 | closed | [] | 2024-02-01T07:43:04Z | 2024-02-02T04:59:43Z | null | blx0102 |
huggingface/transformers.js | 553 | How to convert BAAI/bge-m3 for Transformers.js? | ### Question
I tried to convert https://huggingface.co/BAAI/bge-m3 to ONNX using the instructions at https://github.com/xenova/transformers.js?tab=readme-ov-file#convert-your-models-to-onnx but I'm getting errors.
```shell
$ python -m scripts.convert --model_id BAAI/bge-m3
Framework not specified. Using pt to e... | https://github.com/huggingface/transformers.js/issues/553 | closed | ["question"] | 2024-02-01T01:40:02Z | 2024-02-08T22:17:29Z | null | devfacet |
pytorch/torchx | 813 | Docker build verbosity | ## Description
Change the docker image build to its low-level implementation so it can be more verbose.
## Motivation/Background
Building the docker image can take quite some time, and for new users this makes it seem like the program is stuck (especially since the default base image that includes torchx is so b... | https://github.com/meta-pytorch/torchx/issues/813 | closed | [] | 2024-01-31T18:49:35Z | 2024-04-11T17:42:34Z | 3 | ccharest93 |
pytorch/tutorials | 2,859 | Correctness of when to call `set_device` in the docs for DDP | ### 📚 The doc issue
In the docs tutorial on [how to set up Multi-GPU training](https://pytorch.org/tutorials/beginner/ddp_series_multigpu.html), it is suggested that the following is the proper way to set up each process (initializing the, e.g., NCCL, process group and then calling `torch.cuda.set_device(rank)`):
`... | https://github.com/pytorch/tutorials/issues/2859 | closed | [] | 2024-01-31T18:06:42Z | 2024-05-07T17:10:56Z | 5 | craymichael |
huggingface/diffusers | 6,785 | How to finetune stable diffusion img2img(like instructpix2pix or controlnet) model with only one input channel? | Hello, experts!
I want to finetune a stable diffusion img2img model (like InstructPix2Pix or ControlNet) with only one input channel, i.e. a greyscale image. I saw the official docs say it is OK to increase the input channels from 4 to 9, but I want to know: is it OK to decrease the input channels to one for finetuning?
... | https://github.com/huggingface/diffusers/issues/6785 | closed | [] | 2024-01-31T09:17:56Z | 2024-01-31T09:27:43Z | null | sapkun |
huggingface/accelerate | 2,399 | How to use vscode to debug the acceleration program with breakpoints? I checked a lot of information, but still didn't find a solution | How to use vscode to debug the acceleration program with breakpoints? I checked a lot of information, but still didn't find a solution

| https://github.com/huggingface/accelerate/issues/2399 | closed | [] | 2024-01-31T09:00:32Z | 2024-03-10T15:05:56Z | null | kejia1 |
huggingface/datatrove | 72 | Tokenization in Minhash deduplication | Hi,
I have noticed that the tokenization is different from that adopted by previous papers.
For example, this [paper](https://arxiv.org/abs/2107.06499) uses space tokenization, [refinedweb](https://arxiv.org/abs/2306.01116) states that they used GPT-2 tokenizer, while datatrove adopts nltk to extract n-grams.
... | https://github.com/huggingface/datatrove/issues/72 | closed | ["question"] | 2024-01-31T02:33:17Z | 2024-02-01T15:36:24Z | null | jordane95 |
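To make the difference concrete, the space-tokenized shingling used by the cited papers can be sketched as below (an illustration of that approach, not datatrove's implementation); swapping in an NLTK or GPT-2 tokenizer only changes the token stream fed to the same n-gram step.

```python
def word_ngrams(text, n=5):
    """Space-tokenized word n-grams, as used by several deduplication papers."""
    words = text.split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

grams = word_ngrams("the quick brown fox jumps over the lazy dog", n=5)
```

Since MinHash signatures are computed over these shingles, different tokenizers can produce different duplicate sets even with identical hashing parameters.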
huggingface/peft | 1,419 | How to torch.jit.trace a peft model | ### Feature request
Need an example of how to trace a peft model.
### Motivation
Hi, I'm trying to deploy a Lora-finetuned llama model on Nvidia Triton server. For that I need to `traced_model = torch.jit.trace(model, model_input_dict, strict=False)`, however I encountered issues like `Tracing failed sanity ch... | https://github.com/huggingface/peft/issues/1419 | closed | [] | 2024-01-30T22:56:10Z | 2024-02-06T09:16:07Z | null | dcy0577 |
huggingface/gsplat.js | 56 | how to change the camera clipping - and a feature request: add rotate control | Hello and thank you for your great work!
I am a coding noob but managed to use the jsfiddle example to set up a page on which I can display my splats.
Is it possible to change the clipping (and other) settings for the camera? If so, where should I look?
And for the request; never mind, I was not paying atten... | https://github.com/huggingface/gsplat.js/issues/56 | closed | [] | 2024-01-30T19:20:35Z | 2024-01-31T16:51:30Z | null | murcje |
huggingface/accelerate | 2,395 | Question: how to apply device map to a paired model | Hello everybody,
I have been experimenting with Mistral models and have written a small second model to be paired with it. However, I have a machine with 2 GPUs and would like to use both. I am aware that the parallelization `accelerate` uses is based on splitting the data by batches. How can I apply the device map ... | https://github.com/huggingface/accelerate/issues/2395 | closed | [] | 2024-01-30T19:17:52Z | 2024-02-01T19:18:08Z | null | EricLBuehler |
pytorch/cpuinfo | 221 | How to obtain information of CPU frequency? | if (core->processor_count == 1) {
printf("\t%" PRIu32 ": 1 processor (%" PRIu32 "), Frequency: %" PRIu64 " Hz\n",
i,
core->processor_start,
core->frequency);
}
Frequency output 0 | https://github.com/pytorch/cpuinfo/issues/221 | open | ["enhancement"] | 2024-01-30T03:21:26Z | 2025-12-30T22:59:44Z | null | yichenchenyi |
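As a Linux-specific workaround sketch (an assumption about the asker's platform, unrelated to cpuinfo's internals): when the library reports 0, the current frequency can often be read from sysfs directly.

```python
def read_cpu_freq_khz(path="/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"):
    """Return the current CPU frequency in kHz, or None if the sysfs
    node is absent or unreadable (e.g. in containers or on non-Linux)."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None

freq = read_cpu_freq_khz()  # int on a typical Linux host, None otherwise
```

Note that cpufreq reports scaling frequency, which fluctuates with governor policy; a fixed "nominal" frequency generally has to come from elsewhere (e.g. `/proc/cpuinfo` model strings).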
pytorch/text | 2,227 | Fail to import torchtext KeyError: 'SP_DIR' | ## ❓ Questions and Help
**Description**
I failed to import torchtext with the following error. I tried it with a fresh conda env install (under a different python version) and still got the same issue.
Originally I was able to use torchtext (I remember installing it from pip) in an env of python 3.11, but then it...
pytorch/xla | 6,411 | SPMD Global Batch size vs. --per_device_train_batch_size | ## ❓ Questions and Help
Hey all,
I am looking to solidify my understanding and am seeking clarification on the SPMD user guide: https://github.com/pytorch-tpu/transformers/blob/llama2-google-next-training/SPMD_USER_GUIDE.md
I see it says:
_global_batch_size: The global batch size to use. Note that this valu... | https://github.com/pytorch/xla/issues/6411 | closed | ["question", "distributed"] | 2024-01-30T00:16:54Z | 2025-04-21T13:20:54Z | null | isaacr |
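A rough mental model of the distinction (my paraphrase, not the guide's wording): under plain data parallelism the global batch is derived from `--per_device_train_batch_size`, while under SPMD the batch dimension of one logical batch is sharded across devices, so the global value is specified directly.

```python
num_devices = 8

# Data parallel: each replica runs its own micro-batch.
per_device_train_batch_size = 4
global_batch_dp = per_device_train_batch_size * num_devices

# SPMD: one logical batch, sharded across devices along the batch axis.
global_batch_size = 32
per_device_shard = global_batch_size // num_devices
```

Either way the optimizer sees 32 examples per step here; what changes is which number the user specifies and which is derived.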
huggingface/diffusers | 6,755 | how to train a lora in inpainting model? | Is there a script to train a LoRA for SD 1.5 inpainting?
Is there any script to train a LoRA for SD 1.5 inpainting that works?
I tried this:
https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint
but it gives the error
`RuntimeError: element 0 of tensors does not require grad and doe... | https://github.com/huggingface/diffusers/issues/6755 | closed | ["stale"] | 2024-01-29T21:14:57Z | 2024-11-22T01:39:54Z | null | loboere |
pytorch/TensorRT | 2,624 | ❓ undefined reference when Building Torch-TensorRT | ## ❓ Question
<!-- Your question -->
## What you have already tried
I'm trying to build **Torch-TensorRT version 2.3.0a0**.
I successfully built **Torch 2.3.0.dev**.
When building Torch-TensorRT, if I comment **http_archive** for **libtorch** and **libtorch_pre_cxx11_abi** and use the **new_local_repositor... | https://github.com/pytorch/TensorRT/issues/2624 | open | ["question"] | 2024-01-29T18:26:34Z | 2024-11-19T08:23:07Z | null | nicholasguimaraes |
huggingface/optimum-benchmark | 116 | How to use optimum-benchmark for custom testing of my model | I am currently using Intel® Extension for Transformers to quantize a model, and I wonder if it is possible to utilize optimum-benchmark for testing the model. Alternatively, if there are other methods to load large models, could I conduct tests using optimum-benchmark after loading the model? Many thanks; this has been... | https://github.com/huggingface/optimum-benchmark/issues/116 | closed | [] | 2024-01-29T04:07:36Z | 2024-02-19T16:07:06Z | null | WCSY-YG |
pytorch/vision | 8,236 | segmentation fault when importing torchvision | ### 🐛 Describe the bug
I get a segmentation fault when importing torchvision.
## Platform:
Macbook Pro 2018 13.3' with macOS 14.3
## Pytorch Version
2.1.2
## Torchvision Version:
0.16.2
## How to Reproduce
input below in shell terminal
```sh
python -c 'import torchvision'
```
then the output is
```sh
zsh:... | https://github.com/pytorch/vision/issues/8236 | closed | [] | 2024-01-29T01:02:48Z | 2024-01-31T17:17:50Z | 9 | Romeo-CC |
huggingface/chat-ui | 747 | .env.local config for llama-2-7b.Q4_K_S.gguf with llama.cpp server | I am using the following .env.local with llama-2-7b.Q4_K_S.gguf and llama prompt template
```
MODELS=`[
{
"name": "llama-2-7b.Q4_K_S.gguf",
"chatPromptTemplate": "<s>[INST] <<SYS>>\n{{preprompt}}\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s>... | https://github.com/huggingface/chat-ui/issues/747 | open | ["support"] | 2024-01-29T00:54:19Z | 2024-02-22T14:54:08Z | 3 | smamindl |
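For reference, the string that template is meant to produce can be sketched in plain Python (a simplified rendering of the Llama-2 chat format, not chat-ui's actual template engine):

```python
def llama2_prompt(system, turns):
    """Render the Llama-2 chat format.
    turns: list of (user, assistant_or_None) pairs; the final turn
    typically has assistant=None so the model completes it."""
    out = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(turns):
        if i > 0:
            out += f"<s>[INST] {user} [/INST] "
        else:
            out += f"{user} [/INST] "
        if assistant is not None:
            out += f"{assistant} </s>"
    return out

prompt = llama2_prompt("Be concise.", [("Hello", None)])
```

When debugging template issues, printing the rendered prompt and comparing it against what the llama.cpp server actually receives is usually the quickest way to spot a mismatch.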
huggingface/chat-ui | 746 | settings page does not reflect selected Theme | Settings page is always light/white regardless of the Theme selected (Dark or Light).
Is this intentional, or did we just not have time to respect the selected Theme?
If we need to fix this, how much work load do you expect? Just small change on the main settings page (settings/+layout.svelte) or do we need to ch... | https://github.com/huggingface/chat-ui/issues/746 | open | ["question", "front"] | 2024-01-28T23:09:38Z | 2024-01-29T11:48:59Z | null | hungryalgo |
huggingface/transformers.js | 547 | Text to speech generation using Xenova/mms-tts-por | ### Question
Hi! First of all, thank you for the awesome library, it's been handy so far!
I've got 2 questions regarding TTS:
- I'm using the model above to create a Brazilian Portuguese spoken audio and would like to know if there are options for this model, e.g. changing the voice from male to female, and the ... | https://github.com/huggingface/transformers.js/issues/547 | closed | ["question"] | 2024-01-28T13:51:21Z | 2025-01-13T22:15:35Z | null | Darksoulsong |
huggingface/diffusers | 6,739 | how to generate images based on the text token embedding outputted from CLIP. token_embedding module? | how to generate images based on the text token embedding outputted from CLIP. token_embedding module? | https://github.com/huggingface/diffusers/issues/6739 | closed | ["stale", "should-move-to-discussion"] | 2024-01-28T08:51:45Z | 2024-11-19T09:27:00Z | null | FlyGreyWolf |
huggingface/transformers.js | 546 | header is not define | ### Question

| https://github.com/huggingface/transformers.js/issues/546 | closed | ["question"] | 2024-01-28T07:59:10Z | 2024-01-28T09:28:27Z | null | BipulRahi |
huggingface/datasets | 6,624 | How to download the laion-coco dataset | The LAION-COCO dataset is not available now. How can it be downloaded?
https://huggingface.co/datasets/laion/laion-coco | https://github.com/huggingface/datasets/issues/6624 | closed | [] | 2024-01-28T03:56:05Z | 2024-02-06T09:43:31Z | null | vanpersie32 |
huggingface/datasets | 6,623 | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | https://github.com/huggingface/datasets/issues/6623 | open | ["enhancement"] | 2024-01-27T23:46:13Z | 2025-12-08T12:26:20Z | 29 | rohitgr7 |
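The multi-node problem described above is easiest to see with the split spelled out. A toy sketch (illustrative only — `split_dataset_by_node` actually distributes shards or skips examples, not exactly this): with 5 samples and 2 ranks, one rank necessarily receives fewer samples, and DDP can block when the ranks disagree on how many batches remain.

```python
def split_indices(num_samples, rank, world_size):
    """Toy strided split: rank r takes samples r, r + world_size, ..."""
    return list(range(rank, num_samples, world_size))

rank0 = split_indices(5, 0, 2)  # 3 samples
rank1 = split_indices(5, 1, 2)  # 2 samples -> this rank exhausts first
```

This uneven exhaustion is why finite streaming datasets in DDP typically need padding, truncation to the shortest shard, or a join/uneven-inputs mechanism.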