| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/trl | 1,510 | [question] How to apply model parallelism to solve CUDA memory error | Hi team. I am using the SFT and PPO code to train my model, link https://github.com/huggingface/trl/tree/main/examples/scripts.
Due to the long context length and 7B-level model size, I am facing a CUDA out-of-memory issue on my single GPU.
Is there any straightforward manner to utilize multiple GPUs on my server to train th... | https://github.com/huggingface/trl/issues/1510 | closed | [] | 2024-04-06T02:09:36Z | 2024-05-06T17:02:35Z | null | yanan1116 |
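The usual first step for questions like this one is `device_map="auto"` in `from_pretrained`, which shards a model's layers across all visible GPUs. A dependency-free sketch of the balanced-placement idea behind it (the helper name is illustrative, not a trl or accelerate API):

```python
def balanced_device_map(n_layers, n_gpus):
    """Assign transformer layer indices to GPU ids as evenly as possible,
    mimicking what device_map='auto'-style placement does conceptually."""
    base, extra = divmod(n_layers, n_gpus)
    device_map, layer = {}, 0
    for gpu in range(n_gpus):
        count = base + (1 if gpu < extra else 0)  # spread any remainder over the first GPUs
        for _ in range(count):
            device_map[layer] = gpu
            layer += 1
    return device_map
```

With 32 layers and 4 GPUs, layers 0-7 land on GPU 0, 8-15 on GPU 1, and so on; real placement also accounts for per-layer memory, which this sketch ignores.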
huggingface/dataset-viewer | 2,667 | Rename datasets-server to dataset-viewer in infra internals? | Follow-up to #2650.
Is it necessary? Not urgent in any case.
Some elements to review:
- [ ] https://github.com/huggingface/infra
- [ ] https://github.com/huggingface/infra-deployments
- [ ] docker image tags (https://hub.docker.com/r/huggingface/datasets-server-services-search -> https://hub.docker.com/r/huggi... | https://github.com/huggingface/dataset-viewer/issues/2667 | closed | [
"question",
"P2"
] | 2024-04-05T16:53:34Z | 2024-04-08T09:26:14Z | null | severo |
huggingface/dataset-viewer | 2,666 | Change API URL to dataset-viewer.huggingface.co? | Follow-up to https://github.com/huggingface/dataset-viewer/issues/2650
Should we do it?
- https://github.com/huggingface/dataset-viewer/issues/2650#issuecomment-2040217875
- https://github.com/huggingface/moon-landing/pull/9520#issuecomment-2040220911
If we change it, we would have to update:
- moon-landing
-... | https://github.com/huggingface/dataset-viewer/issues/2666 | closed | [
"question",
"P2"
] | 2024-04-05T16:49:13Z | 2024-04-08T09:24:43Z | null | severo |
huggingface/huggingface.js | 609 | [Question] What is the correct way to access commit diff results via http? | Data I am interested in:

Here's the endpoint to list commits
https://huggingface.co/api/models/SimonMA/Codellama-7b-lora-rps-adapter/commits/main | https://github.com/huggingface/huggingface.js/issues/609 | closed | [] | 2024-04-05T12:00:15Z | 2024-04-09T18:40:05Z | null | madgetr |
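As a hedged illustration of working with that endpoint (the `id` field name is an assumption based on the public response shape, not a documented contract; diffs themselves are not returned, so you would compare file trees between two commit hashes it reports):

```python
def commits_url(repo_id, revision="main", kind="models"):
    """Build the Hub endpoint that lists commits for a repo revision."""
    return f"https://huggingface.co/api/{kind}/{repo_id}/commits/{revision}"

def commit_ids(commits_json):
    """Extract commit hashes from the JSON list the endpoint returns
    (assumes each entry carries an 'id' field)."""
    return [c["id"] for c in commits_json]

url = commits_url("SimonMA/Codellama-7b-lora-rps-adapter")
```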
huggingface/dataset-viewer | 2,661 | Increase the number of backfill workers? | Today, it's 8. Let's try increasing it and see if it speeds up the backfill job.
The current throughput is 577 datasets/minute. | https://github.com/huggingface/dataset-viewer/issues/2661 | open | [
"question",
"P2",
"prod"
] | 2024-04-05T10:42:11Z | 2024-04-05T16:42:13Z | null | severo |
huggingface/transformers | 30,066 | How to calculate the mAP on this network? | ### System Info
I want to evaluate my network with the mean Average Precision. I don't know how to get the class-id of my gt data. Are there any examples to calculate the mAP with this library?
I use the DetrForObjectDetection with my own dataset.
### Who can help?
_No response_
### Information
- [ ] ... | https://github.com/huggingface/transformers/issues/30066 | closed | [] | 2024-04-05T08:32:31Z | 2024-06-08T08:04:08Z | null | Sebi2106 |
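torchmetrics' `MeanAveragePrecision` is a common way to compute mAP on DETR-style outputs; the matching it performs between predictions and ground truth is built on IoU between boxes. A self-contained sketch of that core step:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # overlap area, 0 if disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

mAP then counts a prediction as a true positive when its IoU with a same-class ground-truth box exceeds a threshold (e.g. 0.5) and averages precision over recall levels and classes.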
huggingface/optimum-quanto | 152 | How does quanto calibrate torch functions? | I have learned that quanto calibrates ops in module form by adding module hooks, but what about torch functions like `torch.sigmoid`, `torch.elu`, and `torch.log`, etc.?
I think the output scale of `torch.sigmoid` could be directly evaluated similarly to quanto's approach with `softmax`. Additionally, `torch.elu` might be sub... | https://github.com/huggingface/optimum-quanto/issues/152 | closed | [
"question"
] | 2024-04-05T06:49:51Z | 2024-04-11T09:41:55Z | null | shuokay |
huggingface/candle | 2,007 | How to run inference of a (very) large model across multiple GPUs? | It is mentioned in the README that candle supports multi-GPU inference, using NCCL under the hood. How can this be implemented? I wonder if there is any available example to look at..
Also, I know PyTorch has things like DDP and FSDP, is candle support for multi GPU inference comparable to these techniques ? | https://github.com/huggingface/candle/issues/2007 | open | [] | 2024-04-04T13:52:46Z | 2024-08-12T04:53:54Z | null | jorgeantonio21 |
huggingface/candle | 2,006 | How to get different outputs for the same prompt? | I used Gemma; it always returned the same output for the same prompt.
How can I get different outputs? Is there any method or parameter for sampling? (I even doubt that `top_p` works.)
| https://github.com/huggingface/candle/issues/2006 | closed | [] | 2024-04-04T10:43:31Z | 2024-04-13T11:17:36Z | null | Hojun-Son |
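Identical outputs usually mean greedy (argmax) decoding; varied outputs require sampling with temperature and top-p. Candle's generation utilities expose these knobs (a logits processor constructed with a seed, temperature, and top-p), but the mechanics are framework-agnostic. A pure-Python sketch of temperature plus nucleus (top-p) sampling:

```python
import math
import random

def sample_top_p(logits, temperature=0.8, top_p=0.9, rng=random):
    """Sample a token index with temperature scaling + nucleus (top-p)
    filtering -- the knobs that make repeated generations differ."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]  # shifted for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # keep the smallest set of tokens whose cumulative probability >= top_p
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    # renormalize over the kept set and draw one index
    mass = sum(probs[i] for i in kept)
    r = rng.random() * mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

With `top_p` near 0 this collapses to greedy decoding; with `temperature` above 1 the distribution flattens and outputs vary more.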
huggingface/chat-ui | 975 | Is it possible to hide the settings from the users? Most users do not want to create assistants; they just want to use existing ones. | In the left-hand corner of HuggingChat, "Assistants" and "Settings" are visible. We are considering whether it is possible to hide these options from our users, as they have expressed no interest in creating assistants and prefer to use existing ones. Many thanks for your kind help. Howard | https://github.com/huggingface/chat-ui/issues/975 | open | [] | 2024-04-04T07:33:25Z | 2024-04-04T07:33:25Z | 0 | hjchenntnu |
huggingface/transformers.js | 679 | Speech Recognition/Whisper word level scores or confidence output | ### Question
Hey,
Big thanks for awesome project!
Is it possible to add a score/confidence for word-level output when using the Speech Recognition/Whisper model?
Would appreciate any direction/comments or suggestions on where to dig to add it.
Happy to submit a PR if I succeed.
Thanks!
| https://github.com/huggingface/transformers.js/issues/679 | open | [
"question"
] | 2024-04-04T07:04:00Z | 2024-04-04T07:04:00Z | null | wobbble |
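One common convention for word-level confidence is the geometric mean of the probabilities the model assigned to the word's tokens, obtained by softmaxing the decoder logits at each step. A minimal sketch of that convention (illustrative, not Whisper's or transformers.js's actual API):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def word_confidence(token_logits, token_ids):
    """Confidence of a decoded word = geometric mean of the probabilities
    the model assigned to each of its tokens (one common convention)."""
    log_p = 0.0
    for logits, tid in zip(token_logits, token_ids):
        log_p += math.log(softmax(logits)[tid])
    return math.exp(log_p / len(token_ids))
```

Word boundaries come from the tokenizer's detokenization; tokens belonging to one word are grouped before averaging.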
huggingface/transformers | 30,034 | What is the data file format of `run_ner.py`? | ### Feature request
What is the correct format for a custom dataset in run_ner.py? Would it be possible to include a few lines on this with a helpful example?
### Motivation
I am using the example script run_ner.py from [huggingface](https://github.com/huggingface)/transformers. It is not possible to use standar... | https://github.com/huggingface/transformers/issues/30034 | closed | [
"Good First Issue"
] | 2024-04-04T06:36:30Z | 2024-04-08T11:50:00Z | null | sahil3773mehta |
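A sketch of the JSON-lines shape the token-classification example typically accepts (the column names `tokens`/`ner_tags` are the common defaults; verify against your transformers version):

```python
import json

# Illustrative records: one JSON object per line, with parallel token
# and tag lists of equal length.
records = [
    {"tokens": ["John", "lives", "in", "Berlin"],
     "ner_tags": ["B-PER", "O", "O", "B-LOC"]},
    {"tokens": ["Hugging", "Face", "is", "great"],
     "ner_tags": ["B-ORG", "I-ORG", "O", "O"]},
]

def to_jsonl(records):
    """Serialize records to the one-JSON-object-per-line format that
    --train_file / --validation_file style arguments can consume."""
    return "\n".join(json.dumps(r) for r in records)

jsonl = to_jsonl(records)
```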
huggingface/datasets | 6,777 | .Jsonl metadata not detected | ### Describe the bug
Hi, I have the following directory structure:
|--dataset
| |-- images
| |-- metadata1000.csv
| |-- metadata1000.jsonl
| |-- padded_images
Example of metadata1000.jsonl file
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white... | https://github.com/huggingface/datasets/issues/6777 | open | [] | 2024-04-04T06:31:53Z | 2024-04-05T21:14:48Z | 5 | nighting0le01 |
huggingface/lighteval | 143 | Do an intro notebook on how to use `lighteval` | https://github.com/huggingface/lighteval/issues/143 | closed | [
"documentation"
] | 2024-04-03T07:53:25Z | 2024-12-05T10:18:42Z | null | clefourrier | |
huggingface/accelerate | 2,614 | How to I selectively apply accelerate to trainers | I have two trainers in a script, one is SFTTrainer and one is PPOTrainer, both from trl library. Is it possible to only apply accelerate to PPOTrainer? | https://github.com/huggingface/accelerate/issues/2614 | closed | [] | 2024-04-03T06:39:05Z | 2024-05-21T15:06:36Z | null | zyzhang1130 |
huggingface/sentence-transformers | 2,568 | How to improve sentence-transformers' performance on CPU? | On the CPU, I tried huggingface‘s optimization.onnx and sentence_transformers and I found that on the task of feature_extraction, optimization.onnx was not as good as sentence_transformers in batch encoding performance.
My question is, are sentence_transformers the current ceiling on CPU performance? | https://github.com/huggingface/sentence-transformers/issues/2568 | closed | [] | 2024-04-03T02:09:14Z | 2024-04-23T09:17:39Z | null | chensuo2048 |
huggingface/datasets | 6,773 | Dataset on Hub re-downloads every time? | ### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whene... | https://github.com/huggingface/datasets/issues/6773 | closed | [] | 2024-04-02T17:23:22Z | 2024-04-08T18:43:45Z | 5 | manestay |
huggingface/transformers.js | 677 | How you debug/measure Python -> Javascript ONNX Conversion | ### Question
I have converted a couple of ONNX models to use ONNXRuntimeWeb, using the Python onnx version as the source. I've spent weeks debugging, though. What's your strategy for comparing tensor values, etc., with these onnx models?
I've console-logged N# of values from the tensor/array to see if the values have... | https://github.com/huggingface/transformers.js/issues/677 | open | [
"question"
] | 2024-04-02T16:16:22Z | 2024-04-02T16:18:03Z | null | matbeedotcom |
huggingface/transformers.js | 676 | How to use fp16 version of the model file? | ### Question
example files: https://huggingface.co/Xenova/modnet/tree/main/onnx | https://github.com/huggingface/transformers.js/issues/676 | closed | [
"question"
] | 2024-04-02T12:10:24Z | 2024-04-03T02:56:52Z | null | cyio |
huggingface/chat-ui | 969 | Display does not automatically update after receiving message | After receiving the message, the chat page does not update and is always in the loading state. The received message can only be displayed after refreshing the page or switching sessions.

| https://github.com/huggingface/chat-ui/issues/969 | open | [
"question"
] | 2024-04-02T06:14:59Z | 2024-04-03T04:26:23Z | null | w4rw4r |
huggingface/dataset-viewer | 2,654 | Tutorial about how to start/run my own local dataset server. | Hey,
I'm new to the dataset server and a rookie in the web field. I want to build my own dataset server; is there any tutorial that can guide me?
Many Thanks | https://github.com/huggingface/dataset-viewer/issues/2654 | closed | [] | 2024-04-02T01:30:12Z | 2024-05-11T15:03:50Z | null | ANYMS-A |
huggingface/accelerate | 2,603 | How to load a FSDP checkpoint model | I have fine tuned gemma 2b model using FSDP and these are the below files available under the checkpoint
```
optimizer_0 pytorch_model_fsdp_0 rng_state_0.pth rng_state_1.pth scheduler.pt trainer_state.json
```
How can I load the above FSDP object?
Kindly help me with this issue.
| https://github.com/huggingface/accelerate/issues/2603 | closed | [] | 2024-04-01T16:53:24Z | 2024-05-11T15:06:21Z | null | nlpkiddo-2001 |
huggingface/datasets | 6,769 | (Willing to PR) Datasets with custom python objects | ### Feature request
Hi, thanks for the library! I would like to have a huggingface Dataset where one of its columns holds custom (non-serializable) Python objects. For example, a minimal code:
```
class MyClass:
pass
dataset = datasets.Dataset.from_list([
dict(a=MyClass(), b='hello'),
])
```
It gives... | https://github.com/huggingface/datasets/issues/6769 | open | [
"enhancement"
] | 2024-04-01T13:18:47Z | 2024-04-01T13:36:58Z | 0 | fzyzcjy |
huggingface/optimum-quanto | 146 | Question about the gradient of QTensor and QBitTensor | I am confused by the gradient of the Quantizer and QBitTensor. Take QTensor as the example:
The evaluation of forward is:
```txt
data = base / scale (1)
data = round(data) (2)
data = clamp(data, qmin, qmax) (3)
```
I think the gradients should be:
```txt
grad_div = 1 / scale (1)
grad_round = 1 (2) #... | https://github.com/huggingface/optimum-quanto/issues/146 | closed | [
"question"
] | 2024-03-31T14:33:10Z | 2024-04-24T13:51:20Z | null | shuokay |
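Quantization-aware training typically answers this with the straight-through estimator (STE): `round` is treated as identity in the backward pass, and `clamp` contributes a gradient of 1 inside `[qmin, qmax]` and 0 outside, so the combined gradient with respect to `base` is `1/scale` or 0. A numeric sketch of that convention (a generic illustration, assumed rather than read from quanto's internals):

```python
def fake_quant_forward(base, scale, qmin, qmax):
    """Forward pass from the question: scale, round, clamp."""
    data = base / scale          # (1)
    data = round(data)           # (2)
    return max(qmin, min(qmax, data))  # (3)

def fake_quant_grad_ste(base, scale, qmin, qmax):
    """Straight-through estimator: round() is treated as identity, so the
    gradient wrt `base` is 1/scale where not clamped, and 0 where clamped."""
    data = base / scale
    if qmin <= round(data) <= qmax:
        return 1.0 / scale
    return 0.0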
huggingface/transformers.js | 673 | Is dit-base supported | ### Question
There is a [Huggingface repo](https://huggingface.co/Xenova/dit-base) for the ONNX version of the dit-base model but I can't seem to make it work.
I keep getting the following error:

Is the mode... | https://github.com/huggingface/transformers.js/issues/673 | closed | [
"question"
] | 2024-03-31T01:18:42Z | 2024-03-31T01:48:24Z | null | Maxzurek |
huggingface/datatrove | 143 | Understand the output of deduplication | Hi
I have the Arabic split from Common Crawl and am trying to deduplicate it.
I used datatrove for this with a small example.
I got two files in my output folder:
0000.c4_dup and 0000.c4_sig
Could you help me understand this output?
I cannot read their content, as c/00000.c4_sig is not UTF-8 encoded and seems to be a binary file... | https://github.com/huggingface/datatrove/issues/143 | closed | [
"question"
] | 2024-03-30T23:16:21Z | 2024-05-06T09:30:43Z | null | Manel-Hik |
huggingface/candle | 1,971 | How to use `topk`? | I am trying to use `topk` to implement X-LoRA in Candle, and want to perform `topk` in the last dimension. Specifically, I need the `indices` return value (as returned by [`torch.topk`](https://pytorch.org/docs/stable/generated/torch.topk.html)).
These indices will either be used to create a mask to zero out all t... | https://github.com/huggingface/candle/issues/1971 | closed | [] | 2024-03-30T20:29:45Z | 2024-07-23T02:02:58Z | null | EricLBuehler |
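Absent a built-in `topk`, the indices can be recovered by sorting along the last dimension (e.g. via an arg-sort, if your candle version provides one) and narrowing to the first k columns. A pure-Python sketch of the semantics being asked for:

```python
def topk_last_dim(rows, k):
    """Return (values, indices) of the k largest entries along the last
    dimension, mirroring torch.topk's ordering (largest first)."""
    values, indices = [], []
    for row in rows:
        # argsort descending, keep the first k positions
        order = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        indices.append(order)
        values.append([row[i] for i in order])
    return values, indices
```

The returned indices can then drive a mask (zero everything outside them) or a gather, as the issue describes.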
huggingface/transformers.js | 671 | What is involved in upgrading to V3? | ### Question
In anticipation of being able to [generate music](https://github.com/xenova/transformers.js/issues/668) with musicGen I'm attempting to switch my project over to version 3, which I was able to build on my mac.
I noticed that when using SpeechT5, the voice sounds completely garbled. I've attached a zip ... | https://github.com/huggingface/transformers.js/issues/671 | closed | [
"question"
] | 2024-03-29T18:09:23Z | 2024-03-31T13:50:27Z | null | flatsiedatsie |
huggingface/datasets | 6,764 | load_dataset can't work with symbolic links | ### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g, this dataset can be loaded:
├── example_dataset/
│ ├── data/
│ │ ├── train/
│ │ │ ├── file0
│ │ │ ├── file1
│ │ ├── dev/
│ │ │ ├── file2
│ │ │ ├── file3
│ ├── metad... | https://github.com/huggingface/datasets/issues/6764 | open | [
"enhancement"
] | 2024-03-29T17:49:28Z | 2025-04-29T15:06:28Z | 1 | VladimirVincan |
huggingface/transformers.js | 670 | Are tokenizers supposed to work in the browser? | ### Question
I'd love to use some pretrained tokenizers, right in my browser. On a number of occasions, I've tried to use this library to load and use a tokenizer in my browser, but it always fails with an error like this:
```
Uncaught (in promise) SyntaxError: JSON.parse: unexpected character at line 1 column 1 of ... | https://github.com/huggingface/transformers.js/issues/670 | closed | [
"question"
] | 2024-03-29T16:10:46Z | 2024-03-29T16:53:21Z | null | Vectorrent |
huggingface/transformers.js | 669 | TinyLlama Conversion | ### Question
I ran the converter script on the tinyllama repo for both the TinyLlama models ([intermediate step 1431K 3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) and [chat v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)) and uploaded them to my repo ([intermediate... | https://github.com/huggingface/transformers.js/issues/669 | closed | [
"question"
] | 2024-03-29T14:50:06Z | 2025-10-13T04:57:32Z | null | dmmagdal |
huggingface/datatrove | 142 | Deduplicating local data throws an error | Hi,
I have data in my local machine in the format of a jsonl file and I want to deduplicate it. I'm using the following example:
`sent_dedup_config = SentDedupConfig(
n_sentences=3,
split_sentences=False, # set to False to split on \n instead
only_dedup_in_index=True,
min_doc_words=50,
)
FI... | https://github.com/huggingface/datatrove/issues/142 | closed | [
"question"
] | 2024-03-29T12:31:30Z | 2024-04-24T14:15:58Z | null | Manel-Hik |
huggingface/optimum-intel | 642 | How to apply LoRA adapter to a model loaded with OVModelForCausalLM()? | In the transformers library, we can load multiple adapters to the original model by load_adapter then switch the specified adapter with set_adapter like below.
```
# base model
model = AutoModelForCausalLM.from_pretrained(
model_name,
)
# load multiple adapters
model.load_adapter("model/adapter1/", "adap... | https://github.com/huggingface/optimum-intel/issues/642 | closed | [] | 2024-03-29T01:13:44Z | 2024-08-03T12:34:21Z | null | nai-kon |
huggingface/transformers | 29,948 | How to utilize all GPUs when device="balanced_low_0" in the GPU setting | ### System Info
I know that when loading the model in the "balanced_low_0" GPU setting, the model is loaded onto all GPUs apart from GPU 0, where GPU 0 is left to do the text inference (i.e. performing all the calculations to generate the response inside the LLM).
So, as per the given device param... | https://github.com/huggingface/transformers/issues/29948 | closed | [] | 2024-03-28T19:54:09Z | 2024-05-07T13:43:08Z | null | kmukeshreddy |
huggingface/dataset-viewer | 2,649 | Should we support /filter on columns that contain SQL commands? | See the `schema` column on https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k. Clicking on any of the 'classes' leads to an error
<img width="1209" alt="Capture d’écran 2024-03-28 à 15 11 50" src="https://github.com/huggingface/datasets-server/assets/1676121/3aaf779f-0465-429a-bafb-1a16ff5f2901">
... | https://github.com/huggingface/dataset-viewer/issues/2649 | open | [
"question",
"api",
"P2"
] | 2024-03-28T14:14:01Z | 2024-03-28T14:24:34Z | null | severo |
huggingface/accelerate | 2,593 | How to use a training function rather than training scripts on multi-GPU and multi-node? | I confirmed that the multi-GPU launcher is executed based on the training function, using the PrepareForLaunch function in "accelerate/examples/multigpu_remote_launcher.py".
Usually, the "accelerate launch" or "python -m torch.distributed.run" command is used for multi-node, but is there a way to utilize a training f... | https://github.com/huggingface/accelerate/issues/2593 | closed | [] | 2024-03-28T07:05:50Z | 2024-05-05T15:06:26Z | null | wlsghks4043 |
huggingface/alignment-handbook | 144 | Can we please add the option to work with a tokenized dataset, especially for the CPT task? | Since we have the CPT task now, it would be nice to have the ability to feed a tokenized and packed dataset directly. | https://github.com/huggingface/alignment-handbook/issues/144 | open | [] | 2024-03-27T18:31:58Z | 2025-02-27T16:23:06Z | 1 | shamanez |
huggingface/transformers.js | 668 | Is it possible to run a music / sounds generation model? | ### Question
I'd love to create a browser-based music generation tool, or one that can turn text into sound effects. Is that supported?
I guess my more general question is: can Transformers.js run pretty much any .onnx I throw at it, or does each model require some level of implementation before it can be used? | https://github.com/huggingface/transformers.js/issues/668 | closed | [
"question"
] | 2024-03-27T18:22:31Z | 2024-05-13T21:17:54Z | null | flatsiedatsie |
huggingface/optimum-quanto | 139 | Dequantizing tensors using quanto | I noticed the quantized models have these 4 additional features, for every weight in the original, e.g:
```
model.layers.0.mlp.down_proj.activation_qtype,
model.layers.0.mlp.down_proj.input_scale,
model.layers.0.mlp.down_proj.output_scale,
model.layers.0.mlp.down_proj.weight_qtype
```
I guess `qtype` refers to t... | https://github.com/huggingface/optimum-quanto/issues/139 | closed | [
"question"
] | 2024-03-27T18:00:34Z | 2024-04-11T09:22:29Z | null | raunaks13 |
huggingface/safetensors | 458 | Safetensors uses excessive RAM when saving files | Safetensors uses around twice the RAM that `torch.save`:
```python
import resource
import torch
from safetensors.torch import save_file
torch.save({'tensor': torch.randn((500000000))}, 'test.torch')
print(resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
save_file({'tensor': torch.randn((500000000))}, 'tes... | https://github.com/huggingface/safetensors/issues/458 | closed | [
"Stale"
] | 2024-03-27T12:11:38Z | 2024-05-02T01:47:32Z | 1 | sheepymeh |
huggingface/transformers | 29,897 | How to finetune a language model after extending token embeddings? | If I add some new tokens to a language model, I will get some randomly initialized weights in the embeddings and lm_head. Is there any official way to train only these new weights? Or is all I can do adding hooks to the tensors to zero the gradient for weights I do not want to change? | https://github.com/huggingface/transformers/issues/29897 | closed | [] | 2024-03-27T08:20:24Z | 2024-03-27T15:01:04Z | null | bluewanderer |
huggingface/text-generation-inference | 1,677 | how to get the latest version number? | In the document, I use "docker run ghcr.io/huggingface/text-generation-inference:latest" to run the latest version of tgi. But in a production environment, I need to fix the version number. I can't find any webpage similar to [docker hub](https://hub.docker.com/r/pytorch/manylinux-cuda102). So how can I use docker comm... | https://github.com/huggingface/text-generation-inference/issues/1677 | closed | [] | 2024-03-27T05:43:49Z | 2024-03-29T02:30:10Z | null | fancyerii |
huggingface/optimum-quanto | 134 | Should quanto use int dtype in AffineQuantizer instead of uint? | According to code in https://github.com/huggingface/quanto/blob/main/quanto/tensor/qbitstensor.py#L34 I find quanto use uint dtype to store the quantized value in affine quantizer, while in symmetric quantizer it is int dtype
https://github.com/huggingface/quanto/blob/main/quanto/tensor/qtensor.py#L62.
Taking har... | https://github.com/huggingface/optimum-quanto/issues/134 | closed | [
"question"
] | 2024-03-26T14:21:25Z | 2024-04-11T09:25:09Z | null | shuokay |
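The convention the question observes is standard: affine (asymmetric) quantization shifts values by a zero point, so the codes fit naturally in an unsigned range `[0, 2**bits - 1]`, while symmetric quantization has zero point 0 and uses a signed range. A minimal sketch of the two ranges (illustrative, not quanto's actual code):

```python
def affine_quantize(x, scale, zero_point, bits=4):
    """Affine/asymmetric: codes stored as unsigned ints in [0, 2**bits - 1];
    the zero point shifts the range, so uint storage loses nothing."""
    q = round(x / scale) + zero_point
    return max(0, min(2**bits - 1, q))

def symmetric_quantize(x, scale, bits=8):
    """Symmetric: zero point is 0, range is [-2**(bits-1), 2**(bits-1) - 1],
    so a signed int dtype is the natural storage."""
    q = round(x / scale)
    return max(-(2**(bits - 1)), min(2**(bits - 1) - 1, q))
```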
huggingface/hub-docs | 1,257 | Add section about deprecation of script-based datasets? | Asked here: https://github.com/huggingface/datasets-server/issues/2385#issuecomment-2017984722
> Perhaps a little bit of suggestion from me is to include a disclaimer in the docs so that others are aware that developing a custom script is not supported.
It would also help answer the discussions + we could link in... | https://github.com/huggingface/hub-docs/issues/1257 | open | [
"question"
] | 2024-03-26T13:20:27Z | 2024-03-26T17:49:50Z | null | severo |
huggingface/candle | 1,941 | [help] how to update a portion of a long tensor | I'm aware of the closed issue(#1163 ) and understand that Var is mutable and Tensor is immutable by design. But I find it hard to impl some logic if it's impossible to update a portion of a Tensor.
For example, how can I generate a pairwise combination from two 2d tensors:
```rust
let a = Tensor::new(&[[1.... | https://github.com/huggingface/candle/issues/1941 | closed | [] | 2024-03-26T11:47:56Z | 2024-04-07T15:42:45Z | null | michael8090 |
huggingface/optimum | 1,776 | How to convert a model(tf_model.h5) with tokenizer folder to the onnx format | ### Feature request
I have trained the TensorFlow model using the Transformers library and saved the trained model and tokenizer in a folder named MODEL_WITH_TOKENIZER. The model is stored inside the folder in a **.h5** format - **tf_model.h5**
Here is the folder structure.
 and for each `gpt` turn a label (thumbs up or thumbs down). But for KTO training, I have only seen datasets with the columns `prompt`, `completion` and `label` (see e.g. https://huggingface.co/datasets/trl-lib/kto-mix-14k).
Do I need to unwind my shareGPT dialogs (se... | https://github.com/huggingface/alignment-handbook/issues/142 | open | [] | 2024-03-26T10:29:38Z | 2024-03-26T10:30:08Z | 0 | DavidFarago |
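One way to unwind a shareGPT dialog into KTO rows is to emit one `(prompt, completion, label)` row per `gpt` turn, with the prompt being the full preceding dialog. A hedged sketch (field names follow the shareGPT convention; this is an illustrative transformation, not an official trl or alignment-handbook helper):

```python
def sharegpt_to_kto(conversations, labels):
    """Unwind a shareGPT-style dialog ({'from': 'human'|'gpt', 'value': ...})
    into (prompt, completion, label) rows: each assistant ('gpt') turn
    becomes one row whose prompt is the dialog so far."""
    rows, history, label_iter = [], [], iter(labels)
    for turn in conversations:
        if turn["from"] == "gpt":
            rows.append({
                "prompt": "\n".join(history),
                "completion": turn["value"],
                "label": next(label_iter),  # one thumbs-up/down per gpt turn
            })
        history.append(turn["value"])
    return rows
```

A real pipeline would likely keep the chat structure (lists of role/content messages) rather than joining with newlines; the row-per-assistant-turn shape is the point.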
huggingface/transformers.js | 664 | How to confirm if webgpu actually working in the backend with inferencing | ### Question
Hi Team,
Thanks for the awesome library.
Recently I am experimenting with running a background removal model on the client side using WebGPU. I came across this solution https://huggingface.co/spaces/Xenova/remove-background-webgpu.
Tried to replicate the same in my local using your V3 branch.
The way I ... | https://github.com/huggingface/transformers.js/issues/664 | open | [
"question"
] | 2024-03-26T08:17:05Z | 2024-07-24T06:13:50Z | null | abiswas529 |
huggingface/dataset-viewer | 2,630 | Take spawning.io opted out URLs into account in responses? | In particular, for images (assets / cached-assets).
Raised internally: https://huggingface.slack.com/archives/C040J3VPJUR/p1702578556307069?thread_ts=1702577137.311409&cid=C040J3VPJUR | https://github.com/huggingface/dataset-viewer/issues/2630 | open | [
"question",
"P2"
] | 2024-03-25T11:49:49Z | 2024-03-25T11:49:58Z | null | severo |
huggingface/datasets | 6,756 | Support SQLite files? | ### Feature request
Support loading a dataset from a SQLite file
https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main
### Motivation
SQLite is a popular file format.
### Your contribution
See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)
In ... | https://github.com/huggingface/datasets/issues/6756 | closed | [
"enhancement"
] | 2024-03-25T11:48:05Z | 2024-03-26T16:09:32Z | 3 | severo |
huggingface/dataset-viewer | 2,629 | Detect when a new commit only changes the dataset card? | Ideally, when we change the contents of the dataset card (not the YAML part), the responses computed by the datasets server should not be recomputed, because they will lead to the same results.
asked here (private slack channel): https://huggingface.slack.com/archives/C04N96UGUFM/p1701862863691809
> Sometimes I d... | https://github.com/huggingface/dataset-viewer/issues/2629 | closed | [
"question",
"improvement / optimization",
"P2"
] | 2024-03-25T10:57:36Z | 2024-06-19T16:02:33Z | null | severo |
huggingface/dataset-viewer | 2,627 | Replace our custom "stale bot" action with the GitHub's one? | See `actions/stale@v5`
```yaml
name: Mark inactive issues as stale
on:
schedule:
- cron: "30 1 * * *"
jobs:
close-issues:
runs-on: ubuntu-latest
permissions:
issues: write
pull-requests: write
steps:
- uses: actions/stale@v5
with:
days-before-is... | https://github.com/huggingface/dataset-viewer/issues/2627 | open | [
"question",
"ci",
"P2"
] | 2024-03-25T10:48:47Z | 2024-03-25T10:49:02Z | null | severo |
huggingface/candle-paged-attention | 1 | How to use candle-paged-attention in candle models? | Could you provide an example of candle-paged-attention for actual usage in candle models (candle-examples)? Is this crate ready to be used in candle? i.e., tested in end2end model inference? I'm a little bit confused about the construction of block_tables and context_lens. | https://github.com/huggingface/candle-paged-attention/issues/1 | open | [] | 2024-03-25T09:09:24Z | 2024-03-25T12:07:13Z | null | guoqingbao |
huggingface/optimum | 1,769 | Accuracy change with BetterTransformer | When transforming the model into BetterTransformer model I'm seeing accuracy drop on the models.
The output scores changes considerably (upto 1-2 decimal points of precision).
**Is accuracy change expected when switching to BetterTransformer ?** I'm not performing any ORT compilation or quantization on the model.
... | https://github.com/huggingface/optimum/issues/1769 | closed | [
"bettertransformer",
"Stale"
] | 2024-03-24T01:28:15Z | 2025-01-15T02:01:10Z | 7 | kapilsingh93 |
huggingface/optimum-quanto | 129 | Performance of quanto quants vs bnb, AWQ, GPTQ, GGML ? | I was wondering if there were any comparisons done looking at the speed and ppl of `quanto` quantizations with respect to the other quantization techniques out there. | https://github.com/huggingface/optimum-quanto/issues/129 | closed | [
"question"
] | 2024-03-23T11:37:33Z | 2024-04-11T09:22:47Z | null | nnethercott |
huggingface/transformers | 29,826 | How to convert a pretrained Hugging Face model to .pt for deployment? | I'm attempting to convert this [model](https://huggingface.co/UrukHan/wav2vec2-russian) to .pt format. It's working fine for me, so I don't want to fine-tune it. How can I export it to .pt and run inference, for example in Flask?
I tried using this to convert to .pt:
```
from transformers import AutoConfig, AutoPro... | https://github.com/huggingface/transformers/issues/29826 | closed | [] | 2024-03-23T10:09:16Z | 2025-10-13T23:08:57Z | null | vonexel |
huggingface/datasets | 6,750 | `load_dataset` requires a network connection for local download? | ### Describe the bug
Hi all - I see that in the past a network dependency was mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again?
### Steps to reproduce the bug
```
>>> import datasets
>>> datasets.load_dataset("hh-rlhf")
Repo card metadata block was not ... | https://github.com/huggingface/datasets/issues/6750 | closed | [] | 2024-03-23T01:06:32Z | 2024-04-15T15:38:52Z | 3 | MiroFurtado |
huggingface/dataset-viewer | 2,626 | upgrade to pyarrow 15? | we use pyarrow 14 | https://github.com/huggingface/dataset-viewer/issues/2626 | closed | [
"question",
"dependencies",
"P2"
] | 2024-03-22T18:22:04Z | 2024-04-30T16:19:19Z | null | severo |
huggingface/optimum-nvidia | 102 | Instructions on how to set TP/PP | https://github.com/huggingface/optimum-nvidia/blob/main/examples/text-generation.py is currently empty in that regard | https://github.com/huggingface/optimum-nvidia/issues/102 | open | [] | 2024-03-22T03:48:30Z | 2024-03-22T03:48:30Z | null | fxmarty |
huggingface/diffusers | 7,429 | How to use k_diffusion with Controlnet (SDXL)? | Dear developer,
I am trying to modify the code of [k_diffusion](https://github.com/huggingface/diffusers/blob/9613576191d8613fc550a1ec286adc4f1fc208ec/src/diffusers/pipelines/stable_diffusion_k_diffusion/pipeline_stable_diffusion_xl_k_diffusion.py#L837) to be compatible with ControlNet.
But I got incorrect results, t... | https://github.com/huggingface/diffusers/issues/7429 | closed | [] | 2024-03-22T03:33:38Z | 2024-04-18T03:25:55Z | null | YoucanBaby |
huggingface/transformers | 29,777 | `MistralAttention`: where is the sliding window | Hi,
I'm trying to understand the implementation of Mistral's attention in `MistralAttention`.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L195
It is my understanding that it should always be using local window attention. In `MistralFlashAttention2` this... | https://github.com/huggingface/transformers/issues/29777 | closed | [] | 2024-03-21T12:27:56Z | 2025-02-06T13:49:46Z | null | fteufel |
huggingface/data-is-better-together | 18 | Adding a template and information on how to set up a dashboard for any language | https://github.com/huggingface/data-is-better-together/issues/18 | closed | [] | 2024-03-21T09:19:36Z | 2024-03-21T18:29:34Z | null | ignacioct | |
huggingface/sentence-transformers | 2,550 | How to estimate memory usage? | I would like to use `sentence-transformers` in a low-end machine (CPU-only) to load pre-trained models, such as `paraphrase-multilingual-MiniLM-L12-v2`, and compute a sentence's embedding.
How to estimate memory usage? Is there any guideline to describe the minimum system requirements for loading pre-trained models? | https://github.com/huggingface/sentence-transformers/issues/2550 | open | [] | 2024-03-20T15:46:56Z | 2024-04-02T15:27:05Z | null | ChenZhongPu |
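A common rule of thumb: resident weight memory is roughly `n_params * bytes_per_param` (4 for fp32, 2 for fp16), plus some headroom for activations and the runtime. A sketch of that estimate (the 1.2 overhead factor is an assumption, not a measured constant, and the cited parameter count should be checked against the model card):

```python
def estimate_model_memory_mb(n_params, dtype_bytes=4, overhead=1.2):
    """Rough memory estimate in MiB: parameters times bytes per parameter
    (4 for fp32, 2 for fp16), times a fudge factor for activations and
    tokenizer/runtime overhead."""
    return n_params * dtype_bytes * overhead / (1024 ** 2)
```

For a MiniLM-L12-class model on the order of 100M parameters, this suggests several hundred MiB in fp32, which matches the intuition that such models run comfortably on modest CPU-only machines.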
huggingface/optimum-quanto | 125 | Is there any plan to add the function to export ONNX for quantized models or to inference on TVM compiler? | https://github.com/huggingface/optimum-quanto/issues/125 | closed | [
"question"
] | 2024-03-20T15:38:44Z | 2024-04-11T09:23:55Z | null | ntkhoa95 | |
huggingface/chat-ui | 947 | The prompt for title generation is not optimal | Hello,
I've noticed that the prompt for title generation is not optimal. For example on my simple message `Hello`... The title I got was `💬 Hello! How can I help you today? Let me know if you have any questions or topics you'd like me to explain. I'll do my best to provide accurate and helpful information. Have a gre... | https://github.com/huggingface/chat-ui/issues/947 | open | [] | 2024-03-20T10:27:11Z | 2024-03-21T18:18:58Z | 5 | ihubanov |
huggingface/pytorch-image-models | 2,114 | By using timm.create, how to download weights from url instead of HF? | I want to use url to load vit_base_patch8_224, and dino from hf_hub, so how can I do this? | https://github.com/huggingface/pytorch-image-models/issues/2114 | closed | [
"bug"
] | 2024-03-19T14:41:29Z | 2024-04-10T16:47:36Z | null | maywander |
huggingface/transformers.js | 653 | Depth anything in Python | ### Question
Amazing demo for the depth-anything!
I want to have a similar point cloud, but in Python, and wondering what's the logic behind your js [implementation](https://github.com/xenova/transformers.js/blob/main/examples/depth-anything-client/main.js).
Specifically:
1. How do you set up the intrinsic mat... | https://github.com/huggingface/transformers.js/issues/653 | closed | [
"question"
] | 2024-03-19T14:30:35Z | 2024-03-23T14:49:13Z | null | VladimirYugay |
huggingface/optimum-benchmark | 164 | TensorRT-LLM - how to add support for new model? | Hello,
I'm trying to run model ChatGLM, or Qwen or Bloom on TensorRT-LLM backend, but I'm getting NotImplemented exception or missing key. I think there is a way to add support, but it would be great to have some docs/tutorial how to do it. | https://github.com/huggingface/optimum-benchmark/issues/164 | closed | [] | 2024-03-19T12:15:16Z | 2024-03-20T08:51:20Z | null | pfk-beta |
huggingface/candle | 1,878 | How to properly implement PT to safetensors conversion | Using a *.pt weight file obtained from PyTorch training, converted first to the *.bin format and then to the *.safetensors format, candle yolov8 reports the error:
Error: cannot find tensor net.b.1.0.bn.running_mean | https://github.com/huggingface/candle/issues/1878 | closed | [] | 2024-03-19T11:51:59Z | 2024-04-06T11:37:24Z | null | EHW-liao |
huggingface/alignment-handbook | 138 | How to select parts to bp in sft | 
As the pic has shown, there are some cases where parts of the GPT's response should not be calculated in backward computing. If I want to achieve this function, what should I do? (or can you realize thi... | https://github.com/huggingface/alignment-handbook/issues/138 | open | [] | 2024-03-19T10:26:49Z | 2024-03-19T10:26:49Z | null | Fu-Dayuan |
huggingface/gsplat.js | 76 | How to start rendering with a local file path? | Hi, thanks for your work!
I am new to JS and want to ask how to start rendering given a local path. I really appreciate any help you can provide. | https://github.com/huggingface/gsplat.js/issues/76 | open | [] | 2024-03-18T07:13:31Z | 2024-04-18T13:14:24Z | null | yifanlu0227 |
huggingface/accelerate | 2,560 | [Multi-GPU training] How to specific backend used in DDP training? | ### System Info
```Shell
.....
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_g... | https://github.com/huggingface/accelerate/issues/2560 | closed | [] | 2024-03-17T01:46:47Z | 2024-05-17T15:06:51Z | null | Luciennnnnnn |
huggingface/swift-transformers | 72 | How to use BertTokenizer? | what is the best way to use the BertTokenizer? its not a public file so I'm not sure whats the best way to use it | https://github.com/huggingface/swift-transformers/issues/72 | closed | [] | 2024-03-16T18:13:36Z | 2024-03-22T10:29:54Z | null | jonathan-goodrx |
huggingface/chat-ui | 934 | What are the rules to create a chatPromptTemplate in .env.local? | We know that chatPromptTemplate for google/gemma-7b-it in .env.local is:
"chatPromptTemplate" : "{{#each messages}}{{#ifUser}}<start_of_turn>user\n{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}}<end_of_turn>\n<start_of_turn>model\n{{/ifUser}}{{#ifAssistant}}{{content}}<end_of_turn... | https://github.com/huggingface/chat-ui/issues/934 | open | [
"question"
] | 2024-03-16T17:51:38Z | 2024-04-04T14:02:20Z | null | houghtonweihu |
huggingface/chat-ui | 933 | Why the chat template of google/gemma-7b-it is invalid josn format in .env.local? | I used the chat template from google/gemma-7b-it in .env.local, shown below:
"chat_template": "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{{ raise_exception('System role not supported') }}{% endif %}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_except... | https://github.com/huggingface/chat-ui/issues/933 | closed | [
"question"
] | 2024-03-15T20:34:11Z | 2024-03-18T13:24:55Z | null | houghtonweihu |
huggingface/diffusers | 7,337 | How to convert multiple piped files into a single SafeTensor file? | How to convert multiple piped files into a single SafeTensor file?
For example, from this address: https://huggingface.co/Vargol/sdxl-lightning-4-steps/tree/main
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
base = "Vargol/sdxl-lightning-4-steps"... | https://github.com/huggingface/diffusers/issues/7337 | closed | [] | 2024-03-15T05:49:01Z | 2024-03-15T06:51:24Z | null | xxddccaa |
huggingface/transformers.js | 648 | `aggregation_strategy` in TokenClassificationPipeline | ### Question
Hello, from Transformers original version they have aggregation_strategy parameter to group the token corresponding to the same entity together in the predictions or not. But in transformers.js version I haven't found this parameter. Is it possible to provide this parameter? I want the prediction result a... | https://github.com/huggingface/transformers.js/issues/648 | closed | [
"question"
] | 2024-03-15T04:07:22Z | 2024-04-10T21:35:42Z | null | boat-p |
huggingface/transformers.js | 646 | Library no longer maintained? | ### Question
1 year has passed since this PR is ready for merge: [Support React Native #118](https://github.com/xenova/transformers.js/pull/118)
Should we do our own fork of xenova/transformers.js ?
| https://github.com/huggingface/transformers.js/issues/646 | closed | [
"question"
] | 2024-03-14T10:37:33Z | 2024-06-10T15:32:41Z | null | pax-k |
huggingface/tokenizers | 1,469 | How to load tokenizer trained by sentencepiece or tiktoken | Hi, does this lib supports loading pre-trained tokenizer trained by other libs, like `sentencepiece` and `tiktoken`? Many models on hf hub store tokenizer in these formats | https://github.com/huggingface/tokenizers/issues/1469 | closed | [
"Stale",
"planned"
] | 2024-03-13T10:22:00Z | 2024-04-30T10:15:32Z | null | jordane95 |
huggingface/transformers.js | 644 | Contribution Question-What's next after run scripts.convert? | ### Question
Hi @xenova I am trying to figure out how to contribute. I am new to huggingface. Just 2 months down the rabbit hole.
I ran
`python -m scripts.convert --quantize --model_id SeaLLMs/SeaLLM-7B-v2`
command
Here is a list of file I got in `models/SeaLLMs/SeaLLM-7B-v2` folder
```
_model_layers.0_s... | https://github.com/huggingface/transformers.js/issues/644 | closed | [
"question"
] | 2024-03-13T08:51:37Z | 2024-04-11T02:33:04Z | null | pacozaa |
huggingface/making-games-with-ai-course | 11 | [UPDATE] Typo in Unit 1, "What is HF?" section. The word "Danse" should be "Dance" | # What do you want to improve?
There is a typo in Unit 1, "What is HF?" section.
The word "Danse" should be "Dance"
- Explain the typo/error or the part of the course you want to improve
There is a typo in Unit 1, "What is HF?" section.
The word "Danse" should be "Dance"
The English spelling doesn't seem t... | https://github.com/huggingface/making-games-with-ai-course/issues/11 | closed | [
"documentation"
] | 2024-03-12T17:12:20Z | 2024-04-18T07:18:12Z | null | PaulForest |
huggingface/transformers.js | 642 | RangeError: offset is out of bounds #601 | ### Question
```
class NsfwDetector {
constructor() {
this._threshold = 0.5;
this._nsfwLabels = [
'FEMALE_BREAST_EXPOSED',
'FEMALE_GENITALIA_EXPOSED',
'BUTTOCKS_EXPOSED',
'ANUS_EXPOSED',
'MALE_GENITALIA_EXPOSED',
'B... | https://github.com/huggingface/transformers.js/issues/642 | closed | [
"question"
] | 2024-03-12T16:47:58Z | 2024-03-13T05:57:23Z | null | vijishmadhavan |
huggingface/chat-ui | 926 | AWS credentials resolution for Sagemaker models | chat-ui is excellent, thanks for all your amazing work here!
I have been experimenting with a model in Sagemaker and am having some issues with the model endpoint configuration. It currently requires credentials to be provided explicitly. This does work, but the ergonomics are not great for our use cases:
- in deve... | https://github.com/huggingface/chat-ui/issues/926 | open | [] | 2024-03-12T16:24:57Z | 2024-03-13T10:30:52Z | 1 | nason |
huggingface/optimum | 1,754 | How to tell whether the backend of ONNXRuntime accelerator is Intel VINO. | According to the [wiki](https://onnxruntime.ai/docs/execution-providers/#summary-of-supported-execution-providers), OpenVINO is one of the ONNXRuntime's execution providers.
I am deploying model on Intel Xeon Gold server, which supports AVX512 and which is compatible with Intel OpenVINO. How could I tell if the acce... | https://github.com/huggingface/optimum/issues/1754 | closed | [] | 2024-03-12T08:54:01Z | 2024-07-08T11:31:13Z | null | ghost |
huggingface/alignment-handbook | 134 | Is there a way to freeze some layers of a model ? | Can we follow the normal way of:
```
for param in model.base_model.parameters():
param.requires_grad = False
``` | https://github.com/huggingface/alignment-handbook/issues/134 | open | [] | 2024-03-12T02:06:03Z | 2024-03-12T02:06:03Z | 0 | shamanez |
huggingface/diffusers | 7,283 | How to load lora trained with Stable Cascade? | I finished a lora traning based on Stable Cascade with onetrainer, but I cannot find a solution to load the load in diffusers pipeline. Anyone who can help me will be appreciated. | https://github.com/huggingface/diffusers/issues/7283 | closed | [
"stale"
] | 2024-03-12T01:33:01Z | 2024-06-29T13:35:45Z | null | zengjie617789 |
huggingface/datasets | 6,729 | Support zipfiles that span multiple disks? | See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
F... | https://github.com/huggingface/datasets/issues/6729 | closed | [
"enhancement",
"question"
] | 2024-03-11T21:07:41Z | 2024-06-26T05:08:59Z | null | severo |
huggingface/candle | 1,834 | How to increase model performance? | Hello all,
I have recently benchmarked completion token time, which is 30ms on an H100. However, with llama.cpp it is 10ms. Because [mistral.rs](https://github.com/EricLBuehler/mistral.rs) is built on Candle, it inherits this performance deficit. In #1680, @guoqingbao said that the Candle implementation is not suita... | https://github.com/huggingface/candle/issues/1834 | closed | [] | 2024-03-11T12:36:45Z | 2024-03-29T20:44:46Z | null | EricLBuehler |
huggingface/transformers.js | 638 | Using an EfficientNet Model - Looking for advice | ### Question
Discovered this project from the recent Syntax podcast episode (which was excellent) - it got my mind racing with different possibilities.
I got some of the example projects up and running without too much issue and naturally wanted to try something a little more outside the box, which of course has l... | https://github.com/huggingface/transformers.js/issues/638 | closed | [
"question"
] | 2024-03-11T01:31:49Z | 2024-03-11T17:42:31Z | null | ozzyonfire |
huggingface/text-generation-inference | 1,636 | Need instructions for how to optimize for production serving (fast startup) | ### Feature request
I suggest better educating developers how to download and optimize the model at build time (in container or in a volume) so that the command `text-generation-launcher` serves as fast as possible.
### Motivation
By default, when running TGI using Docker, the container downloads the model on the fl... | https://github.com/huggingface/text-generation-inference/issues/1636 | closed | [
"Stale"
] | 2024-03-10T22:17:53Z | 2024-04-15T02:49:03Z | null | steren |
huggingface/optimum | 1,752 | Documentation for exporting openai/whisper-large-v3 to ONNX | ### Feature request
Hello, I am exporting the [OpenAI Whisper-large0v3](https://huggingface.co/openai/whisper-large-v3) to ONNX and see it exports several files, most importantly in this case encoder (encoder_model.onnx & encoder_model.onnx.data) and decoder (decoder_model.onnx, decoder_model.onnx.data, decoder_with... | https://github.com/huggingface/optimum/issues/1752 | open | [
"feature-request",
"onnx"
] | 2024-03-10T05:24:36Z | 2024-10-09T09:18:27Z | 10 | mmingo848 |
huggingface/transformers | 29,564 | How to add new special tokens | ### System Info
- `transformers` version: 4.38.0
- Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.2
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.0 (False)
- Tensorf... | https://github.com/huggingface/transformers/issues/29564 | closed | [] | 2024-03-09T22:56:44Z | 2024-04-17T08:03:43Z | null | lordsoffallen |
huggingface/datasets | 6,726 | Profiling for HF Filesystem shows there are easy performance gains to be made | ### Describe the bug
# Let's make it faster
First, an evidence...

Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106... | https://github.com/huggingface/datasets/issues/6726 | open | [] | 2024-03-09T07:08:45Z | 2024-03-09T07:11:08Z | 2 | awgr |
huggingface/alignment-handbook | 133 | Early Stopping Issue when used with ConstantLengthDataset | Hello
I modified the code to include the Constant Length Dataset and it's early stopping at around 15% of the training. This issue doesn't occur when not used with the normal code given. Is there an issue with constant length dataset? I used it with SFTTrainer. | https://github.com/huggingface/alignment-handbook/issues/133 | open | [] | 2024-03-08T23:08:08Z | 2024-03-08T23:08:08Z | 0 | sankydesai |
huggingface/transformers.js | 635 | Failed to process file. and Failed to upload. | ### Question
I am hosting Supabase on Docker in Ubuntu, and I am facing file upload failures on the chatbot-ui. The error messages displayed are "Failed to process file" and "Failed to upload." The console output error messages are as follows:
- POST https://chat.example.com/api/retrieval/process 500 (Internal Serv... | https://github.com/huggingface/transformers.js/issues/635 | closed | [
"question"
] | 2024-03-08T13:07:18Z | 2024-03-08T13:22:57Z | null | chawaa |
huggingface/peft | 1,545 | How to use lora finetune moe model | https://github.com/huggingface/peft/issues/1545 | closed | [] | 2024-03-08T11:45:09Z | 2024-04-16T15:03:39Z | null | Minami-su | |
huggingface/datatrove | 119 | how about make a ray executor to deduplication | - https://github.com/ChenghaoMou/text-dedup/blob/main/text_dedup/minhash_spark.py
- reference:https://github.com/alibaba/data-juicer/blob/main/data_juicer/core/ray_executor.py
- Ray is simpler and faster than Spark
| https://github.com/huggingface/datatrove/issues/119 | closed | [] | 2024-03-08T11:37:13Z | 2024-04-11T12:48:53Z | null | simplew2011 |
huggingface/transformers.js | 634 | For nomic-ai/nomic-embed-text-v1 8192 context length | ### Question
As per document: https://huggingface.co/nomic-ai/nomic-embed-text-v1
Model supports 8192 context length, however, in transformers.js model_max_length: 512.
Any guidance how to use full context (8192) instead of 512? | https://github.com/huggingface/transformers.js/issues/634 | closed | [
"question"
] | 2024-03-08T05:33:39Z | 2025-10-13T04:57:49Z | null | faizulhaque |
huggingface/diffusers | 7,254 | Request proper examples on how to training a diffusion models with diffusers on large scale dataset like LAION | Hi, I do not see any examples in diffusers/examples on how to training a diffusion models with diffusers on large scale dataset like LAION. However, it is important since many works and models is willing integrate their models into diffusers, so if they can train their models in diffusers, it would be more easy when t... | https://github.com/huggingface/diffusers/issues/7254 | closed | [
"stale"
] | 2024-03-08T01:31:33Z | 2024-06-30T05:27:57Z | null | Luciennnnnnn |
huggingface/swift-transformers | 56 | How to get models? | Missing in docu? | https://github.com/huggingface/swift-transformers/issues/56 | closed | [] | 2024-03-07T15:47:54Z | 2025-02-11T11:41:32Z | null | pannous |