| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | 6,721 | Hi, do you know how to load the dataset from a local file now? | Hi, if I want to load the dataset from a local file, how do I specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
| https://github.com/huggingface/datasets/issues/6721 | open | [] | 2024-03-07T13:58:40Z | 2024-03-31T08:09:25Z | null | Gera001 |
huggingface/transformers.js | 633 | Is 'aggregation_strategy' parameter available for token classification pipeline? | ### Question
Hi, I have a question.
From the Hugging Face Transformers documentation, there is an **'aggregation_strategy'** parameter in the token classification pipeline. [Link](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TokenClassificationPipeline.aggregation_strategy)
Need to know in th... | https://github.com/huggingface/transformers.js/issues/633 | open | [
"help wanted",
"good first issue",
"question"
] | 2024-03-07T07:02:55Z | 2024-06-09T15:16:56Z | null | boat-p |
huggingface/swift-coreml-diffusers | 93 | Blocked at "loading" screen - how to reset the app / cache? | After playing a bit with the app, it now stays in the "Loading" state at startup (see screenshot).
I tried to remove the cache in `~/Library/Application Support/hf-diffusion-models` but it just causes a re-download.
How can I reset the app, delete all files created, and start like on a fresh machine again?
Alternati... | https://github.com/huggingface/swift-coreml-diffusers/issues/93 | open | [] | 2024-03-06T12:50:29Z | 2024-03-10T11:24:49Z | null | sebsto |
huggingface/chat-ui | 905 | Fail to create assistant. | I use the docker image chat-ui-db as the frontend, text-generation-inference as the inference backend, and meta-llama/Llama-2-70b-chat-hf as the model. Using the image and model mentioned above, I set up a large language model dialog service on server A. Assume that the IP address of the server A is x.x.x.x.
I use dock... | https://github.com/huggingface/chat-ui/issues/905 | open | [] | 2024-03-06T08:33:03Z | 2024-03-06T08:33:03Z | 0 | majestichou |
huggingface/chat-ui | 904 | Running the project with `npm run dev`, but it does not hot reload. | Am I alone in this issue or are you just developing without hot reload? Does anyone have any ideas on how to resolve it?
**UPDATES:**
It happens whenever you're running it on WSL.
I guess this is an unrelated issue so feel free to close, but would still be nice to know how to resolve this. | https://github.com/huggingface/chat-ui/issues/904 | closed | [] | 2024-03-06T03:34:21Z | 2024-03-06T16:07:11Z | 2 | CakeCrusher |
huggingface/dataset-viewer | 2,550 | More precise dataset size computation | Currently, the Hub uses the `/size` endpoint's `num_bytes_original_files` value to display the `Size of downloaded dataset files` on a dataset's card page. However, this value does not consider a possible overlap between the configs' data files (and simply [sums](https://github.com/huggingface/datasets-server/blob/e4aa... | https://github.com/huggingface/dataset-viewer/issues/2550 | open | [
"question",
"P2"
] | 2024-03-05T22:22:24Z | 2024-05-24T20:59:36Z | null | mariosasko |
huggingface/datasets | 6,719 | Is there any way to solve hanging of IterableDataset using split by node + filtering during inference | ### Describe the bug
I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node` but it is very slow using the IterableDatasetShard in `accelerate` and `transformers`. When I filter after applying `split_dataset... | https://github.com/huggingface/datasets/issues/6719 | open | [] | 2024-03-05T15:55:13Z | 2024-03-05T15:55:13Z | 0 | ssharpe42 |
huggingface/chat-ui | 899 | Bug--Llama-2-70b-chat-hf error: `truncate` must be strictly positive and less than 1024. Given: 3072 | I use the docker image chat-ui-db as the frontend, text-generation-inference as the inference backend, and meta-llama/Llama-2-70b-chat-hf as the model.
In the model field of the .env.local file, I have the following settings
```
MODELS=`[
{
"name": "meta-llama/Llama-2-70b-chat-hf",
"endpoints": [{... | https://github.com/huggingface/chat-ui/issues/899 | open | [
"support",
"models"
] | 2024-03-05T12:27:45Z | 2024-03-06T00:59:10Z | 4 | majestichou |
huggingface/tokenizers | 1,468 | How to convert tokenizers.tokenizer to XXTokenizerFast in transformers? | ### Motivation
I followed the guide [build-a-tokenizer-from-scratch](https://huggingface.co/docs/tokenizers/quicktour#build-a-tokenizer-from-scratch) and got a single tokenizer.json from my corpus. Since I'm not sure if it is compatible with the trainer, I want to convert it back to XXTokenizerFast in transformers.
... | https://github.com/huggingface/tokenizers/issues/1468 | closed | [
"Stale",
"planned"
] | 2024-03-05T06:32:27Z | 2024-07-21T01:57:17Z | null | rangehow |
huggingface/gsplat.js | 71 | How to support VR? | It would be great to be able to use VR on a VR device. | https://github.com/huggingface/gsplat.js/issues/71 | closed | [] | 2024-03-05T05:03:17Z | 2024-03-05T07:55:53Z | null | did66 |
huggingface/tgi-gaudi | 95 | How to use FP8 feature in TGI-gaudi | ### System Info
The FP8 quantization feature has been incorporated into the TGI-Gaudi branch. However, guidance is needed on how to utilize this feature. The process involves running the FP8 quantization through Measurement Mode and Quantization Mode. How to enable FP8 using the TGI 'docker run' command? Could you kin... | https://github.com/huggingface/tgi-gaudi/issues/95 | closed | [] | 2024-03-05T02:50:08Z | 2024-05-06T09:03:15Z | null | lvliang-intel |
huggingface/accelerate | 2,521 | how to set `num_processes` in multi-node training | Is it the total number of GPUs, or the number of GPUs on a single node?
I have seen contradictory signals in the code.
https://github.com/huggingface/accelerate/blob/ee004674b9560976688e1a701b6d3650a09b2100/docs/source/usage_guides/ipex.md?plain=1#L139 https://github.com/huggingface/accelerate/blob/ee004674b9560976688e... | https://github.com/huggingface/accelerate/issues/2521 | closed | [] | 2024-03-04T13:03:57Z | 2025-12-22T01:53:32Z | null | lxww302 |
huggingface/distil-whisper | 95 | How to use distil-whisper-large-v3-de-kd model from HF? | Officially, multi-language support is still not implemented in distil-whisper.
But I noticed that the esteemed @sanchit-gandhi uploaded a German model for distil-whisper to HuggingFace, called 'distil-whisper-large-v3-de-kd'.
How can I use this specific model for transcribing something? | https://github.com/huggingface/distil-whisper/issues/95 | open | [] | 2024-03-04T12:01:13Z | 2024-04-02T09:40:46Z | null | Arche151 |
huggingface/transformers.js | 623 | Converted QA model answers in lower case, original model does not. What am I doing wrong? | ### Question
I have converted [deutsche-telekom/electra-base-de-squad2](https://huggingface.co/deutsche-telekom/electra-base-de-squad2) to ONNX using ```python -m scripts.convert --quantize --model_id deutsche-telekom/electra-base-de-squad2```. The ONNX model, used with the same code, yields returns in lower case, whe... | https://github.com/huggingface/transformers.js/issues/623 | open | [
"question"
] | 2024-03-04T11:56:44Z | 2024-03-04T11:56:44Z | null | MarceloEmmerich |
huggingface/transformers.js | 618 | How do I convert a DistilBERT Model to Quantized ONNX - | ### Question
Note, https://huggingface.co/docs/transformers.js/en/index#convert-your-models-to-onnx is a broken link.
I have a simple DistilBERT model I'm trying to load with the examples/next-server (wdavies/public-question-in-text)
I tried the simplest version of converting to ONNX (wdavies/public-onnx-test f... | https://github.com/huggingface/transformers.js/issues/618 | closed | [
"question"
] | 2024-03-01T16:55:16Z | 2024-03-02T00:47:40Z | null | davies-w |
huggingface/sentence-transformers | 2,521 | Is the implementation of `MultipleNegativesRankingLoss` right? | It is confusing why the labels are `range(len(scores))`.
```python
class MultipleNegativesRankingLoss(nn.Module):
def __init__(self, model: SentenceTransformer, scale: float = 20.0, similarity_fct=util.cos_sim):
super(MultipleNegativesRankingLoss, self).__init__()
self.model = model
se... | https://github.com/huggingface/sentence-transformers/issues/2521 | closed | [
"question"
] | 2024-03-01T10:13:35Z | 2024-03-04T07:01:12Z | null | ghost |
huggingface/text-embeddings-inference | 178 | How to specify a local model | ### Feature request
model=BAAI/bge-reranker-large
volume=$PWD/data
docker run -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.0 --model-id $model
### Motivation
model=BAAI/bge-reranker-large
volume=$PWD/data
docker run -p 8080:80 -v $volume:/data --pull always ghcr.i... | https://github.com/huggingface/text-embeddings-inference/issues/178 | closed | [] | 2024-03-01T09:40:07Z | 2024-03-01T16:54:27Z | null | yuanjie-ai |
huggingface/chat-ui | 889 | How does huggingchat prompt the model to generate HTML output? | How does Huggingchat prompt the LLM to generate HTML output? Where can I find that prompt? I'd like to tweak it. thanks! | https://github.com/huggingface/chat-ui/issues/889 | open | [] | 2024-02-29T17:20:01Z | 2024-03-05T18:45:56Z | null | vgoklani |
huggingface/chat-ui | 888 | Code LLAMA doesn't work | I am simply entering this prompt:
```
You're given the following regex in python: \| *([^|]+?) *\|
This captures text values in markdown tables but fails to capture numbers. Update this regex to capture numbers as well
```
Then what happens is that one core of my CPU is used at 100% for at least 5 mins until ... | https://github.com/huggingface/chat-ui/issues/888 | closed | [] | 2024-02-29T12:44:20Z | 2025-01-01T11:54:48Z | 1 | lordsoffallen |
huggingface/text-generation-inference | 1,615 | How to use the grammar support feature? | ### Feature request

Can you please clarify how we can use this? What is it for?
### Motivation

In the text classification example of transformers v4.38.1, the columns are not removed.
h... | https://github.com/huggingface/datasets/issues/6700 | closed | [] | 2024-02-28T12:36:22Z | 2024-04-02T17:15:28Z | 3 | shelfofclub |
huggingface/optimum | 1,729 | tflite support for gemma | ### Feature request
As per the title: are there plans to support Gemma in TFLite?
### Motivation
necessary format for current work
### Your contribution
no | https://github.com/huggingface/optimum/issues/1729 | closed | [
"feature-request",
"tflite",
"Stale"
] | 2024-02-27T17:15:54Z | 2025-01-19T02:04:34Z | 2 | Kaya-P |
huggingface/huggingface_hub | 2,051 | How to edit the cache dir, and on a bad network how to resume a download from the last download point | OSError: Consistency check failed: file should be of size 1215993967 but has size 118991296 (pytorch_model.bin).
We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument.
If the issue persists, please let us know by opening an issue on https://github.... | https://github.com/huggingface/huggingface_hub/issues/2051 | closed | [] | 2024-02-27T14:45:10Z | 2024-02-27T15:59:35Z | null | caihua |
huggingface/candle | 1,769 | [Question] How to modify Mistral to enable multiple batches? | Hello everybody,
I am attempting to implement multiple batches for the Mistral forward pass. However, the `forward` method takes an argument `seqlen_offset` which seems to be specific to the batch. I have attempted to implement it with a `position_ids` tensor in [this](https://github.com/EricLBuehler/mistral.rs/blob... | https://github.com/huggingface/candle/issues/1769 | closed | [] | 2024-02-27T13:18:18Z | 2024-03-01T14:01:21Z | null | EricLBuehler |
huggingface/datatrove | 108 | How to load a dataset with the output of a tokenizer? | I planned to use datatrove to apply my tokenizer so that the data is ready to use with nanotron.
I am using DocumentTokenizer[Merger] which produces *.ds and *ds.index binary files, although, from what I understood, nanotron is expecting datasets (with "input_ids" keys).
I see that things like ParquetWriter cannot be pip... | https://github.com/huggingface/datatrove/issues/108 | closed | [] | 2024-02-27T08:58:09Z | 2024-05-07T12:33:47Z | null | Jeronymous |
huggingface/chat-ui | 875 | Difficulty configuring multiple instances of the same model with distinct parameters | I am currently self-deploying an application that requires setting up multiple instances of the same model, each configured with different parameters. For example:
```
MODELS=`[{
"name": "gpt-4-0125-preview",
"displayName": "GPT 4",
"endpoints" : [{
"type": "openai"
}]
},
{
... | https://github.com/huggingface/chat-ui/issues/875 | open | [] | 2024-02-26T10:48:43Z | 2024-02-27T17:28:21Z | 1 | mmtpo |
huggingface/optimum-nvidia | 76 | How to install optimum-nvidia properly without building a docker image | It's quite hard for me to build a docker image, so I started from a docker environment with TensorRT LLM 0.6.1 inside.
I checked your dockerfile, followed the process, and built TensorRT LLM using (I am using 4090 so that cuda arch is 89):
```
python3 scripts/build_wheel.py -j --trt_root /usr/local/tensorrt --py... | https://github.com/huggingface/optimum-nvidia/issues/76 | closed | [] | 2024-02-26T05:05:24Z | 2024-03-11T13:36:18Z | null | Yuchen-Cao |
huggingface/diffusers | 7,088 | Vague error: `ValueError: With local_files_only set to False, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.` how to fix? | Trying to convert a .safetensors stable diffusion model to whatever format Hugging Face requires. It throws a vague non sequitur of an error:
`pipe = diffusers.StableDiffusionPipeline.from_single_file(str(aPathlibPath/"vodkaByFollowfoxAI_v40.safetensors") )`
```...
[1241](file:///C:/Users/openSourc... | https://github.com/huggingface/diffusers/issues/7088 | closed | [
"stale",
"single_file"
] | 2024-02-25T15:03:07Z | 2024-09-17T21:56:26Z | null | openSourcerer9000 |
huggingface/diffusers | 7,085 | how to train controlnet with lora? | Training the full ControlNet needs a lot of resources and time, so how can I train a ControlNet with LoRA?
| https://github.com/huggingface/diffusers/issues/7085 | closed | [
"should-move-to-discussion"
] | 2024-02-25T06:31:47Z | 2024-03-03T06:38:35Z | null | akk-123 |
huggingface/optimum-benchmark | 138 | How to set trt llm backend parameters | I am trying to run the trt_llama example: https://github.com/huggingface/optimum-benchmark/blob/main/examples/trt_llama.yaml
It seems optimum-benchmark will automatically transform the Hugging Face model into an inference engine file and then benchmark its performance. When we use TensorRT-LLM, there is a model "build" pro... | https://github.com/huggingface/optimum-benchmark/issues/138 | closed | [] | 2024-02-24T17:12:12Z | 2024-02-27T12:48:44Z | null | Yuchen-Cao |
huggingface/optimum-nvidia | 75 | How to build this environment without docker? | My computer does not support the use of docker. How do I deploy this environment on my computer? | https://github.com/huggingface/optimum-nvidia/issues/75 | open | [] | 2024-02-24T16:59:37Z | 2024-03-06T13:45:18Z | null | lemon-little |
huggingface/accelerate | 2,485 | How to log information into a local logging file? | ### System Info
```Shell
Hi, I want to save a copy of the logs to a local file; how can I achieve this? Specifically, I want accelerator.log to also write information to my local file.
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ fo... | https://github.com/huggingface/accelerate/issues/2485 | closed | [] | 2024-02-24T07:52:55Z | 2024-04-03T15:06:24Z | null | Luciennnnnnn |
huggingface/optimum-benchmark | 136 | (question) When I use the memory tracking feature on the GPU, I find that my VRAM is reported as 0. Is this normal, and what might be causing it? | 
| https://github.com/huggingface/optimum-benchmark/issues/136 | closed | [] | 2024-02-24T02:57:49Z | 2024-03-08T16:59:41Z | null | WCSY-YG |
huggingface/optimum | 1,716 | Optimum for Jetson Orin Nano | ### System Info
```shell
optimum version: 1.17.1
platform: Jetson Orin Nano, Jetpack 6.0
Python: 3.10.13
CUDA: 12.2
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such ... | https://github.com/huggingface/optimum/issues/1716 | open | [
"bug"
] | 2024-02-23T23:22:08Z | 2024-02-26T10:03:59Z | 1 | JunyiYe |
huggingface/transformers | 29,244 | Google Gemma doesn't know what 1+1 is equal to? | ### System Info
[v4.38.1](https://github.com/huggingface/transformers/releases/tag/v4.38.1)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ..... | https://github.com/huggingface/transformers/issues/29244 | closed | [] | 2024-02-23T12:16:17Z | 2024-03-07T10:54:09Z | null | zhaoyun0071 |
huggingface/optimum | 1,713 | Issue converting owlv2 model to ONNX format | Hi Team,
I hope this message finds you well.
I've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:
`! optimum-cli export onnx -m google/owlv2-base-patch16 --task 'zero-shot-object-detection' --framework 'pt' owlv2_onnx`
... | https://github.com/huggingface/optimum/issues/1713 | closed | [
"feature-request",
"onnx",
"exporters"
] | 2024-02-23T05:55:23Z | 2025-09-10T23:26:13Z | 6 | n9s8a |
huggingface/optimum-benchmark | 135 | How to import and use the quantized model with AutoGPTQ? | | https://github.com/huggingface/optimum-benchmark/issues/135 | closed | [] | 2024-02-23T03:13:28Z | 2024-02-23T05:03:06Z | null | jhrsya |
huggingface/optimum | 1,710 | Native Support for Gemma | ### System Info
```shell
python version : 3.10.12
optimum version : built from github
openvino : 2024.1.0-14548-688c71ce0ed
transformers : 4.38.1
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially suppo... | https://github.com/huggingface/optimum/issues/1710 | closed | [
"feature-request",
"onnx",
"exporters"
] | 2024-02-22T17:15:08Z | 2024-02-28T08:37:36Z | 5 | Kaya-P |
huggingface/sentence-transformers | 2,499 | How can I save a fine-tuned cross-encoder to HF and then download it from HF | I'm looking for ways to share a fine-tuned cross-encoder with my teacher.
The cross-encoder model does not have a native push_to_hub() method, so I decided to use a general approach:
```
from transformers import AutoModelForSequenceClassification
import torch
# read from disk, model was saved as ft_model.save("model/cr... | https://github.com/huggingface/sentence-transformers/issues/2499 | closed | [
"good first issue"
] | 2024-02-22T15:29:37Z | 2025-03-25T16:07:25Z | null | satyrmipt |
huggingface/transformers | 29,214 | How to get input embeddings from PatchTST with (batch_size, sequence_length, hidden_size) dimensions | ### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following sni... | https://github.com/huggingface/transformers/issues/29214 | open | [
"Feature request"
] | 2024-02-22T14:17:10Z | 2024-03-25T03:56:58Z | null | nikhilajoshy |
huggingface/huggingface_hub | 2,039 | How to find out the type of files in the repository | Hello
Is there an option to determine the type of file in the repository, such as "Checkpoint", "LORA", "Textual_Inversion", etc?
I didn't know where to ask the question so sorry if I'm wrong. | https://github.com/huggingface/huggingface_hub/issues/2039 | closed | [] | 2024-02-22T01:41:29Z | 2024-03-25T11:39:31Z | null | suzukimain |
huggingface/datasets | 6,686 | Question: Is there any way to upload a large image dataset? | I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_si... | https://github.com/huggingface/datasets/issues/6686 | open | [] | 2024-02-21T22:07:21Z | 2024-05-02T03:44:59Z | 1 | zhjohnchan |
huggingface/accelerate | 2,474 | how to turn off fp16 auto_cast? | I notice that the DeepSpeed config always sets my `auto_cast=True`, and this is my config:
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
deepspeed_multinode_launcher: standard
gradient_clipping: 1.0
offload_optimizer_device: cpu
offload_param_device: cpu
zero3_offload_param_pin_memory: true
... | https://github.com/huggingface/accelerate/issues/2474 | closed | [] | 2024-02-21T11:54:51Z | 2025-02-18T08:53:20Z | null | haorannlp |
huggingface/chat-ui | 852 | what is the difference between "chat-ui-db" docker image and "chat-ui" docker image? | I found there are 2 packages in the chat-ui repository: one is chat-ui and the other is chat-ui-db. what is the difference between "chat-ui-db" docker image and "chat-ui" docker image?
I've pulled two images from the mirror site: huggingface/text-generation-inference:1.4 and mongo:latest.
I hope to use the two i... | https://github.com/huggingface/chat-ui/issues/852 | closed | [] | 2024-02-21T09:31:07Z | 2024-02-23T02:58:03Z | null | majestichou |
huggingface/instruction-tuned-sd | 22 | How to use a custom image for validation | Hello,
I tried using a custom image for validation since I'm training on a custom style. I uploaded my validation image to the Hub as mountain.png, but it always gives me an error about an unidentified image. For mountain.png it shows a validation summary on wandb, but for my validation image it shows nothing.
Do I need to change something s... | https://github.com/huggingface/instruction-tuned-sd/issues/22 | closed | [] | 2024-02-21T08:15:30Z | 2024-02-22T05:49:11Z | null | roshan2024nar |
huggingface/gsplat.js | 67 | How to set the background color of the scene | Hi:
I want to know how to set the background color of the scene; right now it's black. | https://github.com/huggingface/gsplat.js/issues/67 | open | [] | 2024-02-21T05:49:33Z | 2024-02-26T09:32:25Z | null | jamess922 |
huggingface/gsplat.js | 66 | How to adjust the axis of rotation? | When the model's z-axis is not perpendicular to the ground plane, the rotation effect may feel unnatural, as is the case with this model: testmodel.splat.
[testmodel.zip](https://github.com/huggingface/gsplat.js/files/14353919/testmodel.zip)
I would like to rotate the model along an axis that is perpendicular ... | https://github.com/huggingface/gsplat.js/issues/66 | closed | [] | 2024-02-21T04:13:01Z | 2024-02-23T02:37:59Z | null | gotoeasy |
huggingface/sentence-transformers | 2,494 | How to get an embedding vector when the input is already tokenized | First, thank you so much for sentence-transformers.
How do I get an embedding vector when the input is already tokenized?
I guess sentence-transformers can `.encode(original text)`.
But I want to know whether there is a way like `.encode(token_ids)` or `.encode(token_ids, attention_masks)`.
This is my background b... | https://github.com/huggingface/sentence-transformers/issues/2494 | open | [] | 2024-02-20T22:38:18Z | 2024-02-23T10:01:07Z | null | sogmgm |
huggingface/optimum | 1,703 | How can I export an ONNX model for Qwen/Qwen-7B? | ### Feature request
I need to export the Qwen model to accelerate inference.
```optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code```
### Motivation
I want to export the model qwen to use onnxruntime
### Your contribution
I can give the input and output. | https://github.com/huggingface/optimum/issues/1703 | open | [
"onnx"
] | 2024-02-20T13:22:08Z | 2024-02-26T13:19:19Z | 1 | smile2game |
huggingface/accelerate | 2,463 | How to initialize Accelerator twice but with different setups within the same code? | ### System Info
```Shell
Hello, I want to initialize Accelerate once for training and another time for inference.
It looks like it does not work, and the error message is not clear. Is there a way to reset the previously initialized Accelerate and then initialize it with the inference setup?
For training I am doi... | https://github.com/huggingface/accelerate/issues/2463 | closed | [] | 2024-02-20T13:17:26Z | 2024-03-30T15:06:15Z | null | soneyahossain |
huggingface/chat-ui | 840 | Llama.cpp error - "String must contain at least 1 character(s)" | I keep getting this error after adding a llama.cpp inference endpoint locally. Adding this line causes the error.
```
"endpoints": [
{
"url": "http://localhost:8080",
"type": "llamacpp"
}
]
```
Not sure how to fix it.
```
[
{
"code": "too_small",
"min... | https://github.com/huggingface/chat-ui/issues/840 | open | [
"bug",
"models"
] | 2024-02-19T13:33:24Z | 2024-02-22T14:51:48Z | 2 | szymonrucinski |
huggingface/datatrove | 93 | Tokenization for Non English data | Hi HF team
I want to thank you for this incredible work.
I have a question: I want to apply a deduplication pipeline to Arabic data.
For this I think I should change the tokenizer. If so, is there a tip for this?
Should I just edit the tokenizer here:
`class SentenceDedupFilter(PipelineStep):
... | https://github.com/huggingface/datatrove/issues/93 | closed | [
"question"
] | 2024-02-19T11:02:04Z | 2024-04-11T12:47:24Z | null | Manel-Hik |
huggingface/safetensors | 443 | Efficient key-wise streaming | ### Feature request
I'm interested in streaming the tensors in a model key by key without having to hold all keys at the same time in memory. Something like this:
```python
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
for key in f.keys():
tensor = f.get_tensor(stream=True... | https://github.com/huggingface/safetensors/issues/443 | closed | [
"Stale"
] | 2024-02-18T23:22:09Z | 2024-04-17T01:47:28Z | 4 | ljleb |
huggingface/community-events | 200 | How to prepare an audio dataset for Whisper fine-tuning with timestamps? | I am trying to prepare a dataset for Whisper fine-tuning, and I have a lot of small segment clips, most of them less than 6 seconds. I read the paper but didn't understand this paragraph:
“ When a final transcript segment is only partially included in the current 30- second audio chunk, we predict only its start t... | https://github.com/huggingface/community-events/issues/200 | open | [] | 2024-02-18T19:50:33Z | 2024-02-18T19:55:06Z | null | omarabb315 |
huggingface/diffusers | 7,010 | How to set export HF_HOME on Kaggle? | Kaggle temporary disk is slow once again and I want models to be downloaded into working directory.
I have used the command below but it didn't work. Which command do I need?
`!export HF_HOME="/kaggle/working"`
| https://github.com/huggingface/diffusers/issues/7010 | closed | [
"bug"
] | 2024-02-18T11:15:21Z | 2024-02-18T14:39:08Z | null | FurkanGozukara |
huggingface/optimum-benchmark | 126 | How to obtain the data from the 'forward' and 'generate' stages? | I used the same configuration file to test the model, but the results obtained are different from those of a month ago. In the result files from a month ago, data from both the forward and generate stages were included; however, the current generated result files only contain information from the prefill and decode sta... | https://github.com/huggingface/optimum-benchmark/issues/126 | closed | [] | 2024-02-18T09:48:44Z | 2024-02-19T16:06:24Z | null | WCSY-YG |
huggingface/chat-ui | 838 | Explore the possibility for chat-ui to use the OpenAI assistants API structure. | Hi @nsarrazin, I wanted to explore how we could collaborate in making chat-ui work more with OpenAI standards, making it less opinionated about the hosted inference provider. I need it as I am part of a team open-sourcing the GPTs platform https://github.com/OpenGPTs-platform and we will be leveraging chat-ui as the c... | https://github.com/huggingface/chat-ui/issues/838 | open | [
"enhancement",
"good first issue",
"back"
] | 2024-02-17T21:39:49Z | 2024-12-26T05:55:47Z | 4 | CakeCrusher |
huggingface/candle | 1,720 | How to define custom ops with an arbitrary number of tensors? | I dug into the issues and the repo on this subject because I wanted to be able to call CUDA kernels for 3D Gaussian splatting, and the way to invoke those kernels seems to be custom ops. But right now, we only have
```
CustomOp1(Tensor, std::sync::Arc<Box<dyn CustomOp1 + Send + Sync>>),
CustomOp2(
... | https://github.com/huggingface/candle/issues/1720 | open | [] | 2024-02-16T21:38:16Z | 2024-03-13T13:44:17Z | null | jeanfelixM |
huggingface/chat-ui | 837 | Cannot find assistants UI in the repo | Hi @nsarrazin I recently cloned the chat-ui and I noticed that the new assistants ui is missing, at the very least from the main branch.
Is the assistants UI in the repo somewhere?
If not is there any plans on making it open-source?
If so when? | https://github.com/huggingface/chat-ui/issues/837 | closed | [] | 2024-02-16T20:13:39Z | 2024-02-17T21:29:08Z | 4 | CakeCrusher |
huggingface/dataset-viewer | 2,456 | Link to the endpoint doc page in case of error? | eg. https://datasets-server.huggingface.co/parquet
could return
```json
{"error":"Parameter 'dataset' is required. Read the docs at https://huggingface.co/docs/datasets-server/parquet"}
```
or
```json
{"error":"Parameter 'dataset' is required.", "docs": "https://huggingface.co/docs/datasets-server/parqu... | https://github.com/huggingface/dataset-viewer/issues/2456 | open | [
"documentation",
"question",
"api",
"P2"
] | 2024-02-15T11:11:44Z | 2024-02-15T11:12:12Z | null | severo |
huggingface/gsplat.js | 64 | How to render from a set of camera positions? | Hi, I am trying to render the scene from a set of camera positions/rotations that I load from a JSON file.
I think the right way is first to disable the "orbitControls" (engine.orbitControls.enabled = false;) and then set the camera position/rotation manually like this: 'camera.data.update(position, rotation);'. Am I... | https://github.com/huggingface/gsplat.js/issues/64 | closed | [] | 2024-02-14T16:11:28Z | 2024-02-19T18:13:38Z | null | vahidEtt |
huggingface/chat-ui | 824 | what port is used by the websearch? | I put the chat in a container in a cluster with my MongoDB.
The web search stopped working. I think it might be related to me not opening a port for the web search to access the web, and I could not find a doc that describes how the web search works.
I would love to know what port(s) I should open, and a bit more details in ... | https://github.com/huggingface/chat-ui/issues/824 | open | [
"support",
"websearch"
] | 2024-02-14T11:15:22Z | 2024-02-14T12:52:25Z | null | kaplanyaniv |
huggingface/transformers.js | 586 | Does `WEBGPU` Truly Enhance Inference Time Acceleration? | ### Question
Recently, I've been extensively utilizing transformers.js to load transformer models, and Kudos to the team for this wonderful library ...
Specifically, I've been experimenting with version 2.15.0 of transformers.js.
Despite the fact that the model runs on the `web-assembly backend`, I've noticed ... | https://github.com/huggingface/transformers.js/issues/586 | closed | [
"question"
] | 2024-02-14T09:23:52Z | 2024-10-18T13:30:13Z | null | kishorekaruppusamy |
huggingface/chat-ui | 823 | WebSearch uses the default model instead of current model selected | I have multiple models in my .env.local and it seems the WebSearch uses the default model to perform its search content extraction instead of the currently selected model (the one that I'm asking the question to...) Is it possible to add a config option to use same model for everything? | https://github.com/huggingface/chat-ui/issues/823 | open | [
"enhancement",
"back",
"models"
] | 2024-02-14T07:52:59Z | 2024-02-14T13:07:20Z | 4 | ihubanov |
huggingface/trl | 1,327 | how to save/load model? | I've tried save model via:
ppo_trainer.save_pretrained("./model_after_rl")
and load the model via:
model = AutoModelForCausalLMWithValueHead.from_pretrained("./model_after_rl")
ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("./model_after_rl")
But the performance is the same as without any reinf... | https://github.com/huggingface/trl/issues/1327 | closed | [] | 2024-02-14T06:56:07Z | 2024-04-24T15:05:14Z | null | ADoublLEN |
huggingface/accelerate | 2,440 | How to properly gather results of PartialState for inference on 4xGPUs | ### System Info
```Shell
torch==2.2.0
transformers==4.37.2
accelerate==0.27.0
```
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `example... | https://github.com/huggingface/accelerate/issues/2440 | closed | [] | 2024-02-13T14:00:13Z | 2024-03-23T15:07:26Z | null | ZeusFSX |
huggingface/chat-ui | 818 | Settings Page Freezes | When I go to settings to change model (after I ran a convo with a model), the UI settings page can't be closed. It freezes. Right now I have to keep reloading the page to use it | https://github.com/huggingface/chat-ui/issues/818 | closed | [
"question",
"support"
] | 2024-02-13T13:30:01Z | 2024-02-16T09:41:23Z | null | lordsoffallen |
huggingface/candle | 1,701 | How to train my own YOLOv8 model? | Candle provides an example of YOLOv8, which is very useful to use.
But I don't know how to train it on my own dataset. Can Candle directly load a model trained by PyTorch? | https://github.com/huggingface/candle/issues/1701 | open | [] | 2024-02-13T01:56:49Z | 2024-03-18T13:45:07Z | null | mzdk100 |
huggingface/transformers.js | 585 | Using a server backend to generate masks - doublelotus | ### Question
Hi there, just continuing on from my question on - https://huggingface.co/posts/Xenova/240458016943176#65ca9d9c8e0d94e48742fad7.
I've just been reading through your response; initially I was trying it using a Python backend and attempted to mimic the worker.js code like so:
```py
from transfo... | https://github.com/huggingface/transformers.js/issues/585 | open | [
"question"
] | 2024-02-13T00:06:20Z | 2024-02-28T19:29:26Z | null | jeremiahmark |
huggingface/chat-ui | 817 | Question: Can someone explain "public app data sharing with model authors" please? | I am struggling to understand in which way data can or is actually shared with whom when the setting `shareConversationsWithModelAuthors` is activated (which it is by default)?
```javascript
{#if PUBLIC_APP_DATA_SHARING === "1"}
<!-- svelte-ignore a11y-label-has-associated-control -->
<label class="flex items-cen... | https://github.com/huggingface/chat-ui/issues/817 | closed | [
"question"
] | 2024-02-12T19:18:03Z | 2024-02-16T14:32:18Z | null | TomTom101 |
huggingface/transformers.js | 581 | How can we use the sam-vit-huge in the production? | ### Question
The size of ONNX files for sam-vit-huge is around 600MB. If I am using the implementation mentioned in the documentation, it downloads these files first before performing the image segmentation. Is there a better way to avoid downloading these files and reduce the time it takes? Additionally, the model is... | https://github.com/huggingface/transformers.js/issues/581 | open | [
"question"
] | 2024-02-09T17:54:43Z | 2024-02-09T17:54:43Z | null | moneyhotspring |
huggingface/dataset-viewer | 2,434 | Create a new step: `config-features`? | See https://github.com/huggingface/datasets-server/issues/2215: the `features` part can be heavy, and on the Hub, when we call /rows, /filter or /search, the features content does not change; there is no need to create / serialize / transfer / parse it.
We could:
- add a new /features endpoint
- or add a `features... | https://github.com/huggingface/dataset-viewer/issues/2434 | open | [
"question",
"refactoring / architecture",
"P2"
] | 2024-02-09T14:13:10Z | 2024-02-15T10:26:35Z | null | severo |
huggingface/diffusers | 6,920 | How to merge a lot of embeddings into a single file | I created a lot of embeddings through textual inversion, but I couldn't find a way to merge these checkpoints.
| https://github.com/huggingface/diffusers/issues/6920 | open | [
"stale"
] | 2024-02-09T08:18:42Z | 2024-03-13T15:02:51Z | null | Eggwardhan |
huggingface/transformers | 28,924 | How to disable log history from getting printed every logging_steps | I'm writing a custom ProgressCallback that modifies the original ProgressCallback transformers implementation and adds some additional information/data to the tqdm progress bar. Here's what I have so far, and it works nicely and as intended.
```python
class ProgressCallback(TrainerCallback):
"""A [`TrainerCall... | https://github.com/huggingface/transformers/issues/28924 | closed | [] | 2024-02-08T10:23:28Z | 2024-02-08T17:26:02Z | null | arnavgarg1 |
huggingface/alignment-handbook | 120 | (QLoRA) DPO without previous SFT | Because of the following LLM-Leaderboard measurements, I want to perform QLoRA DPO without previous QLoRA SFT:
```
alignment-handbook/zephyr-7b-dpo-qlora: +Average: 63.51; +ARC 63.65; +HSwag 85.35; -+MMLU 63.82; ++TQA: 47.14; (+)Win 79.01; +GSM8K 42.08;
alignment-handbook/zephyr-7b-sft-qlora: -Averag... | https://github.com/huggingface/alignment-handbook/issues/120 | open | [] | 2024-02-08T09:56:50Z | 2024-02-09T22:15:10Z | 1 | DavidFarago |
huggingface/transformers.js | 577 | Getting 'fs is not defined' when trying the latest "background removal" functionality in the browser? | ### Question
I copied the code from https://github.com/xenova/transformers.js/blob/main/examples/remove-background-client/main.js to here, but I'm getting this error with v2.15.0 of @xenova/transformers.js:
```
Uncaught ReferenceError: fs is not defined
at env.js:36:31
at [project]/node_modules/.pnpm/@... | https://github.com/huggingface/transformers.js/issues/577 | open | [
"question"
] | 2024-02-08T04:34:59Z | 2024-11-26T05:20:22Z | null | lancejpollard |
huggingface/transformers.js | 575 | Can GPU acceleration be used when using this library in a node.js environment? | ### Question
Hello, I have looked into the GPU support related issue, but all mentioned content is related to webGPU. May I ask if GPU acceleration in the node.js environment is already supported? Refer: https://github.com/microsoft/onnxruntime/tree/main/js/node | https://github.com/huggingface/transformers.js/issues/575 | closed | [
"question"
] | 2024-02-07T03:37:50Z | 2025-01-20T15:05:00Z | null | SchneeHertz |
huggingface/dataset-viewer | 2,408 | Add task tags in /hub-cache? | On the same model as https://github.com/huggingface/datasets-server/pull/2386, detect and associate tags to a dataset to describe the tasks it can be used for.
Previously discussed at https://github.com/huggingface/datasets-server/issues/561#issuecomment-1250029425 | https://github.com/huggingface/dataset-viewer/issues/2408 | closed | [
"question",
"feature request",
"P2"
] | 2024-02-06T11:17:19Z | 2024-06-19T15:43:15Z | null | severo |
huggingface/dataset-viewer | 2,407 | Remove env var HF_ENDPOINT? | Is it still required to set HF_ENDPOINT as an environment variable?
https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/resources.py#L41-L45
| https://github.com/huggingface/dataset-viewer/issues/2407 | closed | [
"duplicate",
"question",
"refactoring / architecture",
"P2"
] | 2024-02-06T11:11:24Z | 2024-02-06T14:53:12Z | null | severo |
huggingface/chat-ui | 786 | Can't get Mixtral to work with web-search | I have been following this project for a while and recently tried setting up oobabooga Mixtral-8x7b
I used the official prompt template used in huggingface.co :
```
<s> {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifA... | https://github.com/huggingface/chat-ui/issues/786 | open | [] | 2024-02-06T07:14:08Z | 2024-02-16T10:45:40Z | 2 | iChristGit |
huggingface/dataset-viewer | 2,402 | Reduce resources for /filter and /search? | They have nearly 0 traffic. https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-6h&to=now
Should we reduce the number of pods? How to configure the right level? | https://github.com/huggingface/dataset-viewer/issues/2402 | closed | [
"question",
"infra",
"P2",
"prod"
] | 2024-02-05T21:44:56Z | 2024-02-28T17:55:50Z | null | severo |
huggingface/dataset-viewer | 2,390 | Store the repo visibility (public/private) to filter webhooks | See https://github.com/huggingface/datasets-server/pull/2389#pullrequestreview-1862425050
Not sure if we want to do it, or wait for the Hub to provide more finely scoped webhooks. See also #2208, where we wanted to store metadata about the datasets. | https://github.com/huggingface/dataset-viewer/issues/2390 | closed | [
"question",
"P2"
] | 2024-02-05T12:37:30Z | 2024-06-19T15:37:36Z | null | severo |
huggingface/transformers.js | 567 | Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order. | ### Question
Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order. | https://github.com/huggingface/transformers.js/issues/567 | open | [
"question"
] | 2024-02-05T11:12:34Z | 2024-02-05T11:12:34Z | null | a414166402 |
huggingface/transformers.js | 565 | How can I use this model for image matting? | ### Question
https://github.com/ZHKKKe/MODNet?tab=readme-ov-file
They have an ONNX file and the Python CLI usage looks simple, but I can't figure out how to use it with transformers.js.
```
!python -m demo.image_matting.colab.inference \
--input-path demo/image_matting/colab/input \
--output-path demo/image... | https://github.com/huggingface/transformers.js/issues/565 | closed | [
"question"
] | 2024-02-05T09:28:28Z | 2024-02-07T11:33:26Z | null | cyio |
huggingface/transformers.js | 564 | Can models from a user's disk load and run in my HF Space? | ### Question
I'm fiddling around with the react-translator template.
What I have accomplished so far:
- Run local (on disk in public folder) model in localhost webapp.
- Run hosted (on HF) model in localhost webapp.
- Run hosted (on HF) model in HF Space webapp.
What I want to accomplish but can't figure out:
... | https://github.com/huggingface/transformers.js/issues/564 | closed | [
"question"
] | 2024-02-05T08:00:55Z | 2024-06-07T01:17:24Z | null | saferugdev |
huggingface/transformers | 28,860 | Question: How do LLMs learn to be "Generative", as we often describe them? | (Please forgive me and let me know if I'm not allowed to ask this kind of question here. I'm so sorry if I'm bothering everyone.)
AFAIK to be called "generative", a model should have the ability to learn the joint probability over the training data. In the case of LLMs, we apply the chain rule of Bayes' formula to a... | https://github.com/huggingface/transformers/issues/28860 | closed | [] | 2024-02-05T07:10:23Z | 2024-02-05T12:22:27Z | null | metalwhale |
huggingface/sentence-transformers | 2,470 | BGE Reranker / BERT Crossencoder Onnx model latency issue | I am using the Int8 quantized version of BGE-reranker-base model converted to the Onnx model. I am processing the inputs in batches. Now the scenario is that I am experiencing a latency of 20-30 secs with the original model. With the int8 quantized and onnx optimized model, the latency was reduced to 8-15 secs keeping ... | https://github.com/huggingface/sentence-transformers/issues/2470 | open | [
"question"
] | 2024-02-05T05:54:18Z | 2024-02-09T06:59:51Z | null | ojasDM |
huggingface/chat-ui | 774 | Where are the image and PDF upload features when running locally using this repo? | I see there are issues and features being discussed and added for image upload, parsing PDFs as markdown, etc. However, I don't see these features when I clone this repo and start chat-ui using "npm run dev" locally.
Am I missing something?
#641 are the features I am talking about. | https://github.com/huggingface/chat-ui/issues/774 | closed | [] | 2024-02-05T00:41:05Z | 2024-02-05T08:48:29Z | 1 | zubu007 |
huggingface/chat-ui | 771 | using an OpenAI API key for corporate | Hi
We are working with an OpenAI key for our corporation (it has a corporate endpoint).
this is how we added the model to .env.local
```
MODELS=`[
{
"name": "Corporate local instance of GPT 3.5 Model",
"endpoints": [{
"type": "openai",
"url": "corporate url"
}],
"userMessageTo... | https://github.com/huggingface/chat-ui/issues/771 | open | [
"models"
] | 2024-02-04T11:23:59Z | 2024-02-06T15:01:50Z | 1 | RachelShalom |
huggingface/optimum-neuron | 460 | [QUESTION] What is the difference between optimum-neuron and transformers-neuronx? | I would like to understand the differences between this optimum-neuron and [transformers-neuronx](https://github.com/aws-neuron/transformers-neuronx). | https://github.com/huggingface/optimum-neuron/issues/460 | closed | [] | 2024-02-02T18:27:46Z | 2024-03-27T11:04:52Z | null | leoribeiro |
huggingface/dataset-viewer | 2,376 | Should we increment "failed_runs" when error is "ResponseAlreadyComputedError"? | Related to https://github.com/huggingface/datasets-server/issues/1464: is it really an error? | https://github.com/huggingface/dataset-viewer/issues/2376 | closed | [
"question",
"P2"
] | 2024-02-02T12:08:31Z | 2024-02-22T21:16:12Z | null | severo |
huggingface/autotrain-advanced | 484 | How to ask a question to an AutoTrained LLM; if I ask a question it doesn't return any answer | Hi,
LLM training was successful, but when I ask any question from my training context it is not answered. How do I ask a proper question?
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "bert-base-uncased_finetuning"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoMode... | https://github.com/huggingface/autotrain-advanced/issues/484 | closed | [
"stale"
] | 2024-02-02T09:29:07Z | 2024-03-04T15:01:36Z | null | charles-123456 |
huggingface/chat-ui | 761 | Does chat-ui support offline deployment? I have downloaded the weights to my local computer. | I have downloaded the weights to my local computer. Due to network issues, I am unable to interact with the huggingface website. Can I do offline deployment based on chat-ui and downloaded weights from huggingface? Do I not need to set HF_TOKEN=<your access token>? Does that mean I don't need to set HF_TOKEN=<your acce... | https://github.com/huggingface/chat-ui/issues/761 | closed | [
"support"
] | 2024-02-02T07:57:19Z | 2024-02-04T03:23:25Z | 2 | majestichou |
huggingface/transformers.js | 557 | how to cast types? | ### Question
I have the following code:
```
const pipe = await pipeline('embeddings');
const output = await pipe([
'The quick brown fox jumps over the lazy dog',
]);
const embedding = output[0][0];
```
`output[0][0]` causes a typescript error:
<img width="748" alt="CleanShot 2024... | https://github.com/huggingface/transformers.js/issues/557 | open | [
"question"
] | 2024-02-02T04:38:20Z | 2024-02-08T19:01:06Z | null | pthieu |
huggingface/diffusers | 6,819 | How to let diffusers use local code for a pipeline instead of downloading it online every time we use it? | I tried to use the InstaFlow pipeline from examples/community to run my test. However, even after I git-cloned the repository to my environment, it still keeps trying to download the latest version of the InstaFlow pipeline code. Unfortunately, in my area it is hard for the environment to download it directly from raw GitHub. ... | https://github.com/huggingface/diffusers/issues/6819 | closed | [] | 2024-02-02T02:53:48Z | 2024-11-28T05:44:10Z | null | Kevin-shihello-world |
huggingface/diffusers | 6,817 | How to use class_labels in UNet2DConditionModel or UNet2DModel when calling forward? | Hi, I want to know what the shape or format of "class" is if I want to add the class condition to the UNet. Do I just set **class_labels** to 0, 1, 2, 3?
UNet2DModel: **class_labels** (torch.FloatTensor, optional, defaults to None) — Optional class labels for conditioning. Their embeddings will be summed with the times... | https://github.com/huggingface/diffusers/issues/6817 | closed | [] | 2024-02-02T02:17:40Z | 2024-02-07T07:31:35Z | null | boqian-li |
huggingface/sentence-transformers | 2,465 | How to load a LoRA model into a SentenceTransformer model? | Dear UKPLab team,
My team and I are working on a RAG project, and right now we are fine-tuning a retrieval model using the peft library. The issue is that once we have the model fine-tuned, we can't load the local config and checkpoints using `sentencetransformer`.
Here is our hierarchy of the local path of the peft... | https://github.com/huggingface/sentence-transformers/issues/2465 | closed | [] | 2024-02-02T00:18:04Z | 2024-11-08T12:32:36Z | null | Shengyun-Si |
huggingface/amused | 3 | How to generate multiple images? | Thank you for your amazing work! Could you kindly explain how to generate multiple images at a time? Thank you | https://github.com/huggingface/amused/issues/3 | closed | [] | 2024-02-01T18:03:30Z | 2024-02-02T10:36:09Z | null | aishu194 |
huggingface/alignment-handbook | 110 | DPO loss on different datasets | In parallel with #38, though I am referring to full training instead of LoRA.
When I use a different set of preferences (i.e., chosen and rejected) but still the same instructions (UltraFeedback), I get extremely low eval/train loss, where it drops sharply in the beginning, in contrast to training on the original preferences as in the cas... | https://github.com/huggingface/alignment-handbook/issues/110 | open | [] | 2024-02-01T15:49:29Z | 2024-02-01T15:49:29Z | 0 | wj210 |
huggingface/chat-ui | 757 | Which (temperature) configurations for the Zephyr chat interface? | Hi, I apologise for what may be an obvious question, but where can I find the exact configurations for the model offered on the HF Zephyr Chat interface at https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat for Zephyr 7B beta? I'm especially interested to see the temperature settings and wasn't able to find this ... | https://github.com/huggingface/chat-ui/issues/757 | closed | [
"support"
] | 2024-02-01T14:27:12Z | 2024-02-01T14:47:13Z | 3 | AylaRT |
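
A minimal sketch of how rows like the ones above could be loaded and filtered with the `datasets` library. The local file name `issues.jsonl` is a hypothetical export of this table, not something distributed with the preview:

```python
from datasets import load_dataset

# Hypothetical local export of the issues table above, one JSON object per
# row with the columns: repo, number, title, body, url, state, labels,
# created_at, updated_at, comments, user. The file name is an assumption.
ds = load_dataset("json", data_files="issues.jsonl", split="train")

# Keep only open issues carrying the "question" label, mirroring the
# `state` and `labels` columns in the header row.
questions = ds.filter(
    lambda row: row["state"] == "open" and "question" in row["labels"]
)

print(len(questions), "open questions")
print(questions[0]["title"], "->", questions[0]["url"])
```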