| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | 25,138 | How to return detected language using whisper with asr pipeline? | ### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensor... | https://github.com/huggingface/transformers/issues/25138 | closed | [] | 2023-07-27T10:51:31Z | 2025-02-11T11:24:49Z | null | arso1er |
huggingface/text-generation-inference | 703 | Is there an example how to quantize a model (e.g. meta-llama/Llama-2-7b-chat-hf) using the prebuilt docker image (e.g. ghcr.io/huggingface/text-generation-inference:0.9.3) | ### System Info
0.9.3
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
NA
### Expected behavior
A command to quantize a model (e.g. meta-llama/Llama-2-7b-chat-hf) using the prebuilt docker image (... | https://github.com/huggingface/text-generation-inference/issues/703 | closed | [] | 2023-07-27T01:08:54Z | 2023-07-28T21:41:46Z | null | taoari |
huggingface/sentence-transformers | 2,262 | How to pass more than sentence pairs to InputExamples for fine-tuning? | I have more information about each data point such as language and contextual data that could potentially help (maybe) for our task. The task is to generate sentence similarity embedding and labels.
For the time being, I was able to expand the input examples code to get these features in to expand the input.
``... | https://github.com/huggingface/sentence-transformers/issues/2262 | open | [] | 2023-07-26T18:29:54Z | 2023-07-30T15:39:24Z | null | cyriltw |
huggingface/trl | 578 | How to load a trained reward model? Different (random) results each time the model is loaded. | I trained a reward model using QLoRA and now I want to load it. I followed the instructions from this example from peft:
https://github.com/huggingface/peft/blob/main/examples/sequence_classification/LoRA.ipynb
This leads me to the following code:
```
import torch
from peft import PeftModel, PeftConfig
from trans... | https://github.com/huggingface/trl/issues/578 | closed | [] | 2023-07-26T15:02:13Z | 2023-07-26T19:00:10Z | null | vincentmin |
huggingface/datasets | 6,078 | resume_download with streaming=True | ### Describe the bug
I used:
```
dataset = load_dataset(
"oscar-corpus/OSCAR-2201",
token=True,
language="fr",
streaming=True,
split="train"
)
```
Unfortunately, the server had a problem during the training process. I saved the step my training stopped at.
But how can I resume download f... | https://github.com/huggingface/datasets/issues/6078 | closed | [] | 2023-07-26T14:08:22Z | 2023-07-28T11:05:03Z | 3 | NicolasMICAUX |
huggingface/diffusers | 4,281 | How to convert a trained LoRA bin format file to A1111 safetensor format | ### Describe the bug
I found the script convert_lora_safetensor_to_diffusers.py, but it seems to convert safetensors to bin, not bin to safetensors. When I run this script, I get an error like this:
╭─────────────────────────────── Traceback (most recent call last) ───────────────────────────────╮
│ C:\Users\fut\Desktop\tinaniu\c... | https://github.com/huggingface/diffusers/issues/4281 | closed | [
"bug",
"stale"
] | 2023-07-26T08:16:48Z | 2023-09-04T15:03:46Z | null | futureflsl |
huggingface/llm-vscode | 50 | The VSIX doesn't work; how to fix it? | I downloaded the VSIX from https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode&ssr=false#version-history, but when I installed it in VS Code it doesn't work. Could you fix this? | https://github.com/huggingface/llm-vscode/issues/50 | closed | [] | 2023-07-26T07:05:17Z | 2023-10-17T14:34:58Z | null | CuteBadEgg |
huggingface/transformers.js | 216 | [Question] Getting a lot of ERR 404s when running in browser. | When implementing code that accesses bart-large-mnli in the front-end part of my code, the browser console tells me every attempt to use the pipeline fails with an error 404. (at least that's what I think it's telling me)
So I am trying to use the bart-large-mnli to analyze a bunch of 'post' objects, and only displa... | https://github.com/huggingface/transformers.js/issues/216 | closed | [
"question"
] | 2023-07-26T00:42:20Z | 2023-08-20T23:43:04Z | null | eklavyaisabird |
huggingface/transformers.js | 215 | [Question] How to use a sharp buffer as input to "image-classification" pipeline ? | hi,
I am looking to use a sharp buffer as input to the "image-classification" pipeline, but it seems that only a URL can be provided as input. I am using the model in a Node.js environment (backend); can anyone provide a solution for this?
thanks
| https://github.com/huggingface/transformers.js/issues/215 | closed | [
"question"
] | 2023-07-25T21:10:06Z | 2023-07-25T21:42:18Z | null | geminigeek |
huggingface/chat-ui | 368 | Ability to pass in request headers for model endpoints | Hello.
I am trying to add an AWS Sagemaker model endpoint to chat-ui and I am getting stuck on the authorization part because I can't pass in request headers to the endpoint. I am able to pass in the authorization string but then I get the following error:
```
Could not parse last message {"message":"Authorizati... | https://github.com/huggingface/chat-ui/issues/368 | closed | [] | 2023-07-25T20:12:28Z | 2023-08-18T15:26:41Z | 3 | lotif |
huggingface/autotrain-advanced | 161 | How to save every X steps on cli? | You could set --save_strategy steps, but how do you specify the number of steps so that the model is saved every X steps?
My command:
```
autotrain llm --train --project_name project --model ./llama/llama_models/7B-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs... | https://github.com/huggingface/autotrain-advanced/issues/161 | closed | [] | 2023-07-25T16:10:22Z | 2023-12-18T15:29:08Z | null | astarostap |
huggingface/setfit | 400 | From which number of training samples does it not make sense anymore to use SetFit? | I'm building a classifier that assigns news articles to one of 8 categories, I was wondering if there was a rule of thumb that over a certain number of training samples per class it would make more sense to use a traditional transformer classifier such as roberta-large? Or will SetFit always be more accurate?
| https://github.com/huggingface/setfit/issues/400 | open | [
"question"
] | 2023-07-25T06:56:04Z | 2023-08-01T14:13:48Z | null | lbelpaire |
huggingface/diffusers | 4,234 | How to train instruct-pix2pix with controlnet and inference | Hi guys,
I want to train instruct-pix2pix using controlnet condition. As you know, currently available for [instruct-pix2pix](https://huggingface.co/docs/diffusers/training/instructpix2pix) and [control net](https://huggingface.co/docs/diffusers/training/controlnet) separately.
**Q1)** Have you plan about this probl... | https://github.com/huggingface/diffusers/issues/4234 | closed | [
"stale"
] | 2023-07-24T13:47:02Z | 2023-08-31T15:04:14Z | null | mzeynali |
huggingface/chat-ui | 366 | v0.4.0 Not on GitHub | The hosted version is already at v0.4.0. This is at least not reflected in the tags or releases here. Is there other non public code? | https://github.com/huggingface/chat-ui/issues/366 | closed | [] | 2023-07-24T11:35:38Z | 2023-07-24T13:19:30Z | 2 | claell |
huggingface/chat-ui | 364 | Facing Error 403 after deployment | Hi folks!
My Chat-UI setup along with a custom LangChain model works perfect on localhost. I tried to deploy it on an Azure VM with Docker Containers and I have been facing this issue which might be due to MongoDB.
... | https://github.com/huggingface/chat-ui/issues/364 | closed | [
"back",
"support"
] | 2023-07-24T10:57:53Z | 2024-04-25T16:29:38Z | 13 | awsum0225 |
huggingface/chat-ui | 363 | When starting with build files, it becomes impossible to change the model. | When starting with pm2 following the Docker file's instructions, I encounter an issue where I cannot change the model. Specifically, after clicking on "Current Model," a popup to select the model appears, but even after selecting "Apply," no changes are observed. Upon inspecting the developer tools, I noticed a 403 Err... | https://github.com/huggingface/chat-ui/issues/363 | closed | [
"bug",
"support"
] | 2023-07-24T08:30:03Z | 2023-10-16T16:07:25Z | 4 | suzuki-shm |
huggingface/diffusers | 4,222 | How to train ldm on a low-resolution image dataset (128*128) | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear an... | https://github.com/huggingface/diffusers/issues/4222 | closed | [
"stale"
] | 2023-07-24T03:14:20Z | 2023-08-31T15:04:25Z | null | crowningwang |
huggingface/text-generation-inference | 679 | How to load a model from a given path? | ### System Info
tgi version:0.9.0
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
I just want to use tgi to run llama-7b model to get the throughput on A100. The model files are preloaded in a given path. I followed ... | https://github.com/huggingface/text-generation-inference/issues/679 | closed | [] | 2023-07-23T06:35:16Z | 2023-07-24T01:34:10Z | null | zhaoyang-star |
huggingface/controlnet_aux | 67 | Please I want to know how to install | Hello, I am new to this and I want to know how to install this particular package. I have installed other packages, but this one I do not know how. Please help with this.
| https://github.com/huggingface/controlnet_aux/issues/67 | open | [] | 2023-07-22T18:57:33Z | 2023-07-26T01:03:21Z | null | sohaib19922 |
huggingface/diffusers | 4,210 | How to use "attention_mask" in "forward" function of "UNet2DConditionModel" defined in "diffusers/src/diffusers/models /unet_2d_condition.py"? | ### Describe the bug
How to use the "attention_mask" in UNet2DConditionModel? What should the size of "attention_mask" look like?
And "attention_mask" can not be used when opening "enable_xformers_memory_efficient_attention" in "examples/text_to_image/train_text_to_image.py"?
` File "/usr/local/lib/python3.9/... | https://github.com/huggingface/diffusers/issues/4210 | closed | [
"bug",
"stale"
] | 2023-07-22T17:28:56Z | 2024-10-18T16:34:37Z | null | ZihaoW123 |
huggingface/accelerate | 1,758 | How to use c10 backend for fault tolerance | Hi,
I found little to no documentation on how to use c10 backend for fault tolerance with accelerate. PyTorch seems to be having this:
https://pytorch.org/docs/stable/elastic/rendezvous.html
I am looking for fault tolerance in case of crash in few nodes, which also means adjusting batch size dynamically to accou... | https://github.com/huggingface/accelerate/issues/1758 | closed | [] | 2023-07-22T08:26:33Z | 2023-08-29T15:06:00Z | null | geekyGoku |
huggingface/autotrain-advanced | 155 | How to do inference via autotrain-advanced? | I see an option to do inference autotrain llm --help.
1. Can you share command to do inference on say llama2 model ? How do you pass lora files to do inference?
2. Any option to do merge and unload while saving the model locally?
3. Any option for multi-gpu training with single node - specify local rank? | https://github.com/huggingface/autotrain-advanced/issues/155 | closed | [] | 2023-07-22T05:55:25Z | 2023-12-15T00:14:28Z | null | sujithjoseph |
huggingface/transformers.js | 206 | [Question] Output always equal to input in text-generation | I tried different types of input and the output always equals the input... What am I missing?
```
const answerer = await pipeline('text-generation', 'Xenova/LaMini-Cerebras-590M');
let zica = await answerer(`Based on this history:
André de Mattos Ferraz is an engineering manager in Rio de Janeiro, Brazil. ... | https://github.com/huggingface/transformers.js/issues/206 | closed | [
"question"
] | 2023-07-21T21:18:02Z | 2023-07-22T02:21:05Z | null | AndreEneva |
huggingface/transformers.js | 205 | [Question] Is transformers.js expected to work with react native? | I've naively been trying to run the transformers js library via react native on android.
Note that onnxruntime-react-native explicitly supports React Native; however, the transformers.js package depends only on onnxruntime-web and onnxruntime-node.
Importing the transformers.js works fine, however as I try to load a mo... | https://github.com/huggingface/transformers.js/issues/205 | closed | [
"question"
] | 2023-07-21T20:55:44Z | 2023-07-21T21:35:35Z | null | Wehzie |
huggingface/setfit | 398 | hyperparameters to control how to handle long documents | It's common that one might want to use setfit for classifying documents that are longer than max_token_len.
There are several strategies for handling long documents, and the efficacy of each is data dependent:
* Break the document up at max_token_length, possibly avoiding breaking word boundaries.
* Optionally usi... | https://github.com/huggingface/setfit/issues/398 | open | [] | 2023-07-21T11:53:13Z | 2023-07-21T11:53:13Z | null | turian |
huggingface/text-generation-inference | 672 | What is optimal max batch size max sequence length (max_total_tokens) for running llama 2 70b chat on 4 A100 80GB? | This is what i have in my current config
validation_workers: 2, max_total_tokens: 4096, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 4096, max_batch_total_tokens: None, max_waiting_tokens: 20
What do you recommend I should use to get the most out of inference for this setup? | https://github.com/huggingface/text-generation-inference/issues/672 | closed | [] | 2023-07-21T11:17:49Z | 2023-07-21T12:45:31Z | null | yakotoka |
huggingface/datasets | 6,057 | Why is the speed difference of gen example so big? | ```python
def _generate_examples(self, metadata_path, images_dir, conditioning_images_dir):
with open(metadata_path, 'r') as file:
metadata = json.load(file)
for idx, item in enumerate(metadata):
image_path = item.get('image_path')
text_content = item.get('tex... | https://github.com/huggingface/datasets/issues/6057 | closed | [] | 2023-07-21T03:34:49Z | 2023-10-04T18:06:16Z | 1 | pixeli99 |
huggingface/transformers.js | 203 | how to do embeddings? | I want to create an AI assistant for my personal website using Node.js. While I can easily create it using OpenAI embeddings, their API costs are prohibitively expensive. Therefore, I am looking for an alternative method and wondering how I can perform embeddings using a CSV file. Can you advise me on how to do this?
... | https://github.com/huggingface/transformers.js/issues/203 | closed | [
"question"
] | 2023-07-21T02:41:40Z | 2024-06-26T14:09:51Z | null | putuoka |
huggingface/chat-ui | 361 | Configuration for Llama 2 | I am trying to self host Llama 2 with https://github.com/huggingface/text-generation-inference and https://github.com/huggingface/chat-ui . If I give configuration for chat-ui like this:
```
{
"name": "llama2-7b-chat",
"datasetName": "llama2-7b-chat",
"description": "A good alternative to ChatGPT",... | https://github.com/huggingface/chat-ui/issues/361 | closed | [
"support",
"models"
] | 2023-07-20T14:04:29Z | 2023-08-22T13:54:46Z | 3 | aisensiy |
huggingface/text-generation-inference | 658 | How to use AutoGPTQ model in tgi |

command๏ผ
export GPTQ_BITS=4
export GPTQ_GROUPSIZE=128
text-generation-launcher --model-id Ziya-LLaMA-13B_4bit --disable-custom-kernels --port 6006 --revision gptq-4bit-128g-actorder_True -... | https://github.com/huggingface/text-generation-inference/issues/658 | closed | [] | 2023-07-20T08:42:57Z | 2023-07-31T23:50:55Z | null | Minami-su |
huggingface/chat-ui | 358 | Broken encoding for Korean and possibly other languages | I was testing the llama2 and noticed there are some encoding errors (Ignore that the output is total nonsense):
<img width="1618" alt="image" src="https://github.com/huggingface/chat-ui/assets/15624271/61868780-efa0-4670-84d9-734410a05451">
I thought it could be because of weird mid-unicode tokenization, but I also not... | https://github.com/huggingface/chat-ui/issues/358 | closed | [
"question",
"models"
] | 2023-07-20T05:00:03Z | 2023-09-11T09:34:12Z | null | cceyda |
huggingface/diffusers | 4,160 | How to use diffusers force zeros? | it seems that it only has effect if its used on instance of diffusers class before model is loaded,
but i only get instance when i call from_pretrained or from_single_file
| https://github.com/huggingface/diffusers/issues/4160 | closed | [
"stale",
"SD.Next"
] | 2023-07-19T22:36:38Z | 2023-09-01T13:09:28Z | null | patrickvonplaten |
huggingface/transformers.js | 200 | [Question] Translation models | <!-- QUESTION GOES HERE -->
@xenova is there a text translation model with a lighter weight, I mean with minimal size?
"question"
] | 2023-07-19T22:07:37Z | 2023-07-27T00:17:24Z | null | jedLahrim |
huggingface/dataset-viewer | 1,532 | provide one "partial" field per entry in aggregated responses | For example, https://datasets-server.huggingface.co/size?dataset=c4 only provides a global `partial: true` field and the response does not explicit that the "train" split is partial, while the "test" one is complete.
Every entry in `configs` and `splits` should also include its own `partial` field, to be able to sho... | https://github.com/huggingface/dataset-viewer/issues/1532 | open | [
"question",
"feature request",
"P2"
] | 2023-07-19T20:01:58Z | 2024-05-16T09:36:20Z | null | severo |
huggingface/datasets | 6,053 | Change package name from "datasets" to something less generic | ### Feature request
I'm repeatedly finding myself in situations where I want to have a package called `datasets.py` or `evaluate.py` in my code and can't because those names are being taken up by Huggingface packages. While I can understand how (even from the user's perspective) it's aesthetically pleasing to have n... | https://github.com/huggingface/datasets/issues/6053 | closed | [
"enhancement"
] | 2023-07-19T19:53:28Z | 2024-11-20T21:22:36Z | 2 | jack-jjm |
huggingface/trl | 542 | Supervised Finetuning - How to mask loss for prompts | How can I mask the loss in supervised fine-tuning for prompts similar to how it is done in the LLAMA-2 paper?
Specifically, I have a dataset of prompts and ideal answers. When fine-tuning my model with a `SFTTrainer` using a `ConstantLengthDataset` (similar to the StackExchange example), how can I ensure that promp... | https://github.com/huggingface/trl/issues/542 | closed | [] | 2023-07-19T14:55:17Z | 2023-08-16T15:02:50Z | null | jvhoffbauer |
huggingface/chat-ui | 351 | Starchat-beta doesn't stop generating text properly | Hi, I am deploying starchat-beta and chat-ui locally, it is strange that I found the chat will generate some useful text in the beginning, then it will not stop, then generates some unrelated text, like below
, Neural Magic's inference runtime for sparse execution on CPUs. If it is an ope... | https://github.com/huggingface/optimum/issues/1202 | closed | [
"question",
"Stale"
] | 2023-07-18T18:07:14Z | 2025-05-13T02:14:09Z | null | mgoin |
huggingface/accelerate | 1,743 | what is the possible reason for accelerate running on cuda 12.2 8xA100 with error accelerate multiprocessing.api:failed (exitcode: -9) | ### System Info
```Shell
ubuntu 22.04
gpu A100 80G
cuda version 12.2
accelerate version 0.21.0
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `exa... | https://github.com/huggingface/accelerate/issues/1743 | closed | [] | 2023-07-18T13:33:35Z | 2023-08-15T09:18:05Z | null | garychan22 |
huggingface/datasets | 6,048 | when i use datasets.load_dataset, i encounter the http connect error! | ### Describe the bug
`common_voice_test = load_dataset("audiofolder", data_dir="./dataset/",cache_dir="./cache",split=datasets.Split.TEST)`
when i run the code above, i got the error as below:
--------------------------------------------
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/... | https://github.com/huggingface/datasets/issues/6048 | closed | [] | 2023-07-18T10:16:34Z | 2023-07-18T16:18:39Z | 1 | yangy1992 |
huggingface/safetensors | 299 | Any plan to support Nvidia GPUDirect Storage? | ### Feature request
Nvidia GPUDirect Storage has better performance to load model from NVMe disk or supported distributed storage. It will do the real `zero copy`.
### Motivation
It will get better performance with Nvidia GDS.
### Your contribution
Not sure. | https://github.com/huggingface/safetensors/issues/299 | closed | [
"Stale"
] | 2023-07-17T06:36:51Z | 2025-11-22T05:21:50Z | 9 | carmark |
huggingface/optimum | 1,191 | ONNX Generation - Support for Donut | ### Feature request
I have been trying to convert my custom Donut model to ONNX by using this specific command:
!python3 -m optimum.exporters.onnx --model={custom_model_id} --task=vision2seq-lm ./models/onnx --optimize O4 --atol 1e-2 --opset=13
The following exception occurs at the end of the process, by which ... | https://github.com/huggingface/optimum/issues/1191 | closed | [
"feature-request",
"onnx"
] | 2023-07-16T13:38:38Z | 2024-10-15T16:14:33Z | 3 | ghost |
huggingface/transformers.js | 194 | [Question] Transformers.js bundle size | I'm building a small project that runs `transformers.js` in a `Worker` to do client side embedding.
I noticed that including `import { pipeline } from '@xenova/transformers';` immediately increases my bundle size to over **3MB**.
 | I have tried several methods, but it still download to my home directory | https://github.com/huggingface/trl/issues/520 | closed | [] | 2023-07-16T04:21:45Z | 2023-07-17T08:11:02Z | null | zyzisastudyreallyhardguy |
huggingface/peft | 711 | How to change the location of soft tokens in prompt tuning | ### Feature request
In fact, when prompt tuning, we will not always add it to the front, it may be in the middle. So I think it's important for us to change the location of soft tokens.
### Motivation
In fact, when prompt tuning, we will not always add it to the front, it may be in the middle. So I think it's import... | https://github.com/huggingface/peft/issues/711 | closed | [] | 2023-07-15T13:57:52Z | 2024-04-09T06:39:55Z | null | XueTianci |
huggingface/datasets | 6,038 | File "/home/zhizhou/anaconda3/envs/pytorch/lib/python3.10/site-packages/datasets/builder.py", line 992, in _download_and_prepare if str(split_generator.split_info.name).lower() == "all": AttributeError: 'str' object has no attribute 'split_info'. Did you mean: 'splitlines'? | Hi, I use the code below to load local file
```
def _split_generators(self, dl_manager):
# TODO: This method is tasked with downloading/extracting the data and defining the splits depending on the configuration
# If several configurations are possible (listed in BUILDER_CONFIGS), the configurati... | https://github.com/huggingface/datasets/issues/6038 | closed | [] | 2023-07-15T07:58:08Z | 2023-07-24T11:54:15Z | 1 | BaiMeiyingxue |
huggingface/datasets | 6,033 | `map` function doesn't fully utilize `input_columns`. | ### Describe the bug
I wanted to select only some columns of data.
And I thought that's why the argument `input_columns` exists.
What I expected is like this:
If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns.
But it doesn't select co... | https://github.com/huggingface/datasets/issues/6033 | closed | [] | 2023-07-14T08:49:28Z | 2023-07-14T09:16:04Z | 0 | kwonmha |
huggingface/text-generation-inference | 614 | How to make it? How can we extend the 'max_new_tokens' from 1512 to either 4096 or 8192? | ### System Info
How can we extend the 'max_new_tokens' from 1512 to either 4096 or 8192?
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [X] My own modifications
### Reproduction
'max_new_tokens' from 1512 to either 4096 or 8192
### Expected behavior
'max_... | https://github.com/huggingface/text-generation-inference/issues/614 | closed | [] | 2023-07-14T08:46:29Z | 2023-07-19T06:04:32Z | null | DiamondYuanqi |
huggingface/transformers.js | 193 | all-MiniLM-L6-v2 vector lengths | Hey, is there any way to programmatically fix the vector embedding array length? I was using https://huggingface.co/Xenova/all-MiniLM-L6-v2 with Node.js, and every input I ran through the pipe gave a different length; it would be nice to be able to keep it consistent.
| https://github.com/huggingface/transformers.js/issues/193 | closed | [
"question"
] | 2023-07-13T20:31:06Z | 2023-07-13T22:32:03Z | null | unkn-wn |
huggingface/chat-ui | 344 | 404 not found error when exporting data | https://github.com/huggingface/chat-ui/blob/1eff97d9fd47d8c486480d4d9a5208437c519cbb/src/routes/admin/export/%2Bserver.ts#L16
I am using the main branch and tried to export the dataset with the curl request given in the code, but the server returns 404 not found.
It's behind a reverse proxy with SSL; do I need to c... | https://github.com/huggingface/chat-ui/issues/344 | closed | [
"question",
"back"
] | 2023-07-13T08:40:27Z | 2023-11-10T09:50:22Z | null | flozi00 |
huggingface/sentence-transformers | 2,254 | How to prepare label for the dataset that has two pairs of text, but not labels? | Hi,
Thank you for the great information; I have a question. My data has two columns of text: one is the description of a request, the other is an answer to that request. I want to use ContrastiveLoss to bring related request/answer pairs close and push unrelated answers far apart, but I do not know ...
huggingface/optimum | 1,183 | Cannot convert owlvit-base-patch32 model to ONNX and run inference | ### System Info
```shell
Optimum version: 1.9.1
Python version: 3.11.3
OS: MacOS
```
### Who can help?
@mich
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dat... | https://github.com/huggingface/optimum/issues/1183 | closed | [
"bug"
] | 2023-07-12T13:20:12Z | 2024-07-27T14:27:58Z | 9 | Pedrohgv |
huggingface/chat-ui | 341 | SSL Wrong version number error | i have added this
"endpoints": [
{"url": "http://127.0.0.1:8080/generate_stream", "weight": 100}
],
in the model but i am getting this error
TypeError: fetch failed
at fetch (/home/fm-pc-lt-215/Desktop/chat-ui/chat-ui/node_modules/undici/index.js:109:13)
at process.processTicksAndReject... | https://github.com/huggingface/chat-ui/issues/341 | closed | [
"support"
] | 2023-07-12T04:40:58Z | 2023-09-18T14:00:27Z | 4 | swikrit21 |
huggingface/diffusers | 4,054 | [SD-XL] How to apply invisible-watermark for latent output | ### Describe the bug
As a part of the license with SAI, we need to ensure the invisible watermark is applied across all images output by these models, including the Img2Img pipeline.
### Reproduction
```py
# if xformers or torch_2_0 is used attention block does not need
# to be in float32 which can... | https://github.com/huggingface/diffusers/issues/4054 | closed | [
"bug"
] | 2023-07-12T03:58:04Z | 2023-07-12T10:21:29Z | null | bghira |
huggingface/transformers.js | 192 | Table Question Answering Support? | Hi - Interested in support for table question answering models. It's noted that these aren't supported, but is there any reason they wouldn't work if leveraged?
| https://github.com/huggingface/transformers.js/issues/192 | open | [
"question"
] | 2023-07-12T01:12:07Z | 2023-07-13T16:18:19Z | null | timtutt |
huggingface/peft | 685 | Matrix mistmatch when trying to adapt Falcon with QLoRA, how to fix? | ### System Info
```
(data_quality) brando9~ $ python collect_env.py
Collecting environment information...
PyTorch version: 2.0.1
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang ver... | https://github.com/huggingface/peft/issues/685 | closed | [] | 2023-07-11T20:01:37Z | 2023-07-24T00:11:02Z | null | brando90 |
huggingface/diffusers | 4,047 | How to set lora scale when loading a LoRA model? | Hey there, first of all thanks for your fantastic work!
I am loading LoRA weights, and I would like to set the scale of them being applied. Checking the code, it appears to be possible as shown [here](https://github.com/huggingface/diffusers/blob/fc7aa64ea8f5979b67bd730777e8e1c32e3adb05/src/diffusers/loaders.py#L109... | https://github.com/huggingface/diffusers/issues/4047 | closed | [] | 2023-07-11T17:38:05Z | 2023-08-29T05:30:44Z | null | pietrobolcato |
huggingface/diffusers | 4,042 | How to combine the reference-only with inpainting and depth control? | ### Model/Pipeline/Scheduler description
Hi, I recently want to combine the reference-only with image inpaint , with depth control to replace background for portrait images. However, I have no idea to build this pipeline as for there is no reference with inpaint pipeline example. Could you please help me to figure it... | https://github.com/huggingface/diffusers/issues/4042 | closed | [] | 2023-07-11T12:17:24Z | 2023-07-14T06:12:29Z | null | AmberCheng |
huggingface/chat-ui | 340 | [WebSearch] "Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 1000 `inputs` tokens and 1024 `max_new_tokens`" | Hello there,
Title says it all.
We are not using any custom endpoints/models. We're just relying on the HuggingFace's API inferences.
Is there a way to increase/decrease the inputs token when using WebSearch (or even just increase the max sum)? Because it works fine if `max_new_tokens` is set to 512 BUT it, obv... | https://github.com/huggingface/chat-ui/issues/340 | closed | [
"question",
"models"
] | 2023-07-11T07:33:18Z | 2023-07-12T09:16:21Z | null | gollumeo |
huggingface/diffusers | 4,029 | How can I make the diffusers pipeline use a .safetensors file for SDXL? | Cloning the entire repo takes 100 GB.
How can I make the code below use a .safetensors file instead of the diffusers folder?
Let's say I have downloaded my safetensors file to path.safetensors.
How do I provide it?
The code below works, but we are cloning 100 GB instead of just a single 14 GB safetensors file. Waste of bandwidth... | https://github.com/huggingface/diffusers/issues/4029 | closed | [] | 2023-07-10T21:52:22Z | 2023-12-11T18:45:18Z | null | FurkanGozukara |
huggingface/chat-ui | 337 | Feature Request: Save messages and error message even if text generation endpoint fails | Situation: Text generation endpoint is not running. Then user sends a message.
Current Behavior: UI throws an error and saves conversation to mongodb like this, with an empty message list.
```
{
_id: ObjectId('64ac1abc2ac09222e24cc984'),
title: 'Untitled 5',
messages: [],
model: 'GPT',
creat... | https://github.com/huggingface/chat-ui/issues/337 | closed | [
"enhancement",
"back",
"p2"
] | 2023-07-10T15:18:52Z | 2023-10-10T11:16:22Z | 1 | loganlebanoff |
huggingface/transformers.js | 187 | [Question] Performance and size of models | Great project, tons of potential! I have a general question I thought I may ask. Using the convert.py scripts, I took a Pytorch model and converted it to ONNX. With quantizing, I get a full 428MB model and a 110MB _quantized model. Now how does it work for the user exactly? Does the user automatically download the _qua... | https://github.com/huggingface/transformers.js/issues/187 | closed | [
"question"
] | 2023-07-10T14:39:31Z | 2023-07-11T17:06:38Z | null | sabatale |
huggingface/chat-ui | 336 | how to work in chat-ui with non streaming data? | I was working with chat-ui by providing only my endpoints, which are hosted at localhost:8000/generate. I don't have a model, only endpoints, so can you provide a solution for working with endpoints only and non-streaming data (application/json or application/plain)? I have the model hosted on this server.
in modelE... | https://github.com/huggingface/chat-ui/issues/336 | closed | [] | 2023-07-10T13:43:17Z | 2023-07-11T08:29:40Z | null | swikrit21 |
huggingface/transformers.js | 186 | [Question] How to interpret boxes in object detection example ? | hi,
can anyone help me understand how to interpret the boxes when using object detection with the model "Xenova/detr-resnet-50"?
I want to crop the detected object out of the image using sharp (Node.js). How can I pass these boxes to sharp's resize function?
| https://github.com/huggingface/transformers.js/issues/186 | closed | [
"question"
] | 2023-07-10T12:59:22Z | 2023-07-11T00:55:13Z | null | geminigeek |
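The detection boxes come back as pixel coordinates (`xmin`, `ymin`, `xmax`, `ymax`), while crop APIs such as sharp's `extract()` want `left`/`top`/`width`/`height`. The conversion is plain arithmetic; a sketch in Python (the helper name and the clamping behavior are my own choices — in a Node.js app you would port the same few lines to JavaScript and pass the result to `sharp(...).extract(region)`):

```python
def box_to_crop_region(box, image_width, image_height):
    """Convert a detection box (xmin, ymin, xmax, ymax) into the
    left/top/width/height region that crop APIs expect, clamped
    to the image bounds so the crop never falls outside the image."""
    left = max(0, int(round(box["xmin"])))
    top = max(0, int(round(box["ymin"])))
    right = min(image_width, int(round(box["xmax"])))
    bottom = min(image_height, int(round(box["ymax"])))
    return {"left": left, "top": top,
            "width": max(0, right - left),
            "height": max(0, bottom - top)}

# A box that overflows the right and bottom edges gets clamped.
region = box_to_crop_region(
    {"xmin": 12.4, "ymin": 30.9, "xmax": 200.2, "ymax": 640.7},
    image_width=180, image_height=600)
print(region)  # {'left': 12, 'top': 31, 'width': 168, 'height': 569}
```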
huggingface/chat-ui | 335 | Bug: Unexpected execution result on Firefox browser with Chat-UI ver. 0.3.0 | I recently installed the 0.3.0 version of the HF Chat-UI software.
I then performed an evaluation using the **HuggingFaceH4/starchat-beta** model.
At that time, I typed the question "_Could you tell me about the weather in Toyko City in Japan on July-10-2023_?" and ran it.
Unfortunately, the results varied bet... | https://github.com/huggingface/chat-ui/issues/335 | closed | [
"support"
] | 2023-07-10T04:40:40Z | 2023-09-11T09:32:14Z | 2 | leemgs |
huggingface/chat-ui | 334 | Chat-ui is starting, but nothing happens | # Description:
When starting the Chat-ui, the initialization process begins as expected but stalls indefinitely, without any evident progress. The application doesn't crash or give any errors. This issue occurs across multiple attempts, regardless of browser type or device.
# Steps to reproduce:
- Install prer... | https://github.com/huggingface/chat-ui/issues/334 | closed | [
"support"
] | 2023-07-09T13:53:34Z | 2023-09-11T09:31:49Z | 2 | Notespeak |
huggingface/diffusers | 3,988 | how to use part of the controlnet models with a "StableDiffusionControlNetInpaintPipeline" object? | I created a "StableDiffusionControlNetInpaintPipeline" object with a list of controlnet models such as "canny","openpose", but sometimes I want to use canny only or openpose only.Is there's a way to reuse part of the controlnet models with a already inited "StableDiffusionControlNetInpaintPipeline" object? | https://github.com/huggingface/diffusers/issues/3988 | closed | [] | 2023-07-07T09:18:18Z | 2023-08-01T04:51:41Z | null | AdamMayor2018 |
huggingface/optimum-habana | 292 | Where in the directory "/tmp/tst-summarization", is the summarization output stored? | ### System Info
```shell
Optimum Habana : 1.6.0
SynapseAI : 1.10.0
Docker Image : Habanaยฎ Deep Learning Base AMI (Ubuntu 20.04)
Volume : 1000 GiB
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as G... | https://github.com/huggingface/optimum-habana/issues/292 | closed | [
"bug"
] | 2023-07-07T03:24:31Z | 2023-07-18T08:30:21Z | null | Abhaycnvrg |
huggingface/trl | 503 | How to get labels into the SFTTrainer | Hi!
I am trying to tune medalpaca 7b using prompt tuning or LoRA with the SFTTrainer. I have a prompt and I have labels that I want the model to output. I have made a Dataset class that inherits from torch.utils.data.Dataset to prepare my inputs, but I am wondering if there is some way to make the trainer use ...
huggingface/transformers.js | 182 | Website and extension using same model | Per the chrome extension example, you pack the model with the extension. Is there a way for a website and chrome extension to use the same cached model? If my project has both a website and extension, I hope they could use a single model instead of having store 2 on the user's machine.
| https://github.com/huggingface/transformers.js/issues/182 | open | [
"question"
] | 2023-07-06T17:43:48Z | 2023-07-16T17:26:09Z | null | escottgoodwin |
huggingface/chat-ui | 331 | How to send model name as an input to API endpoint | I want to host two models and query them by switching between them. The problem is I'm not able to send the model name as a parameter from the UI to the API endpoints.
Can someone help on this? | https://github.com/huggingface/chat-ui/issues/331 | closed | [
"question"
] | 2023-07-06T13:04:04Z | 2023-09-18T14:03:18Z | null | sankethgadadinni |
huggingface/transformers | 24,685 | How to get the last 4 Hidden states from the feature extraction pipeline | I have defined a pipeline for Feature extraction
```
# Create the pipeline
p = pipeline(
task="feature-extraction",
tokenizer="microsoft/biogpt",
model="microsoft/biogpt",
framework="pt",
device=0
)
bio_gpt = AutoModel.from_pretrained("microsoft/biogpt", output_hidden_states= True)
bio_gp... | https://github.com/huggingface/transformers/issues/24685 | closed | [] | 2023-07-06T08:45:08Z | 2023-08-14T15:02:35Z | null | Luke-4 |
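The `feature-extraction` pipeline only exposes the final layer; getting the last four hidden states means calling the model directly with `output_hidden_states=True` and slicing the returned tuple with `hidden_states[-4:]`. The slicing-and-concatenation bookkeeping can be illustrated with plain lists (shapes and values below are made up; with real tensors the equivalent would be `torch.cat(hidden_states[-4:], dim=-1)`):

```python
def last_n_hidden_states(hidden_states, n=4):
    """hidden_states: one activation matrix per layer, each of shape
    (seq_len, hidden_size) for a single example. Returns the last n
    layers concatenated per token: shape (seq_len, n * hidden_size)."""
    last = hidden_states[-n:]
    seq_len = len(last[0])
    return [
        [value for layer in last for value in layer[token]]
        for token in range(seq_len)
    ]

# Toy example: 3 "layers", 2 tokens, hidden size 2.
hs = [
    [[0.0, 0.1], [0.2, 0.3]],  # layer 0
    [[1.0, 1.1], [1.2, 1.3]],  # layer 1
    [[2.0, 2.1], [2.2, 2.3]],  # layer 2
]
features = last_n_hidden_states(hs, n=2)
print(features)  # [[1.0, 1.1, 2.0, 2.1], [1.2, 1.3, 2.2, 2.3]]
```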
huggingface/setfit | 393 | AttributeError: 'list' object has no attribute 'shuffle' | I am getting the "AttributeError: 'list' object has no attribute 'shuffle'" error when I try to use setfit.
The dataset has two columns; one text and the second is the label column. | https://github.com/huggingface/setfit/issues/393 | closed | [
"question"
] | 2023-07-05T16:47:17Z | 2023-12-05T14:41:13Z | null | gpirge |
huggingface/datasets | 6,008 | Dataset.from_generator consistently freezes at ~1000 rows | ### Describe the bug
Whenever I try to create a dataset which contains images using `Dataset.from_generator`, it freezes around 996 rows. I suppose it has something to do with memory consumption, but there's more memory available.
Somehow it worked a few times but mostly this makes the datasets library much more ...
huggingface/dataset-viewer | 1,482 | diagnose why the mongo server uses so much CPU | we have many alerts on the use of CPU on the mongo server.
```
System: CPU (User) % has gone above 95
```
Why? | https://github.com/huggingface/dataset-viewer/issues/1482 | closed | [
"question",
"infra",
"improvement / optimization",
"P1"
] | 2023-07-04T16:04:06Z | 2024-02-06T14:49:20Z | null | severo |
huggingface/text-generation-inference | 536 | How to enable vllm | ### Feature request
How to enable vllm
### Motivation
How to enable vllm
### Your contribution
How to enable vllm | https://github.com/huggingface/text-generation-inference/issues/536 | closed | [] | 2023-07-04T05:20:21Z | 2023-07-04T10:56:29Z | null | lucasjinreal |
huggingface/transformers.js | 180 | [Question] Running transformers.js in a browser extension | Hello,
I'm trying to build a chrome extension that uses Transformers.js. When I try to import it in the background worker script, I first get an error that says process is not available, because apparently someone decided browser plugins shouldn't use process.env anymore. I found a solution that said to put
```
... | https://github.com/huggingface/transformers.js/issues/180 | closed | [
"question"
] | 2023-07-04T01:09:29Z | 2023-07-16T15:58:30Z | null | davidtbo |
huggingface/datasets | 6,003 | interleave_datasets & DataCollatorForLanguageModeling having a conflict ? | ### Describe the bug
Hi everyone :)
I have two local & custom datasets (1 "sentence" per line) which I split along the 95/5 lines for pre-training a Bert model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_dataset`:
- `tokenize()` runs fine
- `group_text()` runs fine
... | https://github.com/huggingface/datasets/issues/6003 | open | [] | 2023-07-03T17:15:31Z | 2023-07-03T17:15:31Z | 0 | PonteIneptique |
huggingface/dataset-viewer | 1,472 | How to show fan-in jobs' results in response ("pending" and "failed" keys) | In cache entries of fan-in jobs we have keys `pending` and `failed`. For example, config-level `/parquet` response has the following format (only "parquet_files" key):
```python
{
"parquet_files": [
{
"dataset": "duorc",
"config": "ParaphraseRC",
"split": "test",
... | https://github.com/huggingface/dataset-viewer/issues/1472 | open | [
"question",
"api",
"P2"
] | 2023-07-03T16:49:10Z | 2023-08-11T15:26:24Z | null | polinaeterna |
huggingface/blog | 1,281 | How to push or share lora adapter to hugging face hub? | Hi, I trained a Falcon model and already set the push_to_hub parameter in the training arguments, but it's not working.
```
from transformers import TrainingArguments
output_dir = "chatb_f"
per_device_train_batch_size = 4
gradient_accumulation_steps = 4
optim = "paged_adamw_32bit"
save_steps = 60
logging_steps = 10
le... | https://github.com/huggingface/blog/issues/1281 | open | [] | 2023-07-01T13:56:47Z | 2023-07-01T13:57:40Z | null | imrankh46 |
huggingface/diffusers | 3,918 | How to control the position of an object in an image using text in a txt2img model? | How to control the position of an object in an image using text in a txt2img model? I know this is easy to achieve in an img2img model, but how can it be done in a txt2img model?
Or, how can a model be fine-tuned to achieve this effect? For example, specifying x=0, y=1, which corresponds to the top-left corner.
I... | https://github.com/huggingface/diffusers/issues/3918 | closed | [
"stale"
] | 2023-07-01T02:44:24Z | 2023-08-08T15:03:15Z | null | XiaoyuZhuang |
huggingface/dataset-viewer | 1,464 | Change the way we represent ResponseAlreadyComputedError in the cache | When a "parallel" step has already been computed, an error is stored in the cache with the `ResponseAlreadyComputedError` error_code and HTTP status 500 (i.e.: if `split-first-rows-from-streaming` exists, then `split-first-rows-from-parquet` does not need to be computed).
But it makes it hard to monitor the "true" errors.... | https://github.com/huggingface/dataset-viewer/issues/1464 | closed | [
"question",
"improvement / optimization",
"P2"
] | 2023-06-30T18:13:34Z | 2024-02-23T09:56:05Z | null | severo |
huggingface/transformers.js | 176 | [Question] Embeddings for the Entire Document | <!-- QUESTION GOES HERE -->
Hi, thanks for all the effort, I really appreciate it. I enjoy coding in JS and do all things in JS.
Is it a good idea to load the entire JSON document to get embeddings? What tokenizer should I choose? I have a ton of valuable information in my key and value pairs. Or should I craft a s...
"question"
] | 2023-06-30T16:20:37Z | 2023-06-30T22:43:03Z | null | hadminh |
huggingface/sentence-transformers | 2,247 | how to tune hyperparameters using optuna or raytune | I want to finetune the MiniLM model and tune its hyperparameters, but the model.fit function doesn't return any loss. Nor does it show any performance metrics while training the model. What do you suggest in this case? | https://github.com/huggingface/sentence-transformers/issues/2247 | open | [] | 2023-06-30T13:16:04Z | 2023-06-30T13:16:04Z | null | nikshrimali |
huggingface/diffusers | 3,914 | how to fine-tuning the sd model in low resolutions | When fine-tuning the stable diffusion model, there is a parameter called 'resolution' which, if set to a value like 128 or 256 to reduce GPU memory usage, could potentially have negative effects on training performance and results.
Would setting the resolution to a value other than 512, such as 128 or 256, have any ... | https://github.com/huggingface/diffusers/issues/3914 | closed | [
"stale"
] | 2023-06-30T12:42:12Z | 2023-08-08T15:03:16Z | null | XiaoyuZhuang |
huggingface/optimum | 1,148 | Falcon-40b-instruct on Runpod | ### System Info
```shell
2 x A100 80GB
32 vCPU 251 GB RAM
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give detai... | https://github.com/huggingface/optimum/issues/1148 | closed | [
"bug"
] | 2023-06-29T18:48:05Z | 2023-06-30T15:39:29Z | 3 | Mrin7 |
huggingface/text-generation-inference | 509 | Question: How to estimate memory requirements for a certain batch size/ | I was just wondering how the GPU memory requirements vary depending on model size/batch size of request/max tokens. In doing some experiments where I needed the server to keep running for a long time, I found that it often ran out of memory and shut down - is there a way to estimate the memory footprint based on these ... | https://github.com/huggingface/text-generation-inference/issues/509 | closed | [] | 2023-06-29T15:39:51Z | 2023-07-03T01:41:02Z | null | vaishakkrishna |
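A first-order estimate of the memory footprint is model weights plus KV cache; activations and allocator fragmentation add overhead on top, so treat it as a lower bound. A sketch under the usual assumptions (fp16, one key and one value vector per layer per token; the 7B-class shape numbers below are illustrative, not taken from any specific model config):

```python
def kv_cache_bytes(batch_size, max_total_tokens, num_layers, num_kv_heads,
                   head_dim, bytes_per_element=2):
    """Rough KV-cache size: one key and one value vector of
    (num_kv_heads * head_dim) elements per token per layer."""
    per_token = 2 * num_layers * num_kv_heads * head_dim * bytes_per_element
    return batch_size * max_total_tokens * per_token

# Hypothetical 7B-class model: 32 layers, 32 KV heads, head_dim 128, fp16.
cache = kv_cache_bytes(batch_size=8, max_total_tokens=2048,
                       num_layers=32, num_kv_heads=32, head_dim=128)
weights = 7e9 * 2  # ~7B parameters at 2 bytes each in fp16
print(f"kv cache: {cache / 2**30:.1f} GiB, weights: {weights / 2**30:.1f} GiB")
# kv cache: 8.0 GiB, weights: 13.0 GiB
```

With these numbers, each extra request in the batch adds about 1 GiB of cache at 2048 total tokens, which is why long-running servers can run out of memory as batches grow.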
huggingface/transformers.js | 171 | [Doc request] Add an example guide of how to use it in Svelte (and deploy to HF Spaces) | Similar to the cool React guide, would be awesome to showcase how to use transformers.js from Svelte (and how to deploy the resulting app to Spaces)
No need to do a SvelteKit version IMO, Svelte would be sufficient
Maybe a good first issue for the community? | https://github.com/huggingface/transformers.js/issues/171 | open | [
"enhancement",
"help wanted",
"good first issue"
] | 2023-06-29T10:25:10Z | 2023-08-21T20:36:59Z | null | julien-c |
huggingface/optimum | 1,145 | How to use mean pooling with ONNX export with optimum-cli | ### System Info
```shell
- `optimum` version: 1.8.8
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)
- Tensorflow version (GPU?): not installed (cuda availabe: NA)
```
###... | https://github.com/huggingface/optimum/issues/1145 | open | [
"bug"
] | 2023-06-29T05:57:35Z | 2023-06-29T05:57:35Z | null | aunwesha |
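When a sentence-transformers model is exported with `optimum-cli`, the ONNX graph typically ends at `last_hidden_state`, so the mean pooling described in the model card has to be reapplied afterwards. The arithmetic is a mask-weighted average over token embeddings; a minimal sketch with plain lists (with real model output you would do the same thing vectorized in NumPy):

```python
def mean_pool(last_hidden_state, attention_mask):
    """Mask-aware mean pooling for one example.
    last_hidden_state: (seq_len, hidden_size) list of lists;
    attention_mask: (seq_len,) list of 0/1 flags."""
    hidden_size = len(last_hidden_state[0])
    sums = [0.0] * hidden_size
    count = 0
    for token_vec, mask in zip(last_hidden_state, attention_mask):
        if mask:
            count += 1
            for i, value in enumerate(token_vec):
                sums[i] += value
    count = max(count, 1)  # guard against an all-padding input
    return [s / count for s in sums]

# Third token is padding, so it must not pull the average down.
embedding = mean_pool([[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]], [1, 1, 0])
print(embedding)  # [2.0, 3.0]
```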
huggingface/chat-ui | 328 | Is there a way to see all of a user's history? | I want to see the chat history of all my users. | https://github.com/huggingface/chat-ui/issues/328 | closed | [
"question"
] | 2023-06-29T05:01:55Z | 2023-07-03T10:43:53Z | null | ildoonet |
huggingface/chat-ui | 327 | Tokens limits issue | Input validation error: `inputs` tokens + `max_new_tokens` must be <= 1512. Given: 603 `inputs` tokens and 1024 `max_new_tokens`
When deployed, the ui is working fine for like 2 or 3 promts, then every prompt we try we get a red line on top with a pop-up having this message. Please how can we remove this limitation o... | https://github.com/huggingface/chat-ui/issues/327 | open | [
"question",
"back"
] | 2023-06-28T18:09:19Z | 2023-09-18T14:03:59Z | null | Billyroot |
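The error occurs because the server enforces `inputs` tokens + `max_new_tokens` <= the configured total. One client-side workaround is to shrink `max_new_tokens` to whatever budget remains; the helper below is my own sketch of that idea (the server-side fix would be launching the text-generation backend with a larger total-token limit):

```python
def clamp_max_new_tokens(input_tokens, requested_new_tokens, max_total_tokens=1512):
    """Shrink max_new_tokens so that input_tokens + max_new_tokens
    stays within the server's total-token limit."""
    budget = max_total_tokens - input_tokens
    if budget <= 0:
        raise ValueError("prompt alone already exceeds the total token limit")
    return min(requested_new_tokens, budget)

# The failing case from the error message: 603 input tokens, 1024 requested.
print(clamp_max_new_tokens(603, 1024))  # 909
```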
huggingface/diffusers | 3,890 | How to apply the schedulers in diffusers to original SD | Hi! Thanks for this great work! Diffusers helps me a lot in many aspects!
Because of my recent work, I would like to know whether the schedulers in diffusers can be directly used in the original SD. If yes, what should I do?
Any response will be greatly appreciated! Again, thank you all for this convenient framework! | https://github.com/huggingface/diffusers/issues/3890 | closed | [
"stale"
] | 2023-06-28T11:02:41Z | 2023-08-05T15:04:00Z | null | volcverse |
huggingface/dataset-viewer | 1,446 | Add fields `viewer` and `preview` to /is-valid | For coherence with /valid, we should add the `viewer` and `preview` fields to /is-valid.
We should also consider deprecating the current `valid` field (as in https://github.com/huggingface/datasets-server/issues/1445). Note that it's in use in https://github.com/search?q=org%3Ahuggingface+datasets-server.huggingface... | https://github.com/huggingface/dataset-viewer/issues/1446 | closed | [
"question",
"api"
] | 2023-06-28T09:19:56Z | 2023-06-29T14:13:16Z | null | severo |
huggingface/dataset-viewer | 1,445 | Remove `.valid` from `/valid` endpoint? | We recently added two fields to `/valid`:
- `viewer`: all the datasets that have a valid dataset viewer
- `preview`: all the datasets that don't have a valid dataset viewer, but have a dataset preview
And the Hub does not use the original field `valid` anymore. We still fill it with the union of both sets.
Shoul... | https://github.com/huggingface/dataset-viewer/issues/1445 | closed | [
"question",
"api"
] | 2023-06-28T09:17:13Z | 2023-07-26T15:47:35Z | null | severo |
huggingface/diffusers | 3,882 | How to use models like chilloutmix to do inpainting task? | I tried as https://huggingface.co/docs/diffusers/api/diffusion_pipeline mentioned:
`text2img = StableDiffusionPipeline.from_pretrained("/data/cx/ysp/aigc-smart-painter/models/chilloutmix_NiPrunedFp32Fix")
inpaint = StableDiffusionInpaintPipeline(**text2img.components)
seger = RawSeger()
REST_API_URL = 'http://local... | https://github.com/huggingface/diffusers/issues/3882 | closed | [
"stale"
] | 2023-06-27T15:25:31Z | 2023-08-05T15:04:07Z | null | AdamMayor2018 |
huggingface/diffusers | 3,881 | How many images and how many epochs are required to fine tune LORA for stable diffusion on custom image dataset | I am trying to finetune LoRA on a movie dataset, but I am using a custom dataset which has 3-4 movie characters. Instead of using the actors' real names we are using the characters' in-movie names. How big would the dataset need to be in terms of total number of images, and number of images per character, and ...
"stale"
] | 2023-06-27T11:05:53Z | 2023-08-04T15:03:17Z | null | atharmzaalo2023 |
huggingface/peft | 636 | How to save full model weights and not just the adapters ? | ### System Info
peft==0.4.0.dev0
I'm not sure if this should be a bug report, so sorry if this is not convenient.
According to the `save_pretrained` method docstring, this saves the adapter model only and not the full model weights. Is there an option to save the full model weights? The use case is that ...