| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | 6,267 | Multi label class encoding | ### Feature request
I have a multi label dataset and I'd like to be able to class encode the column and store the mapping directly in the features just as I can with a single label column. `class_encode_column` currently does not support multi labels.
Here's an example of what I'd like to encode:
```
data = {
... | https://github.com/huggingface/datasets/issues/6267 | open | [
"enhancement"
] | 2023-09-27T22:48:08Z | 2023-10-26T18:46:08Z | 7 | jmif |
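A workaround that already works today (a sketch of the usual recipe, not the requested built-in support): cast the column to `Sequence(ClassLabel(...))`, which integer-encodes each list and stores the label mapping in the features.

```python
from datasets import ClassLabel, Dataset, Sequence

ds = Dataset.from_dict({"labels": [["a", "b"], ["b"], ["a", "c"]]})
names = sorted({label for row in ds["labels"] for label in row})

# Casting string lists to Sequence(ClassLabel) encodes every entry as an int
# and records the name<->id mapping in ds.features["labels"].
ds = ds.cast_column("labels", Sequence(ClassLabel(names=names)))
print(ds[0]["labels"], ds.features["labels"].feature.names)
```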
huggingface/huggingface_hub | 1,698 | How to change cache dir? | ### Describe the bug
by default, all downloaded models are stored on
> cache_path = '/root/.cache/huggingface/hub'
Is there a way to change this dir to something else?
I tried to set "HUGGINGFACE_HUB_CACHE"
```
import os
os.environ['HUGGINGFACE_HUB_CACHE'] = '/my_workspace/models_cache'
```
but it d... | https://github.com/huggingface/huggingface_hub/issues/1698 | closed | [
"bug"
] | 2023-09-27T07:45:30Z | 2023-09-27T09:08:34Z | null | adhikjoshi |
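For reference, a sketch of the usual resolution (assumed, not quoted from the thread): the environment variable must be set before the library is imported, or the cache location can be overridden per call.

```python
import os

# Set this *before* importing huggingface_hub; the default cache path is
# resolved once at import time, so setting it afterwards has no effect.
os.environ["HF_HUB_CACHE"] = "/my_workspace/models_cache"

from huggingface_hub import hf_hub_download

# Alternatively, override the cache directory per call:
path = hf_hub_download(
    repo_id="gpt2",
    filename="config.json",
    cache_dir="/my_workspace/models_cache",
)
print(path)
```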
huggingface/accelerate | 2,010 | How to set different seed for DDP data sampler for every epoch | Hello there!
I am using the following code to build my data loader.
```python
data_loader_train = DataLoader(
dataset_train,
collate_fn=collate_fn,
batch_size=cfg.data.train_batch_size,
num_workers=cfg.data.num_workers,
pin_memory=cfg.data.pin_memory,
)
data_loader... | https://github.com/huggingface/accelerate/issues/2010 | closed | [] | 2023-09-27T02:46:10Z | 2023-09-27T11:32:22Z | null | Mountchicken |
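A sketch of the standard pattern (assuming a plain PyTorch `DistributedSampler`; recent Accelerate versions expose a similar `set_epoch` on prepared dataloaders): reseed the sampler at the top of every epoch.

```python
from torch.utils.data import DataLoader, DistributedSampler

# dataset_train and num_epochs come from the issue's own setup.
sampler = DistributedSampler(dataset_train, shuffle=True, seed=42)
data_loader_train = DataLoader(dataset_train, sampler=sampler, batch_size=8)

for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # different, rank-consistent shuffle each epoch
    for batch in data_loader_train:
        ...
```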
huggingface/transformers | 26,412 | How to run Trainer + DeepSpeed + Zero3 + PEFT | ### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.24.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+c... | https://github.com/huggingface/transformers/issues/26412 | open | [
"WIP"
] | 2023-09-26T10:31:46Z | 2024-01-11T15:40:02Z | null | BramVanroy |
huggingface/setfit | 423 | [Q] How to examine correct/wrong predictions in trainer.evaluate() | Hello,
After doing "metrics = trainer.evaluate()" as shown in the example code, is there a way to examine which rows in the evaluation data set were predicted correctly?
Thanks! | https://github.com/huggingface/setfit/issues/423 | closed | [
"question"
] | 2023-09-25T23:41:53Z | 2023-11-24T13:04:45Z | null | youngjin-lee |
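`trainer.evaluate()` only returns aggregate metrics; a sketch of inspecting row-level predictions directly (the column names `text`/`label` are assumptions):

```python
# SetFitModel.predict accepts a list of raw texts.
preds = trainer.model.predict(eval_dataset["text"])
for text, pred, gold in zip(eval_dataset["text"], preds, eval_dataset["label"]):
    if pred != gold:
        print(f"wrong: pred={pred} gold={gold} | {text[:80]}")
```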
huggingface/chat-ui | 461 | The custom endpoint response doesn't stream even though the endpoint is sending streaming content | @nsarrazin I'm transmitting the streaming response to the chat UI, but it displays all the content simultaneously rather than progressively streaming the text generation part. Can you help me address this issue?
Reference: #380 | https://github.com/huggingface/chat-ui/issues/461 | open | [
"support"
] | 2023-09-25T07:43:57Z | 2023-10-29T11:21:04Z | 2 | nandhaece07 |
huggingface/autotrain-advanced | 279 | How to run AutoTrain Advanced UI locally | How to run AutoTrain Advanced UI locally | https://github.com/huggingface/autotrain-advanced/issues/279 | closed | [] | 2023-09-25T07:25:51Z | 2024-04-09T03:20:17Z | null | LronDC |
huggingface/transformers.js | 328 | [Question] React.js serve sentence bert in browser keep reporting models not found. | my codes:
```javascript
export const useInitTransformers = () => {
const init = async () => {
// @ts-ignore
env.allowLocalModels = false;
extractor = await pipeline(
"feature-extraction",
"Xenova/all-mpnet-base-v2",
);
};
return { init };
};
```
I'm building a frontend ... | https://github.com/huggingface/transformers.js/issues/328 | closed | [
"question"
] | 2023-09-24T15:51:47Z | 2024-10-18T13:30:11Z | null | bianyuanop |
huggingface/candle | 944 | Question: How to tokeninize text for Llama? | Hello everybody,
How can I tokenize text to use with Llama? I want to fine-tune Llama on my custom data, so how can I tokenize from a String and then detokenize the logits into a String?
I have looked at the Llama example for how to detokenize, but cannot find any clear documentation on how the implementation actuall... | https://github.com/huggingface/candle/issues/944 | closed | [] | 2023-09-23T18:19:56Z | 2023-09-23T23:01:13Z | null | EricLBuehler |
huggingface/transformers.js | 327 | Calling pipeline returns `undefined`. What are possible reasons? | The repository if you need it βΆβΆβΆ [China Cups](https://github.com/piscopancer/china-cups)
## Next 13.5 / server-side approach
Just started digging into your library. Sorry for stupidity.
### `src/app/api/translate/route.ts`
```ts
import { NextRequest, NextResponse } from 'next/server'
import { PipelineSi... | https://github.com/huggingface/transformers.js/issues/327 | closed | [
"question"
] | 2023-09-23T15:57:24Z | 2023-09-24T06:55:08Z | null | piscopancer |
huggingface/optimum | 1,410 | Export TrOCR to ONNX | I was trying to export my fine-tuned TrOCR model to ONNX using following command. I didn't get any errors, but in onnx folder only encoder model is saved.
```
!python -m transformers.onnx --model=model_path --feature=vision2seq-lm onnx/ --atol 1e-2
```
So, regarding this, I have 2 questions.
1. How to save decoder... | https://github.com/huggingface/optimum/issues/1410 | closed | [
"onnx"
] | 2023-09-23T09:19:50Z | 2024-10-15T16:21:52Z | 2 | VallabhMahajan1 |
huggingface/chat-ui | 459 | Chats Stop generation button is broken? | whenever I'm using the Chat UI on hf.co/chat, and I press the stop generation button it deletes both the prompt and the response? | https://github.com/huggingface/chat-ui/issues/459 | open | [
"support"
] | 2023-09-21T19:38:38Z | 2023-10-08T00:44:44Z | 4 | VatsaDev |
huggingface/chat-ui | 457 | Custom Models breaking Chat-ui | Setting a custom model in .env.local is now breaking chat-ui for me. @jackielii @nsarrazin
If I start mongo and then run ```npm run dev``` with a .env.local file including only the mongo url, there is no issue.
Then I add the following:
```
MODELS=`[
{
"name": "OpenAssistant/oasst-sft-4-pythia-12b-ep... | https://github.com/huggingface/chat-ui/issues/457 | closed | [
"support"
] | 2023-09-21T11:12:42Z | 2023-09-21T16:03:30Z | 10 | RonanKMcGovern |
huggingface/datasets | 6,252 | exif_transpose not done to Image (PIL problem) | ### Feature request
I noticed that some of my images loaded using PIL have some metadata related to exif that can rotate them when loading.
Since the dataset.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted) thus for tasks as object detection and layoutLM this ca... | https://github.com/huggingface/datasets/issues/6252 | closed | [
"enhancement"
] | 2023-09-21T08:11:46Z | 2024-03-19T15:29:43Z | 2 | rhajou |
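Until the loader handles it, a sketch of a user-side fix: normalize orientation with Pillow's `exif_transpose` in a `map`.

```python
from PIL import ImageOps

def fix_orientation(example):
    # Apply the EXIF orientation tag so width/height match the actual pixels.
    example["image"] = ImageOps.exif_transpose(example["image"])
    return example

ds = ds.map(fix_orientation)  # ds: a dataset with an Image column named "image"
```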
huggingface/optimum | 1,401 | BUG: running python file called onnx.py causes circular errors. | ### System Info
```shell
latest optimum, python 3.10, linux cpu.
```
### Who can help?
@JingyaHuang, @echarlaix, @michaelbenayoun
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as ... | https://github.com/huggingface/optimum/issues/1401 | open | [
"bug"
] | 2023-09-21T04:12:49Z | 2023-10-05T14:32:40Z | 1 | gidzr |
huggingface/diffusers | 5,124 | How to fine tune checkpoint .safetensor | ### Describe the bug
I tried to fine tuning a model from a checkpoint (i.e https://civitai.com/models/119202/talmendoxl-sdxl-uncensored-full-model)I converted the checkpoint to diffuser format using this library:
https://github.com/waifu-diffusion/sdxl-ckpt-converter/
The model converted works fine for inference a... | https://github.com/huggingface/diffusers/issues/5124 | closed | [
"bug",
"stale"
] | 2023-09-20T22:45:38Z | 2023-11-22T15:06:19Z | null | EnricoBeltramo |
huggingface/diffusers | 5,118 | how to use controlnet's reference_only function with diffusers?? | ### Model/Pipeline/Scheduler description
can anyone help me to understand how to use controlnet's reference_only function with diffusers
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links fo... | https://github.com/huggingface/diffusers/issues/5118 | closed | [
"stale"
] | 2023-09-20T10:17:53Z | 2023-11-08T15:07:34Z | null | sudip550 |
huggingface/transformers.js | 321 | [Question] Image Embeddings for ViT | Is it possible to get image embeddings using Xenova/vit-base-patch16-224-in21k model? We use feature_extractor to get embeddings for sentences. Can we use feature_extractor to get image embeddings?
```js
const model_id = "Xenova/vit-base-patch16-224-in21k";
const image = await RawImage.read("https://huggingface.co/... | https://github.com/huggingface/transformers.js/issues/321 | closed | [
"question"
] | 2023-09-20T01:22:08Z | 2024-01-13T01:25:03Z | null | hadminh |
huggingface/optimum | 1,395 | TensorrtExecutionProvider documentation | ### System Info
```shell
main, docs
```
### Who can help?
@fxmarty
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction... | https://github.com/huggingface/optimum/issues/1395 | open | [
"documentation",
"onnxruntime"
] | 2023-09-19T09:06:17Z | 2023-09-19T09:57:26Z | 1 | IlyasMoutawwakil |
huggingface/transformers.js | 317 | How to use xenova/transformers in VSCode Extension | Hey guys! I am trying to use xenova/transformers in CodeStory, we roll a vscode extension as well and I am hitting issues with trying to get the import working, here's every flavor of importing the library which I have tried to date.
```
const TransformersApi = Function('return import("@xenova/transformers")')();
... | https://github.com/huggingface/transformers.js/issues/317 | open | [
"question"
] | 2023-09-19T01:35:21Z | 2024-07-27T20:36:37Z | null | theskcd |
huggingface/candle | 894 | How to fine-tune Llama? | Hello everybody,
I am trying to fine-tune the Llama model, but cannot load the safetensors file. I have modified the training loop for debugging and development:
```rust
pub fn run(args: &crate::TrainingCmd, common_args: &crate::Args) -> Result<()> {
let config_path = match &args.config {
Some(config... | https://github.com/huggingface/candle/issues/894 | closed | [] | 2023-09-18T22:18:04Z | 2023-09-21T10:05:57Z | null | EricLBuehler |
huggingface/candle | 891 | How to do fine-tuning? | Hello everybody,
I was looking through the Candle examples and cannot seem to find an example of fine-tuning for Llama. It appears the only example present is for training from scratch. How should I fine-tune a pretrained model on my own data? Or, more generally, how should I fine tune a model that it loaded from a ... | https://github.com/huggingface/candle/issues/891 | closed | [] | 2023-09-18T18:37:42Z | 2024-07-08T15:13:01Z | null | EricLBuehler |
huggingface/transformers | 26,218 | How to manually set the seed of randomsampler generator when training using transformers trainer | ### System Info
I used a [script](https://github.com/huggingface/transformers/blob/v4.33.0/examples/pytorch/language-modeling/run_clm.py) to continue pre-training the llama2 model. In the second epoch, the loss began to explode, so I chose to reload the checkpoint to continue training, but the loss changes were comp... | https://github.com/huggingface/transformers/issues/26218 | closed | [] | 2023-09-18T14:19:11Z | 2023-11-20T08:05:37Z | null | young-chao |
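A sketch of the relevant knob (assuming a reasonably recent `transformers` version): `TrainingArguments.data_seed` seeds the sampler independently of the model seed, so resuming can reproduce the shuffle order.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    seed=42,         # weight init, dropout, etc.
    data_seed=1234,  # seeds the RandomSampler generator used for shuffling
)
```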
huggingface/transformers.js | 313 | [Question] How to use remote models for automatic-speech-recognition | I have an html file that is
```
<!DOCTYPE html>
<html>
<body>
<script type="module">
import { pipeline,env } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.0';
env.allowLocalModels = false;
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');
... | https://github.com/huggingface/transformers.js/issues/313 | closed | [
"question"
] | 2023-09-18T04:56:52Z | 2023-09-18T05:19:00Z | null | LehuyH |
huggingface/candle | 883 | Question: How to properly use VarBuilder? | Hello everybody,
I am working on implementing LoRA and want to use the VarBuilder system. However, when I try to get a tensor with get_with_hints, I get a CannotFindTensor Err. To create the Tensor, I do:
```rust
vb.pp("a").get_with_hints(
...lora specific shape...
"weight",
...lora specific hints...
)
```
However, th... | https://github.com/huggingface/candle/issues/883 | closed | [] | 2023-09-17T20:40:27Z | 2023-09-17T21:02:24Z | null | EricLBuehler |
huggingface/transformers.js | 310 | How to load model from the static folder path in nextjs or react or vanilla js? | <!-- QUESTION GOES HERE -->
| https://github.com/huggingface/transformers.js/issues/310 | closed | [
"question"
] | 2023-09-17T14:13:57Z | 2023-09-27T08:36:29Z | null | adnankarim |
huggingface/safetensors | 360 | The default file format used when loading the model? | I guess that huggingface loads .safetensor files by default when loading models. Is this mandatory? Can I choose to load files in .bin format? (Because I only downloaded weights in .bin format, and it reported an error "could not find a file in safeTensor format"). I do not find related information in docs.
Thanks for ... | https://github.com/huggingface/safetensors/issues/360 | closed | [] | 2023-09-15T14:56:13Z | 2023-09-19T10:34:57Z | 1 | Kong-Aobo |
huggingface/diffusers | 5,055 | How to download config.json if it is not in the root directory. | Is there any way to download vae for a model where config.json is not in the root directory?
```python
vae = AutoencoderKL.from_pretrained("redstonehero/kl-f8-anime2")
```
For example, as shown above, there is no problem if config.json exists in the root directory, but if it does not exist, an error will occur... | https://github.com/huggingface/diffusers/issues/5055 | closed | [] | 2023-09-15T11:37:47Z | 2023-09-16T00:15:58Z | null | suzukimain |
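A likely fix (the subfolder name here is an assumption about that repo's layout): point `from_pretrained` at the subdirectory that actually holds the VAE's `config.json`.

```python
from diffusers import AutoencoderKL

# subfolder="vae" is hypothetical; use the repo directory that contains
# the VAE's config.json and weights.
vae = AutoencoderKL.from_pretrained("redstonehero/kl-f8-anime2", subfolder="vae")
```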
huggingface/transformers.js | 305 | [Question] Can I work with Peft models through the API? | Let's say I have the following code in Python. How would I translate that to js?
````
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "samwit/bloom-7b1-lora-tagger"
config = PeftConfig.from_pretrained(peft_model_id)
model = Aut... | https://github.com/huggingface/transformers.js/issues/305 | open | [
"question"
] | 2023-09-14T21:02:59Z | 2023-09-16T00:16:03Z | null | chrisfel-dev |
huggingface/diffusers | 5,042 | How to give number of inference steps to Wuerstchen prior pipeline | **this below working with default DEFAULT_STAGE_C_TIMESTEPS but it always generates with exactly 29 number of prior inference steps**
```
prior_output = prior_pipeline(
prompt=prompt,
height=height,
width=width,
num_inference_steps=prior_num_inference_steps,
timesteps=DEF... | https://github.com/huggingface/diffusers/issues/5042 | closed | [
"bug"
] | 2023-09-14T15:21:31Z | 2023-09-20T07:41:19Z | null | FurkanGozukara |
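A plausible explanation (not confirmed in the thread): an explicit `timesteps=` schedule takes precedence over `num_inference_steps`, so dropping it lets the requested step count apply.

```python
# Sketch: the same call as in the issue, minus the explicit timesteps override.
prior_output = prior_pipeline(
    prompt=prompt,
    height=height,
    width=width,
    num_inference_steps=prior_num_inference_steps,  # now actually honored
)
```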
huggingface/chat-ui | 440 | Web Search not working | i have been having this issues where it just searches something but then never shows me the answer it shows max tokens
i just keep seeing this
first i see the links of the resources
but then it does nothing at all
[screenshot]... | https://github.com/huggingface/chat-ui/issues/440 | … | … | … | … | … | … |
huggingface/diffusers | 5,032 | … | …
base.fuse_lora(lora_scale=.7)
base.load_lora_weights("models/safetensors/SDXL/sd_xl_offset_example-lora_1.0.safetensors")
base.fuse_lora(lora_scale=.8)
Now, When I execute unfuse_lora() only the most recent one has been unfuse .
so,how to un... | https://github.com/huggingface/diffusers/issues/5032 | closed | [
"stale"
] | 2023-09-14T08:10:46Z | 2023-10-30T15:06:34Z | null | yanchaoguo |
huggingface/optimum | 1,384 | Documentation Request: Table or heuristic for Ortmodel Method to Encoder/Decoder to .onnx File to Task | ### Feature request
Hi there
Could you provide either a table (where explicit rules apply - see attached image), or a heuristic, so I can tell which ML models, optimised file types, with which tasks, apply to which inference methods and inference tasks?
The example table below will help to clarify, and isn't ... | https://github.com/huggingface/optimum/issues/1384 | closed | [
"Stale"
] | 2023-09-14T01:45:38Z | 2025-04-24T02:11:24Z | 4 | gidzr |
huggingface/optimum | 1,379 | Can't use bettertransformer to train vit? | ### System Info
```shell
Traceback (most recent call last):
File "test_bettertransformer_vit.py", line 95, in <module>
main()
File "test_bettertransformer_vit.py", line 92, in main
test_train_time()
File "test_bettertransformer_vit.py", line 86, in test_train_time
out_vit = model(pixel_values).... | https://github.com/huggingface/optimum/issues/1379 | closed | [
"bug"
] | 2023-09-13T12:49:53Z | 2025-02-20T08:38:26Z | 1 | lijiaoyang |
huggingface/text-generation-inference | 1,015 | how to text-generation-benchmark through the local tokenizer | The command i run in docker is
```
text-generation-benchmark --tokenizer-name /data/checkpoint-5600/
```
The error log is
```
2023-09-12T11:22:01.245495Z INFO text_generation_benchmark: benchmark/src/main.rs:132: Loading tokenizer
2023-09-12T11:22:01.245966Z INFO text_generation_benchmark: benchmark/src/... | https://github.com/huggingface/text-generation-inference/issues/1015 | closed | [
"Stale"
] | 2023-09-12T12:10:41Z | 2024-06-07T09:39:32Z | null | jessiewiswjc |
huggingface/autotrain-advanced | 260 | How to create instruction dataset (Q&A) for fine-tuning from PDFs? | | https://github.com/huggingface/autotrain-advanced/issues/260 | closed | [] | 2023-09-12T02:54:07Z | 2023-12-18T15:31:13Z | null | mahimairaja |
huggingface/transformers.js | 295 | [Question] Issue with deploying model to Vercel using NextJS and tRPC | Hi I'm trying to deploy my model to Vercel via NextJS and tRPC and have the .cache folder generated using the postinstall script
```
// @ts-check
let fs = require("fs-extra");
let path = require("path");
async function copyXenovaToLocalModules() {
const paths = [["../../../node_modules/@xenova", "../node_m... | https://github.com/huggingface/transformers.js/issues/295 | closed | [
"question"
] | 2023-09-11T11:13:11Z | 2023-09-12T15:23:17Z | null | arnabtarwani |
huggingface/transformers.js | 291 | [Question] Using transformers.js inside an Obsidian Plugin | I'm trying to run transfomer.js inside of Obsidian but running into some errors:
<img width="698" alt="Screenshot 2023-09-10 at 3 05 43 PM" src="https://github.com/xenova/transformers.js/assets/11430621/a6b4b83e-6a1e-44bb-9a46-c3966d058146">
This code is triggering the issues:
```js
class MyClassificationPipe... | https://github.com/huggingface/transformers.js/issues/291 | open | [
"question"
] | 2023-09-10T22:12:07Z | 2024-04-30T13:52:06Z | null | benjaminshafii |
huggingface/candle | 807 | How to use the kv_cache? | Hi, how would I use the kv_cache? Let's say I want a chat like type of thing, how would I save the kv_cache and load it so that all the tokens won't have to be computed again? | https://github.com/huggingface/candle/issues/807 | closed | [] | 2023-09-10T21:39:31Z | 2025-11-22T23:18:58Z | null | soupslurpr |
huggingface/transformers | 26,061 | How to perform batch inference? | ### Feature request
I want to pass a list of tests to model.generate.
text = "hey there"
inputs = tokenizer(text, return_tensors="pt").to(0)
out = model.generate(**inputs, max_new_tokens=184)
print(tokenizer.decode(out[0], skip_special_tokens=True))
### Motivation
I want to do batch inference.
### Y... | https://github.com/huggingface/transformers/issues/26061 | closed | [] | 2023-09-08T20:59:37Z | 2023-10-23T16:04:20Z | null | ryanshrott |
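Batched generation already works via padding; a minimal sketch (the pad-token setup is the usual assumption for causal LMs that define none):

```python
texts = ["hey there", "what is the capital of France?"]

tokenizer.pad_token = tokenizer.eos_token  # many causal LMs define no pad token
tokenizer.padding_side = "left"            # left-pad so generation continues each prompt
inputs = tokenizer(texts, return_tensors="pt", padding=True).to(model.device)

out = model.generate(**inputs, max_new_tokens=184)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```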
huggingface/text-generation-inference | 998 | How to insert a custom stop symbol, like </s>? | ### Feature request
nothing
### Motivation
nothing
### Your contribution
nothing | https://github.com/huggingface/text-generation-inference/issues/998 | closed | [] | 2023-09-08T07:06:08Z | 2023-09-08T07:13:38Z | null | babytdream |
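For reference, a sketch using the `huggingface_hub` client against a running TGI server (the endpoint URL is hypothetical): custom stop strings go in `stop_sequences`.

```python
from huggingface_hub import InferenceClient

client = InferenceClient("http://127.0.0.1:8080")  # hypothetical local TGI endpoint
out = client.text_generation(
    "Once upon a time",
    max_new_tokens=64,
    stop_sequences=["</s>"],  # generation halts once this string is produced
)
print(out)
```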
huggingface/safetensors | 355 | Safe tensors cannot be easily freed! | ### System Info
Hi,
I am using the safetensors for loading Falcon-180B model. I am loading the ckpts one by one on CPU, and then try to remove the tensors by simply calling `del` function. However, I am seeing that CPU memory keeps increasing until it runs out of memory and system crashes (I am also calling `gc.co... | https://github.com/huggingface/safetensors/issues/355 | closed | [
"Stale"
] | 2023-09-07T22:13:15Z | 2024-08-30T10:22:01Z | 4 | RezaYazdaniAminabadi |
huggingface/transformers.js | 285 | The generate API always returns the same number of tokens as output no matter what min_tokens is | Here is the code I am trying
```js
import { pipeline } from '@xenova/transformers';
import { env } from '@xenova/transformers';
let generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
let output = await generator('write a blog on Kubernetes?', {
max_new_tokens: 512,min_new_toke... | https://github.com/huggingface/transformers.js/issues/285 | closed | [
"bug"
] | 2023-09-07T13:30:39Z | 2023-09-17T21:57:14Z | null | allthingssecurity |
huggingface/chat-ui | 430 | Server does not support event stream content error for custom endpoints | Has anyone faced an issue such as "Server does not support event stream content" when parsing the custom endpoint results?
What is the solution for this error?
In order to reproduce the issue,
User enter prompts saying "how are you" -> call goes to custom endpoint -> Endpoint returns response as string -> er... | https://github.com/huggingface/chat-ui/issues/430 | closed | [] | 2023-09-07T10:01:18Z | 2023-09-15T00:01:56Z | 3 | nandhaece07 |
huggingface/sentence-transformers | 2,300 | How to convert embedding vector to text? | I use the script below to convert text to embeddings
```
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(text)
```
But how to convert embeddings to text? | https://github.com/huggingface/sentence-transformers/issues/2300 | closed | [] | 2023-09-07T09:19:22Z | 2025-09-01T11:44:34Z | null | chengzhen123 |
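Short answer: the encoding is lossy and not invertible; the practical substitute is nearest-neighbour search back into a corpus of candidate texts. A sketch:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = ["a cat sits on the mat", "stock prices fell sharply today"]
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query_emb = model.encode("kitten resting on a rug", convert_to_tensor=True)
hit = util.semantic_search(query_emb, corpus_emb, top_k=1)[0][0]
print(corpus[hit["corpus_id"]], hit["score"])  # nearest text, not a true inverse
```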
huggingface/transformers.js | 283 | [Question] Model type for tt/ee not found, assuming encoder-only architecture | Reporting this as requested by the warning message, but as a question because I'm not entirely sure if it's a bug:

Here's the code I ran:
```js
let quantized = false; // change to `true` for a much smaller ... | https://github.com/huggingface/transformers.js/issues/283 | closed | [
"question"
] | 2023-09-07T05:01:34Z | 2023-09-08T13:17:07Z | null | josephrocca |
huggingface/safetensors | 354 | Is it possible to append to tensors along a primary axis? | ### Feature request
it would be really cool to be able to append to a safetensor file so you can continue to add data along, say, a batch dimension
### Motivation
for logging data during train runs that can be visualized from an external tool. something like a live application that lazily loads the saved data. this ... | https://github.com/huggingface/safetensors/issues/354 | closed | [
"Stale"
] | 2023-09-06T17:54:56Z | 2023-12-11T01:48:44Z | 2 | verbiiyo |
huggingface/huggingface_hub | 1,643 | We couldn't connect to 'https://huggingface.co/' to load this model and it looks like distilbert-base-uncased is not the path to a directory conaining a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mod... | ### System Info
Hello, I have been using hugging face transformers with a lot of success. I have been able to create many successful fine-tuned pre-trained text classification models using various HF transformers and have been using HF integration with SageMaker in a SageMaker conda_pytorch_310 notebook.
my co... | https://github.com/huggingface/huggingface_hub/issues/1643 | closed | [] | 2023-09-06T17:18:45Z | 2023-09-07T15:51:12Z | null | a-rhodes-vcu |
huggingface/setfit | 417 | Passing multiple evaluation metrics to SetFitTrainer | Hi there, after reading the docs I find that one can easily get the f1 score or accuracy by passing the respective string as the `metric` argument to the trainer. However, how can I get both or even other metrics, such as f1_per_class?
Thanks :) | https://github.com/huggingface/setfit/issues/417 | closed | [
"question"
] | 2023-09-06T11:38:08Z | 2023-11-24T13:31:08Z | null | fhamborg |
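A sketch (assuming the trainer accepts a callable `metric`, as recent SetFit versions do): return a dict with every score you want.

```python
from setfit import SetFitTrainer
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(y_pred, y_test):
    return {
        "accuracy": accuracy_score(y_test, y_pred),
        "f1_macro": f1_score(y_test, y_pred, average="macro"),
        "f1_per_class": f1_score(y_test, y_pred, average=None).tolist(),
    }

# model and eval_ds as in the usual SetFit setup.
trainer = SetFitTrainer(model=model, eval_dataset=eval_ds, metric=compute_metrics)
metrics = trainer.evaluate()
```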
huggingface/optimum | 1,357 | [RFC] MusicGen `.to_bettertransformer()` integration | ### Feature request
Add support for MusicGen Better Transformer integration. MusicGen is composed of three sub-models:
1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5
2. MusicGen decoder: a lang... | https://github.com/huggingface/optimum/issues/1357 | closed | [] | 2023-09-06T10:25:50Z | 2024-01-10T17:31:44Z | 1 | sanchit-gandhi |
huggingface/diffusers | 4,906 | How to check whether the image is flagged as inappropriate automated? | Is there a way to know whether the generated image (without seeing it) was flagged as inappropriate? | https://github.com/huggingface/diffusers/issues/4906 | closed | [] | 2023-09-05T17:51:07Z | 2023-09-07T05:49:46Z | null | sarmientoj24 |
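With the safety checker enabled, the pipeline output already carries a per-image flag; a minimal sketch:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
out = pipe("a photo of an astronaut riding a horse")
# One boolean per generated image; True means the image was flagged.
print(out.nsfw_content_detected)
```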
huggingface/diffusers | 4,905 | How to convert pretrained SDXL .safetensors model to diffusers folder format | As SDXL is gaining adoption, more and more community-based models pop up that are just saved as a .safetensors file. E.g. the popular Realistic Vision: https://civitai.com/models/139562?modelVersionId=154590
When running train_dreambooth_lora_sdxl.py, the training script expects the diffusers folder format to ac... | https://github.com/huggingface/diffusers/issues/4905 | closed | [] | 2023-09-05T17:01:27Z | 2023-09-06T09:55:54Z | null | agcty |
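A sketch of the usual conversion (the checkpoint filename is hypothetical): load the single `.safetensors` file and re-save it in the diffusers folder layout.

```python
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file("realisticVisionXL.safetensors")
pipe.save_pretrained("realistic-vision-xl-diffusers")  # writes the folder format
```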
huggingface/transformers.js | 280 | [Question] How to run multiple pipeline or multiple modal? | <!-- QUESTION GOES HERE -->
I am trying to transcribe from an audio source and need to do multi-language translation. I tried transcribing using Xenova/whisper- and taking the text output and feeding it into the "Xenova/m2m100_418M" model, but due to the multiple pipelines it failed. Is there any way to achieve
this? | https://github.com/huggingface/transformers.js/issues/280 | closed | [
"question"
] | 2023-09-05T11:33:44Z | 2023-11-01T11:32:15Z | null | sundarshahi |
huggingface/optimum | 1,346 | BetterTransfomer Support for the GPTBigCode model | ### Feature request
is it possible to support GPTBigCode with BetterTransformer?
https://huggingface.co/docs/transformers/model_doc/gpt_bigcode
### Motivation
A very popular Decoder model for Code.
### Your contribution
hope you can achieve it. Thanks. | https://github.com/huggingface/optimum/issues/1346 | closed | [] | 2023-09-04T16:52:56Z | 2023-09-08T14:51:17Z | 5 | amarazad |
huggingface/chat-ui | 426 | `stream` is not supported for this model | Hello Experts,
Trying to run https://github.com/huggingface/chat-ui by providing models like EleutherAI/pythia-1b, gpt2-large. With all these models, there is this consitent error
{"error":["Error in `stream`: `stream` is not supported for this model"]}
Although I can see that hosted inference API for these models ar... | https://github.com/huggingface/chat-ui/issues/426 | open | [
"question",
"models"
] | 2023-09-02T05:30:47Z | 2023-12-24T16:39:21Z | null | newUserForTesting |
huggingface/diffusers | 4,871 | How to run "StableDiffusionXLPipeline.from_single_file"? | I got an error on the line "pipe = StableDiffusionXLPipeline." when I ran the following code. How can I solve it?
notes:
I don't have a model refiner, I just want to run a model with Diffusers XL
```
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import t... | https://github.com/huggingface/diffusers/issues/4871 | closed | [] | 2023-09-01T22:42:25Z | 2023-09-09T03:35:53Z | null | Damarcreative |
huggingface/optimum | 1,334 | Enable CLI export of decoder-only models without present outputs | ### Feature request
Currently `optimum-cli export onnx` only supports exporting text-generation models with present outputs (`--task text-generation`) or with past+present outputs (``--task text-generation-with-past`). It would be useful to be able to export a variant without any caching structures if they will not ... | https://github.com/huggingface/optimum/issues/1334 | closed | [] | 2023-09-01T15:56:27Z | 2023-09-13T11:43:36Z | 3 | mgoin |
huggingface/transformers.js | 274 | [Question] How to convert to ONNX a fine-tuned model | Hi, we're playing with this library to see if it can be useful for our project. I find it very easy and well done (congratulations).
The idea is not to use it directly as a frontend library but via node.js.
We've tried scripting a model directly from HF (google/flan-t5-small) and it worked but we're having trouble... | https://github.com/huggingface/transformers.js/issues/274 | open | [
"question"
] | 2023-09-01T15:27:21Z | 2023-09-01T16:12:12Z | null | mrddter |
huggingface/datasets | 6,203 | Support loading from a DVC remote repository | ### Feature request
Adding support for loading a file from a DVC repository, tracked remotely on a SCM.
### Motivation
DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible thr... | https://github.com/huggingface/datasets/issues/6203 | closed | [
"enhancement"
] | 2023-09-01T14:04:52Z | 2023-09-15T15:11:27Z | 4 | bilelomrani1 |
huggingface/optimum | 1,328 | Documentation for OpenVINO missing half() | ### System Info
```shell
N/A
```
### Who can help?
@echarlaix
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (min... | https://github.com/huggingface/optimum/issues/1328 | closed | [
"bug"
] | 2023-08-31T20:44:28Z | 2023-08-31T20:46:34Z | 1 | ngaloppo |
huggingface/autotrain-advanced | 249 | How to save model locally after sft | I am wondering how to save model locally after sft | https://github.com/huggingface/autotrain-advanced/issues/249 | closed | [] | 2023-08-31T14:59:04Z | 2023-08-31T17:01:44Z | null | Diego0511 |
huggingface/chat-ui | 425 | Is it possible to modify it so that .env.local environment variables are set at runtime? | Currently for every different deployment of Chat-UI it is required to rebuild the Docker image with different .env.local environment variables. Is it theoretically possible to have it so that 1 image can be used for all deployments, but with different secrets passed at runtime? What environment variables and for what r... | https://github.com/huggingface/chat-ui/issues/425 | open | [
"enhancement",
"back",
"hacktoberfest"
] | 2023-08-31T12:55:17Z | 2024-03-14T20:05:38Z | 4 | martinkozle |
huggingface/text-generation-inference | 959 | How to enter the docker image to modify the environment | ### System Info
dokcer image: ghcr.io/huggingface/text-generation-inference:1.0.2
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [X] My own modifications
### Reproduction
I want to enter the image to modify the environment, like: tiktoken.
`docker run -it ... | https://github.com/huggingface/text-generation-inference/issues/959 | closed | [] | 2023-08-31T11:14:13Z | 2023-08-31T20:12:55Z | null | Romaosir |
huggingface/safetensors | 352 | Attempt to convert `PygmalionAI/pygmalion-2.7b` to `safetensors` | ### System Info
- `transformers` version: 4.32.1
- Platform: Linux-5.15.0-1039-gcp-x86_64-with-glibc2.31
- Python version: 3.9.5
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow ... | https://github.com/huggingface/safetensors/issues/352 | closed | [
"Stale"
] | 2023-08-31T10:25:19Z | 2023-12-11T01:48:45Z | 2 | JulesBelveze |
huggingface/autotrain-advanced | 246 | how to load the fine-tuned model in the local? | hi
thanks for your super convenient package, which makes it easier for rookies like me to fine-tune a new model. However, as a rookie, I don't really know how to load my fine-tuned model and apply it.
I was fine-tuning in Google Colab and downloaded the model to my PC, but I don't know how to call it up?
thanks
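A sketch (local path hypothetical; assumes a merged checkpoint rather than a bare adapter): point `from_pretrained` at the downloaded folder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "./my-llm"  # hypothetical: the output folder downloaded from Colab
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

inputs = tokenizer("Hello", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```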
huggingface/diffusers | 4,849 | how to use multiple GPUs to train textual inversion? |
I train the textual inversion fine tuning cat toy example from [here](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
my env:
diffusers: 0.20.0
torch: 1.12.1+cu113
accelerate: 0.22.0
train script, as follow:
```
CUDA_VISIBLE_DEVICES="0,1,2,3" python -u textual_inversion.py ... | https://github.com/huggingface/diffusers/issues/4849 | closed | [] | 2023-08-31T02:56:39Z | 2023-09-11T01:07:49Z | null | Adorablepet |
huggingface/chat-ui | 423 | AI response appears without user message, then both appear after refresh. | I was experimenting with my own back-end and was wanting to get a feel for the interface. Here is what my code looks like:
```py
import json
import random
from fastapi import FastAPI, Request
from fastapi.responses import Response, StreamingResponse
app = FastAPI()
async def yielder():
yield "data:" +... | https://github.com/huggingface/chat-ui/issues/423 | closed | [] | 2023-08-30T19:04:14Z | 2023-09-13T19:44:23Z | 5 | konst-aa |
huggingface/datasets | 6,195 | Force to reuse cache at given path | ### Describe the bug
I have run the official example of MLM like:
```bash
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name togethercomputer/RedPajama-Data-1T \
--dataset_config_name arxiv \
--per_device_train_batch_size 10 \
--preprocessing_num_workers 20 ... | https://github.com/huggingface/datasets/issues/6195 | closed | [] | 2023-08-30T18:44:54Z | 2023-11-03T10:14:21Z | 2 | Luosuu |
huggingface/trl | 713 | How to use custom evaluate function with multi-gpu deepspeed | I am trying to use `deepspeed` multi-gpu training with `SFTTrainer` for a hh-rlhf. My modified trainer looks something like this
```python
class SFTCustomEvalTrainer(SFTTrainer):
def evaluate(
self,
eval_dataset = None,
ignore_keys = None,
metric_key_prefix: ... | https://github.com/huggingface/trl/issues/713 | closed | [] | 2023-08-30T17:33:40Z | 2023-11-10T15:05:23Z | null | abaheti95 |
huggingface/optimum | 1,323 | Optimisation and Quantisation for Translation models / tasks | ### Feature request
Currently, the opimisation and quantisation functions look for mode.onnx in a folder, and will perform opt and quant on those files. When exporting a translation targeted ONNX, multiple files for encoding and decoding, and these can't be optimised or quantised.
I've tried a hacky approach to ch... | https://github.com/huggingface/optimum/issues/1323 | closed | [] | 2023-08-30T06:36:17Z | 2023-09-29T00:47:39Z | 2 | gidzr |
huggingface/datasets | 6,193 | Dataset loading script method does not work with .pyc file | ### Describe the bug
The huggingface dataset library specifically looks for a ".py" file while loading a dataset with the loading-script approach, and it does not work with a ".pyc" file.
While deploying in production, it becomes an issue when we are restricted to using only .pyc files. Is there any workaround for this?
#... | https://github.com/huggingface/datasets/issues/6193 | open | [] | 2023-08-29T19:35:06Z | 2023-08-31T19:47:29Z | 3 | riteshkumarumassedu |
huggingface/transformers.js | 270 | [Question] How to stop warning log | I am using NodeJS to serve a translation model.
There are so many warning logs during translation processing. How can I stop this?
`2023-08-29 23:04:32.061 node[3167:31841] 2023-08-29 23:04:32.061977 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.2/encoder_att... | https://github.com/huggingface/transformers.js/issues/270 | open | [
"question"
] | 2023-08-29T16:08:41Z | 2025-08-02T15:48:45Z | null | tuannguyen90 |
huggingface/chat-ui | 420 | Error: ENOSPC: System limit for number of file watchers reached | Error: ENOSPC: System limit for number of file watchers reached, watch '/home/alvyn/chat-ui/vite.config.ts'
at FSWatcher.<computed> (node:internal/fs/watchers:247:19)
at Object.watch (node:fs:2418:34)
at createFsWatchInstance (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:... | https://github.com/huggingface/chat-ui/issues/420 | closed | [
"support"
] | 2023-08-29T14:54:49Z | 2023-09-20T15:11:26Z | 2 | alvynabranches |
huggingface/transformers.js | 268 | [Question] Chunks from transcription always empty text | This example works fine:

But ATM I am sending Float32 to the worker here (i also confirm the audio is valid by playing it back)
https://github.com/quantuminformation/coherency/blob/main/components/audio-recorder... | https://github.com/huggingface/transformers.js/issues/268 | open | [
"question"
] | 2023-08-29T13:49:00Z | 2023-11-04T19:48:30Z | null | quantuminformation |
huggingface/diffusers | 4,831 | How to preview the image during generation, any demo for gradio? | How to preview the image during generation, any demo for gradio? | https://github.com/huggingface/diffusers/issues/4831 | closed | [] | 2023-08-29T13:32:07Z | 2023-08-30T15:31:31Z | null | wodsoe |
huggingface/transformers.js | 267 | [Question] multilingual-e5-* models don't work with pipeline | I just noticed that the `Xenova/multilingual-e5-*` model family doesn't work in the transformers.js pipeline for feature-extraction with your (@xenova) onnx versions on HF.
My code throws an error.
```Javascript
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.4';
async function... | https://github.com/huggingface/transformers.js/issues/267 | closed | [
"question"
] | 2023-08-29T12:39:26Z | 2023-08-30T12:05:02Z | null | do-me |
huggingface/transformers | 25,803 | [Model] How to evaluate Idefics Model's ability with in context examples? | Hi the recent release of Idefics-9/80B-Instruct model is superbly promising!
We would like to evaluate them on a customized benchmarks with in context examples. May I ask how should I arrange the prompt template, especially for `instruct` version?
We had some problems previously when evaluating the model on sin... | https://github.com/huggingface/transformers/issues/25803 | closed | [] | 2023-08-28T19:39:02Z | 2023-10-11T08:06:48Z | null | Luodian |
huggingface/chat-ui | 417 | CodeLlama Instruct Configuration | Hello Guys,
Could you guide me in the right direction to get the configuration of the Code Llama Instruct model right?
I have this config so far:
```
{
"name": "Code Llama",
"endpoints": [{"url": "http://127.0.0.1:8080"}],
"description": "Programming Assistant",
"userMessageToken": "[I... | https://github.com/huggingface/chat-ui/issues/417 | open | [
"support",
"models"
] | 2023-08-28T13:42:09Z | 2023-09-13T18:17:50Z | 9 | schauppi |
huggingface/transformers.js | 265 | Unexpected token | I added this code to my React project.
```
import { pipeline } from "@xenova/transformers";
async function sentimentAnalysis() {
// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline("sentiment-analysis");
let out = await pipe("I love transformers!");
console.log(out);
}
sentim... | https://github.com/huggingface/transformers.js/issues/265 | closed | [
"question"
] | 2023-08-28T13:34:42Z | 2023-08-28T16:00:10Z | null | patrickinminneapolis |
huggingface/diffusers | 4,814 | How to add more weight to the text prompt in ControlNet? | Hi,
I want to know if there is a quick way of adding more weight to the text prompt in ControlNet during inference.
If so, which parameter needs to be changed?
Thanks, | https://github.com/huggingface/diffusers/issues/4814 | closed | [
"stale"
] | 2023-08-28T13:05:16Z | 2023-10-30T15:07:45Z | null | miquel-espinosa |
huggingface/autotrain-advanced | 239 | how to start without " pip install autotrain-advanced" | Dear,
Thanks for your work.
After installing through `pip`, running
**`autotrain llm --train --project_name my-llm --model luodian/llama-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 12 --num_train_epochs 3 --trainer sft`**
can achieve fine-tuning on your own data.
I... | https://github.com/huggingface/autotrain-advanced/issues/239 | closed | [] | 2023-08-28T10:02:37Z | 2023-12-18T15:30:42Z | null | RedBlack888 |
huggingface/datasets | 6,186 | Feature request: add code example of multi-GPU processing | ### Feature request
Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu
Currently the docs has a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here", however it didn't work f... | https://github.com/huggingface/datasets/issues/6186 | closed | [
"documentation",
"enhancement"
] | 2023-08-28T10:00:59Z | 2024-10-07T09:39:51Z | 18 | NielsRogge |
huggingface/autotrain-advanced | 238 | How to Train Consecutively Using Checkpoints | Hi, I've been using your project and it's been great.
I'm a complete beginner in the field of AI, so sorry for such a basic question.
Is there a way to train consecutively with checkpoints?
Thank you!
| https://github.com/huggingface/autotrain-advanced/issues/238 | closed | [] | 2023-08-28T08:31:30Z | 2023-12-18T15:30:42Z | null | YOUNGASUNG |
huggingface/transformers.js | 264 | [Question] TypeScript rewrite | <!-- QUESTION GOES HERE -->
Hi Joshua. I found your idea is extremely exciting.
I am a frontend developer who has worked on TypeScript professionally for three years. Would you mind me doing a TypeScript re-write, so this npm package can have a better DX. If I successfully transform the codebase into TypeScript and p... | https://github.com/huggingface/transformers.js/issues/264 | open | [
"question"
] | 2023-08-28T08:29:06Z | 2024-04-27T12:05:24Z | null | Lantianyou |
huggingface/text-generation-inference | 934 | How to use fine tune model in text-generation-inference | Hi Team
I fine tune the llama 2 13b model and using merge_and_upload() functionality, I merge the model.
How I can use this merge model using text-generation-inference.
**Following command given an error**
[error screenshot]... | https://github.com/huggingface/text-generation-inference/issues/934 | … | … | … | … | … | … |
huggingface/peft | 869 | … | …
### Reproduct... | https://github.com/huggingface/peft/issues/869 | closed | [] | 2023-08-27T18:03:06Z | 2024-11-05T09:49:01Z | null | Vincent-Li-9701 |
huggingface/transformers | 25,783 | How to re-tokenize the training set in each epoch? | I have a special tokenizer which can tokenize a sentence based on some probability distribution.
For example, 'I like green apple' -> '[I],[like],[green],[apple]' (30%) or '[I],[like],[green apple]' (70%).
Now in the training part, I want the Trainer to re-tokenize the dataset in each epoch. How can I do so? | https://github.com/huggingface/transformers/issues/25783 | closed | [] | 2023-08-27T16:23:25Z | 2023-09-01T13:01:43Z | null | tic-top |
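One sketch (`stochastic_tokenizer` stands in for the author's sampling tokenizer, named hypothetically here): tokenize lazily at fetch time instead of caching a single tokenization up front, so every epoch re-draws a segmentation.

```python
def collate(batch):
    texts = [example["text"] for example in batch]
    # stochastic_tokenizer is hypothetical: the author's probabilistic tokenizer.
    return stochastic_tokenizer(texts, padding=True, truncation=True,
                                return_tensors="pt")

# Pass the *untokenized* dataset plus data_collator=collate to Trainer;
# tokenization then happens per batch and is never reused across epochs.
```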
huggingface/optimum | 1,318 | Is it possible to compile pipeline (with tokenizer) to ONNX Runtime? | ### Feature request
Is it possible to compile the entire pipeline, tokenizer and transformer, to run with ONNX Runtime? My goal is to remove the `transformers` dependency entirely for runtime, to reduce serverless cold start.
### Motivation
I could not find any examples, and could not make this work, so I wonder if ... | https://github.com/huggingface/optimum/issues/1318 | open | [
"feature-request",
"onnxruntime"
] | 2023-08-26T17:57:52Z | 2023-08-28T07:58:13Z | 1 | j-adamczyk |
huggingface/trl | 695 | Reward is getting lower and lower with each epoch, What can be the issue in training? | Hello,
I am trying to optimize a T5 fine-tuned model for text generation task. At the moment, I am using BLEU score (between two texts) as a reward function. Before the optimization with PPO, model is able to produce an average BLEU score of 35% however with ppo, after each epoch, the reward is reducing so far. What... | https://github.com/huggingface/trl/issues/695 | closed | [] | 2023-08-26T00:22:04Z | 2023-11-01T15:06:14Z | null | sakinafatima |
huggingface/dataset-viewer | 1,733 | Add API fuzzer to the tests? | Tools exist, see https://openapi.tools/ | https://github.com/huggingface/dataset-viewer/issues/1733 | closed | [
"question",
"tests"
] | 2023-08-25T21:44:10Z | 2023-10-04T15:04:16Z | null | severo |
huggingface/diffusers | 4,778 | [Discussion] How to allow for more dynamic prompt_embed scaling/weighting/fusion? | We have a couple of issues and requests for the community that ask for the possibility to **dynamically** change certain knobs of Stable Diffusion that are applied at **every denoising step**.
- 1. **Prompt Fusion**. as stated [here](https://github.com/huggingface/diffusers/issues/4496). To implement prompt fusion ... | https://github.com/huggingface/diffusers/issues/4778 | closed | [
"stale"
] | 2023-08-25T10:03:17Z | 2023-11-09T21:42:39Z | null | patrickvonplaten |
huggingface/transformers.js | 260 | [Question] CDN download for use in a worker | Is there a way to get this to work inside a worker:
```html
<script type="module">
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.3';
</script>
```
I noticed you do this:
```js
import { pipeline, env } from "@xenova/transformers";
```
I'm trying to avoid any node modu... | https://github.com/huggingface/transformers.js/issues/260 | closed | [
"question"
] | 2023-08-24T18:24:51Z | 2023-08-29T13:57:19Z | null | quantuminformation |
huggingface/notebooks | 428 | How to load idefics fine tune model for inference? | Hi, recently I fine tune idefics model with peft. I am not able to load the model.
Is there any way to load the model with peft back for inference? | https://github.com/huggingface/notebooks/issues/428 | open | [] | 2023-08-24T13:39:22Z | 2024-04-25T10:39:55Z | null | imrankh46 |
huggingface/peft | 857 | How to load fine tune IDEFICS model with peft for inference? | ### Feature request
Request for IDEFICS model.
### Motivation
I fine-tuned IDEFICS on a custom dataset, but when I load it, an error is shown.
### Your contribution
Add class like AutoPeftModelforVisionTextToText() class, to easily load the model. | https://github.com/huggingface/peft/issues/857 | closed | [] | 2023-08-24T12:34:44Z | 2023-09-01T15:46:50Z | null | imrankh46 |
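A sketch (adapter path hypothetical): wrap the base model with `PeftModel.from_pretrained` instead of relying on an Auto class.

```python
from peft import PeftModel
from transformers import IdeficsForVisionText2Text

base = IdeficsForVisionText2Text.from_pretrained("HuggingFaceM4/idefics-9b")
model = PeftModel.from_pretrained(base, "path/to/idefics-adapter")  # hypothetical dir
```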
huggingface/datasets | 6,176 | how to limit the size of memory mapped file? | ### Describe the bug
Huggingface datasets use memory-mapped file to map large datasets in memory for fast access.
However, it seems like huggingface will occupy all the memory for memory-mapped files, which makes a troublesome situation since we cluster will distribute a small portion of memory to me (once it's over ... | https://github.com/huggingface/datasets/issues/6176 | open | [] | 2023-08-24T05:33:45Z | 2023-10-11T06:00:10Z | null | williamium3000 |
huggingface/autotrain-advanced | 225 | How to make inference the model | When I launch
**autotrain llm --train --project_name my-llm --model meta-llama/Llama-2-7b-hf --data_path . --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft**
I have this output
… | https://github.com/huggingface/autotrain-advanced/issues/225 | … | … | … | … | … | … |
huggingface/autotrain-advanced | 223 | … | …. How can I achieve that using autotrain? If I understand [this line](https://github.com/huggingface/autotrain-advanced/blob/main/src/autotrain/train... | https://github.com/huggingface/autotrain-advanced/issues/223 | closed | [] | 2023-08-23T15:32:16Z | 2023-12-18T15:30:39Z | null | MaxGfeller |
huggingface/trl | 677 | how to run reward_trainer.py | ValueError: Some specified arguments are not used by the HfArgumentParser: ['-f', '/Users/samittan/Library/Jupyter/runtime/kernel-32045810-5e16-48f4-8d44-c7a7f975f8a4.json']
| https://github.com/huggingface/trl/issues/677 | closed | [] | 2023-08-23T09:39:52Z | 2023-11-02T15:05:32Z | null | samitTAN |
huggingface/chat-ui | 412 | preprompt not being injected for Llama 2 | 1. When I alter the preprompt for a Llama 2 type model, it appears to have no impact. It's as though the preprompt is not there. Sample config for .env.local:
```
MODELS=`[
{
"name": "Trelis/Llama-2-7b-chat-hf-function-calling",
"datasetName": "Trelis/function_calling_extended",
"descrip... | https://github.com/huggingface/chat-ui/issues/412 | closed | [
"support",
"models"
] | 2023-08-23T09:15:24Z | 2023-09-18T12:48:07Z | 7 | RonanKMcGovern |