repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | 6,721 | Hi, do you know how to load the dataset from a local file now? | Hi, if I want to load the dataset from a local file, how do I specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
| https://github.com/huggingface/datasets/issues/6721 | open | [] | 2024-03-07T13:58:40Z | 2024-03-31T08:09:25Z | null | Gera001 |
huggingface/transformers.js | 633 | Is 'aggregation_strategy' parameter available for token classification pipeline? | ### Question
Hi, I have a question.
From HuggingFace Transformers documentation, they have **'aggregation_strategy'** parameter in token classification pipeline. [Link](https://huggingface.co/docs/transformers/en/main_classes/pipelines#transformers.TokenClassificationPipeline.aggregation_strategy)
I need to know: does this library provide this parameter?
Thanks.
| https://github.com/huggingface/transformers.js/issues/633 | open | [
"help wanted",
"good first issue",
"question"
] | 2024-03-07T07:02:55Z | 2024-06-09T15:16:56Z | null | boat-p |
huggingface/swift-coreml-diffusers | 93 | Blocked at "loading" screen - how to reset the app / cache ? | After playing a bit with the app, it now stays in "Loading" state at startup (see screenshot)
I tried to remove the cache in `~/Library/Application Support/hf-diffusion-models`, but it just causes a re-download.
How can I reset the app, delete all files it created, and start as if on a fresh machine?
Alternatively, how can I get past the "Loading" screen?
<img width="1016" alt="image" src="https://github.com/huggingface/swift-coreml-diffusers/assets/401798/15c7c67a-f61f-4855-a11e-ea7bd61b0a09">
| https://github.com/huggingface/swift-coreml-diffusers/issues/93 | open | [] | 2024-03-06T12:50:29Z | 2024-03-10T11:24:49Z | null | sebsto |
huggingface/chat-ui | 905 | Fail to create assistant. | I use the docker image chat-ui-db as the frontend, text-generation-inference as the inference backend, and meta-llamaLlama-2-70b-chat-hf as the model. Using the image and model mentioned above, I set up a large language model dialog service on server A. Assume that the IP address of the server A is x.x.x.x.
I use docker compose to deploy it. The content of docker-compose.yml is as follows:
```
services:
chat-ui:
image: chat-ui-db:latest
ports:
- "3000:3000"
restart: unless-stopped
textgen:
image: huggingface/text-generation-inference:1.4
ports:
- "8080:80"
command: ["--model-id", "/data/models/meta-llamaLlama-2-70b-chat-hf"]
volumes:
- /home/test/llm-test/serving/data:/data
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 8
capabilities: [gpu]
restart: unless-stopped
```
I set ENABLE_ASSISTANTS=true in .env.local to enable the assistants feature.
I logged into localhost:3000 using chrome, clicked the settings button, and then clicked the create new assistant button. Enter the information in the Name and Description text boxes, select a model, and enter the information in the User start messages and Instructions (system prompt) text boxes. Finally, click the Create button. I can create an assistant just fine.
Then I went to x.x.x.x:3000 from a browser on a different server and accessed the service. (One may ask how I can access server A's services from other servers without logging in; the solution is to use nginx as an HTTP-to-HTTPS reverse proxy (https://www.inovex.de/de/blog/code-assistant-how-to-self-host-your-own/).) I clicked the settings button and then the create new assistant button, entered the information in the Name and Description text boxes, selected a model, and entered the information in the User start messages and Instructions (system prompt) text boxes. Finally, I clicked the Create button. The webpage did not respond, and the container logs showed nothing either. I could not create an assistant.
What should I do?
Do I have to enable login authentication to create an assistant, unless I'm accessing it from localhost? I'm on a LAN and I can't get user authentication through Hugging Face or Google. I have also tried to set up a user authentication service using Keycloak and configured .env.local to enable OpenID login, but the attempt failed. See this page (https://github.com/huggingface/chat-ui/issues/896) for the specific problem.
| https://github.com/huggingface/chat-ui/issues/905 | open | [] | 2024-03-06T08:33:03Z | 2024-03-06T08:33:03Z | 0 | majestichou |
huggingface/chat-ui | 904 | Running the project with `npm run dev`, but it does not hot reload. | Am I alone in this issue or are you just developing without hot reload? Does anyone have any ideas on how to resolve it?
**UPDATES:**
It happens whenever you're running it on WSL.
I guess this is an unrelated issue so feel free to close, but would still be nice to know how to resolve this. | https://github.com/huggingface/chat-ui/issues/904 | closed | [] | 2024-03-06T03:34:21Z | 2024-03-06T16:07:11Z | 2 | CakeCrusher |
huggingface/dataset-viewer | 2,550 | More precise dataset size computation | Currently, the Hub uses the `/size` endpoint's `num_bytes_original_files` value to display the `Size of downloaded dataset files` on a dataset's card page. However, this value does not consider a possible overlap between the configs' data files (and simply [sums](https://github.com/huggingface/datasets-server/blob/e4aac49c4d3c245cb3c0e48695b7d24a934a8377/services/worker/src/worker/job_runners/dataset/size.py#L97-L98) all the configs' sizes up), in which case the shared files need to be downloaded only once. Both `datasets` and `hfh` recognize this (by downloading them once), so the size computation should account for it, too.
cc @guipenedo who reported this behavior first | https://github.com/huggingface/dataset-viewer/issues/2550 | open | [
"question",
"P2"
] | 2024-03-05T22:22:24Z | 2024-05-24T20:59:36Z | null | mariosasko |
huggingface/datasets | 6,719 | Is there any way to solve hanging of IterableDataset using split by node + filtering during inference | ### Describe the bug
I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node`, but going through the IterableDatasetShard in `accelerate` and `transformers` instead is very slow. When I filter after applying `split_dataset_by_node`, it results in shards of unequal size, due to unequal numbers of samples being filtered from each one.
The distributed process hangs when trying to accomplish this. Is there any way to resolve this or is it impossible to implement?
### Steps to reproduce the bug
Here is a toy example of what I am trying to do that reproduces the behavior
```
# torchrun --nproc-per-node 2 file.py
import os
import pandas as pd
import torch
from accelerate import Accelerator
from datasets import Features, Value, load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader
accelerator = Accelerator(device_placement=True, dispatch_batches=False)
if accelerator.is_main_process:
if not os.path.exists("scratch_data"):
os.mkdir("scratch_data")
n_shards = 4
for i in range(n_shards):
df = pd.DataFrame({"id": list(range(10 * i, 10 * (i + 1)))})
df.to_parquet(f"scratch_data/shard_{i}.parquet")
world_size = accelerator.num_processes
local_rank = accelerator.process_index
def collate_fn(examples):
input_ids = []
for example in examples:
input_ids.append(example["id"])
return torch.LongTensor(input_ids)
dataset = load_dataset(
"parquet", data_dir="scratch_data", split="train", streaming=True
)
dataset = (
split_dataset_by_node(dataset, rank=local_rank, world_size=world_size)
.filter(lambda x: x["id"] < 35)
.shuffle(seed=42, buffer_size=100)
)
batch_size = 2
train_dataloader = DataLoader(
dataset,
batch_size=batch_size,
collate_fn=collate_fn,
num_workers=2
)
for x in train_dataloader:
x = x.to(accelerator.device)
print({"rank": local_rank, "id": x})
y = accelerator.gather_for_metrics(x)
if accelerator.is_main_process:
print("gathered", y)
```
### Expected behavior
Is there any way to continue training/inference on the GPUs that still have data left without waiting for the others? Is it impossible to filter when using `split_dataset_by_node`?
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.10.209-198.812.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.6.0 | https://github.com/huggingface/datasets/issues/6719 | open | [] | 2024-03-05T15:55:13Z | 2024-03-05T15:55:13Z | 0 | ssharpe42 |
huggingface/chat-ui | 899 | Bug--Llama-2-70b-chat-hf error: `truncate` must be strictly positive and less than 1024. Given: 3072 | I use the docker image chat-ui-db as the frontend, text-generation-inference as the inference backend, and meta-llamaLlama-2-70b-chat-hf as the model.
In the model field of the .env.local file, I have the following settings
```
MODELS=`[
{
"name": "meta-llama/Llama-2-70b-chat-hf",
"endpoints": [{
"type" : "tgi",
"url": "http://textgen:80",
}],
"preprompt": " ",
"chatPromptTemplate" : "<s>[INST] <<SYS>>\n{{preprompt}}\n<</SYS>>\n\n{{#each messages}}{{#ifUser}}{{content}} [/INST] {{/ifUser}}{{#ifAssistant}}{{content}} </s><s>[INST] {{/ifAssistant}}{{/each}}",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop" : ["</s>", "</s><s>[INST]"]
}
}
]`
```
This setting is the same as the setting for Llama-2-70b-chat-hf in the .env.template file in the chat-ui repository.
Then I typed the question in the input box, and an error occurred.
The following error information is found in the log:
```
textgen | 2024-03-05T20:00:38.883413Z ERROR compat_generate{default_return_full_text=false compute_type=Extension(ComputeType("8-nvidia-a100-sxm4-40gb"))}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.1), repetition_penalty: Some(1.2), frequency_penalty: None, top_k: Some(50), top_p: Some(0.95), typical_p: None, do_sample: false, max_new_tokens: Some(1024), return_full_text: Some(false), stop: ["</s>", "</s><s>[INST]"], truncate: Some(3072), watermark: false, details: false, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None }}:async_stream:generate_stream: text_generation_router::infer: router/src/infer.rs:123: `truncate` must be strictly positive and less than 1024. Given: 3072
chat-ui | Error: Input validation error: `truncate` must be strictly positive and less than 1024. Given: 3072
chat-ui | at streamingRequest (file:///app/node_modules/@huggingface/inference/dist/index.mjs:323:19)
chat-ui | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
chat-ui | at async textGenerationStream (file:///app/node_modules/@huggingface/inference/dist/index.mjs:673:3)
chat-ui | at async generateFromDefaultEndpoint (file:///app/.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:39:20)
chat-ui | at async summarize (file:///app/.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:287:10)
chat-ui | at async file:///app/.svelte-kit/output/server/entries/endpoints/conversation/_id_/_server.ts.js:607:26
textgen | 2024-03-05T20:00:38.910266Z ERROR compat_generate{default_return_full_text=false compute_type=Extension(ComputeType("8-nvidia-a100-sxm4-40gb"))}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.1), repetition_penalty: Some(1.2), frequency_penalty: None, top_k: Some(50), top_p: Some(0.95), typical_p: None, do_sample: false, max_new_tokens: Some(1024), return_full_text: Some(false), stop: ["</s>", "</s><s>[INST]"], truncate: Some(3072), watermark: false, details: false, decoder_input_details: false, seed: None, top_n_tokens: None, grammar: None }}:async_stream:generate_stream: text_generation_router::infer: router/src/infer.rs:123: `truncate` must be strictly positive and less than 1024. Given: 3072
```
I set "truncate" to 1000, everything is ok.
**"truncate" for Llama-2-70b-chat-hf in the .env.template file in the chat-ui repository is 3072. I think the 3072 should work fine. I don't know how webpage https://huggingface.co/chat/ sets this parameter.**
| https://github.com/huggingface/chat-ui/issues/899 | open | [
"support",
"models"
] | 2024-03-05T12:27:45Z | 2024-03-06T00:59:10Z | 4 | majestichou |
huggingface/tokenizers | 1,468 | How to convert tokenizers.tokenizer to XXTokenizerFast in transformers? | ### Motivation
I followed the guide [build-a-tokenizer-from-scratch](https://huggingface.co/docs/tokenizers/quicktour#build-a-tokenizer-from-scratch) and got a single tokenizer.json from my corpus. Since I'm not sure if it is compatible with the trainer, I want to convert it back to XXTokenizerFast in transformers.
### Observation
In [llama2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf/tree/main), the tokenizer seems to consist of:
[tokenizer.json](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/tokenizer.json) ✅ I have this
[tokenizer.model](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/tokenizer.model) ✖ I don't have this, and I'm not sure of its usage
[tokenizer_config.json](https://huggingface.co/meta-llama/Llama-2-7b-hf/blob/main/tokenizer_config.json) ✖ I don't have this, but it looks less important; I can set it manually.
Initializing a LlamaTokenizerFast from scratch through the \_\_init\_\_ function seems to require tokenizer.model and tokenizer.json, but I don't have a tokenizer.model.
```
def __init__(
self,
vocab_file=None,
tokenizer_file=None,
clean_up_tokenization_spaces=False,
unk_token="<unk>",
bos_token="<s>",
eos_token="</s>",
add_bos_token=True,
add_eos_token=False,
use_default_system_prompt=False,
add_prefix_space=None,
**kwargs,
):
```
After diving deeper into [transformers.PreTrainedTokenizerFast._save_pretrained](https://github.com/huggingface/transformers/blob/4fc708f98c9c8d5cb48e8a2639e3f7a21c65802f/src/transformers/tokenization_utils_fast.py#L678), I found a code snippet suggesting that the fast tokenizer in transformers saves only tokenizer.json, without tokenizer.model:
```
if save_fast:
tokenizer_file = os.path.join(
save_directory, (filename_prefix + "-" if filename_prefix else "") + TOKENIZER_FILE
)
self.backend_tokenizer.save(tokenizer_file)
file_names = file_names + (tokenizer_file,)
```
### Trial
So I just use xxTokenizerFast.from_pretrained('dir_contained_my_tokenizer.json'), and it works with the default config; I can modify it manually and call save_pretrained to get tokenizer_config.json.
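A hedged sketch of the conversion described above, using a tiny purely illustrative WordLevel vocab: a `tokenizers` tokenizer.json can be wrapped directly in `PreTrainedTokenizerFast` via its `tokenizer_file` argument, with no tokenizer.model involved.

```python
# Train-free toy example: build a tokenizer with `tokenizers`, save its
# tokenizer.json, and load it back as a transformers fast tokenizer.
import os
import tempfile

from tokenizers import Tokenizer, models, pre_tokenizers
from transformers import PreTrainedTokenizerFast

tok = Tokenizer(models.WordLevel({"[UNK]": 0, "hello": 1, "world": 2}, unk_token="[UNK]"))
tok.pre_tokenizer = pre_tokenizers.Whitespace()

tmp = tempfile.mkdtemp()
json_path = os.path.join(tmp, "tokenizer.json")
tok.save(json_path)  # produces the single tokenizer.json file

fast = PreTrainedTokenizerFast(tokenizer_file=json_path, unk_token="[UNK]")
print(fast("hello world")["input_ids"])
```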
### Query
I still have some questions I need help with.
1. What's the role of tokenizer.model? Is it a subset of tokenizer.json?
2. Is my conversion method correct? Or is there a better method? | https://github.com/huggingface/tokenizers/issues/1468 | closed | [
"Stale",
"planned"
] | 2024-03-05T06:32:27Z | 2024-07-21T01:57:17Z | null | rangehow |
huggingface/gsplat.js | 71 | How to support VR? | It's great to be able to use vr on a vr device. | https://github.com/huggingface/gsplat.js/issues/71 | closed | [] | 2024-03-05T05:03:17Z | 2024-03-05T07:55:53Z | null | did66 |
huggingface/tgi-gaudi | 95 | How to use FP8 feature in TGI-gaudi | ### System Info
The FP8 quantization feature has been incorporated into the TGI-Gaudi branch. However, guidance is needed on how to utilize it. The process involves running FP8 quantization through Measurement Mode and Quantization Mode. How do I enable FP8 using the TGI 'docker run' command? Could you kindly provide a step-by-step guide on utilizing this feature?
### Information
- [ ] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
Run the FP8 quantization feature using "docker run" command.
### Expected behavior
A clear guide can be provided to use the FP8 quantization feature. | https://github.com/huggingface/tgi-gaudi/issues/95 | closed | [] | 2024-03-05T02:50:08Z | 2024-05-06T09:03:15Z | null | lvliang-intel |
huggingface/accelerate | 2,521 | how to set `num_processes` in multi-node training | Is it the total num of gpus or the number of gpus on a single node?
I have seen contradictory signals in the code.
https://github.com/huggingface/accelerate/blob/ee004674b9560976688e1a701b6d3650a09b2100/docs/source/usage_guides/ipex.md?plain=1#L139 https://github.com/huggingface/accelerate/blob/ee004674b9560976688e1a701b6d3650a09b2100/src/accelerate/state.py#L154
Here, it seems to be the total number of GPUs.
https://github.com/huggingface/accelerate/blob/ee004674b9560976688e1a701b6d3650a09b2100/examples/slurm/submit_multigpu.sh#L27
Here, it seems to be the number of GPUs per node. | https://github.com/huggingface/accelerate/issues/2521 | closed | [] | 2024-03-04T13:03:57Z | 2025-12-22T01:53:32Z | null | lxww302 |
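A hedged reading of the two references in the question above (an assumption, not an official answer): for multi-node launches, `num_processes` is the total number of processes across all machines, and multi-node launch scripts typically compute it as:

```python
# The single-GPU-node slurm example's value coincidentally equals the
# GPUs on that one node; for several machines the total is:
num_machines = 2
gpus_per_node = 8
num_processes = num_machines * gpus_per_node  # value for --num_processes
print(num_processes)  # 16
```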
huggingface/distil-whisper | 95 | How to use distil-whisper-large-v3-de-kd model from HF? | Officially, multi-language support is still not implemented in distil-whisper.
But I noticed that the esteemed @sanchit-gandhi uploaded a German model for distil-whisper to HuggingFace, called 'distil-whisper-large-v3-de-kd'.
How can I use this specific model for transcribing something? | https://github.com/huggingface/distil-whisper/issues/95 | open | [] | 2024-03-04T12:01:13Z | 2024-04-02T09:40:46Z | null | Arche151 |
huggingface/transformers.js | 623 | Converted QA model answers in lower case, original model does not. What am I doing wrong? | ### Question
I have converted [deutsche-telekom/electra-base-de-squad2](https://huggingface.co/deutsche-telekom/electra-base-de-squad2) to ONNX using ```python -m scripts.convert --quantize --model_id deutsche-telekom/electra-base-de-squad2```. The ONNX model, used with the same code, returns answers in lower case, whereas the original model returns the answer respecting case. I noticed that the ```tokenizer_config.json``` in the original model contains ```"do_lower_case": false```. But even setting this to ```true``` before converting does not work. What am I doing wrong?
Code is straight forward:
```javascript
import { pipeline } from '@xenova/transformers';
const pipe = await pipeline('question-answering', 'conventic/electra-base-de-squad2-onnx');
const context = "<context here, cased>";
const question = "<question here, cased>";
const out = await pipe(question, context);
console.log(out);
``` | https://github.com/huggingface/transformers.js/issues/623 | open | [
"question"
] | 2024-03-04T11:56:44Z | 2024-03-04T11:56:44Z | null | MarceloEmmerich |
huggingface/transformers.js | 618 | How do I convert a DistilBERT model to quantized ONNX? | ### Question
Note, https://huggingface.co/docs/transformers.js/en/index#convert-your-models-to-onnx is a broken link.
I have a simple DistilBERT model I'm trying to load with the examples/next-server (wdavies/public-question-in-text)
I tried the simplest version of converting to ONNX (wdavies/public-onnx-test, following https://huggingface.co/docs/transformers/en/serialization#exporting-a--transformers-model-to-onnx-with-optimumonnxruntime), but I'm still getting an error message saying it's looking for quantized_onnx.
According to all I can see, including this blog post, you seem to have to choose a specific hardware architecture? Is this true? How will I know what the client browser (or even mine) is running on? Help? I just want to run this simple model in examples/next-server.
https://huggingface.co/blog/optimum-inference#34-use-the-ortquantizer-to-apply-dynamic-quantization
| https://github.com/huggingface/transformers.js/issues/618 | closed | [
"question"
] | 2024-03-01T16:55:16Z | 2024-03-02T00:47:40Z | null | davies-w |
huggingface/sentence-transformers | 2,521 | Is the implementation of `MultipleNegativesRankingLoss` right? | It is confusing why the labels are `range(len(scores))`.
```python
class MultipleNegativesRankingLoss(nn.Module):
def __init__(self, model: SentenceTransformer, scale: float = 20.0, similarity_fct=util.cos_sim):
super(MultipleNegativesRankingLoss, self).__init__()
self.model = model
self.scale = scale
self.similarity_fct = similarity_fct
self.cross_entropy_loss = nn.CrossEntropyLoss()
def forward(self, sentence_features: Iterable[Dict[str, Tensor]], labels: Tensor):
reps = [self.model(sentence_feature)["sentence_embedding"] for sentence_feature in sentence_features]
embeddings_a = reps[0]
embeddings_b = torch.cat(reps[1:])
scores = self.similarity_fct(embeddings_a, embeddings_b) * self.scale
labels = torch.tensor(
range(len(scores)), dtype=torch.long, device=scores.device
) # Example a[i] should match with b[i]
return self.cross_entropy_loss(scores, labels)
def get_config_dict(self):
return {"scale": self.scale, "similarity_fct": self.similarity_fct.__name__}
``` | https://github.com/huggingface/sentence-transformers/issues/2521 | closed | [
"question"
] | 2024-03-01T10:13:35Z | 2024-03-04T07:01:12Z | null | ghost |
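The `range(len(scores))` question in the `MultipleNegativesRankingLoss` issue above can be illustrated with a toy pure-torch example (toy embedding values, not sentence-transformers internals): with in-batch negatives, the positive for anchor i sits at column i of the (batch x batch) score matrix, so the cross-entropy target for row i is simply i.

```python
# Every other column of row i acts as a negative for anchor i.
import torch

anchors = torch.eye(3)          # 3 toy anchor embeddings
positives = torch.eye(3) * 5.0  # aligned so a[i] matches b[i]

scores = anchors @ positives.T         # stand-in for cos_sim * scale
labels = torch.arange(scores.size(0))  # [0, 1, 2]

assert torch.equal(scores.argmax(dim=1), labels)  # a[i] best matches b[i]
loss = torch.nn.functional.cross_entropy(scores, labels)
print(float(loss))
```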
huggingface/text-embeddings-inference | 178 | How to specify a local model | ### Feature request
model=BAAI/bge-reranker-large
volume=$PWD/data
docker run -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.0 --model-id $model
### Motivation
model=BAAI/bge-reranker-large
volume=$PWD/data
docker run -p 8080:80 -v $volume:/data --pull always ghcr.io/huggingface/text-embeddings-inference:cpu-1.0 --model-id $model
### Your contribution
null | https://github.com/huggingface/text-embeddings-inference/issues/178 | closed | [] | 2024-03-01T09:40:07Z | 2024-03-01T16:54:27Z | null | yuanjie-ai |
huggingface/chat-ui | 889 | How does huggingchat prompt the model to generate HTML output? | How does Huggingchat prompt the LLM to generate HTML output? Where can I find that prompt? I'd like to tweak it. thanks! | https://github.com/huggingface/chat-ui/issues/889 | open | [] | 2024-02-29T17:20:01Z | 2024-03-05T18:45:56Z | null | vgoklani |
huggingface/chat-ui | 888 | Code LLAMA doesn't work | I am simply entering this prompt:
```
You're given the following regex in python: \| *([^|]+?) *\|
This captures text values in markdown tables but fails to capture numbers. Update this regex to capture numbers as well
```
Then what happens is that one core of my CPU is pinned at 100% for at least 5 minutes, until I close the browser. I'm not sure what is going on.
Same prompt works when I use the Mistral 8 X 7B | https://github.com/huggingface/chat-ui/issues/888 | closed | [] | 2024-02-29T12:44:20Z | 2025-01-01T11:54:48Z | 1 | lordsoffallen |
huggingface/text-generation-inference | 1,615 | How to use the grammar support feature? | ### Feature request

Can you please clarify how we can use this? What is it for?
### Motivation

Can you please clarify how we can use this? What is it for?
### Your contribution

Can you please clarify how we can use this? What is it for? | https://github.com/huggingface/text-generation-inference/issues/1615 | closed | [] | 2024-02-29T12:35:24Z | 2024-03-04T14:49:39Z | null | Stealthwriter |
huggingface/datasets | 6,700 | remove_columns is not in-place but the doc shows it is in-place | ### Describe the bug
The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)
In the text classification example of transformers v4.38.1, the columns are not removed.
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
### Steps to reproduce the bug
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
### Expected behavior
Actually remove the columns.
### Environment info
1. datasets v2.17.0
2. transformers v4.38.1 | https://github.com/huggingface/datasets/issues/6700 | closed | [] | 2024-02-28T12:36:22Z | 2024-04-02T17:15:28Z | 3 | shelfofclub |
huggingface/optimum | 1,729 | tflite support for gemma | ### Feature request
As per the title: are there plans to support Gemma in TFLite?
### Motivation
It is the necessary format for my current work.
### Your contribution
no | https://github.com/huggingface/optimum/issues/1729 | closed | [
"feature-request",
"tflite",
"Stale"
] | 2024-02-27T17:15:54Z | 2025-01-19T02:04:34Z | 2 | Kaya-P |
huggingface/huggingface_hub | 2,051 | How to edit the cache dir, and how to resume an interrupted download on a bad network | OSError: Consistency check failed: file should be of size 1215993967 but has size 118991296 (pytorch_model.bin).
We are sorry for the inconvenience. Please retry download and pass `force_download=True, resume_download=False` as argument.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
Downloading pytorch_model.bin: 10%|████▌ | 119M/1.22G [06:51<1:03:13, 289kB/s]
Hi, I use this on Windows and drive C: does not have enough space. I want to set the download/install cache dir to D:. How do I do this?
And because my network is bad, the download of this one big file errors out every time. How can I download this file on a bad network? | https://github.com/huggingface/huggingface_hub/issues/2051 | closed | [] | 2024-02-27T14:45:10Z | 2024-02-27T15:59:35Z | null | caihua |
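A hedged sketch of one way to address both questions above (the paths and repo id are hypothetical): redirect the cache to D: by setting HF_HOME before importing huggingface_hub, then retry the download; recent huggingface_hub versions keep `*.incomplete` files and resume them, while older ones take a `resume_download=True` argument.

```python
# Redirect all hub caches to another drive, then retry the download.
import os

os.environ["HF_HOME"] = r"D:\hf-cache"  # must be set before the import

from huggingface_hub import hf_hub_download

# Actual network call left commented so the sketch stays offline:
# path = hf_hub_download(
#     repo_id="some-org/some-model",   # hypothetical repo id
#     filename="pytorch_model.bin",
#     force_download=True,             # as the error message suggests
# )
print(os.environ["HF_HOME"])
```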
huggingface/candle | 1,769 | [Question] How to modify Mistral to enable multiple batches? | Hello everybody,
I am attempting to implement multiple batches for the Mistral forward pass. However, the `forward` method takes an argument `seqlen_offset` which seems to be specific to the batch. I have attempted to implement it with a `position_ids` tensor in [this](https://github.com/EricLBuehler/mistral.rs/blob/mistralrunner/mistralrs-core/src/models/mistral.rs) file.
Specifically, I rewrote the rotary embedding function:
```rust
fn apply_rotary_emb_qkv(
&self,
q: &Tensor,
k: &Tensor,
position_ids: &Tensor,
) -> Result<(Tensor, Tensor)> {
let cos = self.cos.i(position_ids)?;
let sin = self.sin.i(position_ids)?;
let q_embed = (q.broadcast_mul(&cos)? + rotate_half(q)?.broadcast_mul(&sin))?;
let k_embed = (k.broadcast_mul(&cos)? + rotate_half(k)?.broadcast_mul(&sin))?;
Ok((q_embed, k_embed))
}
```
I create the position ids with the following line:
```rust
let position_ids = Tensor::arange(
past_key_values_length as i64,
(past_key_values_length + seq_len) as i64,
input_ids.device(),
)?;
```
With `past_key_values_length` as the result of
```rust
fn calculate_past_kv_len(&self, seq_len: usize) -> Result<usize> {
let kv_cache_1 = &self.layers.first().as_ref().unwrap().self_attn.kv_cache;
if kv_cache_1.is_none() {
return Ok(0);
}
let k_cache_1 = &kv_cache_1.as_ref().unwrap().0;
if k_cache_1.dims()[0] <= seq_len {
Ok(0)
} else {
let indexed = k_cache_1.i(seq_len)?;
let dims = indexed.dims();
Ok(dims[dims.len() - 2])
}
}
```
My implementation attempts to follow the [transformers implementation of calculating position ids](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L977-L985) and for the [implementation of `apply_rotary_emb_qkv`](https://github.com/huggingface/transformers/blob/5c341d4555ba3e4b656053317e372ebed0c5af37/src/transformers/models/mistral/modeling_mistral.py#L139-L164). However, when I copy and run the candle-examples inference script, with the only change being that I do not pass the `seqlen_offset` variable, it does not produce coherent output. While the model runs, it does not "work".
How can I implement multiple-batch forward passes? Is there a way to do it using the `seqlen_offset` variable? Thank you for any help. | https://github.com/huggingface/candle/issues/1769 | closed | [] | 2024-02-27T13:18:18Z | 2024-03-01T14:01:21Z | null | EricLBuehler |
huggingface/datatrove | 108 | How to load a dataset from the output of a tokenizer? | I planned to use datatrove to apply my tokenizer so that the data is ready to use with nanotron.
I am using DocumentTokenizer[Merger], which produces *.ds and *.ds.index binary files, although, from what I understood, nanotron expects datasets (with "input_ids" keys).
I see that things like ParquetWriter cannot be piped after DocumentTokenizer.
Am I missing a piece?
Are there some helpers to convert ds files into parquet files (or something loadable with datasets) for a given context size? | https://github.com/huggingface/datatrove/issues/108 | closed | [] | 2024-02-27T08:58:09Z | 2024-05-07T12:33:47Z | null | Jeronymous |
huggingface/chat-ui | 875 | Difficulty configuring multiple instances of the same model with distinct parameters | I am currently self-deploying an application that requires setting up multiple instances of the same model, each configured with different parameters. For example:
```
MODELS=`[{
"name": "gpt-4-0125-preview",
"displayName": "GPT 4",
"endpoints" : [{
"type": "openai"
}]
},
{
"name": "gpt-4-0125-preview",
"displayName": "GPT 4 temp 0",
"parameters": {
"temperature": 0.0
},
"endpoints" : [{
"type": "openai"
}]
}
]`
```
This results in a state where it looks like both models are active simultaneously.

However, in practice, I cannot activate the second model ("GPT 4 temp 0"); only "GPT 4" is utilized during chat operations. It appears as if the system defaults to the first model instance and ignores subsequent ones with the same model name.
I tried to distinguish between the models by modifying the `name` field and introducing an `id` field, using the appropriate model identifier. However, this approach resulted in a loss of model reference, indicating that these fields cannot be arbitrarily configured on the client side.
Is there a recommended approach to deploying two instances of the same model with varying parameters? Any guidance or suggestions on how to achieve this would be greatly appreciated.
| https://github.com/huggingface/chat-ui/issues/875 | open | [] | 2024-02-26T10:48:43Z | 2024-02-27T17:28:21Z | 1 | mmtpo |
huggingface/optimum-nvidia | 76 | How to install optimum-nvidia properly without building a docker image | It's quite hard for me to build a docker image, so I started from a docker environment with TensorRT LLM 0.6.1 inside.
I checked your dockerfile, followed the process, and built TensorRT LLM using (I am using a 4090, so the cuda arch is 89):
```
python3 scripts/build_wheel.py -j --trt_root /usr/local/tensorrt --python_bindings --cuda_architectures="89-real" --clean
```
Afterwards, I copied the resulting bindings*.so into tensorrt_llm's directory inside the dist-packages dir -- according to the dockerfile. Then I followed it to install nvidia-ammo 0.3, then added the optimum-nvidia dir to python path.
I also went into optimum-nvidia directory, and ran `pip install -e .`, so that in my environment, when using `pip list | grep optimum` I could get:
```
optimum 1.17.1
optimum-nvidia 0.1.0b2 /root/autodl-tmp/optimum-nvidia
```
However, I still could not import optimum.nvidia properly, while it's okay to `import tensorrt_llm` and `tensorrt_llm.bindings`.
```
>>> from optimum.nvidia.pipelines import pipeline
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'optimum.nvidia'
>>>
```
Could someone please help me install optimum-nvidia properly without building a new image or pulling from Docker Hub?
Thank you! | https://github.com/huggingface/optimum-nvidia/issues/76 | closed | [] | 2024-02-26T05:05:24Z | 2024-03-11T13:36:18Z | null | Yuchen-Cao |
huggingface/diffusers | 7,088 | Vague error: `ValueError: With local_files_only set to False, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.` how to fix? | Trying to convert a .safetensors stable diffusion model to whatever the format is that hugging face requires. It throws a vague nonsequitur of an error:
`pipe = diffusers.StableDiffusionPipeline.from_single_file(str(aPathlibPath/"vodkaByFollowfoxAI_v40.safetensors") )`
```...
[1241](file:///C:/Users/openSourcerer9000/anaconda3/envs/fuze/lib/site-packages/diffusers/loaders/single_file_utils.py:1241) )
[1242](file:///C:/Users/openSourcerer9000/anaconda3/envs/fuze/lib/site-packages/diffusers/loaders/single_file_utils.py:1242) else:
[1243](file:///C:/Users/openSourcerer9000/anaconda3/envs/fuze/lib/site-packages/diffusers/loaders/single_file_utils.py:1243) return {"text_encoder": text_encoder, "tokenizer": tokenizer}
ValueError: With local_files_only set to False, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.
```
What tokenizer? What path? Where would I get this file? This script already downloaded something locally, why not download this extra thing as well instead of throwing an error?
When I pass local_files_only=True, it says the SAME thing:
`ValueError: With local_files_only set to True, you must first locally save the tokenizer in the following path: 'openai/clip-vit-large-patch14'.` | https://github.com/huggingface/diffusers/issues/7088 | closed | [
"stale",
"single_file"
] | 2024-02-25T15:03:07Z | 2024-09-17T21:56:26Z | null | openSourcerer9000 |
huggingface/diffusers | 7,085 | how to train controlnet with lora? | train full controlnet need much resource and time, so how to train controlnet with lora?
| https://github.com/huggingface/diffusers/issues/7085 | closed | [
"should-move-to-discussion"
] | 2024-02-25T06:31:47Z | 2024-03-03T06:38:35Z | null | akk-123 |
huggingface/optimum-benchmark | 138 | How to set trt llm backend parameters | I am trying to run the trt_llama example: https://github.com/huggingface/optimum-benchmark/blob/main/examples/trt_llama.yaml
It seems optimum-benchmark will automatically convert the Hugging Face model to an inference engine file and then benchmark its performance. When we use TensorRT-LLM, there is a model "build" process (during which we set some quantization parameters) in order to get the `.engine` file. How can we set these parameters when using optimum-benchmark? | https://github.com/huggingface/optimum-benchmark/issues/138 | closed | [] | 2024-02-24T17:12:12Z | 2024-02-27T12:48:44Z | null | Yuchen-Cao |
huggingface/optimum-nvidia | 75 | How to build this environment without docker? | My computer does not support the use of docker. How do I deploy this environment on my computer? | https://github.com/huggingface/optimum-nvidia/issues/75 | open | [] | 2024-02-24T16:59:37Z | 2024-03-06T13:45:18Z | null | lemon-little |
huggingface/accelerate | 2,485 | How to log information into a local logging file? | ### System Info
```Shell
Hi, I want to save a copy of the logs to a local file. How can I achieve this? Specifically, I want accelerator.log to also write information to my local file.
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, I want to save a copy of the logs to a local file. How can I achieve this? Specifically, I want accelerator.log to also write information to my local file.
### Expected behavior
Hi, I want to save a copy of the logs to a local file. How can I achieve this? Specifically, I want accelerator.log to also write information to my local file. | https://github.com/huggingface/accelerate/issues/2485 | closed | [] | 2024-02-24T07:52:55Z | 2024-04-03T15:06:24Z | null | Luciennnnnnn |
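For the question above: `accelerator.log(...)` only forwards values to the configured experiment trackers (wandb, tensorboard, ...), so the simplest way to get a local text copy is to attach a stdlib `logging.FileHandler` to the logger that `accelerate.logging.get_logger(name)` wraps, and mirror the values you pass to `accelerator.log` into logger calls. A minimal sketch (file name and format string are illustrative):

```python
import logging

def attach_file_handler(name: str, path: str) -> logging.Logger:
    """Attach a FileHandler so log records are also written to a local file.

    accelerate's `get_logger(name)` wraps the stdlib logger of the same
    name, so handlers attached here receive those records too.
    """
    logger = logging.getLogger(name)
    handler = logging.FileHandler(path)
    handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

# usage sketch (names are illustrative):
#   logger = attach_file_handler(__name__, "train.log")
#   accelerator.log({"loss": loss}, step=step)      # trackers
#   logger.info("step=%d loss=%.4f", step, loss)    # local file copy
```

In a multi-process run you would typically only attach the handler (or only log) on the main process, e.g. guarded by `accelerator.is_main_process`.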
huggingface/optimum-benchmark | 136 | (question)When I use the memory tracking feature on the GPU, I find that my VRAM is reported as 0. Is this normal, and what might be causing it? | 
| https://github.com/huggingface/optimum-benchmark/issues/136 | closed | [] | 2024-02-24T02:57:49Z | 2024-03-08T16:59:41Z | null | WCSY-YG |
huggingface/optimum | 1,716 | Optimum for Jetson Orin Nano | ### System Info
```shell
optimum version: 1.17.1
platform: Jetson Orin Nano, Jetpack 6.0
Python: 3.10.13
CUDA: 12.2
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
Here is how I installed.
1. install Pytorch 2.2.0 following https://elinux.org/Jetson_Zoo
2. install onnxruntime-gpu 1.17.0 following following https://elinux.org/Jetson_Zoo
3. install Optimum by using `pip install optimum[onnxruntime-gpu]`
### Expected behavior
The Optimum installed on my Jetson Orin Nano does not support GPU for JetPack 6.0 and Python 3.10.13.
Can anybody let me know how to install it? | https://github.com/huggingface/optimum/issues/1716 | open | [
"bug"
] | 2024-02-23T23:22:08Z | 2024-02-26T10:03:59Z | 1 | JunyiYe |
huggingface/transformers | 29,244 | Google Gemma don't know what 1+1 is equal to? | ### System Info
[v4.38.1](https://github.com/huggingface/transformers/releases/tag/v4.38.1)
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("./gemma_2B")
model = AutoModelForCausalLM.from_pretrained("./gemma_2B", device_map="auto", torch_dtype=torch.float32)
input_text = "1+1=?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids,max_length=50)
# print(outputs)
print(tokenizer.decode(outputs[0]))
```
### Expected behavior
output is bellow
```
<bos>1+1=?
1+1=?
1+1=?
1+1=?
1+1=?
1+1=?
1+1=?
1+1=?
1
``` | https://github.com/huggingface/transformers/issues/29244 | closed | [] | 2024-02-23T12:16:17Z | 2024-03-07T10:54:09Z | null | zhaoyun0071 |
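For what it's worth, the behavior above is typical of a base (non-instruction-tuned) checkpoint: `gemma-2b` just continues the text, so "1+1=?" gets echoed. The instruction-tuned `gemma-2b-it` variant, prompted through its chat template, is the one meant to answer questions. A sketch of the turn format that template produces (normally you would call `tokenizer.apply_chat_template` rather than build it by hand):

```python
def build_gemma_chat_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat-turn format.

    Mirrors what `tokenizer.apply_chat_template(messages,
    add_generation_prompt=True)` produces for the instruction-tuned
    `gemma-2b-it` checkpoint (the tokenizer also prepends <bos>).
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )
```

Feeding `build_gemma_chat_prompt("1+1=?")` to `gemma-2b-it` should yield an answer instead of a repetition loop.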
huggingface/optimum | 1,713 | Issue converting owlv2 model to ONNX format | Hi Team,
I hope this message finds you well.
I've been working with the owlv2 model and have encountered an issue while attempting to convert it into ONNX format using the provided command:
`! optimum-cli export onnx -m google/owlv2-base-patch16 --task 'zero-shot-object-detection' --framework 'pt' owlv2_onnx`
Unfortunately, I'm facing the following error:
`ValueError: Trying to export a owlv2 model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`.`
As I am relatively new to this process, I'm unsure about the necessity and usage of custom ONNX configuration. Could you please provide some guidance on how to address this issue? Any assistance or insights would be greatly appreciated.
Thank you for your attention to this matter. | https://github.com/huggingface/optimum/issues/1713 | closed | [
"feature-request",
"onnx",
"exporters"
] | 2024-02-23T05:55:23Z | 2025-09-10T23:26:13Z | 6 | n9s8a |
huggingface/optimum-benchmark | 135 | How to import and use the quantized model with AutoGPTQ? | https://github.com/huggingface/optimum-benchmark/issues/135 | closed | [] | 2024-02-23T03:13:28Z | 2024-02-23T05:03:06Z | null | jhrsya | |
huggingface/optimum | 1,710 | Native Support for Gemma | ### System Info
```shell
python version : 3.10.12
optimum version : built from github
openvino : 2024.1.0-14548-688c71ce0ed
transformers : 4.38.1
```
### Who can help?
@JingyaHuang @echarlaix
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction (minimal, reproducible, runnable)
Currently there is no support to export gemma, google's new opensource model.
After connecting to huggingface and requesting permission to access the gemma repo
running the following line
`model_ov = OVModelForCausalLM.from_pretrained("google/gemma-2b", export = True)`
produces the following error
`
ValueError: Trying to export a gemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type gemma to be supported natively in the ONNX export.`
### Expected behavior
Expected behavior is for the line of code to successfully run and such that we can export the IR format of the model as well. | https://github.com/huggingface/optimum/issues/1710 | closed | [
"feature-request",
"onnx",
"exporters"
] | 2024-02-22T17:15:08Z | 2024-02-28T08:37:36Z | 5 | Kaya-P |
huggingface/sentence-transformers | 2,499 | how can i save fine_tuned cross-encoder to HF and then download it from HF | I'm looking for ways to share fine-tuned cross-encoder with my teacher.
The CrossEncoder model does not have a native push_to_hub() method, so I decided to use the general approach:
```
from transformers import AutoModelForSequenceClassification
import torch
# read from disk, model was saved as ft_model.save("model/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2")
cross_ft_model = AutoModelForSequenceClassification.from_pretrained("model\\crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2")
# push to hub
cross_ft_model.push_to_hub("satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2")
```
Now model is available on HF. Commit info was like:
CommitInfo(commit_url='https://huggingface.co/satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2/commit/d81fe317cb037940e09db256d8a0926e80c358e5', commit_message='Upload BertForSequenceClassification', commit_description='', oid='d81fe317cb037940e09db256d8a0926e80c358e5', pr_url=None, pr_revision=None, pr_num=None)
Then I decided to verify the model works:
```
cross_ft_model = CrossEncoder("satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2")
cross_ft_model.predict([('SentenceTransformer is well-documented library','but saving crossencoder to HF is a bit tricky')])
```
and get the error:
_Traceback (most recent call last):
Cell In[18], line 1
cross_ft_model = CrossEncoder("satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2")
File ~\anaconda3\Lib\site-packages\sentence_transformers\cross_encoder\CrossEncoder.py:72 in __init__
self.tokenizer = AutoTokenizer.from_pretrained(model_name, **tokenizer_args)
File ~\anaconda3\Lib\site-packages\transformers\models\auto\tokenization_auto.py:745 in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File ~\anaconda3\Lib\site-packages\transformers\tokenization_utils_base.py:1838 in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'satyroffrost/crerankingeval-30e-4000-ms-marco-MiniLM-L-6-v2' is the correct path to a directory containing all relevant files for a BertTokenizerFast tokenizer._
I compared the local model folder with the uploaded HF model files: the uploaded ones don't include the tokenizer files. The uploaded model doesn't work on HF either. How can I correctly upload the model with its tokenizer to HF and then use it from HF via model = CrossEncoder(path_to_hf)?
| https://github.com/huggingface/sentence-transformers/issues/2499 | closed | [
"good first issue"
] | 2024-02-22T15:29:37Z | 2025-03-25T16:07:25Z | null | satyrmipt |
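The root cause in the traceback above is that `push_to_hub` on the bare `AutoModelForSequenceClassification` uploads only the weights and config; the tokenizer files that `CrossEncoder` needs were never pushed. One fix (hedged, using the repo/paths from the question): also run `AutoTokenizer.from_pretrained(local_dir).push_to_hub(repo_id)` from the same local folder. A small stdlib helper to check the local save folder before uploading:

```python
from pathlib import Path

# Files a BERT-style tokenizer save typically produces; these are the
# standard Hugging Face names, adjust for other tokenizer types.
TOKENIZER_FILES = ("tokenizer_config.json", "vocab.txt", "special_tokens_map.json")

def missing_tokenizer_files(save_dir: str) -> list[str]:
    """Return the tokenizer files absent from a local model folder."""
    present = {p.name for p in Path(save_dir).iterdir()}
    return [f for f in TOKENIZER_FILES if f not in present]
```

If the list is non-empty, re-save with `CrossEncoder.save(...)` (which writes the tokenizer alongside the model) before pushing both.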
huggingface/transformers | 29,214 | How to get input embeddings from PatchTST with (batch_size, sequence_length, hidden_size) dimensions | ### System Info
-
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The following snippet outputs the last hidden state but it has (batch_size, num_channels, num_patches, d_model) dimensions
`inputs = encoder(
past_values=series_list, output_hidden_states=True
).last_hidden_state`
Here, series_list has (batch_size, sequence_length, num_input_channels) shape.
To incorporate this with [EncoderDecoderModel](https://huggingface.co/docs/transformers/v4.37.2/en/model_doc/encoder-decoder#transformers.EncoderDecoderModel), I want the dimensions of the input embedding to be (batch_size, sequence_length, hidden_size). How do you get that?
### Expected behavior
- | https://github.com/huggingface/transformers/issues/29214 | open | [
"Feature request"
] | 2024-02-22T14:17:10Z | 2024-03-25T03:56:58Z | null | nikhilajoshy |
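One common way to get from `(batch, num_channels, num_patches, d_model)` to `(batch, sequence_length, hidden_size)` is to fold channels and patches into a single sequence axis, i.e. roughly `hidden.reshape(b, c * p, d)` in torch (permute first if you want patch-major ordering). Whether the result is acceptable to `EncoderDecoderModel` then depends on `hidden_size` matching `d_model`; this is one reading of the question, not an official API. A pure-Python sketch of the same fold on nested lists:

```python
def fold_channel_patches(hidden):
    """(batch, channels, patches, d_model) -> (batch, channels * patches, d_model).

    Mirrors `hidden.reshape(b, c * p, d)` on a nested-list tensor: each
    (channel, patch) pair becomes one position of the flattened sequence.
    """
    return [
        [vec for channel in sample for vec in channel]
        for sample in hidden
    ]
```

With a real tensor this is a one-liner: `b, c, p, d = h.shape; h = h.reshape(b, c * p, d)`.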
huggingface/huggingface_hub | 2,039 | How to find out the type of files in the repository | Hello
Is there an option to determine the type of file in the repository, such as "Checkpoint", "LORA", "Textual_Inversion", etc?
I didn't know where to ask the question so sorry if I'm wrong. | https://github.com/huggingface/huggingface_hub/issues/2039 | closed | [] | 2024-02-22T01:41:29Z | 2024-03-25T11:39:31Z | null | suzukimain |
huggingface/datasets | 6,686 | Question: Is there any way for uploading a large image dataset? | I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB")
```
where it takes a long time in the `Map` process. Do you think I can use multi-processing to map all the image data into memory first? For the `Map()` function, I can set `num_proc`, but for `push_to_hub` and `cast_column` I cannot find an equivalent option.
Thanks in advance!
Best, | https://github.com/huggingface/datasets/issues/6686 | open | [] | 2024-02-21T22:07:21Z | 2024-05-02T03:44:59Z | 1 | zhjohnchan |
huggingface/accelerate | 2,474 | how to turn off fp16 auto_cast? | I notice that the DeepSpeed config always sets `auto_cast=True`. This is my accelerate config:
```
compute_environment: LOCAL_MACHINE
deepspeed_config:
  deepspeed_multinode_launcher: standard
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_offload_param_pin_memory: true
  zero3_offload_optimizer_pin_memory: true
  zero3_init_flag: true
  zero3_save_16bit_model: true
  zero_stage: 3
  max_live_parameters: 1e9
  max_reuse_distance: 1e9
  round_robin_gradients: true
  deepspeed_hostfile: /opt/tiger/hostfile
distributed_type: DEEPSPEED
fsdp_config: {}
main_training_function: main
mixed_precision: fp16
use_cpu: false
```
this is my deepspeed log:
```
[2024-02-21 19:35:40,143] [INFO] [config.py:958:print_user_config] json = {
"train_batch_size": 512,
"train_micro_batch_size_per_gpu": 64,
"gradient_accumulation_steps": 1,
"zero_optimization": {
"stage": 3,
"offload_optimizer": {
"device": "cpu",
"nvme_path": null
},
"offload_param": {
"device": "cpu",
"nvme_path": null
},
"stage3_gather_16bit_weights_on_model_save": true
},
"gradient_clipping": 1.0,
"steps_per_print": inf,
"fp16": {
"enabled": true,
"auto_cast": true
},
"bf16": {
"enabled": false
},
"zero_allow_untested_optimizer": true
}
``` | https://github.com/huggingface/accelerate/issues/2474 | closed | [] | 2024-02-21T11:54:51Z | 2025-02-18T08:53:20Z | null | haorannlp |
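One way to control this (a sketch, to be checked against the accelerate and DeepSpeed docs): instead of letting accelerate's plugin build the `fp16` section, point accelerate at your own DeepSpeed JSON by setting `deepspeed_config_file: ds_config.json` under `deepspeed_config:` in the accelerate config, and set `fp16.auto_cast` there yourself. A possible `ds_config.json`, mirroring the settings above:

```json
{
  "fp16": {
    "enabled": true,
    "auto_cast": false
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu" },
    "offload_param": { "device": "cpu" },
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "gradient_clipping": 1.0,
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto",
  "zero_allow_untested_optimizer": true
}
```

The `"auto"` values are filled in by accelerate at launch; the key names follow DeepSpeed's fp16 configuration schema.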
huggingface/chat-ui | 852 | what is the difference between "chat-ui-db" docker image and "chat-ui" docker image? | I found there are 2 packages in the chat-ui repository: one is chat-ui and the other is chat-ui-db. what is the difference between "chat-ui-db" docker image and "chat-ui" docker image?
I've pulled two images from the mirror site: huggingface/text-generation-inference:1.4 and mongo:latest.
I hope to use those two images (huggingface/text-generation-inference:1.4 and mongo:latest) together with either the chat-ui or chat-ui-db image to implement a local large-model Q&A service. Which one should I use: the "chat-ui-db" docker image or the "chat-ui" docker image?
What should I do to build this local large-model Q&A service? Can anyone give detailed help?
| https://github.com/huggingface/chat-ui/issues/852 | closed | [] | 2024-02-21T09:31:07Z | 2024-02-23T02:58:03Z | null | majestichou |
huggingface/instruction-tuned-sd | 22 | How to use a custom image for validation | Hello,
I tried using a custom image for validation, since I'm training on a custom style. I uploaded my validation image to the Hub in place of mountain.png, but it always gives me an "unidentified image" error. For mountain.png it shows a validation summary on wandb, but for my validation image it shows nothing.
Do I need to change something somewhere? Also, how does it compare the validation images for the loss? Do I need to put the style image or the original image somewhere?
huggingface/gsplat.js | 67 | How to set the background color of the scene | Hi:
Want to know how to set the background color of the scene,now it's black | https://github.com/huggingface/gsplat.js/issues/67 | open | [] | 2024-02-21T05:49:33Z | 2024-02-26T09:32:25Z | null | jamess922 |
huggingface/gsplat.js | 66 | How to adjust the axis of rotation? | When the model's z-axis is not perpendicular to the ground plane, the rotation effect may feel unnatural, as is the case with this model: testmodel.splat.
[testmodel.zip](https://github.com/huggingface/gsplat.js/files/14353919/testmodel.zip)
I would like to rotate the model along an axis that is perpendicular to the ground. Are there any parameters available to adjust the axis of rotation? | https://github.com/huggingface/gsplat.js/issues/66 | closed | [] | 2024-02-21T04:13:01Z | 2024-02-23T02:37:59Z | null | gotoeasy |
huggingface/sentence-transformers | 2,494 | How to get embedding vector when input is tokenized already | First, thank you so much for sentence-transformer.
How can I get an embedding vector when the input is already tokenized?
I know sentence-transformers can do `.encode(original text)`.
But I want to know whether there is a way like `.encode(token_ids)` or `.encode(token_ids, attention_masks)`.
This is my background below
>
> I trained a model using sentence-transformers, and I added a few layers to this model for classification.
>
> Then I want to train the model to update all of the parameters (including the added layers).
>
> But DataLoader's cuda() supports only token ids, not text, so first I tokenized the text using `model.tokenizer()`.
>
> So the input is already tokenized; I need to know how to get an embedding if I have token_ids.
regards
| https://github.com/huggingface/sentence-transformers/issues/2494 | open | [] | 2024-02-20T22:38:18Z | 2024-02-23T10:01:07Z | null | sogmgm |
huggingface/optimum | 1,703 | How can I export onnx-model for Qwen/Qwen-7B? | ### Feature request
I need to export the Qwen model to ONNX to accelerate inference.
```optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code```
### Motivation
I want to export the model qwen to use onnxruntime
### Your contribution
I can give the input and output. | https://github.com/huggingface/optimum/issues/1703 | open | [
"onnx"
] | 2024-02-20T13:22:08Z | 2024-02-26T13:19:19Z | 1 | smile2game |
huggingface/accelerate | 2,463 | How to initialize Accelerator twice but with different setup within the same code ? | ### System Info
```Shell
Hello I want to initialize accelerate once for the training and another time for the inference.
Looks like it does not work and the error message is not clear. Is there a way to reset the previously initialized accelerate and then initialize with inference setup?
For training I am doing :
accelerator = Accelerator(kwargs_handlers=[process_group_kwargs])
model,test_loader, valid_loader, optimizer, scheduler = accelerator.prepare(
model, test_loader, valid_loader, optimizer, scheduler)
For inference I want to do: accelerator = Accelerator()
model, valid_loader, optimizer = eval_accelerator.prepare(model, valid_loader, optimizer)
For inference, I do not want to use an optimizer, but I get an error because I am using zero_stage: 1. So I used the optimizer from training. But then I was getting a batch size error for the valid set, so I prepared the valid loader one more time after initializing the Accelerator. Still, during inference I am getting an error on the preparation.
Any idea how to fix this?
```
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
1. Initialize Accelerator for training
2. Once the training is done, initialize again for the inference.
### Expected behavior
I just want to prepare the accelerate for the inference task once the training is done. | https://github.com/huggingface/accelerate/issues/2463 | closed | [] | 2024-02-20T13:17:26Z | 2024-03-30T15:06:15Z | null | soneyahossain |
huggingface/chat-ui | 840 | LLama.cpp error - String must contain at least 1 character(s)" | I keep getting this error after adding LLAMA-CPP inference endpoint locally. Adding this line causes this error.
```
"endpoints": [
{
"url": "http://localhost:8080",
"type": "llamacpp"
}
]
```
Not sure how to fix it.
```
[
{
"code": "too_small",
"minimum": 1,
"type": "string",
"inclusive": true,
"exact": false,
"message": "String must contain at least 1 character(s)",
"path": [
0,
"endpoints",
0,
"accessToken"
]
}
]
ZodError: [
{
"code": "too_small",
"minimum": 1,
"type": "string",
"inclusive": true,
"exact": false,
"message": "String must contain at least 1 character(s)",
"path": [
0,
"endpoints",
0,
"accessToken"
]
}
]
at get error [as error] (file:///C:/Users/SRU/Desktop/chatui/node_modules/zod/lib/index.mjs:538:31)
at ZodArray.parse (file:///C:/Users/SRU/Desktop/chatui/node_modules/zod/lib/index.mjs:638:22)
at C:\Users\SRU\Desktop\chatui\src\lib\server\models.ts:75:40
at async instantiateModule (file:///C:/Users/SRU/Desktop/chatui/node_modules/vite/dist/node/chunks/dep-529
```
Full Config:
```
# Use .env.local to change these variables
# DO NOT EDIT THIS FILE WITH SENSITIVE DATA
MONGODB_URL=mongodb://localhost:27017/
MONGODB_DB_NAME=chat-ui
MONGODB_DIRECT_CONNECTION=false
COOKIE_NAME=hf-chat
HF_TOKEN=#hf_<token> from from https://huggingface.co/settings/token
HF_API_ROOT=https://api-inference.huggingface.co/models
OPENAI_API_KEY=#your openai api key here
HF_ACCESS_TOKEN=#LEGACY! Use HF_TOKEN instead
# used to activate search with web functionality. disabled if none are defined. choose one of the following:
YDC_API_KEY=#your docs.you.com api key here
SERPER_API_KEY=#your serper.dev api key here
SERPAPI_KEY=#your serpapi key here
SERPSTACK_API_KEY=#your serpstack api key here
USE_LOCAL_WEBSEARCH=#set to true to parse google results yourself, overrides other API keys
SEARXNG_QUERY_URL=# where '<query>' will be replaced with query keywords see https://docs.searxng.org/dev/search_api.html eg https://searxng.yourdomain.com/search?q=<query>&engines=duckduckgo,google&format=json
WEBSEARCH_ALLOWLIST=`[]` # if it's defined, allow websites from only this list.
WEBSEARCH_BLOCKLIST=`[]` # if it's defined, block websites from this list.
# Parameters to enable open id login
OPENID_CONFIG=`{
"PROVIDER_URL": "",
"CLIENT_ID": "",
"CLIENT_SECRET": "",
"SCOPES": ""
}`
# /!\ legacy openid settings, prefer the config above
OPENID_CLIENT_ID=
OPENID_CLIENT_SECRET=
OPENID_SCOPES="openid profile" # Add "email" for some providers like Google that do not provide preferred_username
OPENID_PROVIDER_URL=https://huggingface.co # for Google, use https://accounts.google.com
OPENID_TOLERANCE=
OPENID_RESOURCE=
# Parameters to enable a global mTLS context for client fetch requests
USE_CLIENT_CERTIFICATE=false
CERT_PATH=#
KEY_PATH=#
CA_PATH=#
CLIENT_KEY_PASSWORD=#
REJECT_UNAUTHORIZED=true
MODELS=`[
{
"name": "mistralai/Mistral-7B-Instruct-v0.1",
"displayName": "mistralai/Mistral-7B-Instruct-v0.1",
"description": "Mistral 7B is a new Apache 2.0 model, released by Mistral AI that outperforms Llama2 13B in benchmarks.",
"chatPromptTemplate" : "<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
"parameters": {
"temperature": 0.1,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 1024,
"stop": ["</s>"]
},
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"endpoints": [
{
"url": "http://localhost:8080",
"type": "llamacpp"
}
]
}
]`
OLD_MODELS=`[]`
PUBLIC_ORIGIN=#https://huggingface.co
PUBLIC_SHARE_PREFIX=#https://hf.co/chat
PUBLIC_GOOGLE_ANALYTICS_ID=#G-XXXXXXXX / Leave empty to disable
PUBLIC_PLAUSIBLE_SCRIPT_URL=#/js/script.js / Leave empty to disable
PUBLIC_ANNOUNCEMENT_BANNERS=`[
{
"title": "Code Llama 70B is available! 🦙",
"linkTitle": "try it",
"linkHref": "https://huggingface.co/chat?model=codellama/CodeLlama-70b-Instruct-hf"
}
]`
PARQUET_EXPORT_DATASET=
PARQUET_EXP | https://github.com/huggingface/chat-ui/issues/840 | open | [
"bug",
"models"
] | 2024-02-19T13:33:24Z | 2024-02-22T14:51:48Z | 2 | szymonrucinski |
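Reading the zod error path `[0, "endpoints", 0, "accessToken"]`, the schema is rejecting an empty `accessToken` on the endpoint; it appears to default to the empty `HF_TOKEN`. Two things to try (inferred from the error message, not confirmed in the docs): set `HF_TOKEN` in `.env.local` to any non-empty string, or give the llamacpp endpoint an explicit token even though a local server won't check it:

```json
"endpoints": [
  {
    "url": "http://localhost:8080",
    "type": "llamacpp",
    "accessToken": "placeholder"
  }
]
```

The value itself is a placeholder; the schema only requires it to be at least one character long.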
huggingface/datatrove | 93 | Tokenization for Non English data | Hi HF team
I want to thank you for this incredible work.
I have a question: I want to apply the deduplication pipeline to Arabic data.
For this I think I should change the tokenizer. If so, is there a tip for this?
Should I just edit the tokenizer here?
```python
class SentenceDedupFilter(PipelineStep):
    type = "🫂 - DEDUPS"
    name = "💥 sentence-deduplication stage 3"

    def __init__(
        self,
        data_folder: DataFolderLike,
        n_sentences: int = 3,
        min_doc_words: int = 50,
        exclusion_writer: DiskWriter = None,
    ):
        """Args:
        data_folder: data folder to get duplicate files.
        min_doc_words: min amount of words for each document
        """
        from nltk import load

        super().__init__()
        self.data_folder = get_datafolder(data_folder)
        self.n_sentences = n_sentences
        self.min_doc_words = min_doc_words
        self._tokenizer = load("tokenizers/punkt/english.pickle")  # <-- the line in question
        self.exclusion_writer = exclusion_writer
```
any recommendations please?
Thanks | https://github.com/huggingface/datatrove/issues/93 | closed | [
"question"
] | 2024-02-19T11:02:04Z | 2024-04-11T12:47:24Z | null | Manel-Hik |
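On the question above: yes, that is the line to change. The loaded punkt pickle only knows English-style splitting, and NLTK ships no Arabic punkt model, so you would substitute any object or callable that splits text into sentences, exposing whatever interface `SentenceDedupFilter` calls on `self._tokenizer` (check the datatrove source for the exact method name). A hedged stdlib sketch of a regex-based Arabic sentence splitter:

```python
import re

# Split on Western and Arabic sentence enders (., !, ?, ؟ and ۔),
# keeping the punctuation attached to the preceding sentence.
_AR_SENT_END = re.compile(r"(?<=[.!?؟۔])\s+")

def split_sentences_arabic(text: str) -> list[str]:
    """Naive sentence splitter for Arabic text."""
    return [s for s in _AR_SENT_END.split(text.strip()) if s]
```

A trained statistical splitter would do better on abbreviations; this is only a drop-in starting point.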
huggingface/safetensors | 443 | Efficient key-wise streaming | ### Feature request
I'm interested in streaming the tensors in a model key by key without having to hold all keys at the same time in memory. Something like this:
```python
with safe_open("model.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        tensor = f.get_tensor(key, stream=True)
        # `tensor` will be garbage collected in the next GC pass
        # as soon as the next iteration removes the only reference to it
```
### Motivation
When I use `safetensors.safe_open` to load multiple models, the memory usage does not drop down even when the deserialized tensors do not have a reference held to them. This is a key by key streamed merge of 5 stable diffusion 1.5 checkpoints using a weighted sum:
(each vertical gray line is ~8GB)

For reference, this is my successful attempt at reading keys memory efficient in python:
https://github.com/ljleb/sd-mecha/blob/9548ef83dd5d3fccdaf09c8b22dee7a0a7727613/sd_mecha/streaming.py#L12
And this is my successful attempt at making writing keys memory efficient:
https://github.com/ljleb/sd-mecha/blob/9548ef83dd5d3fccdaf09c8b22dee7a0a7727613/sd_mecha/streaming.py#L156
Which looks like this:

Note that my implementation is relatively slow compared to simply using safetensors directly (approximately 1.1x to 1.3x slower according to some quick test I made). Is there any way the same could be achieved but in a more computationally efficient way using the rust bindings? Specifically, I need to stream the keys and the tensors without them being held somewhere else in memory.
### Your contribution
I don't really know Rust but if nobody has time for this and there isn't a problem with my suggested approach to the API above, I will eventually have to implement this efficiently in one way or another for my merging lib. | https://github.com/huggingface/safetensors/issues/443 | closed | [
"Stale"
] | 2024-02-18T23:22:09Z | 2024-04-17T01:47:28Z | 4 | ljleb |
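For reference, the file layout itself makes this streamable with the stdlib alone: 8 bytes of little-endian unsigned header size, then a JSON header mapping tensor names to dtype, shape, and byte offsets into the data section. The linked pure-Python implementation does essentially this; a minimal sketch of the header parse (per the published safetensors format):

```python
import json
import struct

def read_safetensors_header(path):
    """Parse the JSON header of a .safetensors file.

    Layout per the format spec: an 8-byte little-endian header size,
    then that many bytes of JSON mapping tensor names to
    {"dtype", "shape", "data_offsets"}; the offsets are relative to
    the byte right after the header.
    """
    with open(path, "rb") as f:
        (header_size,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_size))
        data_start = 8 + header_size
    return header, data_start
```

With `header` and `data_start` you can seek/mmap each tensor's byte range one key at a time and drop it after use, which is what key-wise streaming amounts to.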
huggingface/community-events | 200 | How to prepare audio dataset for whisper fine-tuning with timestamps? | I am trying to prepare a dataset for Whisper fine-tuning, and I have a lot of small segment clips, most of them less than 6 seconds long. I read the paper, but didn't understand this paragraph:
“ When a final transcript segment is only partially included in the current 30-second audio chunk, we predict only its start time token for the segment when in timestamp mode, to indicate that the subsequent decoding should be performed on an audio window aligned with that time, otherwise we truncate the audio to not include the segment”
So when should I add the final segment if it is partially included in the current 30-second chunk, and when should I truncate the chunk without it? And if I add it, how do I extract only the relevant transcription?
To make it clear:
```
| window | window |
|segment|-----segment---|--segment--|
```
Assume that every window is 30 seconds: how do I get the correct relevant transcription for the partially included segments?
Anyone could help? | https://github.com/huggingface/community-events/issues/200 | open | [] | 2024-02-18T19:50:33Z | 2024-02-18T19:55:06Z | null | omarabb315 |
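One way to read that paragraph (my interpretation of the paper, not official code): segments fully inside the 30-second window keep both their start and end timestamps; a segment that starts inside the window but ends beyond it contributes only its start timestamp, signalling that the next window should begin at that time; and if even that does not apply, you truncate the audio before the segment. A sketch of that decision rule:

```python
def window_segments(segments, win_start, win_len=30.0):
    """Split transcript segments for one training window.

    `segments` is a list of (start, end, text) tuples in seconds.
    Returns (complete, partial_start): the segments fully inside the
    window, plus the start time of a segment that begins inside the
    window but ends past it (or None). A segment that started before
    the window is left for the previous window's partial marker.
    """
    win_end = win_start + win_len
    complete, partial_start = [], None
    for start, end, text in segments:
        if start >= win_end or end <= win_start:
            continue  # entirely outside this window
        if start >= win_start and end <= win_end:
            complete.append((start, end, text))
        elif start >= win_start:
            partial_start = start  # starts inside, ends beyond: keep start only
    return complete, partial_start
```

So for each window you would emit the `complete` segments with full timestamps, and if `partial_start` is set, emit only that start-time token and begin the next window there.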
huggingface/diffusers | 7,010 | How to set export HF_HOME on Kaggle? | Kaggle's temporary disk is slow once again, and I want models to be downloaded into the working directory.
I have used the command below, but it didn't work. Which command do I need?
`!export HF_HOME="/kaggle/working"`
| https://github.com/huggingface/diffusers/issues/7010 | closed | [
"bug"
] | 2024-02-18T11:15:21Z | 2024-02-18T14:39:08Z | null | FurkanGozukara |
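The reason the command above fails: `!export` runs in a throwaway subshell, so the variable never reaches the notebook's Python process. Set it from Python (or with the `%env HF_HOME=/kaggle/working` magic) before importing diffusers/transformers, which read `HF_HOME` at import time. A sketch (the exact target path is up to you):

```python
import os

# Set before importing diffusers/transformers; they read HF_HOME on import.
os.environ["HF_HOME"] = "/kaggle/working/hf_home"

# Hub downloads then land under $HF_HOME/hub
cache_dir = os.path.join(os.environ["HF_HOME"], "hub")
```

Alternatively, most `from_pretrained(...)` calls accept a `cache_dir=` argument if you prefer to set it per call.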
huggingface/optimum-benchmark | 126 | How to obtain the data from the 'forward' and 'generate' stages? | I used the same configuration file to test the model, but the results obtained are different from those of a month ago. In the result files from a month ago, data from both the forward and generate stages were included; however, the current generated result files only contain information from the prefill and decode stages. Here is the configuration file:
defaults:
  - backend: pytorch # default backend
  - launcher: process # default launcher
  - benchmark: inference # default benchmark
  - experiment # inheriting experiment schema
  - _self_ # for hydra 1.1 compatibility
  - override hydra/job_logging: colorlog # colorful logging
  - override hydra/hydra_logging: colorlog # colorful logging
experiment_name: pytorch_qwen7b
model: Qwen/Qwen-7B
device: cpu
launcher:
  device_isolation: true
benchmark:
  memory: true
  input_shapes:
    batch_size: 1
    sequence_length: 256
  new_tokens: 1000
hub_kwargs:
  trust_remote_code: true
hydra:
  run:
    dir: runs/${experiment_name}
  sweep:
    dir: sweeps/${experiment_name}
  job:
    chdir: true
    env_set:
      OVERRIDE_BENCHMARKS: 1
      CUDA_VISIBLE_DEVICES: 0
      CUDA_DEVICE_ORDER: PCI_BUS_ID | https://github.com/huggingface/optimum-benchmark/issues/126 | closed | [] | 2024-02-18T09:48:44Z | 2024-02-19T16:06:24Z | null | WCSY-YG |
huggingface/chat-ui | 838 | Explore the possibility for chat-ui to use OpenAI assistants API structure. | Hi @nsarrazin, I wanted to explore how we could collaborate in making chat-ui work more closely with OpenAI standards, so it is less opinionated about the hosted inference provider. I need this as I am part of a team open-sourcing the GPTs platform https://github.com/OpenGPTs-platform and we will be leveraging chat-ui as the client. So I was hoping we could align our objectives so that we can have a healthy collaboration instead of just diverging. The main point I wanted to touch on is as follows.
Is there any interest in transforming the backend to one that follows the OpenAI assistants API structure, so that we may better align ourselves with the OpenAI standard? Based on the Discord announcement "...Message API with OpenAI compatibility for HF...", HF seems to signal that they are pushing in that direction, so it would make sense to support that in chat-ui. I haven't looked too deeply into the codebase, but I imagine we will need to refactor the backend endpoints to support assistants API endpoints and then use the openai client to make the requests.
I am more than open to suggestions, and I look forward to exploring how we could collab! | https://github.com/huggingface/chat-ui/issues/838 | open | [
"enhancement",
"good first issue",
"back"
] | 2024-02-17T21:39:49Z | 2024-12-26T05:55:47Z | 4 | CakeCrusher |
huggingface/candle | 1,720 | How to define custom ops with arbitrary number of tensors ? | I dived into the issues and the repo on the subject, because I wanted to be able to call CUDA kernels for 3D Gaussian splatting, and the way to invoke those kernels seems to be custom ops. But right now, we only have
```
CustomOp1(Tensor, std::sync::Arc<Box<dyn CustomOp1 + Send + Sync>>),
CustomOp2(
Tensor,
Tensor,
std::sync::Arc<Box<dyn CustomOp2 + Send + Sync>>,
),
CustomOp3(
Tensor,
Tensor,
Tensor,
std::sync::Arc<Box<dyn CustomOp3 + Send + Sync>>,
)
```
And those gsplat kernels have way more in and/or out tensors depending on the operation.
I can think of ways to do it, but I was wondering if there was a _**good**_ way to do it? | https://github.com/huggingface/candle/issues/1720 | open | [] | 2024-02-16T21:38:16Z | 2024-03-13T13:44:17Z | null | jeanfelixM |
huggingface/chat-ui | 837 | Cannot find assistants UI in the repo | Hi @nsarrazin, I recently cloned chat-ui and noticed that the new assistants UI is missing, at the very least from the main branch.
Is the assistants UI in the repo somewhere?
If not, are there any plans to make it open-source?
If so, when? | https://github.com/huggingface/chat-ui/issues/837 | closed | [] | 2024-02-16T20:13:39Z | 2024-02-17T21:29:08Z | 4 | CakeCrusher |
huggingface/dataset-viewer | 2,456 | Link to the endpoint doc page in case of error? | eg. https://datasets-server.huggingface.co/parquet
could return
```json
{"error":"Parameter 'dataset' is required. Read the docs at https://huggingface.co/docs/datasets-server/parquet"}
```
or
```json
{"error":"Parameter 'dataset' is required.", "docs": "https://huggingface.co/docs/datasets-server/parquet"}
```
instead of
```json
{"error":"Parameter 'dataset' is required"}
``` | https://github.com/huggingface/dataset-viewer/issues/2456 | open | [
"documentation",
"question",
"api",
"P2"
] | 2024-02-15T11:11:44Z | 2024-02-15T11:12:12Z | null | severo |
huggingface/gsplat.js | 64 | How to render from a set of camera position? | Hi, I am trying to render the scene from a set of camera position/rotation that I load from a JSON file.
I think the right way is first to disable the "orbitControls" (engine.orbitControls.enabled = false;) and then set the camera position/rotation manually like this: 'camera.data.update(position, rotation);'. Am I right?
Any suggestion/recommendation is welcome!
| https://github.com/huggingface/gsplat.js/issues/64 | closed | [] | 2024-02-14T16:11:28Z | 2024-02-19T18:13:38Z | null | vahidEtt |
huggingface/chat-ui | 824 | what port is used by the websearch? | I put the chat in a container in a cluster with my MongoDB.
The web search stopped working. I think it might be related to me not opening a port for the web search to access the web, and I could not find a doc that describes how the web search works.
I would love to know what port(s) I should open, and a bit more detail in general.
Thanks in advance. | https://github.com/huggingface/chat-ui/issues/824 | open | [
"support",
"websearch"
] | 2024-02-14T11:15:22Z | 2024-02-14T12:52:25Z | null | kaplanyaniv |
huggingface/transformers.js | 586 | Does `WEBGPU` Truly Enhance Inference Time Acceleration? | ### Question
Recently, I've been extensively utilizing transformers.js to load transformer models, and Kudos to the team for this wonderful library ...
Specifically, I've been experimenting with version 2.15.0 of transformers.js.
Despite the fact that the model runs on the `web-assembly backend`, I've noticed some slowness in inference. In an attempt to address this issue, I experimented with `webgpu inference` using the `v3` branch. However, the inference time did not meet my expectations.
Is it possible for webgpu to significantly accelerate the inference time? | https://github.com/huggingface/transformers.js/issues/586 | closed | [
"question"
] | 2024-02-14T09:23:52Z | 2024-10-18T13:30:13Z | null | kishorekaruppusamy |
huggingface/chat-ui | 823 | WebSearch uses the default model instead of current model selected | I have multiple models in my .env.local and it seems the WebSearch uses the default model to perform its search content extraction instead of the currently selected model (the one that I'm asking the question to...). Is it possible to add a config option to use the same model for everything? | https://github.com/huggingface/chat-ui/issues/823 | open | [
"enhancement",
"back",
"models"
] | 2024-02-14T07:52:59Z | 2024-02-14T13:07:20Z | 4 | ihubanov |
huggingface/trl | 1,327 | how to save/load model? | I've tried saving the model via:
`ppo_trainer.save_pretrained("./model_after_rl")`
and loading it via:
`model = AutoModelForCausalLMWithValueHead.from_pretrained("./model_after_rl")`
`ref_model = AutoModelForCausalLMWithValueHead.from_pretrained("./model_after_rl")`
But the performance is the same as without any reinforcement learning when I add the loaded model to a new PPO trainer, freeze the model, and test again.
| https://github.com/huggingface/trl/issues/1327 | closed | [] | 2024-02-14T06:56:07Z | 2024-04-24T15:05:14Z | null | ADoublLEN |
huggingface/accelerate | 2,440 | How to properly gather results of PartialState for inference on 4xGPUs | ### System Info
```Shell
torch==2.2.0
transformers==4.37.2
accelerate==0.27.0
```
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] One of the scripts in the examples/ folder of Accelerate or an officially supported `no_trainer` script in the `examples` folder of the `transformers` repo (such as `run_no_trainer_glue.py`)
- [X] My own task or dataset (give details below)
### Reproduction
Hi, my question may look stupid, but I want to ask for clarification because I didn't find it in the [documentation](https://huggingface.co/docs/accelerate/main/en/usage_guides/distributed_inference#sending-chunks-of-a-batch-automatically-to-each-loaded-model)
I have 2 million documents to process with a NER model, and I have 4 GPUs. I don't wanna write a multiprocessing script and manually handle each GPU, so I decided to try accelerate.
```python
# Assume there are two processes
from accelerate import PartialState
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model = AutoModelForTokenClassification.from_pretrained('ner')
tokenizer = AutoTokenizer.from_pretrained('ner')
state = PartialState()
ner = pipeline('token-classification', model=model, tokenizer=tokenizer, aggregation_strategy="simple", device=state.device)

# here the list of the list, I wanna treat like a list of batches
data = [[{'text': 'text1', 'id': 1}, {'text': 'text2', 'id': 2}], [{'text': 'text3', 'id': 3}, {'text': 'text4', 'id': 4}]]

results = []
with state.split_between_processes(data) as inputs:
    outputs = ner([i['text'] for i in inputs], max_length=128)
    for i, o in zip(inputs, outputs):
        i['annotation'] = o
        results.append(i)
```
And my question is: am I gathering the results properly, or could there be problems because the work is distributed between different processes?
How do I properly gather results when using `split_between_processes`?
### Expected behavior
The documentation will have more examples of how to gather data. | https://github.com/huggingface/accelerate/issues/2440 | closed | [] | 2024-02-13T14:00:13Z | 2024-03-23T15:07:26Z | null | ZeusFSX |
huggingface/chat-ui | 818 | Settings Page Freezes | When I go to settings to change model (after I ran a convo with a model), the UI settings page can't be closed. It freezes. Right now I have to keep reloading the page to use it | https://github.com/huggingface/chat-ui/issues/818 | closed | [
"question",
"support"
] | 2024-02-13T13:30:01Z | 2024-02-16T09:41:23Z | null | lordsoffallen |
huggingface/candle | 1,701 | How to train my own YOLOv8 model? | Candle provides an example of YOLOv8, which is very useful to use.
But I don't know how to train on my own dataset? Can handle directly load the model trained by pytorch? | https://github.com/huggingface/candle/issues/1701 | open | [] | 2024-02-13T01:56:49Z | 2024-03-18T13:45:07Z | null | mzdk100 |
huggingface/transformers.js | 585 | Using a server backend to generate masks - doublelotus | ### Question
Hi there, just continuing on from my question on - https://huggingface.co/posts/Xenova/240458016943176#65ca9d9c8e0d94e48742fad7.
I've just been reading through your response and initially I was trying it using a python backend and attempted to mimic the worekr.js code like so:
```py
from transformers import SamModel, SamProcessor, AutoProcessor
import numpy as np
model = SamModel.from_pretrained("Xenova/sam-vit-large")
processor = AutoProcessor.from_pretrained("Xenova/sam-vit-large")
```
but was running into this error (as I'm assuming that model isn't supported for a Python backend):
OSError: Xenova/sam-vit-large does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
The main reason behind trying this is because when I tried with sam-vit-base on the web app it was quite slow in generating the image embeddings, would using a node.js server to do that with the onnx server as you suggested be much faster or is there a better way to achieve that? | https://github.com/huggingface/transformers.js/issues/585 | open | [
"question"
] | 2024-02-13T00:06:20Z | 2024-02-28T19:29:26Z | null | jeremiahmark |
huggingface/chat-ui | 817 | Question: Can someone explain "public app data sharing with model authors" please? | I am struggling to understand how, and with whom, data is actually shared when the setting `shareConversationsWithModelAuthors` is activated (which it is by default)?
```javascript
{#if PUBLIC_APP_DATA_SHARING === "1"}
<!-- svelte-ignore a11y-label-has-associated-control -->
<label class="flex items-center">
<Switch
name="shareConversationsWithModelAuthors"
bind:checked={$settings.shareConversationsWithModelAuthors}
/>
<div class="inline cursor-pointer select-none items-center gap-2 pl-2">
Share conversations with model authors
</div>
</label>
<p class="text-sm text-gray-500">
Sharing your data will help improve the training data and make open models better over time.
</p>
{/if}
```
What exactly will or can happen when this is activated?
Thanks! | https://github.com/huggingface/chat-ui/issues/817 | closed | [
"question"
] | 2024-02-12T19:18:03Z | 2024-02-16T14:32:18Z | null | TomTom101 |
huggingface/transformers.js | 581 | How can we use the sam-vit-huge in the production? | ### Question
The size of ONNX files for sam-vit-huge is around 600MB. If I am using the implementation mentioned in the documentation, it downloads these files first before performing the image segmentation. Is there a better way to avoid downloading these files and reduce the time it takes? Additionally, the model is taking too much time to generate embeddings when using sam-vit-huge or sam-vit-large. | https://github.com/huggingface/transformers.js/issues/581 | open | [
"question"
] | 2024-02-09T17:54:43Z | 2024-02-09T17:54:43Z | null | moneyhotspring |
huggingface/dataset-viewer | 2,434 | Create a new step: `config-features`? | See https://github.com/huggingface/datasets-server/issues/2215: the `features` part can be heavy, and on the Hub, when we call /rows, /filter or /search, the features content does not change; there is no need to create / serialize / transfer / parse it.
We could:
- add a new /features endpoint
- or add a `features: bool` parameter to all the endpoints that return rows to include the features in the response.
The only exception is when a new commit happens, and the features have changed. But the Hub could check the `X-Revision` value and reload the page in case of a mismatch. | https://github.com/huggingface/dataset-viewer/issues/2434 | open | [
"question",
"refactoring / architecture",
"P2"
] | 2024-02-09T14:13:10Z | 2024-02-15T10:26:35Z | null | severo |
huggingface/diffusers | 6,920 | How to merge a lot of embedding into a single file | I created a lot of embeddings through textual inversion, but I couldn't find a way to merge these ckpt files into one.
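A hedged sketch of one way to merge them, assuming each file is a diffusers-style `learned_embeds.bin` (a plain dict mapping the placeholder token to its tensor) — worth checking the actual structure of your files first:

```python
import torch

def merge_embeddings(paths, out_path):
    """Combine several textual-inversion embedding files into one dict."""
    merged = {}
    for path in paths:
        data = torch.load(path, map_location="cpu")
        for token, tensor in data.items():
            if token in merged:
                raise ValueError(f"duplicate placeholder token: {token!r}")
            merged[token] = tensor
    torch.save(merged, out_path)
    return merged
```

Loading the merged file back should then give one dict with every placeholder token.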
| https://github.com/huggingface/diffusers/issues/6920 | open | [
"stale"
] | 2024-02-09T08:18:42Z | 2024-03-13T15:02:51Z | null | Eggwardhan |
huggingface/transformers | 28,924 | How to disable log history from getting printed every logging_steps | I'm writing a custom ProgressCallback that modifies the original ProgressCallback transformers implementation and adds some additional information/data to the tqdm progress bar. Here's what I have so far, and it works nicely and as intended.
```python
class ProgressCallback(TrainerCallback):
"""A [`TrainerCallback`] that displays the progress of training or evaluation.
Specifically, it shows:
1. Time spent so far in training or evaluation.
2. Estimated time remaining for training or evaluation.
3. Iterations per second.
4. Loss.
5. Number of input tokens seen so far.
"""
def __init__(self):
self.training_bar = None
self.prediction_bar = None
self.current_step: int = 0
self.loss: float = math.nan
self.num_input_tokens_seen = format_number_suffix(0)
def on_train_begin(self, args, state, control, **kwargs):
if state.is_world_process_zero:
self.training_bar = tqdm(total=state.max_steps, dynamic_ncols=True)
def on_step_end(self, args, state, control, **kwargs):
if state.is_world_process_zero:
self.training_bar.update(state.global_step - self.current_step)
self.current_step = state.global_step
def on_prediction_step(self, args, state, control, eval_dataloader=None, **kwargs):
if state.is_world_process_zero and has_length(eval_dataloader):
if self.prediction_bar is None:
self.prediction_bar = tqdm(
total=len(eval_dataloader),
leave=self.training_bar is None,
dynamic_ncols=True,
)
self.prediction_bar.update(1)
def on_evaluate(self, args, state, control, **kwargs):
if state.is_world_process_zero:
if self.prediction_bar is not None:
self.prediction_bar.close()
self.prediction_bar = None
def on_predict(self, args, state, control, **kwargs):
if state.is_world_process_zero:
if self.prediction_bar is not None:
self.prediction_bar.close()
self.prediction_bar = None
def on_log(self, args, state, control, logs=None, **kwargs):
if state.is_world_process_zero and self.training_bar is not None:
# The last callback_handler.on_log() call in the training loop logs `train_loss` as opposed to `loss`.
# From some digging through transformers code, the `train_loss` is the average training loss
# during training.
# See: https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/trainer.py#L2025-L2026
self.loss = (
state.log_history[-1]["loss"]
if state.log_history and "loss" in state.log_history[-1]
else state.log_history[-1]["train_loss"]
)
self.num_input_tokens_seen = format_number_suffix(state.num_input_tokens_seen)
self.training_bar.set_postfix_str(
f"loss: {self.loss:.4f}, tokens: {self.num_input_tokens_seen}",
)
def on_train_end(self, args, state, control, **kwargs):
if state.is_world_process_zero:
self.training_bar.close()
self.training_bar = None
```
In my trainer arguments, I explicitly set `disable_tqdm` so I can pass this as a custom callback in place of the original ProgressCallback. I also set `logging_steps` to 1 so that I can get metrics back from every step through the `log_history` attribute in the TrainerState object.
The challenge I'm having is that it logs the metrics to stdout, but I am not sure where that actually comes from in the code. I don't want that behavior since I want to surface relevant information directly in my tqdm progress bar through my callback. Looking at the transformers trainer, I've narrowed down that metrics get passed to `on_log` in the callback, and that seems to happen from within this function at the end of each step of training and then again at the end of training: https://github.com/huggingface/transformers/blob/v4.27.2/src/transformers/trainer.py#L2224
When I set a breakpoint at the end of `on_log` in my callback, I can confirm that the logs object doesn't get printed to stdout. So it happens somewhere between that and this looping to get to the next train step, but not sure if I am missing something obvious since I'm still new to the transformers codebase.
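One possible culprit from the same digging (worth verifying): when `disable_tqdm=True`, Trainer registers `PrinterCallback` in place of its default progress callback, and that callback's `on_log` is what prints the logs dict to stdout. Removing it should keep `on_log` flowing to custom callbacks without the extra output:

```python
from transformers import PrinterCallback

def silence_log_printing(trainer):
    """Drop the callback whose on_log prints the logs dict to stdout.
    Usage (after construction):
        trainer = Trainer(..., callbacks=[MyProgressCallback()])
        silence_log_printing(trainer)
    """
    trainer.remove_callback(PrinterCallback)
```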
Here's what I see in my output:
```
***** Running training *****
Num examples = 183
Num Epochs = 3
Instantaneous batch size per device = 1
Total train batch size (w. parallel, distributed & accumulation) = 16
Gradient Accumulation steps = 16
Total optimization steps = 33
Number of trainable parameters = 256
3%|██▍ | 1/33 [00:01<00:34, 1.07s/it, loss | https://github.com/huggingface/transformers/issues/28924 | closed | [] | 2024-02-08T10:23:28Z | 2024-02-08T17:26:02Z | null | arnavgarg1 |
huggingface/alignment-handbook | 120 | (QLoRA) DPO without previous SFT | Because of the following LLM-Leaderboard measurements, I want to perform QLoRA DPO without previous QLoRA SFT:
```
alignment-handbook/zephyr-7b-dpo-qlora: +Average: 63.51; +ARC 63.65; +HSwag 85.35; -+MMLU 63.82; ++TQA: 47.14; (+)Win 79.01; +GSM8K 42.08;
alignment-handbook/zephyr-7b-sft-qlora: -Average: 59; (+)ARC 60.07; (-)HSwag 82.36; -MMLU 61.65; -TQA: 38.88; -Win 76.8; -GSM8K 34.27;
mistralai/Mistral-7B-v0.1: Average: 60.97; ARC 59.98; HSwag 83.31; MMLU 64.16; TQA: 42.15; Win 78.37; GSM8K 37.83;
```
As you can see, there is catastrophic forgetting in `zephyr-7b-sft-qlora` in almost all tasks, especially in MMLU, TruthfulQA, and GSM8K. Thus I wonder why do SFT at all?
In more detail
============
Q1: Why is there so much catastrophic forgetting in `zephyr-7b-sft-qlora` ? Due to the following improvements by DPO, the dataset seems to be apt.
Q2: Why is SFT performed before DPO at all? Is it some prerequisite, like SFT training the model to follow instructions at all, before DPO aligning the responses to instructions with human preferences?
Q3: I tried the following for DPO without previous SFT:
Modify `recipes/zephyr-7b-beta/dpo/config_qlora.yaml` by using `model_name_or_path: mistralai/Mistral-7B-v0.1` and then calling `scripts/run_dpo.py` on it:
```
echo -e "2,3c2\n< model_name_or_path: mistralai/Mistral-7B-v0.1\n< model_revision: main\n---\n> model_name_or_path: alignment-handbook/zephyr-7b-sft-qlora\n36c35\n< gradient_accumulation_steps: 8\n---\n> gradient_accumulation_steps: 2\n40c39\n< hub_model_id: zephyr-7b-dpo-qlora-no-sft\n---\n> hub_model_id: zephyr-7b-dpo-qlora\n49,51c48,50\n< output_dir: data/zephyr-7b-dpo-qlora-no-sft # It is handy to append `hub_model_revision` to keep track of your local experiments\n< per_device_train_batch_size: 1\n< per_device_eval_batch_size: 2\n---\n> output_dir: data/zephyr-7b-dpo-qlora # It is handy to append `hub_model_revision` to keep track of your local experiments\n> per_device_train_batch_size: 4\n> per_device_eval_batch_size: 8\n53,55d51\n< report_to:\n< - tensorboard\n< - wandb" | patch recipes/zephyr-7b-beta/dpo/config_qlora.yaml
ACCELERATE_LOG_LEVEL=info accelerate launch --config_file recipes/accelerate_configs/multi_gpu.yaml --num_processes=1 scripts/run_dpo.py recipes/zephyr-7b-beta/dpo/config_qlora.yaml
```
However, I get the error described at https://github.com/huggingface/alignment-handbook/issues/93. The solution there inspired me to do the following (so I don't have to go into the cache to replace tokenizer configs): Add in line 77 of `src/alignment/data.py`
```
tokenizer.chat_template = "{% for message in messages %}\n{% if message['role'] == 'user' %}\n{{ '<|user|>\n' + message['conten\
t'] + eos_token }}\n{% elif message['role'] == 'system' %}\n{{ '<|system|>\n' + message['content'] + eos_token }}\n{% elif message['role'] \
== 'assistant' %}\n{{ '<|assistant|>\n' + message['content'] + eos_token }}\n{% endif %}\n{% if loop.last and add_generation_prompt %}\n{{\
'<|assistant|>' }}\n{% endif %}\n{% endfor %}"
```
But Mistral's `default_chat_template` already allows system messages, so the problem seems to be that the dialogs in the dataset really do not alternate between user and assistant messages. Right? What is the reason for this?
Mistral's `default_chat_template`, which causes the error message:
```
{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% elif false == true and not '<<SYS>>' in messages[0]['content'] %}{% set loop_messages = messages %}{% set system_message = 'You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical,
racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don\'t know
the answer to a question, please don\'t share false information.' %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if loop.index0 == 0 and system_message != false %}{% set conte
nt = '<<SYS>>\n' + system_message + '\n<</SYS>>\n\n' + message['content'] %}{% else %}{% set content = message['content'] %}{% endif %}{% if message['role'] == 'user' %}{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}{% elif message['role'] == 'system' %}{{ '<<SYS>>\n' + content.strip() + '\n<</SYS>>\n\n' }}{% elif message['role'] == 'assistant' %}{{ ' ' + content.strip() + ' ' + eos_token }}{% endif %}{% | https://github.com/huggingface/alignment-handbook/issues/120 | open | [] | 2024-02-08T09:56:50Z | 2024-02-09T22:15:10Z | 1 | DavidFarago |
huggingface/transformers.js | 577 | Getting 'fs is not defined' when trying the latest "background removal" functionality in the browser? | ### Question
I copied the code from https://github.com/xenova/transformers.js/blob/main/examples/remove-background-client/main.js to here, but I'm getting this error with v2.15.0 of @xenova/transformers.js:
```
Uncaught ReferenceError: fs is not defined
at env.js:36:31
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/env.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:258:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at hub.js:6:2
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/utils/hub.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:783:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at tokenizers.js:21:2
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/tokenizers.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:6729:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at pipelines.js:14:2
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/pipelines.js [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:17183:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at 8484b_@xenova_transformers_src_5fe153._.js:17215:237
at [project]/node_modules/.pnpm/@xenova+transformers@2.15.0/node_modules/@xenova/transformers/src/transformers.js [app-client] (ecmascript) {module evaluation} (http://localhost:3001/_next/static/chunks/8484b_%40xenova_transformers_src_5fe153._.js:17228:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at esmImport (runtime-utils.ts:205:18)
at _b29e97._.js:19146:268
at [project]/app/remove/background/page.tsx [app-client] (ecmascript) (http://localhost:3001/_next/static/chunks/_b29e97._.js:19389:3)
at runtime-base.ts:322:21
at runModuleExecutionHooks (runtime-base.ts:376:5)
at instantiateModule (runtime-base.ts:321:5)
at getOrInstantiateModuleFromParent (runtime-base.ts:424:10)
at commonJsRequire (runtime-utils.ts:230:18)
at requireModule (react-server-dom-turbopack-client.browser.development.js:154:23)
at initializeModuleChunk (react-server-dom-turbopack-client.browser.development.js:1336:17)
at readChunk (react-server-dom-turbopack-client.browser.development.js:1146:7)
at mountLazyComponent (react-dom.development.js:16652:19)
at beginWork$1 (react-dom.development.js:18388:16)
at beginWork (react-dom.development.js:26791:14)
at performUnitOfWork (react-dom.development.js:25637:12)
at workLoopSync (react-dom.development.js:25353:5)
```
Any idea what is wrong and how to fix it? Here is my code, which basically a direct React.js port of the background removal example you all shared:
```tsx
'use client'
import {
AutoModel,
AutoProcessor,
env,
PreTrainedModel,
Processor,
RawImage,
} from '@xenova/transformers'
import React, {
MouseEvent,
useCallback,
useEffect,
useRef,
useState,
} from 'react'
import _ from 'lodash'
import FileDropzone from '~/components/FileDropzone'
// Since we will download the model from the Hugging Face Hub, we can skip the local model check
env.allowLocalModels = false
// Proxy the WASM backend to prevent the UI from freezing
env.backends.onnx.wasm.proxy = true
function useModel(): {
model?: PreTrainedModel
processor?: Processor
} {
const [model, setModel] = useState<PreTrainedModel>()
const [processor, setProcessor] = useState<Processor>()
useEffect(() => {
AutoModel.from_pretrained('briaai/RMBG-1.4', {
config: { model_type: 'custom' },
}).then(m => {
setModel(m)
})
AutoProcessor.from_pretrained('briaai/RMBG-1.4', {
config: {
| https://github.com/huggingface/transformers.js/issues/577 | open | [
"question"
] | 2024-02-08T04:34:59Z | 2024-11-26T05:20:22Z | null | lancejpollard |
huggingface/transformers.js | 575 | Can GPU acceleration be used when using this library in a node.js environment? | ### Question
Hello, I have looked into the GPU support related issue, but all mentioned content is related to webGPU. May I ask if GPU acceleration in the node.js environment is already supported? Refer: https://github.com/microsoft/onnxruntime/tree/main/js/node | https://github.com/huggingface/transformers.js/issues/575 | closed | [
"question"
] | 2024-02-07T03:37:50Z | 2025-01-20T15:05:00Z | null | SchneeHertz |
huggingface/dataset-viewer | 2,408 | Add task tags in /hub-cache? | On the same model as https://github.com/huggingface/datasets-server/pull/2386, detect and associate tags to a dataset to describe the tasks it can be used for.
Previously discussed at https://github.com/huggingface/datasets-server/issues/561#issuecomment-1250029425 | https://github.com/huggingface/dataset-viewer/issues/2408 | closed | [
"question",
"feature request",
"P2"
] | 2024-02-06T11:17:19Z | 2024-06-19T15:43:15Z | null | severo |
huggingface/dataset-viewer | 2,407 | Remove env var HF_ENDPOINT? | Is it still required to set HF_ENDPOINT as an environment variable?
https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/resources.py#L41-L45
| https://github.com/huggingface/dataset-viewer/issues/2407 | closed | [
"duplicate",
"question",
"refactoring / architecture",
"P2"
] | 2024-02-06T11:11:24Z | 2024-02-06T14:53:12Z | null | severo |
huggingface/chat-ui | 786 | Can't get Mixtral to work with web-search | I have been following this project for a while and recently tried setting up oobabooga Mixtral-8x7b
I used the official prompt template used in huggingface.co :
```
<s> {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}} {{content}}</s> {{/ifAssistant}}{{/each}}
```
Normal chat works, and summarization for the title works, but web-search does not.
It always gives the full answer instead of a search term.

Here is my local.env:
```
MONGODB_URL=mongodb://localhost:27017
USE_LOCAL_WEBSEARCH=true
PUBLIC_APP_ASSETS=chatui
HF_ACCESS_TOKEN=hf_none
PUBLIC_APP_DESCRIPTION="ChatGPT But Open Source!"
PUBLIC_APP_NAME=ChatGPT
MODELS=`[
{
"name": "LocalGPT",
"description": "Mixtral is a great overall model",
"chatPromptTemplate" : "<s> {{#each messages}}{{#ifUser}}[INST]{{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}} {{content}} [/INST]{{/ifUser}}{{#ifAssistant}} {{content}}</s> {{/ifAssistant}}{{/each}}",
"preprompt": "",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python and give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.3,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 3072,
"max_new_tokens": 2048,
"stop": ["</s>"]
},
"endpoints": [{
"type" : "openai",
"baseURL": "http://127.0.0.1:5000/v1"
}]
}
]`
```
| https://github.com/huggingface/chat-ui/issues/786 | open | [] | 2024-02-06T07:14:08Z | 2024-02-16T10:45:40Z | 2 | iChristGit |
huggingface/dataset-viewer | 2,402 | Reduce resources for /filter and /search? | They have nearly 0 traffic. https://grafana.huggingface.tech/d/i7gwsO5Vz/global-view?orgId=1&from=now-6h&to=now
Should we reduce the number of pods? How to configure the right level? | https://github.com/huggingface/dataset-viewer/issues/2402 | closed | [
"question",
"infra",
"P2",
"prod"
] | 2024-02-05T21:44:56Z | 2024-02-28T17:55:50Z | null | severo |
huggingface/dataset-viewer | 2,390 | Store the repo visibility (public/private) to filter webhooks | See https://github.com/huggingface/datasets-server/pull/2389#pullrequestreview-1862425050
Not sure if we want to do it, or wait for the Hub to provide more finely scoped webhooks. See also #2208, where we wanted to store metadata about the datasets. | https://github.com/huggingface/dataset-viewer/issues/2390 | closed | [
"question",
"P2"
] | 2024-02-05T12:37:30Z | 2024-06-19T15:37:36Z | null | severo |
huggingface/transformers.js | 567 | Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order. | ### Question
Does await pipeline() support multithreading? I've tried all kinds of multithreaded calls and it still returns the results one by one in order. | https://github.com/huggingface/transformers.js/issues/567 | open | [
"question"
] | 2024-02-05T11:12:34Z | 2024-02-05T11:12:34Z | null | a414166402 |
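A hypothetical plain-Node sketch (no transformers.js involved — the function names and busy loop are illustrative) of why `Promise.all` alone doesn't interleave CPU-bound calls: JavaScript runs synchronous work on a single thread, so each call completes before the next starts, and truly parallel inference would need Web Workers.

```javascript
// Hypothetical stand-in for a pipeline call: async signature, but the actual
// work inside is synchronous, like WASM inference on the main thread.
const order = [];

async function fakeInfer(id) {
  order.push(`start-${id}`);
  let acc = 0;
  for (let i = 0; i < 1e6; i++) acc += i; // busy loop standing in for model execution
  order.push(`end-${id}`);
  return acc;
}

async function main() {
  // Promise.all schedules all three calls "concurrently", but each synchronous
  // body still runs to completion before the next call even begins.
  await Promise.all([fakeInfer(1), fakeInfer(2), fakeInfer(3)]);
  return order.join(',');
}

main().then(console.log);
// Prints: start-1,end-1,start-2,end-2,start-3,end-3
```

If the `start`/`end` markers interleaved, the calls would be overlapping; they don't, which matches results coming back one by one in order.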
huggingface/transformers.js | 565 | How can I use this model for image matting? | ### Question
https://github.com/ZHKKKe/MODNet?tab=readme-ov-file
They have an ONNX file and the Python CLI usage looks simple, but I can't find how to use it with transformers.js.
```
!python -m demo.image_matting.colab.inference \
--input-path demo/image_matting/colab/input \
--output-path demo/image_matting/colab/output \
--ckpt-path ./pretrained/modnet_photographic_portrait_matting.ckpt
``` | https://github.com/huggingface/transformers.js/issues/565 | closed | [
"question"
] | 2024-02-05T09:28:28Z | 2024-02-07T11:33:26Z | null | cyio |
huggingface/transformers.js | 564 | Can models from user disks load and run in my HF space? | ### Question
I'm fiddling around with the react-translator template.
What I have accomplished so far:
- Run local (on disk in public folder) model in localhost webapp.
- Run hosted (on HF) model in localhost webapp.
- Run hosted (on HF) model in HF Space webapp.
What i want to accomplish but can't figure out:
- Use local (on disk in any folder) model in HF Space webapp.
Is this possible?
From what I understand so far, local models have to be in the public folder of the webapp, but that defeats the purpose of my webapp, which would be to allow users to benchmark models from any folder of their disk in my HF Space.
Preferably the user would provide a path or use drag'n'drop to provide their model folder location on the disk and the webapp would then proceed to load the model from the provided location into the application cache.
The reason I need this specific setup is that I work on a benchmarking tool and I don't want to force users to host their models on HF in order to be able to benchmark them.
"question"
] | 2024-02-05T08:00:55Z | 2024-06-07T01:17:24Z | null | saferugdev |
huggingface/transformers | 28,860 | Question: How do LLMs learn to be "Generative", as we often describe them? | (Please forgive me and let me know if I'm not allowed to ask this kind of question here. I'm so sorry if I'm bothering everyone.)
AFAIK to be called "generative", a model should have the ability to learn the joint probability over the training data. In the case of LLMs, we apply the chain rule of Bayes' formula to achieve this by leveraging the autoregressive method for every token of each input text sequence. For example, with a text sequence of 4 tokens, it can be written as:
```
p(x4,x3,x2,x1) = p(x4|x3,x2,x1) * p(x3|x2,x1) * p(x2|x1) * p(x1)
```
where `x1` denotes the 1st token, `x2` denotes the 2nd token and so on, respectively.
I understand the conditional terms `p(x_n|...)`, where we use cross-entropy to calculate their losses. However, I'm unsure about the probability of the very first token `p(x1)`. How is it calculated? Is it handled in some configuration of the training process, in the model architecture, or in the loss function?
IMHO, if the model doesn't learn `p(x1)` properly, the entire formula for Bayes' rule cannot be completed, and we can't refer to LLMs as "truly generative". Am I missing something here?
I asked the [same question on `nanoGPT` repo](https://github.com/karpathy/nanoGPT/issues/432) and [on HN](https://news.ycombinator.com/item?id=39249301). I'm also reading Transformer code from this repo, but I haven't found the answer I'm looking for yet. Could someone please enlighten me? Thanks in advance! | https://github.com/huggingface/transformers/issues/28860 | closed | [] | 2024-02-05T07:10:23Z | 2024-02-05T12:22:27Z | null | metalwhale |
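A toy sketch of one common resolution (an assumption about typical GPT-style implementations, not stated in the thread): the first factor is itself modeled as a conditional `p(x1 | BOS)`, conditioned on a special beginning-of-sequence token, so every factor in the chain rule — including the first — is an ordinary next-token prediction. The vocabulary and probability table below are invented for illustration:

```javascript
// Toy autoregressive model over a 3-token vocabulary plus a BOS symbol.
// P[context] gives the next-token distribution after that context string.
const P = {
  'BOS':     { a: 0.5, b: 0.3, c: 0.2 },
  'BOS a':   { a: 0.1, b: 0.6, c: 0.3 },
  'BOS a b': { a: 0.2, b: 0.2, c: 0.6 },
};

// Joint probability via the chain rule; the "unconditional" first factor
// is really p(x1 | BOS), which the model learns like any other step.
function jointProb(tokens) {
  let context = 'BOS';
  let prob = 1;
  for (const t of tokens) {
    prob *= P[context][t];
    context += ' ' + t;
  }
  return prob;
}

console.log(jointProb(['a', 'b', 'c'])); // ≈ 0.18 (= 0.5 * 0.6 * 0.6)
```

Under this view, nothing special is needed in the loss for `p(x1)`: prepending BOS turns it into the same cross-entropy term as every other position.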
huggingface/sentence-transformers | 2,470 | BGE Reranker / BERT Crossencoder Onnx model latency issue | I am using the Int8 quantized version of BGE-reranker-base model converted to the Onnx model. I am processing the inputs in batches. Now the scenario is that I am experiencing a latency of 20-30 secs with the original model. With the int8 quantized and onnx optimized model, the latency was reduced to 8-15 secs keeping all the configurations the same like hardware, batch processing, and everything I used with the original torch model.
I am using Flask as an API server, on a quad-core machine.
I want to further reduce the latency of the ONNX model. How can I do so?
Also, please suggest anything more I can do during deployment.
"question"
] | 2024-02-05T05:54:18Z | 2024-02-09T06:59:51Z | null | ojasDM |
huggingface/chat-ui | 774 | Where are the image and pdf upload features when running on locally using this repo? | I see there are issues and features being talked about and added for the image upload and parsing PDFs as markdown etc. However, I dont see these features in when I cloned this repo and started chatui using "npm run dev" locally.
Am I missing something?
#641 are the features I am talking about. | https://github.com/huggingface/chat-ui/issues/774 | closed | [] | 2024-02-05T00:41:05Z | 2024-02-05T08:48:29Z | 1 | zubu007 |
huggingface/chat-ui | 771 | Using OpenAI API key for corporate use | Hi
We are working with an OpenAI key for our corporation (it has a corporate endpoint).
this is how we added the model to .env.local
```
MODELS=`[
{
"name": "Corporate local instance of GPT 3.5 Model",
"endpoints": [{
"type": "openai",
"url": "corporate url"
}],
"userMessageToken": "User: ",
"assistantMessageToken": "Assistant: ",
"messageEndToken": "</s>",
"preprompt": " ",
"prepromptUrl": "http://127.0.0.1:8000/preprompt.txt",
"parameters": {
"temperature": 0.9,
"max_new_tokens": 1024,
"truncate": 31000
},
```
The problem is that I can't connect to the model; there are authentication issues. This is what we get:
Has anyone else tried to connect with a corporate OpenAI API key?
How can we solve this?
We can connect to the model using Python, so this is not an issue with the credentials. | https://github.com/huggingface/chat-ui/issues/771 | open | [
"models"
] | 2024-02-04T11:23:59Z | 2024-02-06T15:01:50Z | 1 | RachelShalom |
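One way to isolate such auth failures (a hedged sketch — the base URL, model name, and header convention below are assumptions, since corporate gateways vary: standard OpenAI endpoints expect `Authorization: Bearer`, while Azure-style gateways expect an `api-key` header) is to build the exact request an OpenAI-compatible client would send and replay it outside the app, e.g. with curl:

```javascript
// Build an OpenAI-compatible chat request so it can be inspected or replayed.
// Header style is an assumption: standard OpenAI uses "Authorization: Bearer",
// Azure-style gateways use "api-key".
function buildChatRequest(baseURL, apiKey, messages, { azureStyle = false } = {}) {
  const headers = { 'Content-Type': 'application/json' };
  if (azureStyle) {
    headers['api-key'] = apiKey;
  } else {
    headers['Authorization'] = `Bearer ${apiKey}`;
  }
  return {
    url: `${baseURL.replace(/\/$/, '')}/chat/completions`,
    options: {
      method: 'POST',
      headers,
      // 'gpt-3.5-turbo' is a placeholder; use whatever model name the gateway expects.
      body: JSON.stringify({ model: 'gpt-3.5-turbo', messages }),
    },
  };
}

const req = buildChatRequest('https://corporate.example/v1', 'sk-test', [
  { role: 'user', content: 'hi' },
]);
console.log(req.url); // https://corporate.example/v1/chat/completions
```

If the replayed request fails the same way, the problem is the gateway's expected header/path rather than chat-ui's configuration.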
huggingface/optimum-neuron | 460 | [QUESTION] What is the difference between optimum-neuron and transformers-neuronx? | I would like to understand the differences between this optimum-neuron and [transformers-neuronx](https://github.com/aws-neuron/transformers-neuronx). | https://github.com/huggingface/optimum-neuron/issues/460 | closed | [] | 2024-02-02T18:27:46Z | 2024-03-27T11:04:52Z | null | leoribeiro |
huggingface/dataset-viewer | 2,376 | Should we increment "failed_runs" when error is "ResponseAlreadyComputedError"? | Related to https://github.com/huggingface/datasets-server/issues/1464: is it really an error? | https://github.com/huggingface/dataset-viewer/issues/2376 | closed | [
"question",
"P2"
] | 2024-02-02T12:08:31Z | 2024-02-22T21:16:12Z | null | severo |
huggingface/autotrain-advanced | 484 | How to ask a question to an AutoTrained LLM? If I ask a question it doesn't return any answer | Hi,
LLM training was successful, but when I ask a question from my trained context it is not answered. How do I ask a proper question?
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "bert-base-uncased_finetuning"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="cuda",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
This prints the following warnings:
```
Some weights of BertLMHeadModel were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Example:
```
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1128: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
  warnings.warn(
/usr/local/lib/python3.10/dist-packages/transformers/generation/utils.py:1136: UserWarning: Input length of input_ids is 24, but `max_length` is set to 20. This can lead to unexpected behavior. You should consider increasing `max_new_tokens`.
  warnings.warn(
```
| https://github.com/huggingface/autotrain-advanced/issues/484 | closed | [
"stale"
] | 2024-02-02T09:29:07Z | 2024-03-04T15:01:36Z | null | charles-123456 |
huggingface/chat-ui | 761 | Does chat-ui support offline deployment? I have downloaded the weights to my local computer. | I have downloaded the weights to my local computer. Due to network issues, I am unable to interact with the huggingface website. Can I do an offline deployment based on chat-ui and the weights downloaded from huggingface? Does that mean I don't need to set HF_TOKEN=<your access token> in the .env.local file? | https://github.com/huggingface/chat-ui/issues/761 | closed | [
"support"
] | 2024-02-02T07:57:19Z | 2024-02-04T03:23:25Z | 2 | majestichou |
huggingface/transformers.js | 557 | how to cast types? | ### Question
I have the following code:
```
const pipe = await pipeline('embeddings');
const output = await pipe([
'The quick brown fox jumps over the lazy dog',
]);
const embedding = output[0][0];
```
`output[0][0]` causes a typescript error:
<img width="748" alt="CleanShot 2024-02-01 at 23 38 04@2x" src="https://github.com/xenova/transformers.js/assets/2908721/6e7a1e58-bfbf-4a9d-96e3-83b771c7be99">
| https://github.com/huggingface/transformers.js/issues/557 | open | [
"question"
] | 2024-02-02T04:38:20Z | 2024-02-08T19:01:06Z | null | pthieu |
huggingface/diffusers | 6,819 | How to make diffusers use local code for a pipeline instead of downloading it online every time we use it? | I tried to use the InstaFlow pipeline from examples/community to run my test. However, even after I git cloned the repository to my environment, it still keeps trying to download the latest version of the InstaFlow pipeline code. Unfortunately, in my area it is hard for the environment to download it directly from raw GitHub. I tried to change the downloaded code to make it use the code already in my environment, but found it hard to change the path/URL.
I would appreciate it if someone could provide a proper answer. Thank you for your time, and happy Lunar New Year! | https://github.com/huggingface/diffusers/issues/6819 | closed | [] | 2024-02-02T02:53:48Z | 2024-11-28T05:44:10Z | null | Kevin-shihello-world |
huggingface/diffusers | 6,817 | How to use class_labels in the UNet2DConditionModel or UNet2DModel forward pass? | Hi, I want to know the shape or format of the class labels if I want to add class conditioning to the UNet. Do I just set **class_labels** to 0, 1, 2, 3?
Unet2DModel: **class_labels** (torch.FloatTensor, optional, defaults to None) — Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings.
Unet2DConditionalModel: **class_labels** (torch.Tensor, optional, defaults to None) — Optional class labels for conditioning. Their embeddings will be summed with the timestep embeddings. timestep_cond — (torch.Tensor, optional, defaults to None): Conditional embeddings for timestep. If provided, the embeddings will be summed with the samples passed through the self.time_embedding layer to obtain the timestep embeddings. | https://github.com/huggingface/diffusers/issues/6817 | closed | [] | 2024-02-02T02:17:40Z | 2024-02-07T07:31:35Z | null | boqian-li |
huggingface/sentence-transformers | 2,465 | How to load lora model to sentencetransformer model? | Dear UKPlab team,
My team and I are working on a RAG project, and right now we are fine-tuning a retrieval model using the PEFT library. The issue is that once we have the model fine-tuned, we couldn't load the local config and checkpoints using `SentenceTransformer`.
Here is the directory hierarchy of the local PEFT model path:
- adapter_config.json
- adapter_model.safetensors
- ....
When I look into the `sentence-transformers` package, the issue comes from the `Transformer.py` class, which doesn't handle the case where the model path is a PEFT model path:
` config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)`
So we had to comment out this line, delete the `config` attribute entirely, and in the `_load_model` method keep only this code:
`self.auto_model = AutoModel.from_pretrained(model_name_or_path, cache_dir=cache_dir)`
We sincerely request: could you please fix this issue, or tell us the correct way to load a PEFT model using the SentenceTransformer class?
| https://github.com/huggingface/sentence-transformers/issues/2465 | closed | [] | 2024-02-02T00:18:04Z | 2024-11-08T12:32:36Z | null | Shengyun-Si |
huggingface/amused | 3 | How to generate multiple images? | Thank you for your amazing work! Could you kindly explain how to generate multiple images at a time? Thank you | https://github.com/huggingface/amused/issues/3 | closed | [] | 2024-02-01T18:03:30Z | 2024-02-02T10:36:09Z | null | aishu194 |
huggingface/alignment-handbook | 110 | DPO loss on different datasets | In parallel with #38, tho i am relating to full training instead of lora.
When i use a different set of prefs (ie chosen and rejected) but still same instructions (ultrafeedback), i get extremely low eval/train loss, where it drops sharply in the beginning. In contrast to training on the original prefs as in the case of ultrafeedback_binarised.
On my pref dataset (Eval loss)

on original pref dataset (eval loss)

train loss (mine)

original

reward margin (mine)

original reward

This huge difference in scale seems to occur when I use preference datasets sampled from the reference policy, unlike UltraFeedback, where they are sampled from various policies.
Moreover, this huge decrease in loss actually causes the DPO-ed model to perform worse across various benchmarks. Is there any intuition regarding this? | https://github.com/huggingface/alignment-handbook/issues/110 | open | [] | 2024-02-01T15:49:29Z | 2024-02-01T15:49:29Z | 0 | wj210 |
huggingface/chat-ui | 757 | Which (temperature) configurations for Zephyr chat interface? | Hi, I apologise for what is maybe an obvious question but where can I find the exact configurations for the model offered on the HF Zephyr Chat interface on https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat for Zephyr 7B beta? I'm especially interested to see the temperature settings and wasn't able to find this information. | https://github.com/huggingface/chat-ui/issues/757 | closed | [
"support"
] | 2024-02-01T14:27:12Z | 2024-02-01T14:47:13Z | 3 | AylaRT |