repo stringclasses 147 values | number int64 1 172k | title stringlengths 2 476 | body stringlengths 0 5k | url stringlengths 39 70 | state stringclasses 2 values | labels listlengths 0 9 | created_at timestamp[ns, tz=UTC]date 2017-01-18 18:50:08 2026-01-06 07:33:18 | updated_at timestamp[ns, tz=UTC]date 2017-01-18 19:20:07 2026-01-06 08:03:39 | comments int64 0 58 ⌀ | user stringlengths 2 28 |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/peft | 631 | How to train multiple LoRAs at once? | Hi! I would like to train multiple LoRAs at once (for some reason). Although `requires_grad` is True for all LoRA weight matrices, only the first LoRA weight matrix will calculate the gradient, and the others will not calculate the gradient - and will not be updated. How can I train them in one forward process?
1. I initialize multiple LoRAs using the `add_adapter()` method
```python
bert_path = "prajjwal1/bert-tiny"
rank = 8
LoRA_amount = 6
model = CustomBert.from_pretrained(bert_path)
peft_config = LoraConfig(
    inference_mode=False,
    r=rank,
    lora_alpha=32,
    lora_dropout=0.1
)
model = PeftModel(model, peft_config, adapter_name="0")
for LoRA_index in range(1, LoRA_amount):
    model.add_adapter(str(LoRA_index), peft_config)
```
2. This is the printed model architecture
```
testModel(
(model): PeftModel(
(base_model): LoraModel(
(model): CustomBert(
(bert): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(30522, 128, padding_idx=0)
(position_embeddings): Embedding(512, 128)
(token_type_embeddings): Embedding(2, 128)
(LayerNorm): LayerNorm((128,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0): BertLayer(
(attention): BertAttention(
(self): BertSelfAttention(
(query): Linear(
in_features=128, out_features=128, bias=True
(lora_dropout): ModuleDict(
(0): Dropout(p=0.1, inplace=False)
(1): Dropout(p=0.1, inplace=False)
(2): Dropout(p=0.1, inplace=False)
(3): Dropout(p=0.1, inplace=False)
(4): Dropout(p=0.1, inplace=False)
(5): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(0): Linear(in_features=128, out_features=16, bias=False)
(1): Linear(in_features=128, out_features=16, bias=False)
(2): Linear(in_features=128, out_features=16, bias=False)
(3): Linear(in_features=128, out_features=16, bias=False)
(4): Linear(in_features=128, out_features=16, bias=False)
(5): Linear(in_features=128, out_features=16, bias=False)
)
(lora_B): ModuleDict(
(0): Linear(in_features=16, out_features=128, bias=False)
(1): Linear(in_features=16, out_features=128, bias=False)
(2): Linear(in_features=16, out_features=128, bias=False)
(3): Linear(in_features=16, out_features=128, bias=False)
(4): Linear(in_features=16, out_features=128, bias=False)
(5): Linear(in_features=16, out_features=128, bias=False)
)
(lora_embedding_A): ParameterDict()
(lora_embedding_B): ParameterDict()
)
(key): Linear(in_features=128, out_features=128, bias=True)
(value): Linear(
in_features=128, out_features=128, bias=True
(lora_dropout): ModuleDict(
(0): Dropout(p=0.1, inplace=False)
(1): Dropout(p=0.1, inplace=False)
(2): Dropout(p=0.1, inplace=False)
(3): Dropout(p=0.1, inplace=False)
(4): Dropout(p=0.1, inplace=False)
(5): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(0): Linear(in_features=128, out_features=16, bias=False)
(1): Linear(in_features=128, out_features=16, bias=False)
(2): Linear(in_features=128, out_features=16, bias=False)
(3): Linear(in_features=128, out_features=16, bias=False)
(4): Linear(in_features=128, out_features=16, bias=False)
(5): Linear(in_features=128, out_features=16, bias=False)
)
(lora_B): ModuleDict(
(0): Linear(in_features=16, out_features=128, bias=False)
(1): Linear(in_features=16, out_features=128, bias=False)
(2): Linear(in_features=16, out_features=128, bias=False)
(3): Linear(in_features=16, out_features=128, bias=False)
(4): Linear(in_features=16, out_features=128, bias=False)
(5 | https://github.com/huggingface/peft/issues/631 | closed | [
"enhancement"
] | 2023-06-26T09:30:16Z | 2023-08-18T13:41:32Z | null | meteorlin |
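The reported behavior, where only one LoRA receives gradients, is consistent with PEFT applying only the *active* adapter in the forward pass; a common workaround is to switch the active adapter (e.g. with `set_adapter`) across steps so every adapter gets updates. Below is a dependency-free toy analogue of that round-robin pattern (it is not PEFT code; a scalar weight stands in for each LoRA):

```python
# Each "adapter" is a single scalar weight; only the active one is used in
# the forward pass, so only it receives a gradient update on that step.
def train_round_robin(n_adapters=6, steps=12, lr=0.1, target=2.0):
    weights = [0.0] * n_adapters           # one stand-in "LoRA" weight each
    for step in range(steps):
        active = step % n_adapters         # round-robin, like set_adapter(str(i))
        x = 1.0
        y = x * weights[active]            # forward pass uses only the active adapter
        grad = 2.0 * (y - target) * x      # gradient of the squared error (y - target)**2
        weights[active] -= lr * grad       # only the active weight moves
    return weights

weights = train_round_robin()
print(weights)  # all six weights have moved off their 0.0 init
```

With 12 steps each adapter is activated twice, so every weight ends up at the same nonzero value; with a single fixed active adapter, the other five would stay at their initialization, which matches the symptom described above.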
huggingface/optimum | 1,135 | Donut document parsing export to onnx does not work. | ### System Info
```shell
optimum==1.8.8
python==3.11.3
system linux
```
### Who can help?
The Donut export does not work with the following commands. Does anybody know how to get this running, or know the status?
```
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/
...
...
...
Exception: The post-processing of the ONNX export failed. The export can still be performed by passing the option --no-post-process. Detailed error: Unable to merge decoders. Detailed error: Expected
a dynamic shape for the axis zero of onnx::Reshape_1045, found a static shape: 2
```
```
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/ --no-post-process
...
...
...
- last_hidden_state: max diff = 0.0012216567993164062
Validation 1 for the model donut_cord2_onnx/decoder_model.onnx raised: The exported ONNX model does not have the exact same outputs as what is provided in VisionEncoderDecoderOnnxConfig. Difference: onnx::Reshape_1263, onnx::Reshape_1359, onnx::Reshape_1364, onnx::Reshape_1045, onnx::Reshape_1146, onnx::Reshape_1258, onnx::Reshape_1151, onnx::Reshape_1050
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:encoder_hidden_states
An error occured during validation, but the model was saved nonetheless at donut_cord2_onnx. Detailed error: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Invalid Feed Input Name:encoder_hidden_states.
```
Changing the task name to `image-to-text` instead of `image-to-text-with-past` does seem to run. However, I assume that this task is set deliberately, although it is unclear to me why it is set to that particular task.
```
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/ --no-post-process --task image-to-text
Validating ONNX model donut_cord2_onnx/encoder_model.onnx...
-[✓] ONNX model output names match reference model (last_hidden_state)
- Validating ONNX Model output "last_hidden_state":
-[✓] (2, 1200, 1024) matches (2, 1200, 1024)
-[x] values not close enough, max diff: 0.00121307373046875 (atol: 0.001)
Validating ONNX model donut_cord2_onnx/decoder_model.onnx...
Validation 0 for the model donut_cord2_onnx/encoder_model.onnx raised: The maximum absolute difference between the output of the reference model and the ONNX exported model is not within the set tolerance 0.001:
- last_hidden_state: max diff = 0.00121307373046875
The ONNX export succeeded with the warning: The exported ONNX model does not have the exact same outputs as what is provided in VisionEncoderDecoderOnnxConfig. Difference: onnx::Reshape_1359, onnx::Reshape_1258, onnx::Reshape_1146, onnx::Reshape_1151, onnx::Reshape_1050, onnx::Reshape_1045, onnx::Reshape_1364, onnx::Reshape_1263.
The exported model was saved at: donut_cord2_onnx
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/
### Expected behavior
The export should run correctly and produce a validation report. | https://github.com/huggingface/optimum/issues/1135 | closed | [
"bug"
] | 2023-06-26T08:57:01Z | 2023-06-26T10:17:32Z | 3 | casperthuis |
huggingface/peft | 630 | How to switch to P-Tuning v2 | We can find the `P-Tuning v2` in
https://github.com/huggingface/peft/blob/8af8dbd2ec9b4b8f664541e9625f898db7c7c78f/README.md?plain=1#L29
But how can I switch to `P-Tuning v2`? | https://github.com/huggingface/peft/issues/630 | closed | [
"solved"
] | 2023-06-26T08:52:42Z | 2023-08-04T15:03:30Z | null | jiahuanluo |
huggingface/optimum | 1,134 | ValueError: ..set the option `trust_remote_code=True` to remove this error | ### System Info
```shell
- `optimum` version: 1.8.8
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)
- Tensorflow version (GPU?): not installed (cuda availabe: NA)
```
### Who can help?
Hello,
I am running the optimum cli command
`optimum-cli export onnx --model mosaicml/mpt-7b-chat --task text-generation mpt-7b-chat\`
and I am getting this error:
```
File "C:\Users\dutta\AppData\Local\Programs\Python\Python311\Lib\site-packages\transformers\dynamic_module_utils.py", line 553, in resolve_trust_remote_code
raise ValueError(
ValueError: Loading mosaicml/mpt-7b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option `trust_remote_code=True` to remove this error.
```
How do I deal with this error? @michaelbenayoun
Thanks
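Recent `optimum` versions expose a flag for this on the exporter itself; a hedged sketch (confirm with `optimum-cli export onnx --help` that your installed version supports it):

```shell
optimum-cli export onnx \
  --model mosaicml/mpt-7b-chat \
  --task text-generation \
  --trust-remote-code \
  mpt-7b-chat/
```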
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the same command, replacing the output directory name with a name of your choice.
### Expected behavior
I expect the command to run without error and produce the ONNX model and other files in the output directory | https://github.com/huggingface/optimum/issues/1134 | closed | [
"bug"
] | 2023-06-24T12:47:35Z | 2023-07-06T16:38:30Z | 5 | diptenduLF |
huggingface/chat-ui | 322 | Chat using WizardCoder | Hello,
Can you please post an example of .env.local for:
WizardLM/WizardCoder-15B-V1.0 | https://github.com/huggingface/chat-ui/issues/322 | open | [] | 2023-06-23T18:44:07Z | 2023-08-14T20:52:39Z | 2 | vitalyshalumov |
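A hedged starting point for such a `.env.local` entry, with the prompt tokens guessed from WizardCoder's Alpaca-style template (all values are assumptions; verify against the model card before use):

```
MODELS=`[
  {
    "name": "WizardLM/WizardCoder-15B-V1.0",
    "userMessageToken": "### Instruction:\n",
    "assistantMessageToken": "\n### Response:\n",
    "messageEndToken": "</s>",
    "preprompt": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n",
    "parameters": {
      "temperature": 0.2,
      "top_p": 0.95,
      "truncate": 1000,
      "max_new_tokens": 1024
    }
  }
]`
```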
huggingface/chat-ui | 321 | Chat-UI not loading Tailwind colors. | **Problem**
When specifying `PUBLIC_APP_COLOR` in either the `.env` or the `.env.local` file, the chat-UI color does not change regardless of which color is used. Even when `PUBLIC_APP_COLOR=blue` is set, as in this repository, the chat-UI color does not match TailwindCSS's blue color palette:
**TailwindCSS blue color palette:**
<img width="452" alt="blue" src="https://github.com/huggingface/chat-ui/assets/48559179/216923cf-6941-4629-b444-65a4930f3979">
**Chat-UI color palette:**
<img width="692" alt="chat" src="https://github.com/huggingface/chat-ui/assets/48559179/809aece3-3efe-4dd5-ac48-5cc0b6f32221">
**Observation**
Upon investigating the code, I noticed that the switchTheme.ts file contains the following code:
```
export function switchTheme() {
	const { classList } = document.querySelector("html") as HTMLElement;
	if (classList.contains("dark")) {
		classList.remove("dark");
		localStorage.theme = "light";
	} else {
		classList.add("dark");
		localStorage.theme = "dark";
	}
}
```
I think that instead of loading the Tailwind colors specified in either `.env` or `.env.local`, the chat-UI is actually using these `"light"` and `"dark"` themes. I couldn't find where these themes are specified in the repositories or if they can be changed at all.
**Requested Solution:**
I want to load the Tailwind colors by setting `PUBLIC_APP_COLOR` in `.env` and/or `.env.local`. However, if it turns out that the chat-UI loads colors based on the `"light"` and `"dark"` themes, adjusting those themes could also be a viable solution. Thank you in advance for your assistance. | https://github.com/huggingface/chat-ui/issues/321 | closed | [
"question",
"front"
] | 2023-06-23T15:54:43Z | 2023-09-18T13:12:15Z | null | ckanaar |
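For reference, one plausible way such an env variable could be wired into Tailwind is via the config file; this is a sketch of a `tailwind.config.cjs`, not necessarily chat-ui's actual config, and the `primary` key is hypothetical:

```js
const colors = require("tailwindcss/colors");

module.exports = {
  theme: {
    extend: {
      // Map the env variable onto a full Tailwind palette at build time;
      // changing PUBLIC_APP_COLOR would then require a rebuild, not a restart.
      colors: { primary: colors[process.env.PUBLIC_APP_COLOR || "blue"] },
    },
  },
};
```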
huggingface/peft | 622 | LoRA results in 4-6% lower performance compared to full fine-tuning | I am working on fine-tuning LLMs (6B to 40B parameters) using the LoRA framework on an instruction tuning dataset comprising instructions corresponding to ~20 tasks (a mix of factual as well as open-ended tasks). The input to the model consists of a conversation snippet between two individuals along with a task-specific prompt. The results I am observing do not align with the performance improvements reported in the [paper](https://arxiv.org/pdf/2106.09685.pdf). Specifically, the paper reports that fine-tuning using LoRA generally results in performance at par with or better than full fine-tuning of the model; however, throughout my experiments I observe performance lower than full fine-tuning by an absolute margin of ~4-6% in terms of RougeL score.
Sharing some of the training details below:
**[Framework versions]**
Python: 3.8
PyTorch: 1.13.1
Transformers: 4.27.4
PEFT: 0.3.0
**[Infrastructure]**
8 X A100 40 GB GPUs
**[Hyper-parameter Range]**
Learning rate: 5e-5 to 3e-3
Learning rate scheduler: [Constant, Linear]
Epochs: [1, 2]
Batch size: [2, 4, 8]
Weight decay: 0.0
Precision: bf16
Specifically, I tried fine-tuning of `google/flan-t5-xxl` model in following two scenarios:
- **Scenario 1**
Full fine-tuning with constant `learning rate = 5e-5`, `batch size = 8`, `epochs = 1`
- **Scenario 2**
Fine-tuning using LoRA with constant `learning rate = 1e-3`, `batch size = 8`, `epochs = 1` and LoraConfig as follows:
`LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, bias='none', task_type="SEQ_2_SEQ_LM")`
**Observation:** Scenario 2 resulted in 4% lower RougeL as compared to scenario 1. I have also tried tuning the hyper-parameters in Scenario 2 as per the range specified above, however, the best I could get is to a gap of ~4% RougeL.
Thank you very much for your time and consideration. Looking forward to any relevant insights here. | https://github.com/huggingface/peft/issues/622 | closed | [
"question"
] | 2023-06-23T10:50:24Z | 2023-07-24T12:12:18Z | null | digvijayingle016 |
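A commonly suggested direction for the gap described above is to adapt more modules and raise the rank. A hedged configuration sketch (the `target_modules` names assume flan-T5's layer naming and should be verified against the actual model):

```python
from peft import LoraConfig

peft_config = LoraConfig(
    r=16,                      # larger rank than the r=8 used above
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="SEQ_2_SEQ_LM",
    # Hypothetical broader coverage: attention and feed-forward projections.
    target_modules=["q", "k", "v", "o", "wi_0", "wi_1", "wo"],
)
```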
huggingface/setfit | 389 | gradient_accumulation | Is there a way in `SetFitTrainer` to change gradient accumulation, like you can with `TrainingArguments` in the regular `Trainer` class? Also, just in general, I am looking for tips to make training faster. | https://github.com/huggingface/setfit/issues/389 | closed | [
"question"
] | 2023-06-22T21:18:37Z | 2023-11-11T05:32:34Z | null | zackduitz |
huggingface/datasets | 5,982 | 404 on Datasets Documentation Page | ### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
### Environment info
hugginface.co | https://github.com/huggingface/datasets/issues/5982 | closed | [] | 2023-06-22T20:14:57Z | 2023-06-26T15:45:03Z | 2 | kmulka-bloomberg |
huggingface/chat-ui | 317 | Issues when trying to deploy on cPanel (shared hosting) | Hello there,
Is there something special to do to be able to deploy chat-ui on a shared hosting using cPanel?
I tried using the Node.JS Apps Manager as follows

But even when switching my entry point to server/index.js, it doesn't work.
I also tried to NPM install using the manager, but then it doesn't seem to be able to use vite, even when forcing any `npm install vite`...
So, if you could me out on this, it would be highly appreciated!
In advance, thanks a lot.
Regards,
Golluméo | https://github.com/huggingface/chat-ui/issues/317 | closed | [
"support"
] | 2023-06-22T17:32:00Z | 2023-09-18T13:12:53Z | 1 | gollumeo |
huggingface/transformers.js | 161 | [Question] whisper vs. ort-wasm-simd-threaded.wasm | While looking into https://cdn.jsdelivr.net/npm/@xenova/transformers@2.2.0/dist/transformers.js I can see a reference to **ort-wasm-simd-threaded.wasm** however that one never seem to be loaded for whisper/automatic-speech-recognition ( https://huggingface.co/spaces/Xenova/whisper-web ) while it always use **ort-wasm-simd.wasm** . I wonder if there is a way to enable or enforce threaded wasm and so improve transcription speed? | https://github.com/huggingface/transformers.js/issues/161 | open | [
"question"
] | 2023-06-22T06:41:31Z | 2023-08-15T16:36:01Z | null | jozefchutka |
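A possible way to opt in (hedged: the property path is an assumption about onnxruntime-web / transformers.js and may differ by version; threaded WASM additionally requires the page to be cross-origin isolated via COOP/COEP response headers, otherwise the runtime silently falls back to `ort-wasm-simd.wasm`):

```js
import { env } from "@xenova/transformers";

// Hypothetical knob: ask onnxruntime-web for multiple WASM threads.
// Only honored when self.crossOriginIsolated === true, i.e. the page is
// served with Cross-Origin-Opener-Policy: same-origin and
// Cross-Origin-Embedder-Policy: require-corp.
env.backends.onnx.wasm.numThreads = navigator.hardwareConcurrency || 4;
```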
huggingface/datasets | 5,975 | Streaming Dataset behind Proxy - FileNotFoundError | ### Describe the bug
When trying to stream a dataset, I get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I have already set the proxy environment variables. Downloading a dataset without streaming works as expected.
Still, I suspect that this is connected to being behind a proxy.
Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec?
### Steps to reproduce the bug
This is the code I use.
```python
import os
os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"
from datasets import load_dataset
ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```
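One thing worth checking first, with the standard library only, is that the lowercase `*_proxy` variables are actually visible the way most HTTP stacks expect (the proxy URL below is a placeholder). Note that `fsspec`'s HTTP filesystem uses `aiohttp`, which may ignore these variables unless its session is created with `trust_env=True`; whether `datasets` exposes a hook for that (e.g. via `storage_options`) depends on the version, so treat that as an assumption to verify:

```python
import os
import urllib.request

# Placeholder proxy, not a real endpoint.
os.environ["http_proxy"] = "http://proxy.example.com:8080"
os.environ["https_proxy"] = "http://proxy.example.com:8080"

# urllib reads the *_proxy environment variables.
proxies = urllib.request.getproxies()
print(proxies.get("https"))  # -> http://proxy.example.com:8080
```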
### Expected behavior
I would expect the streaming functionality to use the set proxy settings.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
| https://github.com/huggingface/datasets/issues/5975 | closed | [] | 2023-06-21T19:10:02Z | 2023-06-30T05:55:39Z | 9 | Veluchs |
huggingface/transformers.js | 158 | [Question] How do I use this library with ts-node? | I have a non-Web/browser-based project that uses TypeScript with ts-node.
The "pipeline" function attempts to use the JavaScript Fetch API, which is not included with NodeJS, and the code therefore fails with an error: "fetch is not defined."
The "node-fetch" package doesn't seem to provide a compatible API.
| https://github.com/huggingface/transformers.js/issues/158 | open | [
"question"
] | 2023-06-21T17:42:11Z | 2023-08-17T13:20:51Z | null | moonman239 |
huggingface/chat-ui | 314 | 500 Internal Error | 
| https://github.com/huggingface/chat-ui/issues/314 | closed | [
"question",
"support"
] | 2023-06-21T08:58:52Z | 2023-06-22T13:13:57Z | null | kasinadhsarma |
huggingface/datasets | 5,971 | Docs: make "repository structure" easier to find | The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.
It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages. | https://github.com/huggingface/datasets/issues/5971 | open | [
"documentation"
] | 2023-06-21T08:26:44Z | 2023-07-05T06:51:38Z | 5 | severo |
huggingface/chat-ui | 313 | MongoDB | I have a free teir MongoDB acount but not sure how to get url plz help | https://github.com/huggingface/chat-ui/issues/313 | closed | [
"support"
] | 2023-06-21T07:47:18Z | 2023-06-23T08:34:42Z | 5 | Toaster496 |
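For reference, a free-tier MongoDB Atlas connection string in `.env.local` typically looks like the sketch below. Every value is a placeholder; copy the real string from Atlas under Connect, then Drivers, and note that chat-ui's variable names may differ by version:

```
MONGODB_URL=mongodb+srv://<username>:<password>@cluster0.abcde.mongodb.net
MONGODB_DB_NAME=chat-ui
```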
huggingface/peft | 607 | trainer with multi-gpu | I want to use `trainer.predict` to run prediction on a dataset across multiple GPUs, but in practice only a single GPU is used.
When I print `Seq2SeqTrainingArguments`, I get:

It shows 8 GPUs.
I checked my code and, when I load the model, I found something strange:
`base_model.device`: cpu
The PeftModel is as follows:

It prints `cuda`.
How can I fix this?
| https://github.com/huggingface/peft/issues/607 | closed | [
"question"
] | 2023-06-20T08:58:37Z | 2023-07-28T15:03:31Z | null | hrdxwandg |
huggingface/chat-ui | 311 | Unable to build with Docker | Hey,
I'm trying to create a Docker container with Chat-UI, but I'm hitting a wall.
I cloned this repo into a folder on a server and modified the `.env` file, thinking that it would be easy to deploy a Docker container out of it, but I could not have been more wrong!
After trying to build my container with `docker build -t chat-ui .`, I ran into the same problem as [here](https://github.com/huggingface/chat-ui/issues/301).
I tried to build the Docker container both before and after running `npm install`, but I hit the exact same problem: this step cannot run in the Dockerfile:
```
RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local \
npm run build
```
At first I thought it was an issue with Docker not being able to run `npm install`, so I added `CMD npm install` at the beginning of my Dockerfile, but I ran into the same issue again. I'm guessing it has something to do with the Dockerfile itself.
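For what it's worth, the `RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local` line quoted above expects the env file to be supplied as a BuildKit secret at build time rather than copied into the image. A sketch of how that could be passed, assuming BuildKit is available:

```shell
DOCKER_BUILDKIT=1 docker build \
  --secret id=DOTENV_LOCAL,src=.env.local \
  -t chat-ui .
```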
To reproduce my error, here are the steps :
1. `git clone https://github.com/huggingface/chat-ui.git`
2. `cp .env .env.local `
3. modify my .env.local with my variables
4. `docker build -t chat-ui .`
Here is the error I'm getting when I launch the docker build command :
```
docker build -t chat-ui .
[+] Building 4.3s (16/17)
=> [internal] load .dockerignore 0.0s
=> => transferring context: 122B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 954B 0.0s
=> [internal] load metadata for docker.io/library/node:19 0.6s
=> [internal] load metadata for docker.io/library/node:19-slim 0.6s
=> [builder-production 1/4] FROM docker.io/library/node:19@sha256:92f06f 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 10.45kB 0.0s
=> [stage-2 1/5] FROM docker.io/library/node:19-slim@sha256:f58f1fcf5c9f 0.0s
=> CACHED [builder-production 2/4] WORKDIR /app 0.0s
=> CACHED [builder-production 3/4] COPY --link --chown=1000 package-lock 0.0s
=> CACHED [builder-production 4/4] RUN --mount=type=cache,target=/app/.n 0.0s
=> CACHED [builder 1/3] RUN --mount=type=cache,target=/app/.npm 0.0s
=> CACHED [builder 2/3] COPY --link --chown=1000 . . 0.0s
=> CACHED [stage-2 2/5] RUN npm install -g pm2 0.0s
=> CACHED [stage-2 3/5] COPY --from=builder-production /app/node_modules 0.0s
=> CACHED [stage-2 4/5] COPY --link --chown=1000 package.json /app/packa 0.0s
=> ERROR [builder 3/3] RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env. 3.7s
------
> [builder 3/3] RUN --mount=type=secret,id=DOTENV_LOCAL,dst=.env.local npm run build:
#0 0.622
#0 0.622 > chat-ui@0.3.0 build
#0 0.622 > vite build
#0 0.622
#0 0.831 ▲ [WARNING] Cannot find base config file "./.svelte-kit/tsconfig.json" [tsconfig.json]
#0 0.831
#0 0.831 tsconfig.json:2:12:
#0 0.831 2 │ "extends": "./.svelte-kit/tsconfig.json",
#0 0.831 ╵ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#0 0.831
#0 1.551
#0 1.551 vite v4.3.9 building SSR bundle for production...
#0 1.583 transforming...
#0 3.551 ✓ 165 modules transformed.
#0 3.551 ✓ built in 2.00s
#0 3.551 "PUBLIC_APP_ASSETS" is not exported by "$env/static/public", imported by "src/lib/components/icons/Logo.svelte".
#0 3.551 file: /app/src/lib/components/icons/Logo.svelte:3:10
#0 3.551 1: <script lang="ts">
#0 3.551 2: import { page } from "$app/stores";
#0 3.551 3: import { PUBLIC_APP_ASSETS, PUBLIC_APP_NAME, PUBLIC_ORIGIN } from "$env/static/public";
#0 3.551 ^
#0 3.551 4: import { base } from "$app/paths";
#0 3.553 error during build:
#0 3.553 RollupError: "PUBLIC_APP_ASSETS" is not exported by "$env/static/public", imported by "src/lib/components/icons/Logo.svelte".
#0 3.553 at error (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:2125:30)
#0 3.553 at Module.error (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:13452:16)
#0 3.553 at Module.traceVariable (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:13863:29)
#0 3.553 at ModuleScope.findVariable (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:12418:39)
#0 3.553 at ReturnValueScope.findVariable (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:6966:38)
#0 3.553 at ChildScope.findVariable (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:6966:38)
#0 3.553 at Identifier.bind (file:///app/node_modules/rollup/dist/es/shared/node-entry.js:8116:40 | https://github.com/huggingface/chat-ui/issues/311 | closed | [
"support"
] | 2023-06-19T15:11:36Z | 2023-09-18T13:14:04Z | 1 | samichaignonmejai |
huggingface/chat-ui | 310 | Dockerfile issue : can't modify .env.local before building the docker | Hey, I'm having an issue building chat-ui dockerfile.
Indeed, I have to point to my DB and my endpoints (or my HF token) in the `.env.local` file, but the file is built after running `npm install`; therefore I can't modify my `.env.local` before building my Docker image.
The issue is that connecting to MongoDB and to my endpoints (or using my HF token) is impossible if I don't modify the `.env.local` file.
I think it is possible, since coyotte508 (here https://github.com/huggingface/chat-ui/issues/204) mentioned that it is not possible to share a public container since it includes personal data, but said that it was possible to do so privately.
I already launched a database with Docker with `docker run -d -p 27017:27017 --name mongo-chatui mongo:latest` and I pointed the link of my database in my `.env` file prior to building the chat-ui Docker image, but it seems like it is not working (see the error below).
My questions are:
- how do I build the Docker image while pointing `.env.local` at my endpoints and my database?
- how can I link the database to avoid the following error?
Here is the error showing for the database after launching my Docker container:
```
docker run chat-ui
-------------
__/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
_\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
_\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
_\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
_\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
_\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
_\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
_\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
_\///______________\///______________\///__\///////////////__
Runtime Edition
PM2 is a Production Process Manager for Node.js applications
with a built-in Load Balancer.
Start and Daemonize any application:
$ pm2 start app.js
Load Balance 4 instances of api.js:
$ pm2 start api.js -i 4
Monitor in production:
$ pm2 monitor
Make pm2 auto-boot at server restart:
$ pm2 startup
To go further checkout:
http://pm2.io/
-------------
pm2 launched in no-daemon mode (you can add DEBUG="*" env variable to get more messages)
2023-06-19T09:18:59: PM2 log: Launching in no daemon mode
2023-06-19T09:18:59: PM2 log: [PM2] Starting /app/build/index.js in cluster_mode (0 instance)
2023-06-19T09:18:59: PM2 log: App [index:0] starting in -cluster mode-
2023-06-19T09:18:59: PM2 log: App [index:0] online
2023-06-19T09:18:59: PM2 log: App [index:1] starting in -cluster mode-
2023-06-19T09:18:59: PM2 log: App [index:1] online
2023-06-19T09:18:59: PM2 log: App [index:2] starting in -cluster mode-
2023-06-19T09:18:59: PM2 log: App [index:2] online
2023-06-19T09:18:59: PM2 log: App [index:3] starting in -cluster mode-
2023-06-19T09:18:59: PM2 log: App [index:3] online
2023-06-19T09:18:59: PM2 log: [PM2] Done.
2023-06-19T09:18:59: PM2 log: ┌────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ index │ default │ 0.3.0 │ cluster │ 19 │ 0s │ 0 │ online │ 0% │ 61.7mb │ root │ disabled │
│ 1 │ index │ default │ 0.3.0 │ cluster │ 26 │ 0s │ 0 │ online │ 0% │ 52.9mb │ root │ disabled │
│ 2 │ index │ default │ 0.3.0 │ cluster │ 33 │ 0s │ 0 │ online │ 0% │ 51.0mb │ root │ disabled │
│ 3 │ index │ default │ 0.3.0 │ cluster │ 44 │ 0s │ 0 │ online │ 0% │ 45.3mb │ root │ disabled │
└────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
2023-06-19T09:18:59: PM2 log: [--no-daemon] Continue to stream logs
2023-06-19T09:18:59: PM2 log: [--no-daemon] Exit on target PM2 exit pid=8
09:18:59 0|index | Listening on 0.0.0.0:3000
09:18:59 1|index | Listening on 0.0.0.0:3000
09:18:59 2|index | Listening on 0.0.0.0:3000
09:18:59 3|index | Listening on 0.0.0.0:3000
09:19:29 0|index | MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27017
09:19:29 0|index | at Timeout._onTimeout (/app/node_modules/mongodb/lib/sdam/topology.js:277:38)
09:19:29 0|index | at listOnTimeout (node:internal/timers:573:17)
09:19:29 0 | https://github.com/huggingface/chat-ui/issues/310 | open | [
"support"
] | 2023-06-19T10:48:04Z | 2023-07-05T03:09:16Z | 1 | samichaignonmejai |
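One note on the `ECONNREFUSED 127.0.0.1:27017` above: inside the chat-ui container, `127.0.0.1` refers to that container itself, not to the host running MongoDB. A hedged sketch putting both containers on one user-defined network and addressing Mongo by container name (names are illustrative, and whether `MONGODB_URL` can be injected at run time depends on how the image was built):

```shell
docker network create chat-net
docker run -d --network chat-net --name mongo-chatui mongo:latest
docker run --network chat-net \
  -e MONGODB_URL="mongodb://mongo-chatui:27017" \
  -p 3000:3000 chat-ui
```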
huggingface/chat-ui | 309 | 'Task not found in this model' when running another model | Hello there,
I tried to change the original model to guanaco-33b (I also tried the 65b), but I always end up getting the error "Task not found in this model".
Here's what I changed in the .env:
```.env
MODELS=`[
  {
    "name": "timdettmers/guanaco-33b",
    "datasetName": "timdettmers/openassistant-guanaco",
    "description": "",
    "websiteUrl": "",
    "userMessageToken": "<|prompter|>",
    "assistantMessageToken": "<|assistant|>",
    "messageEndToken": "</s>",
    "preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
    "promptExamples": [
      {
        "title": "Write an email from bullet list",
        "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
      }, {
        "title": "Code a snake game",
        "prompt": "Code a basic snake game in python, give explanations for each step."
      }, {
        "title": "Assist in a task",
        "prompt": "How do I make a delicious lemon cheesecake?"
      }
    ],
    "parameters": {
      "temperature": 0.9,
      "top_p": 0.95,
      "repetition_penalty": 1.2,
      "top_k": 50,
      "truncate": 1000,
      "max_new_tokens": 1024
    }
  }
]`
```
Any ideas about this one? It works fine in the dedicated playground.
In advance, thanks a lot!
Regards, | https://github.com/huggingface/chat-ui/issues/309 | closed | [
"support",
"models"
] | 2023-06-19T09:42:41Z | 2023-06-23T12:27:50Z | 1 | gollumeo |
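"Task not found" usually means the hosted Inference API has no text-generation pipeline registered for that repository, so the chat backend cannot call it there even though the model card playground works. A hedged workaround sketch for `.env.local` is to point the model entry at your own `text-generation-inference` server through the `endpoints` field (the URL is a placeholder, and field names may differ across chat-ui versions):

```
MODELS=`[
  {
    "name": "timdettmers/guanaco-33b",
    "endpoints": [{ "url": "http://127.0.0.1:8080/generate_stream" }]
  }
]`
```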
huggingface/chat-ui | 308 | 'Task not found' when trying to use the guacano-33b model | Hello there,
I tried to change the original model so my team can work with the guanaco-33b model, but now I always end up getting "Task not found for this model" errors.
Here's what I changed on the .env:
```.env
MODELS=`[
{
"name": "timdettmers/guanaco-33b",
"datasetName": "timdettmers/openassistant-guanaco",
"description": "",
"websiteUrl": "",
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",
"preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n",
"promptExamples": [
{
"title": "Write an email from bullet list",
"prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)"
}, {
"title": "Code a snake game",
"prompt": "Code a basic snake game in python, give explanations for each step."
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
}
],
"parameters": {
"temperature": 0.9,
"top_p": 0.95,
"repetition_penalty": 1.2,
"top_k": 50,
"truncate": 1000,
"max_new_tokens": 1024
}
}
]`
```
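(For reference, "Task not found" typically means the hosted Inference API does not serve this model for the `text-generation` task; a model this large generally needs an explicit `endpoints` entry pointing at your own text-generation-inference deployment. A sketch, with a placeholder URL, added inside the model object:)

```
"endpoints": [
  { "url": "http://127.0.0.1:8080", "weight": 1 }
]
```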
Any ideas about that one?
In advance, thanks a lot!
Regards, | https://github.com/huggingface/chat-ui/issues/308 | closed | [] | 2023-06-19T09:38:55Z | 2023-06-19T09:39:08Z | 0 | gollumeo |
huggingface/chat-ui | 307 | Add API endpoints documentation | We want to make it easy for people to build cool apps on top of chat-ui, and this requires API specs that are easily accessible.
I'm not sure what tools are available in the sveltekit ecosystem for this. My first guess would be to generate an openAPI spec somehow from our server endpoints (or do it manually if that isn't possible with sveltekit?) and pass the spec to a tool like [swagger-ui](https://github.com/swagger-api/swagger-ui) so we can display them somewhere.
This would help with issues like #299 and other requests I've received about API specs. | https://github.com/huggingface/chat-ui/issues/307 | open | [
"documentation",
"enhancement",
"back",
"p2"
] | 2023-06-19T09:08:19Z | 2024-05-29T13:43:10Z | 5 | nsarrazin |
huggingface/api-inference-community | 295 | What is the rate limit for the Inference API for Pro users? | What is the rate limit for the Inference API for Pro users?
Also, can we use the endpoint for production, which would make 3 to 10 RPS? | https://github.com/huggingface/api-inference-community/issues/295 | closed | [] | 2023-06-18T07:17:23Z | 2023-06-19T09:01:02Z | null | bigint |
huggingface/chat-ui | 304 | Code blocks | How do code blocks like the one in the attached image work under the hood?
Is it the model that generates ``` & it gets detected and converted to code?
Or is it the UI/Backend that detects code and converts it to look like a code block?
<img width="434" alt="Screenshot 2023-06-17 at 3 26 39 PM" src="https://github.com/huggingface/chat-ui/assets/62820084/d5b79272-d3d9-46c5-9761-e38515f3c73c">
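For what it's worth, the usual answer is both: the model emits plain markdown fences in its text output, and the UI's markdown renderer detects and styles them. A minimal sketch of the detection side (illustrative only, not chat-ui's actual implementation):

```python
import re

def find_code_blocks(markdown_text):
    # Fenced blocks are just ``` pairs in the model's raw output; a renderer
    # extracts the optional language tag and the body between the fences.
    return re.findall(r"```(\w*)\n(.*?)```", markdown_text, flags=re.S)

blocks = find_code_blocks("Here you go:\n```python\nprint('hi')\n```")
assert blocks == [("python", "print('hi')\n")]
```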
| https://github.com/huggingface/chat-ui/issues/304 | closed | [
"question"
] | 2023-06-17T13:27:20Z | 2023-09-18T13:17:47Z | null | Muennighoff |
huggingface/optimum | 1,118 | Corrupted tflite weights when exporting a model from Hugging Face | ### System Info
```shell
System: MacOS
Onnx: 1.14
tensorflow: 2.11
While converting a model from Hugging Face to tflite using huggingface-cli, the conversion ran okay, but later during inference (in Python and on an edge device) the model started producing random results, as if it wasn't trained at all.
The weights appear to be corrupted.
```
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
minimum reproducible example
`optimum-cli export tflite --model unitary/toxic-bert --sequence_length 128 toxic_bert/`
After the tflite conversion is done, simply run inference in Python using the WordPiece BERT tokenizer.
Detailed logs while conversion process
```
2023-06-17 02:53:29.604798: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
/Users/saurabhkumar/opt/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
2023-06-17 02:53:54.973334: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
/Users/saurabhkumar/opt/anaconda3/lib/python3.7/site-packages/sklearn/externals/joblib/externals/cloudpickle/cloudpickle.py:47: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
Loading PyTorch model in TensorFlow before exporting.
2023-06-17 02:54:06.422503: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertForSequenceClassification: ['bert.embeddings.position_ids']
- This IS expected if you are initializing TFBertForSequenceClassification from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFBertForSequenceClassification from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of TFBertForSequenceClassification were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForSequenceClassification for predictions without further training.
Using TensorFlow: 2.11.0
Overriding 1 configuration item(s)
- use_cache -> False
WARNING:absl:Found untraced functions such as embeddings_layer_call_fn, embeddings_layer_call_and_return_conditional_losses, encoder_layer_call_fn, encoder_layer_call_and_return_conditional_losses, pooler_layer_call_fn while saving (showing 5 of 420). These functions will not be directly callable after loading.
2023-06-17 02:55:02.650365: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2023-06-17 02:55:02.650918: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
2023-06-17 02:55:02.652373: I tensorflow/cc/saved_model/reader.cc:45] Reading SavedModel from: /var/folders/q4/dklkx0m970scm0m4w3m_nzvc0000gn/T/tmpod9lpuk_
2023-06-17 02:55:02.718684: I tensorflow/cc/saved_model/reader.cc:89] Reading meta graph with tags { serve }
2023-06-17 02:55:02.718712: I tensorflow/cc/saved_model/reader.cc:130] Reading SavedModel debug info (if present) from: /var/folders/q4/dklkx0m970scm0m4w3m_nzvc0000gn/T/tmpod9lpuk_
2023-06-17 02:55:02.945563: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:357] MLIR V1 optimization pass is not enabled
2023-06-17 02:55:02.997217: I tensorflow/cc/saved_model/loader.cc:229] Restoring SavedModel bundle.
2023-06-17 02:55:03.837625: I tensorflow/cc/saved_model/loader.cc:213] Running initialization op on SavedModel bundle at path: /var/folders/q4/dklkx0m970scm0m4w3m_nzvc0000gn/T/tmpod9l | https://github.com/huggingface/optimum/issues/1118 | open | [
"bug"
] | 2023-06-16T18:56:06Z | 2023-06-19T05:18:10Z | 1 | saurabhkumar8112 |
huggingface/pytorch-pretrained-BigGAN | 20 | Is the model trained on truncated noise? What was input noise vector characteristics for training? | Hi,
I noticed that in `utils.py` line 32 you truncate the normal noise to the range [-2, 2] with this line of code:
`values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state).astype(np.float32)`
Could you please let me know whether the pre-trained model is also trained using this truncated noise? If not, could you please let me know the characteristics of the input noise vectors during training your model? Thanks!
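For anyone who wants to inspect the sampler, it can be reproduced standalone. A sketch (the `truncation` scale factor mirrors how the repo applies it; whether the released checkpoints were *trained* on truncated noise is exactly the open question here):

```python
import numpy as np
from scipy.stats import truncnorm

def truncated_noise_sample(batch_size=1, dim_z=128, truncation=1.0, seed=None):
    # Same call as utils.py line 32: a standard normal restricted to [-2, 2],
    # then scaled by the truncation factor.
    state = None if seed is None else np.random.RandomState(seed)
    values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state).astype(np.float32)
    return truncation * values

z = truncated_noise_sample(4, 128, truncation=0.4, seed=0)
assert z.shape == (4, 128)
assert float(np.abs(z).max()) <= 0.8  # all samples lie within truncation * [-2, 2]
```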
| https://github.com/huggingface/pytorch-pretrained-BigGAN/issues/20 | open | [] | 2023-06-16T08:02:52Z | 2023-06-16T08:02:52Z | null | MHVali |
huggingface/chat-ui | 301 | Error when deploying on a distant server : Cannot find base config file "./.svelte-kit/tsconfig.json" | Hey,
I'm having troubles deploying HuggingChat on a distant server, when I run HuggingChat, I get the following error :
```
ai@1.0.0 start-chat-ui
> cd ../chat-ui && npm run dev -- --host 127.0.0.1
> chat-ui@0.3.0 dev
> vite dev --host 127.0.0.1
▲ [WARNING] Cannot find base config file "./.svelte-kit/tsconfig.json" [tsconfig.json]
tsconfig.json:2:12:
2 │ "extends": "./.svelte-kit/tsconfig.json",
╵ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
failed to load config from /home/paperspace/***/chat-ui/vite.config.ts
error when starting dev server:
Error [ERR_MODULE_NOT_FOUND]: Cannot find package 'unplugin-icons' imported from /home/paperspace/***/chat-ui/vite.config.ts.timestamp-1686857376175-9d68e4b73b2d7.mjs
at new NodeError (node:internal/errors:405:5)
at packageResolve (node:internal/modules/esm/resolve:781:9)
at moduleResolve (node:internal/modules/esm/resolve:830:20)
at defaultResolve (node:internal/modules/esm/resolve:1035:11)
at DefaultModuleLoader.resolve (node:internal/modules/esm/loader:269:12)
at DefaultModuleLoader.getModuleJob (node:internal/modules/esm/loader:153:32)
at ModuleWrap.<anonymous> (node:internal/modules/esm/module_job:76:33)
at link (node:internal/modules/esm/module_job:75:36)
```
I tried to reinstall svelte but I can't understand where this warning comes from as I have the latest version and my file tsconfig.json exists in the installation folder of svelte...
I tried to modify the package.json as suggested here https://github.com/sveltejs/kit/issues/7028 but it still doesn't work properly...
Does anyone have an idea why I'm still having this issue?
| https://github.com/huggingface/chat-ui/issues/301 | closed | [
"support"
] | 2023-06-15T19:55:36Z | 2023-06-19T10:50:26Z | 2 | samichaignonmejai |
huggingface/transformers.js | 150 | [Question] How to use transformers.js like the python sentence_transformers library? | Hello all,
Thanks for this great library. I've just discovered it and I'm familiar with the python sentence_transformers module. I know from experience that sentence_transformers wraps a lot of the complexity compared to using transformers directly.
Can you point to an example of using this to replace python's sentence_transformers for semantic search document and question embedding? Does this solution handle the tokenization and attention windows automatically like sentence_transformers, or do I need to break my inputs into chunks, process them separately, and then mean pool them back together or something?
Thanks,
Dave
| https://github.com/huggingface/transformers.js/issues/150 | closed | [
"question"
] | 2023-06-15T15:30:49Z | 2023-06-18T15:17:04Z | null | davidtbo |
huggingface/chat-ui | 299 | Using HuggingChat in a JavaScript/node.js setting? | Hi, I'm not sure whether this is relevant here, but I'd like to use the HuggingChat in a personal web design project, and I'd like to access it through REST/axios, similar to this [here](https://stackoverflow.com/questions/75714587/node-js-turn-hugging-face-image-response-to-buffer-and-send-as-a-discord-attac) (stable diffusion hugging face example)
So far the only thing I could find was the [HuggingChat Python API](https://github.com/Soulter/hugging-chat-api), and I'm not really sure how to use that for what I'm looking for. Can anyone help?
| https://github.com/huggingface/chat-ui/issues/299 | closed | [] | 2023-06-15T02:59:29Z | 2023-09-18T13:19:32Z | 3 | VatsaDev |
huggingface/chat-ui | 297 | Is there a way to deploy without the HF token ? | I'm trying to use chat-ui with my own endpoints and I would like to know if I can get rid of the HF_ACCESS_TOKEN variable and also allow to run every model I want.
I tried to modify the TS in modelEndpoint.ts and model.ts but I can't figure how to run it independently to HF (I want it offline), here are the parts I suspect to prevent me from doing it :
modelEndpoint.ts :
```
if (!model.endpoints) {
return {
			url: `https://api-inference.huggingface.co/models/${model.name}`,
			authorization: `Bearer ${HF_ACCESS_TOKEN}`,
weight: 1,
};
}
```
model.ts :
```
endpoints: z
.array(
z.object({
url: z.string().url(),
authorization: z.string().min(1).default(`Bearer ${HF_ACCESS_TOKEN}`),
weight: z.number().int().positive().default(1),
})
)
```
Any thoughts about this ?
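For reference, a hedged sketch of a fully self-hosted `.env.local` (names, URL, and dummy values are placeholders): once every model carries its own `endpoints` entry with an explicit `authorization`, the Inference API fallback branch in modelEndpoint.ts is never taken, so the token only needs a dummy value to satisfy the schema default.

```env
HF_ACCESS_TOKEN=dummy-not-used
MODELS=`[
  {
    "name": "my-local-model",
    "endpoints": [
      { "url": "http://127.0.0.1:8080", "authorization": "Bearer none", "weight": 1 }
    ]
  }
]`
```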
| https://github.com/huggingface/chat-ui/issues/297 | closed | [
"support"
] | 2023-06-14T12:11:04Z | 2023-06-15T09:52:39Z | 2 | samichaignonmejai |
huggingface/chat-ui | 296 | Issue when deploying model : Error in 'stream': 'stream' is not supported for this model | I'm trying to use bigscience/bloom-560m with chat-ui
I already have an API for the model and it's working well, same for chat-ui when I use my HF token, but I get the following error message when I send a request to my bloom-560m API from chat-ui:
```
Could not parse last message {"error":["Error in `stream`: `stream` is not supported for this model"]}
SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at parseGeneratedText (/src/routes/conversation/[id]/+server.ts:180:32)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async saveMessage (/src/routes/conversation/[id]/+server.ts:95:26)
```
I tried modifying the URL of my API from http://xxx.xxx.x.xxx:8080/generate_stream to http://xxx.xxx.x.xxx:8080/generate but it is not working as well ... any thoughts about this ? | https://github.com/huggingface/chat-ui/issues/296 | closed | [
"support",
"models"
] | 2023-06-14T09:04:07Z | 2023-06-19T10:57:01Z | 2 | samichaignonmejai |
huggingface/datasets | 5,951 | What is the Right way to use discofuse dataset?? | [Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6)
**Below is the way I understand it. Is it correct? :question:**
The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** are:
1. **coherent_first_sentence**
2. **coherent_second_sentence**
3. **incoherent_first_sentence**
4. **incoherent_second_sentence**
The **`encoder` will take these four columns as input and encode them into a sequence of hidden states. The `decoder` will then take these hidden states as input and decode them into a new sentence that fuses the two original sentences together.**
The **discourse type, connective_string, has_coref_type_pronoun, and has_coref_type_nominal columns will not be used as input to the encoder or decoder.** These columns are used to provide additional information about the dataset, but they are not necessary for the task of sentence fusion.
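A hedged sketch of one common setup for the practical side: the two incoherent sentences form the encoder input, and the fused coherent text is the decoder target (column names follow the dataset viewer):

```python
def make_fusion_example(row):
    # Encoder input: the two incoherent sentences concatenated.
    # Decoder target: the fused coherent text (second field may be empty).
    source = f'{row["incoherent_first_sentence"]} {row["incoherent_second_sentence"]}'.strip()
    target = f'{row["coherent_first_sentence"]} {row["coherent_second_sentence"]}'.strip()
    return {"source": source, "target": target}

row = {
    "incoherent_first_sentence": "He was tired.",
    "incoherent_second_sentence": "He kept working.",
    "coherent_first_sentence": "Although he was tired, he kept working.",
    "coherent_second_sentence": "",
}
example = make_fusion_example(row)
assert example["source"] == "He was tired. He kept working."
assert example["target"] == "Although he was tired, he kept working."
```

These (source, target) pairs can then be tokenized and fed to any encoder-decoder model for training.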
Please correct me if I am wrong; otherwise, if this understanding is right, how shall I implement this task practically? | https://github.com/huggingface/datasets/issues/5951 | closed | [] | 2023-06-14T08:38:39Z | 2023-06-14T13:25:06Z | null | akesh1235 |
huggingface/chat-ui | 295 | Facing an issue using a custom model deployed locally on Flask | I have a chat model which responds at
```
from flask import Flask, request  # imports added for completeness

app = Flask(__name__)

# function for the bot response (T is the user's model wrapper)
@app.route("/get")
def get_bot_response():
    userText = request.args.get('msg')
    data = T.getResponse(userText)
    return str(data)
```
I'm not sure about the configuration, but I have added `MODELS=[{"name": "mymodel", "endpoints": [{"url": "http://127.0.0.1:5000/get"}]}]` in the `.env.local` file.
Getting following error:

Can someone please help me to configure my local model with HuggingChat chat-ui
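For reference, chat-ui speaks the text-generation-inference protocol rather than a free-form GET route. Below is a hedged, minimal sketch (not chat-ui's actual code) of wrapping a model behind a TGI-shaped, non-streaming `/generate` route; `T.getResponse` is assumed to be your model call and is replaced by an echo here:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    # TGI-style request body: {"inputs": "...", "parameters": {...}}
    payload = request.get_json(force=True)
    prompt = payload.get("inputs", "")
    # Replace this echo with your model, e.g. data = T.getResponse(prompt)
    data = f"echo: {prompt}"
    # TGI's /generate responds with {"generated_text": "..."}
    return jsonify({"generated_text": data})
```

Note that chat-ui's default path streams from `/generate_stream` (server-sent events), so a non-streaming route is only part of the story; running a text-generation-inference container in front of the model is usually the simpler path.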
| https://github.com/huggingface/chat-ui/issues/295 | closed | [
"support"
] | 2023-06-14T08:20:41Z | 2023-07-24T10:53:41Z | 6 | awsum0225 |
huggingface/optimum | 1,106 | Onnxruntime support for multiple modalities model types | ### Feature request
Add support for layout and multi-modal models (e.g. LayoutLM, LayoutLMv3, LILT) to the ORTModels.
### Motivation
ORTModels allow interacting with onnxruntime models in the same way as the transformers API, which is very convenient, as optimum is part of the huggingface ecosystem and compatibility between all the components is crucial. But unfortunately, ORTModels currently do not support models that accept multiple modalities, e.g. text+layout or text+layout+image. As of now only _input_ids, attention_mask and token_type_ids_ are processed in _**ORTModelForFeatureExtraction, ORTModelForQuestionAnswering, ORTModelForSequenceClassification, ORTModelForTokenClassification**_ in modeling_ort.py.
### Your contribution
I can submit a PR, but since there are a lot of ways how this can be implemented - I would like to agree how to do this better.
For example:
**first way:**
* Implement it similarly to how `AutoModel*` works in transformers: keep a mapping from the model type to the ORT model class that suits it.
`{ "bert": OrtModelForTokenCkassification,
"roberta": OrtModelForTokenCkassification,
"layoutlm": OrtLayoutLMForTokenClassification
}`
For the models that are not text-only we will need to add a separate class and substitute it when initializing the ORTModel* with the corresponding model, while for the models that are already supported nothing will change.
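The first proposal can be sketched with a plain dictionary; the class names here are illustrative placeholders rather than existing optimum classes:

```python
# Map model_type -> ORT class name; unknown architectures fall back to the
# current text-only class, so existing behavior is unchanged.
ORT_CLASS_MAPPING = {
    "bert": "ORTModelForTokenClassification",
    "roberta": "ORTModelForTokenClassification",
    "layoutlm": "ORTLayoutLMForTokenClassification",
}

def resolve_ort_class(model_type):
    return ORT_CLASS_MAPPING.get(model_type, "ORTModelForTokenClassification")

assert resolve_ort_class("layoutlm") == "ORTLayoutLMForTokenClassification"
assert resolve_ort_class("distilbert") == "ORTModelForTokenClassification"
```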
**second way:**
* Add mapping for the model name as key and input attr as value:
`{ "bert": ["input_ids", "attention_mask", "token_type_ids"],
"layoutlm": ["input_ids", "attention_mask", "token_type_ids", "bbox"]
}`
* Substitute the model inputs with the given model map in modeling_ort.py. | https://github.com/huggingface/optimum/issues/1106 | open | [
"feature-request",
"onnxruntime"
] | 2023-06-13T14:30:10Z | 2023-06-14T11:10:49Z | 0 | mariababich |
huggingface/optimum | 1,105 | IO Binding for ONNX Non-CUDAExecutionProviders | ### Feature request
When using use_io_binding=True with TensorrtExecutionProvider, a warning appears :
```
No need to enable IO Binding if the provider used is not CUDAExecutionProvider. IO Binding will be turned off.
```
I don't understand the reason for this, as data movement optimization should also work for TensorrtExecutionProvider at least. If this is not possible, can someone explain the reason? Thank you.
### Motivation
Being able to decouple data movement between CPU DRAM and GPU DRAM from computation makes it possible to overlap computation with communication.
### Your contribution
Theoretically, the iobinding implementation for CUDAExecutionProvider should work for TensorrtExecutionProvider too. | https://github.com/huggingface/optimum/issues/1105 | open | [
"help wanted",
"onnxruntime"
] | 2023-06-13T14:11:31Z | 2023-09-26T11:47:17Z | 5 | cyang49 |
huggingface/datasets | 5,946 | IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ?? | ### Describe the bug
in <cell line: 1>:1 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train │
│ │
│ 1534 │ │ inner_training_loop = find_executable_batch_size( │
│ 1535 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1536 │ │ ) │
│ ❱ 1537 │ │ return inner_training_loop( │
│ 1538 │ │ │ args=args, │
│ 1539 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1540 │ │ │ trial=trial, │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1789 in _inner_training_loop │
│ │
│ 1786 │ │ │ │ rng_to_sync = True │
│ 1787 │ │ │ │
│ 1788 │ │ │ step = -1 │
│ ❱ 1789 │ │ │ for step, inputs in enumerate(epoch_iterator): │
│ 1790 │ │ │ │ total_batched_samples += 1 │
│ 1791 │ │ │ │ if rng_to_sync: │
│ 1792 │ │ │ │ │ self._load_rng_state(resume_from_checkpoint) │
│ │
│ /usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py:377 in __iter__ │
│ │
│ 374 │ │ dataloader_iter = super().__iter__() │
│ 375 │ │ # We iterate one batch ahead to check when we are at the end │
│ 376 │ │ try: │
│ ❱ 377 │ │ │ current_batch = next(dataloader_iter) │
│ 378 │ │ except StopIteration: │
│ 379 │ │ │ yield │
│ 380 │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:633 in __next__ │
│ │
│ 630 │ │ │ if self._sampler_iter is None: │
│ 631 │ │ │ │ # TODO(https://github.com/pytorch/pytorch/issues/76750) │
│ 632 │ │ │ │ self._reset() # type: ignore[call-arg] │
│ ❱ 633 │ │ │ data = self._next_data() │
│ 634 │ │ │ self._num_yielded += 1 │
│ 635 │ │ │ if self._dataset_kind == _DatasetKind.Iterable and \ │
│ 636 │ │ │ │ │ self._IterableDataset_len_called is not None and \ │
│ │
│ /usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:677 in _next_data │
│ │
│ 674 │ │
│ 675 │ def _next_data(self): │
│ 676 │ │ index = self._next_index() # may raise StopIteration │
│ ❱ 677 │ │ data = self._dataset_fetcher.fetch(index) # may raise StopIteration │
│ 678 │ │ if self._pin_memory: | https://github.com/huggingface/datasets/issues/5946 | open | [] | 2023-06-13T07:34:15Z | 2023-07-14T12:04:48Z | 6 | syngokhan |
huggingface/safetensors | 273 | Issue with Loading Model in safetensors Format | ### System Info
- `transformers` version: 4.30.1
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Reproduction
I'm trying to load a model saved in safetensors format using the Transformers library. Here's the code I'm using:
```python
from transformers import LlamaForCausalLM, LlamaTokenizer
tokenizer = LlamaTokenizer.from_pretrained("path/to/model")
model = LlamaForCausalLM.from_pretrained("path/to/model", use_safetensors=True)
```
However, I'm running into this error:
```
Traceback (most recent call last):
File "/Users/maxhager/Projects2023/nsfw/model_run.py", line 4, in <module>
model = LlamaForCausalLM.from_pretrained("path/to/model", use_safetensors=True)
File "/Users/maxhager/.virtualenvs/nsfw/lib/python3.11/site-packages/transformers/modeling_utils.py", line 2449, in from_pretrained
raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory path/to/model.
```
In my model directory, I have the following files (its [this](https://huggingface.co/notstoic/pygmalion-13b-4bit-128g) model locally):
- 4bit-128g.safetensors
- config.json
- generation_config.json
- pytorch_model.bin.index.json
- special_tokens_map.json
- tokenizer.json
- tokenizer.model
- tokenizer_config.json
### Expected behavior
I would expect that setting use_safetensors=True would inform the from_pretrained method to load the model from the safetensors format. However, it appears the method is looking for the usual model file formats (pytorch_model.bin, tf_model.h5, etc) instead of recognizing the safetensors format.
I'm looking for a solution or guidance on how to successfully load a model stored in the safetensors format using the Transformers library. | https://github.com/huggingface/safetensors/issues/273 | closed | [
"Stale"
] | 2023-06-12T21:25:33Z | 2024-03-08T13:28:30Z | 11 | yachty66 |
huggingface/transformers.js | 144 | Question-Answer Examples | Can you please send us an example of question answering? | https://github.com/huggingface/transformers.js/issues/144 | closed | [
"question"
] | 2023-06-09T21:54:37Z | 2023-06-09T22:59:17Z | null | Zenyker |
huggingface/optimum | 1,095 | Installation issue on Openvino NNcf | ### System Info
```shell
LINUX WSL 2
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
OPTIMUM
Name: optimum
Version: 1.8.6
Summary: Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party libraries from Hardware Partners and interface with their specific functionality.
Home-page: https://github.com/huggingface/optimum
Author: HuggingFace Inc. Special Ops Team
Author-email: hardware@huggingface.co
License: Apache
Location: /home/debayan/CT_with_LLM/opvino/lib/python3.11/site-packages
Requires: coloredlogs, datasets, huggingface-hub, numpy, packaging, sympy, torch, torchvision, transformers
PYTHON
3.11.3
```
### Who can help?
@echarlaix, while trying to install openvino nncf, I am getting this issue and cannot figure out how to fix it.
The hardware is Intel, hence this approach. I am trying to optimize the BLIP model for image captioning.
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-f8b_3uou/onnx_22d50665ccb74d03a417ba4977874f9c/setup.py", line 318, in <module>
raise FileNotFoundError("Unable to find " + requirements_file)
FileNotFoundError: Unable to find requirements.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
python -m pip install optimum[openvino,nncf] is failing with the issue
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [7 lines of output]
fatal: not a git repository (or any of the parent directories): .git
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-_g6qzuag/onnx_aebd33cd3ee44e7daf5f0a07afd43101/setup.py", line 318, in <module>
raise FileNotFoundError("Unable to find " + requirements_file)
FileNotFoundError: Unable to find requirements.txt
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
### Expected behavior
installation should be successful | https://github.com/huggingface/optimum/issues/1095 | closed | [
"bug"
] | 2023-06-09T09:55:45Z | 2024-01-05T11:10:06Z | 5 | DebayanChakraborty |
huggingface/transformers.js | 140 | [Question] OrtRun error code 6 with a longer string for question-answering | Why do I keep running into an OrtRun error code 6 with a longer string for the question-answering task:
```js
const result = await model(question, context, {
  padding: true,
  truncation: true,
});
```
Error:
```
models.js:158 An error occurred during model execution: "Error: failed to call OrtRun(). error code = 6.".
models.js:159 Inputs given to model:
{input_ids: Proxy(Tensor), attention_mask: Proxy(Tensor), token_type_ids: Proxy(Tensor)}
attention_mask: Proxy(Tensor) {dims: Array(2), type: 'int64', data: BigInt64Array(550), size: 550}
input_ids: Proxy(Tensor) {dims: Array(2), type: 'int64', data: BigInt64Array(550), size: 550}
token_type_ids: Proxy(Tensor) {dims: Array(2), type: 'int64', data: BigInt64Array(550), size: 550}
ort-web.min.js:6 Uncaught (in promise) Error: failed to call OrtRun(). error code = 6.
at Object.run (ort-web.min.js:6:454854)
at ort-web.min.js:6:444202
at Object.run (ort-web.min.js:6:447121)
at InferenceSession.run (inference-session-impl.js:91:1)
at sessionRun (models.js:153:1)
at Function._call (models.js:639:1)
at Function._call (models.js:1091:1)
at Function.closure [as model] (core.js:62:1)
at Function._call (pipelines.js:253:1)
at closure (core.js:62:1)
(anonymous) @ ort-web.min.js:6
(anonymous) @ ort-web.min.js:6
run @ ort-web.min.js:6
run @ inference-session-impl.js:91
sessionRun @ models.js:153
_call @ models.js:639
_call @ models.js:1091
closure @ core.js:62
_call @ pipelines.js:253
closure @ core.js:62
(anonymous) @ background.js:146
await in (anonymous) (async)
``` | https://github.com/huggingface/transformers.js/issues/140 | closed | [
"bug",
"question"
] | 2023-06-09T04:07:28Z | 2023-07-11T11:07:26Z | null | iamfiscus |
huggingface/datasets | 5,931 | `datasets.map` not reusing cached copy by default | ### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, for the `map` operation, the transform is applied again and the cached copy is not picked up. Is there any way to pick up the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions for the same?
One more thing: my dataset occupies 6 GB of disk space after I use `map`. Is there any way I can reduce that?
### Steps to reproduce the bug
```
# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
self.raw_datasets = self.raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
)
vectorized_datasets = self.raw_datasets.map(
self.prepare_dataset,
remove_columns=next(iter(self.raw_datasets.values())).column_names,
num_proc=self.num_workers,
desc="preprocess datasets",
)
# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
self.is_audio_in_length_range,
num_proc=self.num_workers,
input_columns=["input_length"],
)
def prepare_dataset(self, batch):
# load audio
sample = batch["audio"]
inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
return batch
```
### Expected behavior
I would expect `map` to use the cached copy and, if possible, an alternative technique to reduce memory usage after using `map`.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
| https://github.com/huggingface/datasets/issues/5931 | closed | [] | 2023-06-07T09:03:33Z | 2023-06-21T16:15:40Z | 1 | bhavitvyamalik |
huggingface/chat-ui | 282 | OpenID login | How do I get the providerURL, client ID, and client token to set up an Azure OpenID login? | https://github.com/huggingface/chat-ui/issues/282 | closed | [
"support"
] | 2023-06-06T10:45:46Z | 2023-06-19T09:38:34Z | 1 | sankethgadadinni |
huggingface/transformers.js | 137 | [Question] Failed to fetch onnx model when using AutoModel.from_pretrained | **The code here:**
```
import { AutoModel, AutoTokenizer } from '@xenova/transformers';
const modelPath = 'Xenova/distilgpt2'
let tokenizer = await AutoTokenizer.from_pretrained(modelPath); // **fetches the model files successfully**
let model = await AutoModel.from_pretrained(modelPath); // **fails to fetch the model**
let inputs = await tokenizer('I love transformers!');
let { logits } = await model(inputs);
```
**Error information:**
file:///Users/xxx/Documents/github/transformers.js/examples/node/esm/node_modules/@xenova/transformers/src/utils/hub.js:223
throw Error(`Could not locate file: "${remoteURL}".`)
^
Error: Could not locate file: "https://huggingface.co/Xenova/distilgpt2/resolve/main/onnx/model_quantized.onnx".
at handleError (file:///Users/xxx/Documents/github/transformers.js/examples/node/esm/node_modules/@xenova/transformers/src/utils/hub.js:223:19)
at getModelFile (file:///Users/xxx/Documents/github/transformers.js/examples/node/esm/node_modules/@xenova/transformers/src/utils/hub.js:412:24)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async constructSession (file:///Users/xxx/Documents/github/transformers.js/examples/node/esm/node_modules/@xenova/transformers/src/models.js:88:18)
transformers.js version: 2.1.1
| https://github.com/huggingface/transformers.js/issues/137 | closed | [
"question"
] | 2023-06-06T02:03:41Z | 2023-06-20T13:24:37Z | null | peter-up |
huggingface/transformers.js | 136 | [Question] Using CLIP for simple image-text similarity | I'm trying to get a simple image-text similarity thing working with CLIP, and I'm not sure how to do it, or whether it's currently supported with Transformers.js outside of the zero-shot image classification pipeline.
Is there a code example somewhere to get me started? Here's what I have so far:
```js
import { AutoModel, AutoTokenizer } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.1.1';
let tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
let model = await AutoModel.from_pretrained('Xenova/clip-vit-base-patch16');
let inputIds = await tokenizer(["cat", "astronaut"]);
let image = await fetch("https://i.imgur.com/fYhUGoY.jpg").then(r => r.blob());
// how to process the image, and how to pass the image and inputIds to `model`?
```
Here's what I see if I inspect the `model` function in DevTools:

I also tried this:
```js
import { AutoModel, AutoTokenizer, AutoProcessor } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.1.1';
let model = await AutoModel.from_pretrained('Xenova/clip-vit-base-patch16');
let processor = await AutoProcessor.from_pretrained("Xenova/clip-vit-base-patch16");
let inputs = await processor({text:["a photo of a cat", "a photo of an astronaut"], images:["https://i.imgur.com/fYhUGoY.jpg"]});
let outputs = await model(inputs);
```
But it seems that `processor` expects an array of images, or something? The above code throws an error saying that an `.rgb()` method should exist on the input. | https://github.com/huggingface/transformers.js/issues/136 | closed | [
"question"
] | 2023-06-05T14:24:56Z | 2023-06-06T13:35:45Z | null | josephrocca |
huggingface/diffusers | 3,669 | General question: what are the steps to debug if the image produced is just wrong? | I have a lora(lycoris) that I have tested with A1111's webui and I'm pretty happy with the result. When I tried to use it with `diffusers` it just give me corrupted image. The lora brings some desired effect (like white background), but the overall image is just not right.
I have included some personal code to use lycoris (AFAIK diffusers currently doesn't support lycoris, correct me if I'm wrong). But the question is more general as what should I do in case like this, what experiments to run, where should I check? I printed the sum of weight for each layer and was sure they match with A1111's version.
Thank you. | https://github.com/huggingface/diffusers/issues/3669 | closed | [
"stale"
] | 2023-06-05T01:44:49Z | 2023-07-13T15:03:51Z | null | wangdong2023 |
huggingface/chat-ui | 275 | web search hallucination and prompt results | Hello, great job building web search module. Just a few things i noticed using it for the past hours.
1- It does connect to the web perfectly.
2- It tends to take only the first page result and doesn't contextualize the data enough, trying to mix it with the model's data, which ends up destroying the final output. So maybe it should take the first 3 results and summarize them.
3- It takes time. Maybe that's OK, but making sure it takes less time would be good, though it's not critical at this stage.
4- Various outputs from the SERP API: as the SERP API allows getting not only text results but also videos and maps, it would be cool to let the end user prompt, for example, "give me the best yoga video tutorials" and get a reply with shortcuts and/or small previews of maybe 3 YouTube videos. The best real case doing that is perplexity ai; you can check with a request.
5- Maps can be booked. "what is the best itinerary from x to y location" could result in prompting a Google Maps query, and the same for air tickets with Google Flights.
Just a few options and recommendations from a fan. Great job again, I know you already did a lot. | https://github.com/huggingface/chat-ui/issues/275 | open | [] | 2023-06-02T23:09:11Z | 2023-06-05T08:36:41Z | 1 | Billyroot |
huggingface/peft | 537 | Where is the PeftModel weights stored? | ## expect behavior
I am going to check if the model (mt0-xxl [13B](https://huggingface.co/bigscience/mt0-xxl)) weights have been updated.
Could you tell me how to check the weights of the original model before using peft?
How can I check the loaded LoRA module weights when using peft?
## script
modified from [this file](https://github.com/huggingface/peft/blob/main/examples/conditional_generation/peft_lora_seq2seq_accelerate_ds_zero3_offload.py#L71)
```python
model = AutoModelForSeq2SeqLM.from_pretrained(model_name_or_path)
model.enable_input_require_grads()
model.gradient_checkpointing_enable()
.....
for epoch in range(num_epochs):
with TorchTracemalloc() as tracemalloc:
model.train()
accelerator.print('train epoch{}'.format(epoch))
total_loss = 0
for step, batch in enumerate(tqdm(train_dataloader)):
outputs = model(**batch, use_cache=False) # dsj
# outputs = model(**batch) # dsj
loss = outputs.loss
# loss.requires_grad=True # dsj
total_loss += loss.detach().float()
==== =========>>pdb.set_trace() # where I pdb
```
## debug process
```
(Pdb) model.module.base_model.model.encoder.block[0].layer[0].SelfAttention.q
Linear(
in_features=4096, out_features=4096, bias=False
(lora_dropout): ModuleDict(
(default): Dropout(p=0.1, inplace=False)
)
(lora_A): ModuleDict(
(default): Linear(in_features=4096, out_features=8, bias=False)
)
(lora_B): ModuleDict(
(default): Linear(in_features=8, out_features=4096, bias=False)
)
)
(Pdb) model.module.base_model.model.encoder.block[0].layer[0].SelfAttention.q.weight
Parameter containing:
tensor([], device='cuda:0', dtype=torch.bfloat16)
(Pdb) model.module.base_model.model.encoder.block[0].layer[0].SelfAttention.q.lora_A.default.weight
Parameter containing:
tensor([], device='cuda:0', dtype=torch.bfloat16, requires_grad=True)
``` | https://github.com/huggingface/peft/issues/537 | closed | [] | 2023-06-02T09:10:09Z | 2023-07-10T15:03:40Z | null | dsj96 |
huggingface/chat-ui | 273 | Documentation about how to configure custom model endpoints is missing | It seems it has been removed in https://github.com/huggingface/chat-ui/commit/fae93d9fc3be9a39d8efd9ab9993dea13f0ae844. | https://github.com/huggingface/chat-ui/issues/273 | closed | [
"documentation"
] | 2023-06-01T19:37:44Z | 2023-06-19T08:59:15Z | 4 | djmaze |
huggingface/optimum | 1,078 | [SAM] Split encoder and mask decoder into separate .onnx files | ### Feature request
Currently, exporting SAM models with optimum results in a single .onnx file (https://huggingface.co/Xenova/sam-vit-base/tree/main/onnx). It would be great if we could add an option to separate the encoder and decoder into separate onnx files (like traditional seq2seq models).
Example SAM exports for which this has been done:
- https://huggingface.co/visheratin/segment-anything-vit-b/tree/main
- https://huggingface.co/visheratin/segment-anything-vit-l/tree/main
- https://huggingface.co/visheratin/segment-anything-vit-h/tree/main
### Motivation
The primary motivation for this feature request is to reuse the encoded image (which should only be computed once), and then use the decoder for querying. At the moment, users would have to encode the image each time they wish to perform a query.
This would be great for Transformers.js.
### Your contribution
I can integrate this into Transformers.js once it's available. | https://github.com/huggingface/optimum/issues/1078 | closed | [] | 2023-05-31T10:47:19Z | 2023-08-24T16:05:39Z | 8 | xenova |
huggingface/diffusers | 3,602 | What is the default for VAE option? | If "VAE" is not specified for "Stable Diffusion," what is the default applied? | https://github.com/huggingface/diffusers/issues/3602 | closed | [] | 2023-05-29T15:42:19Z | 2023-06-08T10:30:27Z | null | Michi-123 |
huggingface/transformers.js | 125 | [Question] Why running transformer in js is faster than python? | I created a repo to test how to use transformers.
https://github.com/pitieu/huggingface-transformers
I was wondering why running the same models in JavaScript is faster than running them in Python.
Is `Xenova/vit-gpt2-image-captioning` optimized somehow compared to `nlpconnect/vit-gpt2-image-captioning` ?
I ran it on my Mac M1. | https://github.com/huggingface/transformers.js/issues/125 | closed | [
"question"
] | 2023-05-28T05:23:05Z | 2023-07-16T17:21:39Z | null | pitieu |
huggingface/safetensors | 258 | ONNX has just become twice as fast as before. Can SafeTensors also achieve that? | Here are some announcements and technical details. It's nice to see that they are making significant improvements. Could some of that be useful and implemented for SafeTensors?
https://devblogs.microsoft.com/directx/dml-stable-diffusion/
https://www.tomshardware.com/news/nvidia-geforce-driver-promises-doubled-stable-diffusion-performance
https://build.microsoft.com/en-US/sessions/47fe414f-97b8-4b71-ae9e-be9602713667
 | https://github.com/huggingface/safetensors/issues/258 | closed | [] | 2023-05-27T12:23:01Z | 2023-06-07T09:26:24Z | 2 | WEBPerformace |
huggingface/datasets | 5,906 | Could you unpin responses version? | ### Describe the bug
Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to test requirements? This is a testing library and we use it for our tests as well. We do not want to use a very outdated version.
### Steps to reproduce the bug
Could not install this library due to a dependency conflict.
### Expected behavior
can install datasets
### Environment info
linux 64 | https://github.com/huggingface/datasets/issues/5906 | closed | [] | 2023-05-26T20:02:14Z | 2023-05-30T17:53:31Z | 0 | kenimou |
huggingface/datasets | 5,905 | Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently | ### Feature request
I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.
### Motivation
I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally intensive audio processing to do. As a result I want to load data from my remote when it is needed and perform all processing on the fly.
I am currently using the iterable dataset feature of _datasets_. It does everything I need with one exception. My issue is that when resuming training at a step n, we have to download all the data and perform the processing of steps < n, just to get the iterable at the right step. In my case it takes almost as long as training for the same steps, which make resuming training from a checkpoint useless in practice.
I understand that the nature of iterators make it probably nearly impossible to quickly resume training.
I thought about a possible solution nonetheless :
I could in fact index my large dataset and make it a mapped dataset. Then I could use set_transform to perform the processing on the fly. Finally, if I'm not mistaken, the _accelerate_ package allows to [skip steps efficiently](https://github.com/huggingface/accelerate/blob/a73898027a211c3f6dc4460351b0ec246aa824aa/src/accelerate/data_loader.py#L827) for a mapped dataset.
Is it possible to lazily load samples of a mapped dataset ? I'm used to [dataset scripts](https://huggingface.co/docs/datasets/dataset_script), maybe something can be done there.
If not, I could do it using a plain _PyTorch_ dataset. Then I would need to convert it to a _datasets_ dataset to get all the features of _datasets_. Is that possible?
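To illustrate the idea, here is a minimal pure-Python sketch (my assumption of how it could look, not actual _datasets_ code — `load_fn`/`transform` are hypothetical): with an index-based dataset that loads and processes lazily, skipping to step n costs nothing, because work only happens on access.

```python
class LazyMapDataset:
    """Index-based dataset: a sample is fetched and processed only on access."""

    def __init__(self, keys, load_fn, transform):
        self.keys = keys            # lightweight index, e.g. remote file paths
        self.load_fn = load_fn      # downloads/loads the raw sample for a key
        self.transform = transform  # expensive on-the-fly processing

    def __len__(self):
        return len(self.keys)

    def __getitem__(self, i):
        return self.transform(self.load_fn(self.keys[i]))

# Resuming at step n is just indexing from n onwards — the skipped
# samples are never downloaded or processed.
ds = LazyMapDataset(list(range(100)), load_fn=lambda k: k, transform=lambda x: x * 2)
print(ds[50])  # 100
```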
### Your contribution
I could provide a PR to allow lazy loading of mapped datasets or the conversion of a mapped _PyTorch_ dataset into a _Datasets_ dataset if you think it is a useful new feature. | https://github.com/huggingface/datasets/issues/5905 | open | [
"enhancement"
] | 2023-05-26T12:33:02Z | 2023-06-15T13:34:18Z | 1 | bruno-hays |
huggingface/chat-ui | 263 | [question] Where should we discuss chat-ui roadmap? | Is there a forum to discuss future features?
I need to implement some sort of UI component for answer references. Something like perplexity.ai "pills" under the answer.
I guess this is useful for others and I would like to discuss how I should implement such a thing beforehand.
- should I use pills?
- should I create a special message component?
- maybe horizontal scrolling on "facts"/references?
Is there a place for this kind of discussion? Am I the only one with this demand? | https://github.com/huggingface/chat-ui/issues/263 | closed | [] | 2023-05-24T13:17:47Z | 2023-05-26T02:22:29Z | 1 | fredguth |
huggingface/optimum | 1,069 | llama-7b inference report Failed to allocate memory for requested buffer of size 180355072 | ### System Info
```shell
optimum 1.8.5, 32g v100
```
### Who can help?
@JingyaHuang
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
model_id = "my finetuned llama-7b"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
# Load the optimization configuration detailing the optimization we wish to apply
optimization_config = AutoOptimizationConfig.O3(for_gpu=True)
optimizer = ORTOptimizer.from_pretrained(model)
optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)
model = ORTModelForCausalLM.from_pretrained(save_dir,provider="CUDAExecutionProvider")
```
### Expected behavior
Successfully loaded and ready for generation.
But it gives
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization:
/onnxruntime_src/onnxruntime/core/framework/bfc_arena.cc:368 void*
onnxruntime::BFCArena::AllocateRawInternal(size_t, bool, onnxruntime::Stream*, bool,
onnxruntime::WaitNotificationFn) Failed to allocate memory for requested buffer of size 180355072
```
I guess this is actually OOM? It seems that the fp16 ONNX conversion has some issue. But with fp32, two llama-7b models (normal and with_past) are too big for a single card. Is there any solution for this? I don't see any multi-GPU inference in optimum's docs.
`model = ORTModelForCausalLM.from_pretrained(model_id, export=True)`
I think this model is fp32? Is there a way to make this model fp16? Then maybe I don't need onnx to convert to fp16.
Thank you! | https://github.com/huggingface/optimum/issues/1069 | closed | [
"bug",
"onnxruntime"
] | 2023-05-23T09:50:36Z | 2023-06-19T05:05:01Z | 6 | drxmy |
huggingface/chat-ui | 258 | Language change during chat | While writing in German, it answers in English. Before it always used to work...
Photo:

| https://github.com/huggingface/chat-ui/issues/258 | closed | [
"support"
] | 2023-05-23T08:41:44Z | 2023-07-24T11:46:33Z | 2 | Mbuni21 |
huggingface/transformers.js | 122 | [Question] Basic Whisper Inference vs Speed of Demo Site | Hello, I love the library~ thanks for making it!
I am trying to use the Whisper inference method displayed on the demo site, but it's running really slow,
It's taking me about 20 seconds to run it locally vs a few seconds on the demo site.
Is there some magic behind the scenes that I'm missing?
I'm just running a simple `postMessage` and listening for the updates:
```
worker.postMessage({
task: 'automatic-speech-recognition',
audio: file,
generation: {
do_sample: false,
max_new_tokens: 50,
num_beams: 1,
temperature: 1,
top_k: 0
}
});
worker.addEventListener('message', event => {
const data = event.data;
if(data.type === 'update') {
let elem = document.getElementById("whisper");
elem.value = data.data
}
});
```
| https://github.com/huggingface/transformers.js/issues/122 | closed | [
"question"
] | 2023-05-23T05:55:40Z | 2023-06-10T22:41:15Z | null | jpg-gamepad |
huggingface/datasets | 5,880 | load_dataset from s3 file system through streaming can't iterate data | ### Describe the bug
I have a JSON file in my s3 file system (MinIO). I can use load_dataset to get the file link, but I can't iterate it
<img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0">
<img width="1144" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/76872af3-8b3c-42ff-9f55-528c920a7af1">
We can change 4 lines to fix this bug; you can check whether this is OK.
<img width="941" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/5a22155a-ece7-496c-8506-047e5c235cd3">
### Steps to reproduce the bug
1. storage a file in you s3 file system
2. use load_dataset to read it through streaming
3. iterate it
### Expected behavior
can iterate it successfully
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
| https://github.com/huggingface/datasets/issues/5880 | open | [] | 2023-05-22T07:40:27Z | 2023-05-26T12:52:08Z | 4 | janineguo |
huggingface/chat-ui | 256 | changing model to 30B in the .env file | here is the model I am using, which is 12B; I want to change it to 30B:
default one:
`MODELS=`[
{
"name": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
"datasetName": "OpenAssistant/oasst1",
"description": "A good alternative to ChatGPT",
"websiteUrl": "https://open-assistant.io",
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",`
this is what I changed it to:
`"name": "OpenAssistant/oasst-rlhf-2-llama-30b-7k-steps-xor",
"datasetName": "OpenAssistant/oasst1",
"description": "A good alternative to ChatGPT",
"websiteUrl": "https://open-assistant.io",
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>",
"messageEndToken": "</s>",
`
I got an error when I run the model/chat-ui:
`Model not found & Could not parse last message {"error":"Task not found for this model"}
SyntaxError: Unexpected end of JSON input
at JSON.parse (<anonymous>)
at parseGeneratedText (/src/routes/conversation/[id]/+server.ts:178:32)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async saveMessage (/src/routes/conversation/[id]/+server.ts:94:26)`
Please help if you know how to change the model to `30B OpenAssistant`. | https://github.com/huggingface/chat-ui/issues/256 | closed | [
"support"
] | 2023-05-21T18:30:04Z | 2023-06-19T09:34:10Z | 5 | C0deXG |
huggingface/transformers.js | 119 | [Question] A WebGPU-accelerated ONNX inference run-time | Is it possible to use https://github.com/webonnx/wonnx with transformers.js?
| https://github.com/huggingface/transformers.js/issues/119 | closed | [
"question"
] | 2023-05-21T06:11:20Z | 2024-10-18T13:30:07Z | null | ansarizafar |
huggingface/chat-ui | 255 | how to prompt it | How can I prompt this model to act a certain way, like be `your food assistant and you will provide the best food assistant`? How can I prompt it? It's all over the place when I run this model :( | https://github.com/huggingface/chat-ui/issues/255 | closed | [
"support"
] | 2023-05-20T21:41:46Z | 2023-06-01T13:00:48Z | 1 | C0deXG |
huggingface/setfit | 376 | How to get the number of parameters in a SetFitModel object? | The context is I would like to compare the parameter sizes of different models. Is there a way to count the model parameters in a SetFitModel object? Something like model.count_params() in keras. Thanks! | https://github.com/huggingface/setfit/issues/376 | closed | [
"question"
] | 2023-05-19T23:58:53Z | 2023-12-05T14:47:55Z | null | yihangit |
huggingface/chat-ui | 252 | Users can't get passed "Start Chatting" modal - ethicsModelAcceptedAt not getting set? | <img width="836" alt="image" src="https://github.com/huggingface/chat-ui/assets/1438064/28a3d7f1-65e4-4b61-a82b-ffc78eb3e074">
Let me know what more info you need to debug. It just keeps redirecting back to home and never clears the modal. | https://github.com/huggingface/chat-ui/issues/252 | open | [
"support",
"p2"
] | 2023-05-19T19:33:33Z | 2024-01-26T08:44:39Z | 7 | cfregly |
huggingface/optimum | 1,061 | mpt model support? | ### Feature request
Can you please add mpt model support to this library?
### Motivation
just testing things, and mpt seems to be unsupported by multiple huggingface libraries
### Your contribution
I'm just getting started; I'm not sure if I'll be of any help. | https://github.com/huggingface/optimum/issues/1061 | closed | [] | 2023-05-19T09:28:28Z | 2023-07-06T16:37:01Z | 7 | sail1369 |
huggingface/datasets | 5,875 | Why split slicing doesn't behave like list slicing ? | ### Describe the bug
If I want to get the first 10 samples of my dataset, I can do :
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised :
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> ValueError: Requested slice [:999999999] incompatible with 60000 examples.
### Steps to reproduce the bug
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
### Expected behavior
I would expect it to behave like python lists (no exception raised, the whole list is kept) :
```
d = list(range(1000))[:999999]
print(len(d)) # > 1000
```
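In the meantime, a possible workaround (just a sketch on my side, not part of the `datasets` API — `clamped_split` is a hypothetical helper and assumes the row count is known beforehand, e.g. from the dataset's metadata) is to clamp the bound yourself, mimicking Python's slice semantics:

```python
# Plain Python slicing silently clamps an out-of-range stop index,
# which is the behaviour requested above.
d = list(range(1000))
assert d[:999999999] == d

# Hypothetical helper: build a split string clamped to the dataset size.
def clamped_split(split: str, stop: int, num_rows: int) -> str:
    return f"{split}[:{min(stop, num_rows)}]"

print(clamped_split("train", 999999999, 60000))  # train[:60000]
```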
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | https://github.com/huggingface/datasets/issues/5875 | closed | [
"duplicate"
] | 2023-05-19T07:21:10Z | 2024-01-31T15:54:18Z | 1 | astariul |
huggingface/chat-ui | 246 | Documentation Request - Clarity around login flow outside of HuggingFace context | Could the docs (if not the code) be improved to make it clear how to:
- run this without requiring users to authenticate
- handle authentication via a 3rd party cloud (Azure, AWS, GCP, etc)
- run this with an arbitrary 3rd party model (OpenAI, Rasa, etc)
I originally thought this was the purpose of `OPENID_CLIENT_ID` and `OPENID_CLIENT_SECRET`, but it seems not... (?).
| https://github.com/huggingface/chat-ui/issues/246 | closed | [
"documentation",
"enhancement"
] | 2023-05-19T02:57:56Z | 2023-06-01T06:26:49Z | 3 | hack-r |
huggingface/chat-ui | 245 | Strange DNS Behavior | Apparently some part of this leverages DNS right away when you run it, but it doesn't work on any privacy-respecting DNS resolvers. I can demonstrate this via toggling firewall options, resolv.conf, or packet inspection, but I'm not sure what in the code is related to this or how to fix it. | https://github.com/huggingface/chat-ui/issues/245 | closed | [] | 2023-05-19T01:19:11Z | 2023-05-19T02:53:11Z | 1 | hack-r |
huggingface/optimum | 1,057 | owlvit is not supported | ### Feature request
The conversion is supported in transformers[onnx], but not yet supported in optimum.
### Motivation
Convert the open-world vocabulary model to ONNX for faster inference.
### Your contribution
If there is a guideline on how to do it, I think I can help | https://github.com/huggingface/optimum/issues/1057 | closed | [] | 2023-05-17T07:01:39Z | 2023-07-12T13:20:52Z | 11 | darwinharianto |
huggingface/datasets | 5,870 | Behaviour difference between datasets.map and IterableDatasets.map | ### Describe the bug
All the examples in the docs throughout huggingface datasets correspond to the Dataset object, not the IterableDataset object. At one point in time they might have been in sync, but the code for datasets version >=2.9.0 is very different compared to the docs.
I basically need to .map() a transform on images in an iterable dataset, which was made using a custom databuilder config.
This works very well on map-style datasets, but .map() fails on IterableDatasets, showing behaviour as such:
"pixel_values" key not found — a KeyError in the examples object/dict passed into the transform function for map, which works fine with map style, even as a batch.
In iterable style, the object/dict passed into the map() callable parameter is completely different from what is mentioned in all the examples.
Please look into this. Thank you
My databuilder class is inherited as such:
def _info(self):
    print ("Config: ",self.config.__dict__.keys())
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        features=datasets.Features(
            {
                "labels": datasets.Sequence(datasets.Value("uint16")),
                # "labels_name": datasets.Value("string"),
                # "pixel_values": datasets.Array3D(shape=(3, 1280, 960), dtype="float32"),
                "pixel_values": datasets.Array3D(shape=(1280, 960, 3), dtype="uint8"),
                "image_s3_path": datasets.Value("string"),
            }
        ),
        supervised_keys=None,
        homepage="none",
        citation="",
    )

def _split_generators(self, dl_manager):
    records_train = list(db.mini_set.find({'split':'train'},{'image_s3_path':1, 'ocwen_template_name':1}))[:10000]
    records_val = list(db.mini_set.find({'split':'val'},{'image_s3_path':1, 'ocwen_template_name':1}))[:1000]
    # print (len(records),self.config.num_shards)
    # shard_size_train = len(records_train)//self.config.num_shards
    # sharded_records_train = [records_train[i:i+shard_size_train] for i in range(0,len(records_train),shard_size_train)]
    # shard_size_val = len(records_val)//self.config.num_shards
    # sharded_records_val = [records_val[i:i+shard_size_val] for i in range(0,len(records_val),shard_size_val)]
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN, gen_kwargs={"records":records_train} # passing list of records, for sharding to take over
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION, gen_kwargs={"records":records_val} # passing list of records, for sharding to take over
        ),
    ]

def _generate_examples(self, records):
    # print ("Generating examples for [{}] shards".format(len(shards)))
    # initiate_db_connection()
    # records = list(db.mini_set.find({'split':split},{'image_s3_path':1, 'ocwen_template_name':1}))[:10]
    id_ = 0
    # for records in shards:
    for i,rec in enumerate(records):
        img_local_path = fetch_file(rec['image_s3_path'],self.config.buffer_dir)
        # t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.squeeze()
        # print (t.shape, type(t),type(t[0][0][0]))
        # sys.exit()
        pvs = np.array(Image.open(img_local_path).resize((1280,960))) # image object is wxh, so resize as per that, numpy array of it is hxwxc, transposing to cxwxh
        # pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.astype(np.float16).squeeze()
        # print (type(pvs[0][0][0]))
        lblids = self.config.processor.tokenizer('<s_class>'+rec['ocwen_template_name']+'</s_class>'+'</s>', add_special_tokens=False, padding=False, truncation=False, return_tensors="np")["input_ids"].squeeze(0) # take padding later, as per batch collating
        # print (len(lblids),type(lblids[0]))
        # print (type(pvs),pvs.shape,type(pvs[0][0][0]), type(lblids))
        yield id_, {"labels":lblids,"pixel_values":pvs,"image_s3_path":rec['image_s3_path']}
        id_+=1
        os.remove(img_local_path)
and I load it inside my trainer script as such
`ds = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True) # iterable dataset, where .map() falls`
or also as
`ds = load_from_disk('/tmp/DonutDS/dataset/') #map style dataset`
Thank you to the team for having such a great library, and for this bug fix in advance!
### Steps to reproduce the bug
The above config allows one to reproduce the said bug.
### Expected behavior
.map() should show some consistency between map-style and iterable-style datasets, or at least the docs should address iterable-style dataset behaviour and examples. I honestly do not figur | https://github.com/huggingface/datasets/issues/5870 | open | [] | 2023-05-16T14:32:57Z | 2023-05-16T14:36:05Z | 1 | llStringll |
huggingface/chat-ui | 232 | Possible performance regression in the production model? | I have been using it for 5 days; it could write simple code for me but now it can't ;/ | https://github.com/huggingface/chat-ui/issues/232 | closed | [
"bug",
"question"
] | 2023-05-16T08:39:19Z | 2023-09-11T09:30:26Z | null | overvalue |
huggingface/chat-ui | 230 | Task not found for this model | I tried running code on my local system and updated the model name in the .env file from "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5" to "OpenAssistant/oasst-sft-6-llama-30b-xor" and now for every prompt I am getting "Task not found for this model" | https://github.com/huggingface/chat-ui/issues/230 | closed | [
"support"
] | 2023-05-16T05:18:25Z | 2024-12-13T01:28:06Z | 4 | newway-anshul |
huggingface/datasets | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | ### Feature request
Hi,
I have a huge file (over 500GB) cached using `map`, and I want to change an attribute of each element. Is it possible to do this with some method instead of re-generating it, because `map` takes over 24 hours?
### Motivation
For large datasets, I think it is very important because we always face the problem of changing something in the original cache without re-generating it.
### Your contribution
For now, I can't help, sorry. | https://github.com/huggingface/datasets/issues/5868 | closed | [
"enhancement"
] | 2023-05-16T03:45:42Z | 2023-05-17T11:21:36Z | 2 | zyh3826 |
huggingface/chat-ui | 225 | Special tokens for user and assistant turns? | Hi,
I've been checking the example that used `OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5` model. This model uses the following tokens to specify the beginning of the user and assistant:
```
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>"
```
I'm trying to run `bigcode/starcoder` model along with `bigcode/the-stack-dedup` dataset, but I'm not sure which values do those variables need for this particular model and how they influence the model's answer generation.
Could you please briefly guide me into this? I'm kinda new to this. | https://github.com/huggingface/chat-ui/issues/225 | closed | [] | 2023-05-15T10:32:06Z | 2023-05-15T11:06:23Z | 3 | frandominguezl |
huggingface/chat-ui | 218 | Support for Contrastive Search? | Context: https://huggingface.co/blog/introducing-csearch
Passing only:
"penalty_alpha":0.6,
"top_k": 4,
Does not seem to work, as truncate and temperature are still required. When passing this:
<pre>
"parameters": {
"temperature": 0.9,
"penalty_alpha":0.6,
"top_k": 4,
"truncate": 512,
"max_new_tokens": 512
}
</pre>
penalty_alpha seems to be ignored:
GenerateParameters { best_of: None, temperature: Some(0.9), repetition_penalty: None, top_k: Some(4), top_p: None, typical_p: None, do_sample: false, max_new_tokens: 512, return_full_text: Some(false), stop: [], truncate: Some(512), watermark: false, details: false, seed: None } })
| https://github.com/huggingface/chat-ui/issues/218 | closed | [] | 2023-05-13T22:02:37Z | 2023-09-18T13:27:20Z | 2 | PhNyx |
huggingface/setfit | 374 | Resolving confusion between fine-grained classes | My dataset has 131 classes. Some of them are fine-grained, for example:
- Flag fraud on the account -> **Open Dispute**
- Find out if there is a fraud hold on my debit card ->**Dispute Inquiry**
The model is getting confused between such classes. I have roughly 20 samples per class in my dataset and I am using `mpnet-base-v2` with `num_iterations=25`. Is there a way to specify which classes to draw the negative samples from given a positive class? Should I just add more data into the confusing classes? | https://github.com/huggingface/setfit/issues/374 | closed | [
"question"
] | 2023-05-13T10:13:15Z | 2023-11-24T15:09:55Z | null | vahuja4 |
huggingface/transformers.js | 108 | [Question] Problem when converting an embedding model. | First of all, I would like to thank everyone for providing and maintaining this library. It makes working with ML in JavaScript a breeze.
I was working with the embedding models and tried to convert a multilingual model [("paraphrase-multilingual-MiniLM-L12-v2")](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) for use with transformers.js. I used the following command to do the conversion:
```
python -m scripts.convert --quantize --model_id sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2 --task semantic-segmentation --from_hub
```
But I got the following error back:
```
File "/opt/saturncloud/envs/saturn/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 470, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.bert.configuration_bert.BertConfig'> for this kind of AutoModel: AutoModelForSemanticSegmentation.
Model type should be one of BeitConfig, Data2VecVisionConfig, DPTConfig, MobileNetV2Config, MobileViTConfig, SegformerConfig, UperNetConfig.
```
I think I am using the wrong task type, but I am not sure. Can anyone help me with this problem?
Thanks in advance. Falcon | https://github.com/huggingface/transformers.js/issues/108 | closed | [
"question"
] | 2023-05-13T09:54:12Z | 2023-05-15T17:24:16Z | null | falcon027 |
huggingface/setfit | 372 | Update Previous Model with New Categories | Is there a way to add categories based on new data?
For example - Initially I trained a model with 5 categories and saved the model. I now have new data that I want to feed into the model but this new data has 8 categories. Would I have to start from scratch or can I use the original model I trained?
Thank you! | https://github.com/huggingface/setfit/issues/372 | closed | [
"question"
] | 2023-05-12T21:22:12Z | 2023-11-24T15:10:46Z | null | ronils428 |
huggingface/dataset-viewer | 1,174 | Add a field, and rename another one, in /opt-in-out-urls | The current response for /opt-in-out-urls is:
```
{
"urls_columns": ["url"],
"has_urls_columns": true,
"num_opt_in_urls": 0,
"num_opt_out_urls": 4052,
"num_scanned_rows": 12452281,
"num_urls": 12452281
}
```
I think we should:
- rename `num_urls` into `num_scanned_urls`
- add `num_rows` with the total number of rows in the dataset/config/split. It would help understand which proportion of the dataset has been scanned. Note that the information is already available in `/size`, but I think it would be handy to have this information here. wdyt? | https://github.com/huggingface/dataset-viewer/issues/1174 | closed | [
"question"
] | 2023-05-12T13:15:40Z | 2023-05-12T13:54:14Z | null | severo |
huggingface/chat-ui | 207 | MongoParseError: Invalid scheme | I tried to run chat-ui on my mac (Intel 2020, MacOS Ventura 13.3.1), and I get the following error:
```bash
(base) thibo@mac-M:~/Documents/chat-ui$ npm install
added 339 packages, and audited 340 packages in 39s
72 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
(base) thibo@mac:~/Documents/chat-ui$ npm run dev
> chat-ui@0.1.0 dev
> vite dev
(node:3340) ExperimentalWarning: Import assertions are not a stable feature of the JavaScript language. Avoid relying on their current behavior and syntax as those might change in a future version of Node.js.
(Use `node --trace-warnings ...` to show where the warning was created)
(node:3340) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
Forced re-optimization of dependencies
VITE v4.3.5 ready in 2136 ms
➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press h to show help
9:25:43 AM [vite] Error when evaluating SSR module /src/lib/server/database.ts:
|- MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
9:25:43 AM [vite] Error when evaluating SSR module /src/hooks.server.ts: failed to import "/src/lib/server/database.ts"
|- MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/connection_string.js:191:17)
at new MongoClient (/Users/thibo/Documents/chat-ui/node_modules/mongodb/lib/mongo_client.js:46:63)
at eval (/src/lib/server/database.ts:7:16)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async instantiateModule (file:///Users/thibo/Documents/chat-ui/node_modules/vite/dist/node/chunks/dep-934dbc7c.js:54360:9)
MongoParseError: Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"
at new ConnectionString (/Users/thibo/Documents/chat-ui/node_modules/mongodb-connection-string-url/lib/index.js:86:19)
at parseOptions (/Users/thibo/Documents/cha | https://github.com/huggingface/chat-ui/issues/207 | closed | [] | 2023-05-12T07:32:22Z | 2023-05-12T08:26:39Z | 1 | thiborose |
huggingface/chat-ui | 202 | Help wanted: Installing `@huggingface` package from NPM registry | 👋🏻
Sorry if I am opening a dumb issue but I was just looking into fixing some UI issues and not entirely sure how to run this project locally. I've created a `.env.local` with:
```
MONGODB_URL=
HF_ACCESS_TOKEN=XXX
```
Haven't actually set the `MONGODB_URL` but did create an access token for HF.
Running into the following error when running `yarn`
```
yarn install v1.22.11
info No lockfile found.
warning package-lock.json found. Your project contains lock files generated by tools other than Yarn. It is advised not to mix package managers in order to avoid resolution inconsistencies caused by unsynchronized lock files. To clear this warning, remove package-lock.json.
[1/4] 🔍 Resolving packages...
error Couldn't find package "@huggingface/shared@*" required by "@huggingface/inference@^2.2.0" on the "npm" registry.
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
```
I suppose I need a secret or something for Yarn to be able to fetch that package from a different registry than NPM?
**use NPM instead of Yarn?**
Yes, I've also tried using NPM, ran into the same issue.
Again, sorry if I am misreading the readme and doing things wrong.
Thanks! 👋🏻 | https://github.com/huggingface/chat-ui/issues/202 | closed | [] | 2023-05-11T17:38:24Z | 2023-05-12T11:07:10Z | 5 | eertmanhidde |
huggingface/datasets | 5,841 | Absurdly slow on iteration | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
Furthermore, if I increase the size of a to an image shape, like:
```python
a=torch.randn(3,224,224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
### Steps to reproduce the bug
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
### Expected behavior
iteration faster
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | https://github.com/huggingface/datasets/issues/5841 | closed | [] | 2023-05-11T08:04:09Z | 2023-05-15T15:38:13Z | 4 | fecet |
huggingface/optimum | 1,046 | Make torchvision optional? | ### Feature request
Currently torchvision is a required dependency
https://github.com/huggingface/optimum/blob/22e4fd6de3ac5e7780571570f962947bd8777fd4/setup.py#L20
### Motivation
I only work on text so I don't need vision support
### Your contribution
I am sure the change would be more difficult than just "remove the line from the setup.py" file but if you have other suggestions how to tackle the removal, I am happy to help. | https://github.com/huggingface/optimum/issues/1046 | closed | [] | 2023-05-10T10:49:18Z | 2023-05-12T23:05:46Z | 4 | BramVanroy |
huggingface/datasets | 5,838 | Streaming support for `load_from_disk` | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets that are stored in object stores are very large and being able to stream the data from the buckets becomes essential.
### Your contribution
I'd be happy to contribute this feature if I could get the guidance on how to do so. | https://github.com/huggingface/datasets/issues/5838 | closed | [
"enhancement"
] | 2023-05-10T06:25:22Z | 2024-10-28T14:19:44Z | 12 | Nilabhra |
huggingface/datasets | 5,834 | Is uint8 supported? | ### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way to store vector data as `uint8` and then upload it to the hub?
### Steps to reproduce the bug
```python
from datasets import Features, Dataset, Sequence, Value
import numpy as np
dataset = Dataset.from_dict(
{"vector": [np.array([0, 1, 2], dtype=np.uint8)]}, features=Features({"vector": Sequence(Value("uint8"))})
).with_format("numpy")
print(dataset[0]["vector"].dtype)
```
### Expected behavior
Expected: `uint8`
Actual: `int64`
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-12.1-x86_64-i386-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | https://github.com/huggingface/datasets/issues/5834 | closed | [] | 2023-05-09T17:31:13Z | 2023-05-13T05:04:21Z | 5 | ryokan0123 |
huggingface/transformers.js | 104 | [Question] npm install error in windows | I install transformers.js with npm but I get an error:
```
2135 info run canvas@2.11.2 install node_modules/canvas node-pre-gyp install --fallback-to-build --update-binary
2136 info run sharp@0.32.1 install node_modules/sharp (node install/libvips && node install/dll-copy && prebuild-install) || (node install/can-compile && node-gyp rebuild && node install/dll-copy)
2137 info run sharp@0.32.1 install { code: 1, signal: null }
2138 warn cleanup Failed to remove some directories [
2138 warn cleanup [
2138 warn cleanup 'D:\\project\\BLOGKLIN\\node_modules',
2138 warn cleanup [Error: EBUSY: resource busy or locked, rmdir 'D:\project\BLOGKLIN\node_modules\canvas'] {
2138 warn cleanup errno: -4082,
2138 warn cleanup code: 'EBUSY',
2138 warn cleanup syscall: 'rmdir',
2138 warn cleanup path: 'D:\\project\\BLOGKLIN\\node_modules\\canvas'
2138 warn cleanup }
2138 warn cleanup ]
2138 warn cleanup ]
2139 timing reify:rollback:createSparse Completed in 4980ms
2140 timing reify:rollback:retireShallow Completed in 0ms
2141 timing command:i Completed in 46786ms
2142 verbose stack Error: command failed
2142 verbose stack at ChildProcess.<anonymous> (C:\Users\admin\AppData\Roaming\npm\node_modules\npm\node_modules\@npmcli\promise-spawn\lib\index.js:63:27)
2142 verbose stack at ChildProcess.emit (node:events:390:28)
2142 verbose stack at maybeClose (node:internal/child_process:1064:16)
2142 verbose stack at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5)
2143 verbose pkgid sharp@0.32.1
2144 verbose cwd D:\project\BLOGKLIN
2145 verbose Windows_NT 10.0.19044
2146 verbose node v16.13.0
2147 verbose npm v8.7.0
2148 error code 1
2149 error path D:\project\BLOGKLIN\node_modules\sharp
2150 error command failed
2151 error command C:\Windows\system32\cmd.exe /d /s /c (node install/libvips && node install/dll-copy && prebuild-install) || (node install/can-compile && node-gyp rebuild && node install/dll-copy)
2152 error sharp: Downloading https://github.com/lovell/sharp-libvips/releases/download/v8.14.2/libvips-8.14.2-win32-x64.tar.br
2152 error sharp: Please see https://sharp.pixelplumbing.com/install for required dependencies
2153 error sharp: Installation error: read ECONNRESET
2154 verbose exit 1
2155 timing npm Completed in 46886ms
2156 verbose unfinished npm timer reify 1683364060656
2157 verbose unfinished npm timer reify:build 1683364075028
2158 verbose unfinished npm timer build 1683364075029
2159 verbose unfinished npm timer build:deps 1683364075029
2160 verbose unfinished npm timer build:run:install 1683364075174
2161 verbose unfinished npm timer build:run:install:node_modules/canvas 1683364075175
2162 verbose unfinished npm timer build:run:install:node_modules/sharp 1683364075190
2163 verbose code 1
2164 error A complete log of this run can be found in:
2164 error C:\Users\admin\AppData\Local\npm-cache\_logs\2023-05-06T09_07_40_559Z-debug-0.log
```
os: windows 10
node: v16.13.0 | https://github.com/huggingface/transformers.js/issues/104 | closed | [
"question"
] | 2023-05-06T09:13:41Z | 2023-05-06T12:48:23Z | null | DominguitoLamo |
huggingface/datasets | 5,818 | Ability to update a dataset | ### Feature request
The ability to load a dataset, add or change something, and save it back to disk.
Maybe it's possible, but I can't work out how to do it, e.g. this fails:
```py
import datasets
dataset = datasets.load_from_disk("data/test1")
dataset = dataset.add_item({"text": "A new item"})
dataset.save_to_disk("data/test1")
```
With the error:
```
PermissionError: Tried to overwrite /mnt/c/Users/david/py/learning/mini_projects/data_sorting_and_filtering/data/test1 but a dataset can't overwrite itself.
```
### Motivation
My use case is that I want to process a dataset in a particular way but it doesn't fit in memory if I do it in one go. So I want to perform a loop and at each step in the loop, process one shard and append it to an ever-growing dataset. The code in the loop will load a dataset, add some rows, then save it again.
Maybe I'm just thinking about things incorrectly and there's a better approach. FWIW I can't use `dataset.map()` to do the task because that doesn't work with `num_proc` when adding rows, so is confined to a single process which is too slow.
The only other way I can think of is to create a new file each time, but surely that's not how people do this sort of thing.
### Your contribution
na | https://github.com/huggingface/datasets/issues/5818 | open | [
"enhancement"
] | 2023-05-04T01:08:13Z | 2023-05-04T20:43:39Z | 3 | davidgilbertson |
huggingface/datasets | 5,815 | Easy way to create a Kaggle dataset from a Huggingface dataset? | I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset.
While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example:

Is there some mechanism from huggingface to represent a dataset (such as that from `load_dataset('wmt14', 'de-en', split='train')`) as a single file? Or, some other way to get it into a Kaggle dataset so that I can use the huggingface `datasets` module to process and consume it inside of a Kaggle notebook?
Thanks in advance!
| https://github.com/huggingface/datasets/issues/5815 | open | [] | 2023-05-02T21:43:33Z | 2023-07-26T16:13:31Z | 4 | hrbigelow |
huggingface/optimum | 1,024 | How to decrease inference time of LayoutXLM and LiLT models through Optimum? | ### System Info
```shell
Last version of transformers and Optimum libraries.
```
### Who can help?
@JingyaHuang , @echarlaix, @mi
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Example with LiLT model:
```
from transformers import AutoTokenizer, AutoModelForTokenClassification
model_id = "pierreguillou/lilt-xlm-roberta-base-finetuned-with-DocLayNet-base-at-paragraphlevel-ml512"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id, device_map="auto")
from optimum.bettertransformer import BetterTransformer
model = BetterTransformer.transform(model, keep_original_model=False)
```
Error message
```
NotImplementedError: The model type lilt is not yet supported to be used with BetterTransformer. Feel free to open
an issue at https://github.com/huggingface/optimum/issues if you would like this model type to be supported.
Currently supported models are: dict_keys(['albert', 'bart', 'bert', 'bert-generation', 'blenderbot', 'camembert',
'clip', 'codegen', 'data2vec-text', 'deit', 'distilbert', 'electra', 'ernie', 'fsmt', 'gpt2', 'gptj', 'gpt_neo',
'gpt_neox', 'hubert', 'layoutlm', 'm2m_100', 'marian', 'markuplm', 'mbart', 'opt', 'pegasus', 'rembert',
'prophetnet', 'roberta', 'roc_bert', 'roformer', 'splinter', 'tapas', 't5', 'vilt', 'vit', 'vit_mae', 'vit_msn',
'wav2vec2', 'whisper', 'xlm-roberta', 'yolos']).
```
### Expected behavior
Hi,
I'm using Hugging Face libraries in order to run LayoutXLM and LiLT models.
How can I decrease inference time through Optimum? Which code to use?
I've already tested BetterTransformer (Optimum) and ONNX but none of them accepts LayoutXLM and LiLT models.
- BetterTransformer:
- "NotImplementedError: The model type layoutlmv2 is not yet supported to be used with BetterTransformer."
- "NotImplementedError: The model type lilt is not yet supported to be used with BetterTransformer."
- ONNX:
- "KeyError: 'layoutlmv2 is not supported yet.'"
- "KeyError: 'lilt is not supported yet.'"
Can you update the Optimum library so that `BetterTransformer()` and/or `ONNX` work on LayoutXLM and LiLT models?
Thank you. | https://github.com/huggingface/optimum/issues/1024 | open | [
"bug"
] | 2023-05-02T09:42:15Z | 2023-06-12T11:40:23Z | 4 | piegu |
huggingface/datasets | 5,809 | wiki_dpr details for Open Domain Question Answering tasks | Hey guys!
Thanks for creating the wiki_dpr dataset!
I am currently trying to combine wiki_dpr and my own datasets, but I don't know how to produce embedding values the same way wiki_dpr does.
As an experiment, I embedded the text of id="7" of wiki_dpr myself, but the result was very different from the embedding stored in wiki_dpr. | https://github.com/huggingface/datasets/issues/5809 | closed | [] | 2023-04-30T06:12:04Z | 2023-07-21T14:11:00Z | 1 | yulgok22 |
huggingface/datasets | 5,805 | Improve `Create a dataset` tutorial | Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading.
1. In **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (that can be created from directory with data of required format) for `csv`, `json/jsonl`, `parquet` and `txt` files. We have info about these loaders in separate [guide for loading](https://huggingface.co/docs/datasets/loading#local-and-remote-files) but it's worth briefly mentioning them in the beginning tutorial because they are more common and for consistency. Would be helpful to add the link to the full guide.
2. **From local files** section lists methods for creating a dataset from in-memory data which are also described in [loading guide](https://huggingface.co/docs/datasets/loading#inmemory-data).
Maybe we should actually rethink and restructure this tutorial somehow. | https://github.com/huggingface/datasets/issues/5805 | open | [
"documentation"
] | 2023-04-28T13:26:22Z | 2024-07-26T21:16:13Z | 4 | polinaeterna |
huggingface/dataset-viewer | 1,104 | Delete finished jobs immediately? | Currently, finished jobs are deleted after 7 days by an index. See https://github.com/huggingface/datasets-server/blob/259fd092c12d240d9b8d733c965c4b9362e90684/libs/libcommon/src/libcommon/queue.py#L144
But we never use the finished jobs, so:
- we could delete them immediately after finishing
- we could reduce the duration from 7 days to 1 hour (can be complementary to the previous action, to clean uncaught jobs)
For point 2, see https://github.com/huggingface/datasets-server/pull/1103
Stats:
- 9.805.591 jobs
- 13.345 are not finished! (0.1% of the jobs) | https://github.com/huggingface/dataset-viewer/issues/1104 | closed | [
"question",
"improvement / optimization"
] | 2023-04-28T11:49:10Z | 2023-05-31T12:20:38Z | null | severo |
huggingface/transformers.js | 102 | How to convert Whisper Large v2 | Hello!
How to convert whisper-large-v2 model to onnx?
I'm using this command
`python3.9 -m scripts.convert --model_id whisper-large-v2 --quantize --task automatic-speech-recognition`
But when I try to load the converted model, I get the following error:
`Error: File not found. Could not locate "encoder_model.onnx".`
Thank you! | https://github.com/huggingface/transformers.js/issues/102 | closed | [
"question"
] | 2023-04-27T13:30:33Z | 2023-05-31T13:18:33Z | null | hotmeatballs |
huggingface/datasets | 5,797 | load_dataset is case sensitive? | ### Describe the bug
Is the load_dataset() function case sensitive?
### Steps to reproduce the bug
The following two code, get totally different behavior.
1. load_dataset('mbzuai/bactrian-x','en')
2. load_dataset('MBZUAI/Bactrian-X','en')
### Expected behavior
Compare 1 and 2.
1 will download all 52 subsets, shell output:
```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx```
2 will only download single subset, shell output
```Downloading and preparing dataset bactrian-x/en to xxx```
### Environment info
Python 3.10.11
datasets Version: 2.11.0 | https://github.com/huggingface/datasets/issues/5797 | open | [] | 2023-04-26T18:19:04Z | 2023-04-27T11:56:58Z | 2 | haonan-li |
huggingface/chat-ui | 122 | Add pre-prompt | cc @OlivierDehaene
> Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.
> `-----`
> `<current prompt>`
> `-----`
Is this something we want to do ASAP @julien-c @gary149 ? | https://github.com/huggingface/chat-ui/issues/122 | closed | [] | 2023-04-26T15:58:55Z | 2023-04-26T16:46:05Z | 1 | coyotte508 |
huggingface/setfit | 367 | Massive Text Embedding Benchmark (MTEB) Leaderboard | https://huggingface.co/spaces/mteb/leaderboard
Can we use all of these with setfit? | https://github.com/huggingface/setfit/issues/367 | closed | [
"question"
] | 2023-04-26T09:18:27Z | 2023-12-05T14:48:55Z | null | vahuja4 |
huggingface/huggingface.js | 165 | Add E2E where the module is downloaded (or linked) to a TS project | To prevent things like #164 | https://github.com/huggingface/huggingface.js/issues/165 | closed | [
"tooling"
] | 2023-04-25T20:23:17Z | 2023-05-07T09:18:47Z | null | coyotte508 |
huggingface/transformers.js | 100 | Whisper on webGPU? | Somewhat related to [this thread](https://github.com/xenova/transformers.js/issues/20).
Is it within scope to implement a webGPU accelerated version of Whisper?
Not sure if this helps, but there is a [C port for Whisper wirh CPU implementation](https://github.com/ggerganov/whisper.cpp), and as mentioned in [this discussion](https://github.com/ggerganov/whisper.cpp/discussions/126), the main thing that needs to be offloaded to the GPU is the GGML_OP_MUL_MAT operator. | https://github.com/huggingface/transformers.js/issues/100 | closed | [
"question"
] | 2023-04-25T09:34:10Z | 2024-10-18T13:30:07Z | null | sandorkonya |
huggingface/optimum | 1,002 | Add a README & log at export | ### Feature request
The logs of the ONNX export are insightful.
Moreover, it would be good to generate automatically a README/json containing:
* which params were used at export
* For decoders, how to use the obtained `.onnx` models, as it can be a bit involved for somebody who does not use the Optimum ORT integration but wants to rewrite a custom implementation (in whatever language).
### Motivation
Readability for models on the Hub, reproducibility
### Your contribution
/ | https://github.com/huggingface/optimum/issues/1002 | open | [
"feature-request",
"onnx",
"tflite"
] | 2023-04-21T15:31:43Z | 2023-04-21T15:31:43Z | 0 | fxmarty |
huggingface/optimum | 999 | Remove attention mask creation for batch size = 1 when using SDPA | ### Feature request
Some pieces of transformers code are not useful when using SDPA with batch size = 1, for example:
https://github.com/huggingface/transformers/blob/874c7caf1966b1d0ee2749046703ada7a12ed797/src/transformers/models/gpt2/modeling_gpt2.py#L804-L822
https://github.com/huggingface/transformers/blob/874c7caf1966b1d0ee2749046703ada7a12ed797/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L495-L512
Removing them could speed up generation.
An example of how to do this is in https://github.com/huggingface/optimum/pull/998
### Motivation
Remove unnecessary overhead
### Your contribution
/ | https://github.com/huggingface/optimum/issues/999 | closed | [
"feature-request",
"bettertransformer",
"Stale"
] | 2023-04-21T14:41:04Z | 2025-05-29T02:14:32Z | 1 | fxmarty |
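The equivalence being exploited above can be sketched with PyTorch's SDPA directly — for batch size 1 with no padding, a materialized causal mask and `is_causal=True` produce the same output, so building the mask tensor is pure overhead (a sketch of the idea, not the Optimum patch itself):

```python
import torch
import torch.nn.functional as F

# (batch=1, heads, seq_len, head_dim)
q = torch.randn(1, 8, 16, 64)
k, v = torch.randn_like(q), torch.randn_like(q)

# Explicit lower-triangular mask vs. letting the kernel handle causality.
mask = torch.tril(torch.ones(16, 16, dtype=torch.bool))
out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
out_causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)
```

Skipping the mask also lets SDPA dispatch to faster fused kernels that reject explicit `attn_mask` tensors.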