| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/peft | 631 | How to train multiple LoRAs at once? | Hi! I would like to train multiple LoRAs at once (for some reason). Although `requires_grad` is True for all LoRA weight matrices, only the first LoRA weight matrix will calculate the gradient, and the others will not calculate the gradient - and will not be updated. How can I train them in one forward process?
1. I... | https://github.com/huggingface/peft/issues/631 | closed | [
"enhancement"
] | 2023-06-26T09:30:16Z | 2023-08-18T13:41:32Z | null | meteorlin |
huggingface/optimum | 1,135 | Donut document parsing export to onnx does not work. | ### System Info
```shell
optimum==1.8.8
python==3.11.3
system linux
```
### Who can help?
The donut export does not work with the following commands, does anybody know how to get this running or know about the status.
```
optimum-cli export onnx -m naver-clova-ix/donut-base-finetuned-cord-v2 donut_cord2_onnx/... | https://github.com/huggingface/optimum/issues/1135 | closed | [
"bug"
] | 2023-06-26T08:57:01Z | 2023-06-26T10:17:32Z | 3 | casperthuis |
huggingface/peft | 630 | How to switch to P-Tuning v2 | We can find the `P-Tuning v2` in
https://github.com/huggingface/peft/blob/8af8dbd2ec9b4b8f664541e9625f898db7c7c78f/README.md?plain=1#L29
But how can I switch to `P-Tuning v2`? | https://github.com/huggingface/peft/issues/630 | closed | [
"solved"
] | 2023-06-26T08:52:42Z | 2023-08-04T15:03:30Z | null | jiahuanluo |
huggingface/optimum | 1,134 | ValueError: ..set the option `trust_remote_code=True` to remove this error | ### System Info
```shell
- `optimum` version: 1.8.8
- `transformers` version: 4.30.2
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- PyTorch version (GPU?): 2.0.1+cpu (cuda availabe: False)
- Tensorflow version (GPU?): not installed (cuda availabe: NA)
```
###... | https://github.com/huggingface/optimum/issues/1134 | closed | [
"bug"
] | 2023-06-24T12:47:35Z | 2023-07-06T16:38:30Z | 5 | diptenduLF |
huggingface/chat-ui | 322 | Chat using WizardCoder | Hello,
Can you please post an example of .env.local for:
WizardLM/WizardCoder-15B-V1.0 | https://github.com/huggingface/chat-ui/issues/322 | open | [] | 2023-06-23T18:44:07Z | 2023-08-14T20:52:39Z | 2 | vitalyshalumov |
huggingface/chat-ui | 321 | Chat-UI not loading Tailwind colors. | **Problem**
When specifying `PUBLIC_APP_COLOR` in either the `.env` or the `.env.local` file, the chat-UI color does not change regardless of which color is used. Even when `PUBLIC_APP_COLOR=blue` as set in this repository, the chat-UI color does not match with TailwindCSS's blue color palette:
**TailwindCSS bl... | https://github.com/huggingface/chat-ui/issues/321 | closed | [
"question",
"front"
] | 2023-06-23T15:54:43Z | 2023-09-18T13:12:15Z | null | ckanaar |
huggingface/peft | 622 | LoRA results in 4-6% lower performance compared to full fine-tuning | I am working on fine-tuning LLMs (6B to 40B parameters) using the LoRA framework on an instruction tuning dataset comprising of instructions corresponding to ~20 tasks (a mix of factual as well as open-ended tasks). The input to the model consists of a conversation snippet between two individuals along with a task-spec... | https://github.com/huggingface/peft/issues/622 | closed | [
"question"
] | 2023-06-23T10:50:24Z | 2023-07-24T12:12:18Z | null | digvijayingle016 |
huggingface/setfit | 389 | gradient_accumulation | Is there a way in setFitTrainer to change the gradient_accumulation like you can do in the regular Trainer class in TrainingArguments? Also just in general I am looking for tips to make training faster. | https://github.com/huggingface/setfit/issues/389 | closed | [
"question"
] | 2023-06-22T21:18:37Z | 2023-11-11T05:32:34Z | null | zackduitz |
huggingface/datasets | 5,982 | 404 on Datasets Documentation Page | ### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
#... | https://github.com/huggingface/datasets/issues/5982 | closed | [] | 2023-06-22T20:14:57Z | 2023-06-26T15:45:03Z | 2 | kmulka-bloomberg |
huggingface/chat-ui | 317 | Issues when trying to deploy on cPanel (shared hosting) | Hello there,
Is there something special to do to be able to deploy chat-ui on a shared hosting using cPanel?
I tried using the Node.JS Apps Manager as follows

But even when switching my entry point to ser... | https://github.com/huggingface/chat-ui/issues/317 | closed | [
"support"
] | 2023-06-22T17:32:00Z | 2023-09-18T13:12:53Z | 1 | gollumeo |
huggingface/transformers.js | 161 | [Question] whisper vs. ort-wasm-simd-threaded.wasm | While looking into https://cdn.jsdelivr.net/npm/@xenova/transformers@2.2.0/dist/transformers.js I can see a reference to **ort-wasm-simd-threaded.wasm** however that one never seem to be loaded for whisper/automatic-speech-recognition ( https://huggingface.co/spaces/Xenova/whisper-web ) while it always use **ort-wasm-s... | https://github.com/huggingface/transformers.js/issues/161 | open | [
"question"
] | 2023-06-22T06:41:31Z | 2023-08-15T16:36:01Z | null | jozefchutka |
huggingface/datasets | 5,975 | Streaming Dataset behind Proxy - FileNotFoundError | ### Describe the bug
When trying to stream a dataset i get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I hav... | https://github.com/huggingface/datasets/issues/5975 | closed | [] | 2023-06-21T19:10:02Z | 2023-06-30T05:55:39Z | 9 | Veluchs |
huggingface/transformers.js | 158 | [Question] How do I use this library with ts-node? | I have a non-Web/browser-based project that uses TypeScript with ts-node.
The "pipeline" function attempts to use the JavaScript Fetch API, which is not included with NodeJS, and the code therefore fails with an error: "fetch is not defined."
The "node-fetch" package doesn't seem to provide a compatible API.
| https://github.com/huggingface/transformers.js/issues/158 | open | [
"question"
] | 2023-06-21T17:42:11Z | 2023-08-17T13:20:51Z | null | moonman239 |
huggingface/chat-ui | 314 | 500 Internal Error | 
| https://github.com/huggingface/chat-ui/issues/314 | closed | [
"question",
"support"
] | 2023-06-21T08:58:52Z | 2023-06-22T13:13:57Z | null | kasinadhsarma |
huggingface/datasets | 5,971 | Docs: make "repository structure" easier to find | The page https://huggingface.co/docs/datasets/repository_structure explains how to create a simple repository structure without a dataset script.
It's the simplest way to create a dataset and should be easier to find, particularly on the docs' first pages. | https://github.com/huggingface/datasets/issues/5971 | open | [
"documentation"
] | 2023-06-21T08:26:44Z | 2023-07-05T06:51:38Z | 5 | severo |
huggingface/chat-ui | 313 | MongoDB | I have a free teir MongoDB acount but not sure how to get url plz help | https://github.com/huggingface/chat-ui/issues/313 | closed | [
"support"
] | 2023-06-21T07:47:18Z | 2023-06-23T08:34:42Z | 5 | Toaster496 |
huggingface/peft | 607 | trainer with multi-gpu | I want to use trainer.predict to predict datasets by multi-gpu, but actually I only use single one gpu
when I print Seq2SeqTrainingArguments , I get

It shows 8 gpu
I check my code, when I load model, I find somethin... | https://github.com/huggingface/peft/issues/607 | closed | [
"question"
] | 2023-06-20T08:58:37Z | 2023-07-28T15:03:31Z | null | hrdxwandg |
huggingface/chat-ui | 311 | Unable to build with Docker | Hey,
I'm trying to create a docker container with Chat-Ui but i'm facing a wall.
I cloned this repo in a folder on a server and modified the `.env` file, thinking that it would be easy to deploy a docker container out of it but I could not be more wrong !
After trying to build my container with `docker build -t c... | https://github.com/huggingface/chat-ui/issues/311 | closed | [
"support"
] | 2023-06-19T15:11:36Z | 2023-09-18T13:14:04Z | 1 | samichaignonmejai |
huggingface/chat-ui | 310 | Dockerfile issue : can't modify .env.local before building the docker | Hey, I'm having an issue building chat-ui dockerfile.
Indeed, i have to point my DB and my endpoints (or my HF token) in the .env.local file, but the file is built after running the `npm install`, therefore I can't modify my .env.local before building my Docker.
The issues are that both my connection with mongoDB and... | https://github.com/huggingface/chat-ui/issues/310 | open | [
"support"
] | 2023-06-19T10:48:04Z | 2023-07-05T03:09:16Z | 1 | samichaignonmejai |
huggingface/chat-ui | 309 | 'Task not found in this model' when running another model | Hello there,
I tried to change the original model to guanaco-33d (also tried with the 65-b) but I always end up having the error "Task not found in this model".
Here's what I changed in the .env:
```.env
MODELS=`[
{
"name": "timdettmers/guanaco-33b",
"datasetName": "timdettmers/openassistant-gua... | https://github.com/huggingface/chat-ui/issues/309 | closed | [
"support",
"models"
] | 2023-06-19T09:42:41Z | 2023-06-23T12:27:50Z | 1 | gollumeo |
huggingface/chat-ui | 308 | 'Task not found' when trying to use the guacano-33b model | Hello there,
I tried to change the original model, so my team can work with the guanaco-33b model. But now, I always end up having "Task not found for this model" errors.
Here's what I changed on the .env:
```.env
MODELS=`[
{
"name": "timdettmers/guanaco-33b",
"datasetName": "timdettmers/opena... | https://github.com/huggingface/chat-ui/issues/308 | closed | [] | 2023-06-19T09:38:55Z | 2023-06-19T09:39:08Z | 0 | gollumeo |
huggingface/chat-ui | 307 | Add API endpoints documentation | We want to make it easy for people to build cool apps on top of chat-ui, and this requires API specs that are easily accessible.
I'm not sure what tools are available in the sveltekit ecosystem for this. My first guess would be to generate an openAPI spec somehow from our server endpoints (or do it manually if that ... | https://github.com/huggingface/chat-ui/issues/307 | open | [
"documentation",
"enhancement",
"back",
"p2"
] | 2023-06-19T09:08:19Z | 2024-05-29T13:43:10Z | 5 | nsarrazin |
huggingface/api-inference-community | 295 | What is the ratelimit for inference api for pro users? | What is the rate limit for inference API for pro users?
Also can we use the endpoint for prod, which makes 3 to 10 RPS? | https://github.com/huggingface/api-inference-community/issues/295 | closed | [] | 2023-06-18T07:17:23Z | 2023-06-19T09:01:02Z | null | bigint |
huggingface/chat-ui | 304 | Code blocks | How do code blocks like img attached work under the hood?
Is it the model that generates ``` & it gets detected and converted to code?
Or is it the UI/Backend that detects code and converts it to look like a code block?
<img width="434" alt="Screenshot 2023-06-17 at 3 26 39 PM" src="https://github.com/huggingfac... | https://github.com/huggingface/chat-ui/issues/304 | closed | [
"question"
] | 2023-06-17T13:27:20Z | 2023-09-18T13:17:47Z | null | Muennighoff |
huggingface/optimum | 1,118 | Corrupted-tflite-weights while getting a model from huggingface | ### System Info
```shell
System: MacOS
Onnx: 1.14
tensorflow: 2.11
While converting a model from hugging face to tflite using huggingface-cli, the model conversion ran okay, but later in inferencing(in python and on edge-device), the model started producing random results, as if it wasn't trained at all.
Virt... | https://github.com/huggingface/optimum/issues/1118 | open | [
"bug"
] | 2023-06-16T18:56:06Z | 2023-06-19T05:18:10Z | 1 | saurabhkumar8112 |
huggingface/pytorch-pretrained-BigGAN | 20 | Is the model trained on truncated noise? What was input noise vector characteristics for training? | Hi,
I have noticed in the "utils.py" line 32, you truncated the normal noise in the range [-2,2] by this line of code:
`values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state).astype(np.float32)`
Could you please let me know whether the pre-trained model is also trained using this truncated... | https://github.com/huggingface/pytorch-pretrained-BigGAN/issues/20 | open | [] | 2023-06-16T08:02:52Z | 2023-06-16T08:02:52Z | null | MHVali |
huggingface/chat-ui | 301 | Error when deploying on a distant server : Cannot find base config file "./.svelte-kit/tsconfig.json" | Hey,
I'm having troubles deploying HuggingChat on a distant server, when I run HuggingChat, I get the following error :
```
ai@1.0.0 start-chat-ui
> cd ../chat-ui && npm run dev -- --host 127.0.0.1
> chat-ui@0.3.0 dev
> vite dev --host 127.0.0.1
▲ [WARNING] Cannot find base config file "./.svelte-kit/ts... | https://github.com/huggingface/chat-ui/issues/301 | closed | [
"support"
] | 2023-06-15T19:55:36Z | 2023-06-19T10:50:26Z | 2 | samichaignonmejai |
huggingface/transformers.js | 150 | [Question] How to use transformers.js like the python sentence_transformers library? | Hello all,
Thanks for this great library. I've just discovered it and I'm familiar with the python sentence_transformers module. I know from experience that sentence_transformers wraps a lot of the complexity compared to using transformers directly.
Can you point to an example of using this to replace python's se... | https://github.com/huggingface/transformers.js/issues/150 | closed | [
"question"
] | 2023-06-15T15:30:49Z | 2023-06-18T15:17:04Z | null | davidtbo |
huggingface/chat-ui | 299 | Using HuggingChat in a JavaScript/node.js setting? | Hi, I'm not sure whether this is relevant here, but I'd like to use the HuggingChat in a personal web design project, and I'd like to access it through REST/axios, similar to this [here](https://stackoverflow.com/questions/75714587/node-js-turn-hugging-face-image-response-to-buffer-and-send-as-a-discord-attac) (stable ... | https://github.com/huggingface/chat-ui/issues/299 | closed | [] | 2023-06-15T02:59:29Z | 2023-09-18T13:19:32Z | 3 | VatsaDev |
huggingface/chat-ui | 297 | Is there a way to deploy without the HF token ? | I'm trying to use chat-ui with my own endpoints and I would like to know if I can get rid of the HF_ACCESS_TOKEN variable and also allow to run every model I want.
I tried to modify the TS in modelEndpoint.ts and model.ts but I can't figure how to run it independently to HF (I want it offline), here are the parts I... | https://github.com/huggingface/chat-ui/issues/297 | closed | [
"support"
] | 2023-06-14T12:11:04Z | 2023-06-15T09:52:39Z | 2 | samichaignonmejai |
huggingface/chat-ui | 296 | Issue when deploying model : Error in 'stream': 'stream' is not supported for this model | I'm trying to use bigscience/bloom-560m with chat-ui
I already have an API for the model and it's working well, same for chat-ui when I use my HF token but i get the following error message when I launch a request to my bloom-560m API from chat-ui :
```
Could not parse last message {"error":["Error in `stream`: ... | https://github.com/huggingface/chat-ui/issues/296 | closed | [
"support",
"models"
] | 2023-06-14T09:04:07Z | 2023-06-19T10:57:01Z | 2 | samichaignonmejai |
huggingface/datasets | 5,951 | What is the Right way to use discofuse dataset?? | [Click here for Dataset link](https://huggingface.co/datasets/discofuse/viewer/discofuse-wikipedia/train?row=6)
**Below is the following way, as per my understanding , Is it correct :question: :question:**
The **columns/features from `DiscoFuse dataset`** that will be the **input to the `encoder` and `decoder`** ar... | https://github.com/huggingface/datasets/issues/5951 | closed | [] | 2023-06-14T08:38:39Z | 2023-06-14T13:25:06Z | null | akesh1235 |
huggingface/chat-ui | 295 | Facing issue for using custom model deployed locally on flask | I have a chat model which responds on
```
@app.route("/get")
#function for the bot response
def get_bot_response():
userText = request.args.get('msg')
data = T.getResponse(userText)
return str(data)
```
I'm not sure about the configuration but I have added `MODELS=[{"name": "mymodel", "endpoints"... | https://github.com/huggingface/chat-ui/issues/295 | closed | [
"support"
] | 2023-06-14T08:20:41Z | 2023-07-24T10:53:41Z | 6 | awsum0225 |
huggingface/optimum | 1,106 | Onnxruntime support for multiple modalities model types | ### Feature request
Add support for layout and multi-modal models (e.g. LayoutLM, LayoutLMv3, LILT) to the ORTModels.
### Motivation
ORTModels allows to interact with onnxruntime models in the same way as transformers API, which is very convenient, as optimum is a part of huggingface ecosystem and the compatib... | https://github.com/huggingface/optimum/issues/1106 | open | [
"feature-request",
"onnxruntime"
] | 2023-06-13T14:30:10Z | 2023-06-14T11:10:49Z | 0 | mariababich |
huggingface/optimum | 1,105 | IO Binding for ONNX Non-CUDAExecutionProviders | ### Feature request
When using use_io_binding=True with TensorrtExecutionProvider, a warning appears :
```
No need to enable IO Binding if the provider used is not CUDAExecutionProvider. IO Binding will be turned off.
```
I don't understand the reason for this, as data movement optimization should also work f... | https://github.com/huggingface/optimum/issues/1105 | open | [
"help wanted",
"onnxruntime"
] | 2023-06-13T14:11:31Z | 2023-09-26T11:47:17Z | 5 | cyang49 |
huggingface/datasets | 5,946 | IndexError Not Solving -> IndexError: Invalid key: ?? is out of bounds for size 0 or ?? | ### Describe the bug
in <cell line: 1>:1 │
│ │
│ /usr/local/lib/python3.10/dist-packages/transformers/trainer.py:1537 in train ... | https://github.com/huggingface/datasets/issues/5946 | open | [] | 2023-06-13T07:34:15Z | 2023-07-14T12:04:48Z | 6 | syngokhan |
huggingface/safetensors | 273 | Issue with Loading Model in safetensors Format | ### System Info
- `transformers` version: 4.30.1
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.11.3
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (... | https://github.com/huggingface/safetensors/issues/273 | closed | [
"Stale"
] | 2023-06-12T21:25:33Z | 2024-03-08T13:28:30Z | 11 | yachty66 |
huggingface/transformers.js | 144 | Question-Answer Examples | Ca you please send us an example of question-answer please | https://github.com/huggingface/transformers.js/issues/144 | closed | [
"question"
] | 2023-06-09T21:54:37Z | 2023-06-09T22:59:17Z | null | Zenyker |
huggingface/optimum | 1,095 | Installation issue on Openvino NNcf | ### System Info
```shell
LINUX WSL 2
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
OPTIMUM
Name: optimum
Version: 1.8.6
Summary: Optimum Library is an extension of the Hugging Face Transformers library, providing a framework to integrate third-party lib... | https://github.com/huggingface/optimum/issues/1095 | closed | [
"bug"
] | 2023-06-09T09:55:45Z | 2024-01-05T11:10:06Z | 5 | DebayanChakraborty |
huggingface/transformers.js | 140 | [Question] OrtRun error code 6 with a longer string for question-answering | Why do I keep running into an OrtRun error code 6 with a longer string for question-answering task:
`const result = await model(question, context, {
padding: true,
truncation: true,
});
`
Error:
`
models.js:158 An error occurred during model execution: "Error: failed to call OrtRun... | https://github.com/huggingface/transformers.js/issues/140 | closed | [
"bug",
"question"
] | 2023-06-09T04:07:28Z | 2023-07-11T11:07:26Z | null | iamfiscus |
huggingface/datasets | 5,931 | `datasets.map` not reusing cached copy by default | ### Describe the bug
When I load the dataset from local directory, it's cached copy is picked up after first time. However, for `map` operation, the operation is applied again and cached copy is not picked up. Is there any way to pick cached copy instead of processing it again? The only solution I could think of was... | https://github.com/huggingface/datasets/issues/5931 | closed | [] | 2023-06-07T09:03:33Z | 2023-06-21T16:15:40Z | 1 | bhavitvyamalik |
huggingface/chat-ui | 282 | OpenID login | How to get providerURL, client ID and client token to create azure openid login????? | https://github.com/huggingface/chat-ui/issues/282 | closed | [
"support"
] | 2023-06-06T10:45:46Z | 2023-06-19T09:38:34Z | 1 | sankethgadadinni |
huggingface/transformers.js | 137 | [Question] Failed to fetch onnx model when to use AutoModel.from_pretrained | **The code here:**
```
import { AutoModel, AutoTokenizer } from '@xenova/transformers';
const modelPath = 'Xenova/distilgpt2'
let tokenizer = await AutoTokenizer.from_pretrained(modelPath); // **successful to fetch model**
let model = await AutoModel.from_pretrained(modelPath); // **failed to fetch model**
... | https://github.com/huggingface/transformers.js/issues/137 | closed | [
"question"
] | 2023-06-06T02:03:41Z | 2023-06-20T13:24:37Z | null | peter-up |
huggingface/transformers.js | 136 | [Question] Using CLIP for simple image-text similarity | I'm trying to get a simple image-text similarity thing working with CLIP, and I'm not sure how to do it, or whether it's currently supported with Transformers.js outside of the zero-shot image classification pipeline.
Is there a code example somewhere to get me started? Here's what I have so far:
```js
import { ... | https://github.com/huggingface/transformers.js/issues/136 | closed | [
"question"
] | 2023-06-05T14:24:56Z | 2023-06-06T13:35:45Z | null | josephrocca |
huggingface/diffusers | 3,669 | General question: what are the steps to debug if the image produced is just wrong? | I have a lora(lycoris) that I have tested with A1111's webui and I'm pretty happy with the result. When I tried to use it with `diffusers` it just give me corrupted image. The lora brings some desired effect (like white background), but the overall image is just not right.
I have included some personal code to use l... | https://github.com/huggingface/diffusers/issues/3669 | closed | [
"stale"
] | 2023-06-05T01:44:49Z | 2023-07-13T15:03:51Z | null | wangdong2023 |
huggingface/chat-ui | 275 | web search hallucination and prompt results | Hello, great job building web search module. Just a few things i noticed using it for the past hours.
1- It does connect to the web perfectly.
2- It tend to take only the first page result and not contextualize enough the data, trying to mix it with the model data and it ends up destroying the final output. So maybe ... | https://github.com/huggingface/chat-ui/issues/275 | open | [] | 2023-06-02T23:09:11Z | 2023-06-05T08:36:41Z | 1 | Billyroot |
huggingface/peft | 537 | Where is the PeftModel weights stored? | ## expect behavior
I am going to check if the model (mt0-xxl [13B](https://huggingface.co/bigscience/mt0-xxl)) weights have been updated.
Could you tell me how to check the weights of the model original before using peft?
How to check loaded Lora Module weights when using the peft?
## script
modified from [this... | https://github.com/huggingface/peft/issues/537 | closed | [] | 2023-06-02T09:10:09Z | 2023-07-10T15:03:40Z | null | dsj96 |
huggingface/chat-ui | 273 | Documentation about how to configure custom model endpoints is missing | It seems it has been removed in https://github.com/huggingface/chat-ui/commit/fae93d9fc3be9a39d8efd9ab9993dea13f0ae844. | https://github.com/huggingface/chat-ui/issues/273 | closed | [
"documentation"
] | 2023-06-01T19:37:44Z | 2023-06-19T08:59:15Z | 4 | djmaze |
huggingface/optimum | 1,078 | [SAM] Split encoder and mask decoder into separate .onnx files | ### Feature request
Currently, exporting SAM models with optimum results in a single .onnx file (https://huggingface.co/Xenova/sam-vit-base/tree/main/onnx). It would be great if we could add an option to separate the encoder and decoder into separate onnx files (like traditional seq2seq models).
Example SAM expor... | https://github.com/huggingface/optimum/issues/1078 | closed | [] | 2023-05-31T10:47:19Z | 2023-08-24T16:05:39Z | 8 | xenova |
huggingface/diffusers | 3,602 | What is the default for VAE option? | If "VAE" is not specified for "Stable Diffusion," what is the default applied? | https://github.com/huggingface/diffusers/issues/3602 | closed | [] | 2023-05-29T15:42:19Z | 2023-06-08T10:30:27Z | null | Michi-123 |
huggingface/transformers.js | 125 | [Question] Why running transformer in js is faster than python? | I created a repo to test how to use transformers.
https://github.com/pitieu/huggingface-transformers
I was wondering why is it that running the same models in javascript is faster than running them in python?
Is `Xenova/vit-gpt2-image-captioning` optimized somehow compared to `nlpconnect/vit-gpt2-image-captioning`... | https://github.com/huggingface/transformers.js/issues/125 | closed | [
"question"
] | 2023-05-28T05:23:05Z | 2023-07-16T17:21:39Z | null | pitieu |
huggingface/safetensors | 258 | ONNX has just become twice as fast as before. Can SafeTensors also achieve that? | Here are some announcements and technical details. It's nice to see that they are making significant improvements. Could some of that be useful and implemented for SafeTensors?
https://devblogs.microsoft.com/directx/dml-stable-diffusion/
https://www.tomshardware.com/news/nvidia-geforce-driver-promises-doubled-stabl... | https://github.com/huggingface/safetensors/issues/258 | closed | [] | 2023-05-27T12:23:01Z | 2023-06-07T09:26:24Z | 2 | WEBPerformace |
huggingface/datasets | 5,906 | Could you unpin responses version? | ### Describe the bug
Could you unpin [this](https://github.com/huggingface/datasets/blob/main/setup.py#L139) or move it to test requirements? This is a testing library and we also use it for our tests as well. We do not want to use a very outdated version.
### Steps to reproduce the bug
could not install this librar... | https://github.com/huggingface/datasets/issues/5906 | closed | [] | 2023-05-26T20:02:14Z | 2023-05-30T17:53:31Z | 0 | kenimou |
huggingface/datasets | 5,905 | Offer an alternative to Iterable Dataset that allows lazy loading and processing while skipping batches efficiently | ### Feature request
I would like a way to resume training from a checkpoint without waiting for a very long time when using an iterable dataset.
### Motivation
I am training models on the speech-recognition task. I have very large datasets that I can't comfortably store on a disk and also quite computationally... | https://github.com/huggingface/datasets/issues/5905 | open | [
"enhancement"
] | 2023-05-26T12:33:02Z | 2023-06-15T13:34:18Z | 1 | bruno-hays |
huggingface/chat-ui | 263 | [question] Where should we discuss chat-ui roadmap? | Is there a forum to discuss future features?
I need to implement some sort of UI component for answer references. Something like perplexity.ai "pills" under the answer.
I guess this is useful for others and I would like to discuss how should I implement such thing before hand.
- should I use pills?
- should I cr... | https://github.com/huggingface/chat-ui/issues/263 | closed | [] | 2023-05-24T13:17:47Z | 2023-05-26T02:22:29Z | 1 | fredguth |
huggingface/optimum | 1,069 | llama-7b inference report Failed to allocate memory for requested buffer of size 180355072 | ### System Info
```shell
optimum 1.8.5, 32g v100
```
### Who can help?
@JingyaHuang
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (gi... | https://github.com/huggingface/optimum/issues/1069 | closed | [
"bug",
"onnxruntime"
] | 2023-05-23T09:50:36Z | 2023-06-19T05:05:01Z | 6 | drxmy |
huggingface/chat-ui | 258 | Language change during chat | While writing in German, it answers in English. Before it always used to work...
Photo:

| https://github.com/huggingface/chat-ui/issues/258 | closed | [
"support"
] | 2023-05-23T08:41:44Z | 2023-07-24T11:46:33Z | 2 | Mbuni21 |
huggingface/transformers.js | 122 | [Question] Basic Whisper Inference vs Speed of Demo Site | Hello, I love the library~ thanks for making it!
I am trying to use the Whisper inference method displayed on the demo site, but it's running really slow,
It's taking me about 20 seconds to run it locally vs a few seconds on the demo site.
Is there some magic behind the scenes that I'm missing?
I'm just runn... | https://github.com/huggingface/transformers.js/issues/122 | closed | [
"question"
] | 2023-05-23T05:55:40Z | 2023-06-10T22:41:15Z | null | jpg-gamepad |
huggingface/datasets | 5,880 | load_dataset from s3 file system through streaming can't not iterate data | ### Describe the bug
I have a JSON file in my s3 file system(minio), I can use load_dataset to get the file link, but I can't iterate it
<img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0">
<img width="1144" alt="image" src="https://github.c... | https://github.com/huggingface/datasets/issues/5880 | open | [] | 2023-05-22T07:40:27Z | 2023-05-26T12:52:08Z | 4 | janineguo |
huggingface/chat-ui | 256 | changing model to 30B in the .env file | here is the model am using which is 12B i want to change to 30B:
defual one:
`MODELS=`[
{
"name": "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5",
"datasetName": "OpenAssistant/oasst1",
"description": "A good alternative to ChatGPT",
"websiteUrl": "https://open-assistant.io",
"userMessage... | https://github.com/huggingface/chat-ui/issues/256 | closed | [
"support"
] | 2023-05-21T18:30:04Z | 2023-06-19T09:34:10Z | 5 | C0deXG |
huggingface/transformers.js | 119 | [Question] A WebGPU-accelerated ONNX inference run-time | Is it possible to use https://github.com/webonnx/wonnx with transformersjs?
| https://github.com/huggingface/transformers.js/issues/119 | closed | [
"question"
] | 2023-05-21T06:11:20Z | 2024-10-18T13:30:07Z | null | ansarizafar |
huggingface/chat-ui | 255 | how to prompt it | how can i prompt this model to act certain way like be `your food assistant and you will provide the best food assistant` how can i prompt it because it all around the place when i run this model :( | https://github.com/huggingface/chat-ui/issues/255 | closed | [
"support"
] | 2023-05-20T21:41:46Z | 2023-06-01T13:00:48Z | 1 | C0deXG |
huggingface/setfit | 376 | How to get the number of parameters in a SetFitModel object? | The context is I would like to compare the parameter sizes of different models. Is there a way to count the model parameters in a SetFitModel object? Something like model.count_params() in keras. Thanks! | https://github.com/huggingface/setfit/issues/376 | closed | [
"question"
] | 2023-05-19T23:58:53Z | 2023-12-05T14:47:55Z | null | yihangit |
huggingface/chat-ui | 252 | Users can't get passed "Start Chatting" modal - ethicsModelAcceptedAt not getting set? | <img width="836" alt="image" src="https://github.com/huggingface/chat-ui/assets/1438064/28a3d7f1-65e4-4b61-a82b-ffc78eb3e074">
let me know what more info you need to debug. just keeps redirecting back to home and never clears the modal. | https://github.com/huggingface/chat-ui/issues/252 | open | [
"support",
"p2"
] | 2023-05-19T19:33:33Z | 2024-01-26T08:44:39Z | 7 | cfregly |
huggingface/optimum | 1,061 | mpt model support? | ### Feature request
Can you please add mpt model support to this library?
### Motivation
just testing things, and mpt seems to be unsupported by multiple huggingface libraries
### Your contribution
im just getting started, im not sure if ill be of any help | https://github.com/huggingface/optimum/issues/1061 | closed | [] | 2023-05-19T09:28:28Z | 2023-07-06T16:37:01Z | 7 | sail1369 |
huggingface/datasets | 5,875 | Why split slicing doesn't behave like list slicing ? | ### Describe the bug
If I want to get the first 10 samples of my dataset, I can do :
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised :
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> V... | https://github.com/huggingface/datasets/issues/5875 | closed | [
"duplicate"
] | 2023-05-19T07:21:10Z | 2024-01-31T15:54:18Z | 1 | astariul |
huggingface/chat-ui | 246 | Documentation Request - Clarity around login flow outside of HuggingFace context | Could the docs (if not the code) be improved to make it clear how to:
- run this without requiring users to authenticate
- handle authentication via a 3rd party cloud (Azure, AWS, GCP, etc)
- run this with an arbitrary 3rd party model (OpenAI, Rasa, etc)
I originally thought this was the purpose of `OPENID_CLIE... | https://github.com/huggingface/chat-ui/issues/246 | closed | [
"documentation",
"enhancement"
] | 2023-05-19T02:57:56Z | 2023-06-01T06:26:49Z | 3 | hack-r |
huggingface/chat-ui | 245 | Strange DNS Behavior | Apparently some part of this leverages DNS right away when you run it, but it doesn't work on any privacy-respecting DNS resolvers. I can demonstrate this via toggling firewall options, resolv.conf, or packet inspection, but I'm not sure what in the code is related to this or how to fix it. | https://github.com/huggingface/chat-ui/issues/245 | closed | [] | 2023-05-19T01:19:11Z | 2023-05-19T02:53:11Z | 1 | hack-r |
huggingface/optimum | 1,057 | owlvit is not supported | ### Feature request
The conversion is supported in transfomers[onnx], but not yet supported in optimum.
### Motivation
convert open world vocabulary to onnx model for faster inference.
### Your contribution
If there is a guideline on how to do it, I think I can help | https://github.com/huggingface/optimum/issues/1057 | closed | [] | 2023-05-17T07:01:39Z | 2023-07-12T13:20:52Z | 11 | darwinharianto |
huggingface/datasets | 5,870 | Behaviour difference between datasets.map and IterableDatasets.map | ### Describe the bug
All the examples in all the docs mentioned throughout huggingface datasets correspond to datasets object, and not IterableDatasets object. At one point of time, they might have been in sync, but the code for datasets version >=2.9.0 is very different as compared to the docs.
I basically need to ... | https://github.com/huggingface/datasets/issues/5870 | open | [] | 2023-05-16T14:32:57Z | 2023-05-16T14:36:05Z | 1 | llStringll |
huggingface/chat-ui | 232 | Possible performance regression in the production model? | I have been using it for 5 days , it could write simple codes for me but now it can't ;/ | https://github.com/huggingface/chat-ui/issues/232 | closed | [
"bug",
"question"
] | 2023-05-16T08:39:19Z | 2023-09-11T09:30:26Z | null | overvalue |
huggingface/chat-ui | 230 | Task not found for this model | I tried running code on my local system and updated the model name in the .env file from "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5" to "OpenAssistant/oasst-sft-6-llama-30b-xor" and now for every prompt I am getting "Task not found for this model" | https://github.com/huggingface/chat-ui/issues/230 | closed | [
"support"
] | 2023-05-16T05:18:25Z | 2024-12-13T01:28:06Z | 4 | newway-anshul |
huggingface/datasets | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | ### Feature request
Hi,
I have a huge cached file using `map`(over 500GB), and I want to change an attribution of each element, is there possible to do it using some method instead of re-generating, because `map` takes over 24 hours
### Motivation
For large datasets, I think it is very important because we always f... | https://github.com/huggingface/datasets/issues/5868 | closed | [
"enhancement"
] | 2023-05-16T03:45:42Z | 2023-05-17T11:21:36Z | 2 | zyh3826 |
huggingface/chat-ui | 225 | Special tokens for user and assistant turns? | Hi,
I've been checking the example that used `OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5` model. This model uses the following tokens to specify the beginning of the user and assistant:
```
"userMessageToken": "<|prompter|>",
"assistantMessageToken": "<|assistant|>"
```
I'm trying to run `bigcode/starco... | https://github.com/huggingface/chat-ui/issues/225 | closed | [] | 2023-05-15T10:32:06Z | 2023-05-15T11:06:23Z | 3 | frandominguezl |
huggingface/chat-ui | 218 | Support for Contrastive Search? | Context: https://huggingface.co/blog/introducing-csearch
Passing only:
"penalty_alpha":0.6,
"top_k": 4,
Does not seem to work, as truncate, and temperature is still required. When passing this:
<pre>
"parameters": {
"temperature": 0.9,
"penalty_alpha":0.6,
"top_k": 4,
"trunca... | https://github.com/huggingface/chat-ui/issues/218 | closed | [] | 2023-05-13T22:02:37Z | 2023-09-18T13:27:20Z | 2 | PhNyx |
huggingface/setfit | 374 | Resolving confusion between fine-grained classes | My dataset has 131 classes. Some of them are fine-grained, for example:
- Flag fraud on the account -> **Open Dispute**
- Find out if there is a fraud hold on my debit card ->**Dispute Inquiry**
The model is getting confused between such classes. I have roughly 20 samples per class in my dataset and I am using `... | https://github.com/huggingface/setfit/issues/374 | closed | [
"question"
] | 2023-05-13T10:13:15Z | 2023-11-24T15:09:55Z | null | vahuja4 |
huggingface/transformers.js | 108 | [Question] Problem when converting an embedding model. | A thirst I would like to thank everyone for providing and maintaining this library. It makes working with ML in JavaScript a breeze.
I was working with the embedding models and tried to convert a multilingual model [("paraphrase-multilingual-MiniLM-L12-v2")](https://huggingface.co/sentence-transformers/paraphrase-mul... | https://github.com/huggingface/transformers.js/issues/108 | closed | [
"question"
] | 2023-05-13T09:54:12Z | 2023-05-15T17:24:16Z | null | falcon027 |
huggingface/setfit | 372 | Update Previous Model with New Categories | Is there a way to add categories based on new data?
For example - Initially I trained a model with 5 categories and saved the model. I now have new data that I want to feed into the model but this new data has 8 categories. Would I have to start from scratch or can I use the original model I trained?
Thank you! | https://github.com/huggingface/setfit/issues/372 | closed | [
"question"
] | 2023-05-12T21:22:12Z | 2023-11-24T15:10:46Z | null | ronils428 |
huggingface/dataset-viewer | 1,174 | Add a field, and rename another one, in /opt-in-out-urls | The current response for /opt-in-out-urls is:
```
{
"urls_columns": ["url"],
"has_urls_columns": true,
"num_opt_in_urls": 0,
"num_opt_out_urls": 4052,
"num_scanned_rows": 12452281,
"num_urls": 12452281
}
```
I think we should:
- rename `num_urls` into `num_scanned_urls`
- add `num_rows` wit... | https://github.com/huggingface/dataset-viewer/issues/1174 | closed | [
"question"
] | 2023-05-12T13:15:40Z | 2023-05-12T13:54:14Z | null | severo |
huggingface/chat-ui | 207 | MongoParseError: Invalid scheme | I tried to run chat-ui on my mac (Intel 2020, MacOS Ventura 13.3.1), and I get the following error:
```bash
(base) thibo@mac-M:~/Documents/chat-ui$ npm install
added 339 packages, and audited 340 packages in 39s
72 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
... | https://github.com/huggingface/chat-ui/issues/207 | closed | [] | 2023-05-12T07:32:22Z | 2023-05-12T08:26:39Z | 1 | thiborose |
huggingface/chat-ui | 202 | Help wanted: Installing `@huggingface` package from NPM registry | 👋🏻
Sorry if I am opening a dumb issue but I was just looking into fixing some UI issues and not entirely sure how to run this project locally. I've created a `.env.local` with:
```
MONGODB_URL=
HF_ACCESS_TOKEN=XXX
```
Haven't actually set the `MONGODB_URL` but did create an access token for HF.
Running i... | https://github.com/huggingface/chat-ui/issues/202 | closed | [] | 2023-05-11T17:38:24Z | 2023-05-12T11:07:10Z | 5 | eertmanhidde |
huggingface/datasets | 5,841 | Abusurdly slow on iteration | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_d... | https://github.com/huggingface/datasets/issues/5841 | closed | [] | 2023-05-11T08:04:09Z | 2023-05-15T15:38:13Z | 4 | fecet |
huggingface/optimum | 1,046 | Make torchvision optional? | ### Feature request
Currently torchvision is a required dependency
https://github.com/huggingface/optimum/blob/22e4fd6de3ac5e7780571570f962947bd8777fd4/setup.py#L20
### Motivation
I only work on text so I don't need vision support
### Your contribution
I am sure the change would be more difficult than just "rem... | https://github.com/huggingface/optimum/issues/1046 | closed | [] | 2023-05-10T10:49:18Z | 2023-05-12T23:05:46Z | 4 | BramVanroy |
huggingface/datasets | 5,838 | Streaming support for `load_from_disk` | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets that are stored in object stores are very large and being able to stream the data ... | https://github.com/huggingface/datasets/issues/5838 | closed | [
"enhancement"
] | 2023-05-10T06:25:22Z | 2024-10-28T14:19:44Z | 12 | Nilabhra |
huggingface/datasets | 5,834 | Is uint8 supported? | ### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way ... | https://github.com/huggingface/datasets/issues/5834 | closed | [] | 2023-05-09T17:31:13Z | 2023-05-13T05:04:21Z | 5 | ryokan0123 |
huggingface/transformers.js | 104 | [Question] npm install error in windows | I install transformers.js with npm but I get an error:
```
2135 info run canvas@2.11.2 install node_modules/canvas node-pre-gyp install --fallback-to-build --update-binary
2136 info run sharp@0.32.1 install node_modules/sharp (node install/libvips && node install/dll-copy && prebuild-install) || (node install/can-... | https://github.com/huggingface/transformers.js/issues/104 | closed | [
"question"
] | 2023-05-06T09:13:41Z | 2023-05-06T12:48:23Z | null | DominguitoLamo |
huggingface/datasets | 5,818 | Ability to update a dataset | ### Feature request
The ability to load a dataset, add or change something, and save it back to disk.
Maybe it's possible, but I can't work out how to do it, e.g. this fails:
```py
import datasets
dataset = datasets.load_from_disk("data/test1")
dataset = dataset.add_item({"text": "A new item"})
dataset.sav... | https://github.com/huggingface/datasets/issues/5818 | open | [
"enhancement"
] | 2023-05-04T01:08:13Z | 2023-05-04T20:43:39Z | 3 | davidgilbertson |
huggingface/datasets | 5,815 | Easy way to create a Kaggle dataset from a Huggingface dataset? | I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset.
While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example:

- [X]... | https://github.com/huggingface/optimum/issues/1024 | open | [
"bug"
] | 2023-05-02T09:42:15Z | 2023-06-12T11:40:23Z | 4 | piegu |
huggingface/datasets | 5,809 | wiki_dpr details for Open Domain Question Answering tasks | Hey guys!
Thanks for creating the wiki_dpr dataset!
I am currently trying to combine wiki_dpr and my own datasets. but I don't know how to make the embedding value the same way as wiki_dpr.
As an experiment, I embeds the text of id="7" of wiki_dpr, but this result was very different from wiki_dpr. | https://github.com/huggingface/datasets/issues/5809 | closed | [] | 2023-04-30T06:12:04Z | 2023-07-21T14:11:00Z | 1 | yulgok22 |
huggingface/datasets | 5,805 | Improve `Create a dataset` tutorial | Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading.
1. In **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (that can be created from directory with data of required f... | https://github.com/huggingface/datasets/issues/5805 | open | [
"documentation"
] | 2023-04-28T13:26:22Z | 2024-07-26T21:16:13Z | 4 | polinaeterna |
huggingface/dataset-viewer | 1,104 | Delete finished jobs immediately? | Currently, finished jobs are deleted after 7 days by an index. See https://github.com/huggingface/datasets-server/blob/259fd092c12d240d9b8d733c965c4b9362e90684/libs/libcommon/src/libcommon/queue.py#L144
But we never use the finished jobs, so:
- we could delete them immediately after finishing
- we could reduce the... | https://github.com/huggingface/dataset-viewer/issues/1104 | closed | [
"question",
"improvement / optimization"
] | 2023-04-28T11:49:10Z | 2023-05-31T12:20:38Z | null | severo |
huggingface/transformers.js | 102 | How to convert Whisper Large v2 | Hello!
How to convert whisper-large-v2 model to onnx?
I'm using this command
`python3.9 -m scripts.convert --model_id whisper-large-v2 --quantize --task automatic-speech-recognition`
But when i try to connect the converted model i get the following error:
`Error: File not found. Could not locate "encode... | https://github.com/huggingface/transformers.js/issues/102 | closed | [
"question"
] | 2023-04-27T13:30:33Z | 2023-05-31T13:18:33Z | null | hotmeatballs |
huggingface/datasets | 5,797 | load_dataset is case sentitive? | ### Describe the bug
load_dataset() function is case sensitive?
### Steps to reproduce the bug
The following two code, get totally different behavior.
1. load_dataset('mbzuai/bactrian-x','en')
2. load_dataset('MBZUAI/Bactrian-X','en')
### Expected behavior
Compare 1 and 2.
1 will download all 52 subsets, sh... | https://github.com/huggingface/datasets/issues/5797 | open | [] | 2023-04-26T18:19:04Z | 2023-04-27T11:56:58Z | 2 | haonan-li |
huggingface/chat-ui | 122 | Add pre-prompt | cc @OlivierDehaene
> Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is neede... | https://github.com/huggingface/chat-ui/issues/122 | closed | [] | 2023-04-26T15:58:55Z | 2023-04-26T16:46:05Z | 1 | coyotte508 |
huggingface/setfit | 367 | Massive Text Embedding Benchmark (MTEB) Leaderboard | https://huggingface.co/spaces/mteb/leaderboard
Can we use all of these with setfit? | https://github.com/huggingface/setfit/issues/367 | closed | [
"question"
] | 2023-04-26T09:18:27Z | 2023-12-05T14:48:55Z | null | vahuja4 |
huggingface/huggingface.js | 165 | Add E2E where the module is downloaded (or linked) to a TS project | To prevent things like #164 | https://github.com/huggingface/huggingface.js/issues/165 | closed | [
"tooling"
] | 2023-04-25T20:23:17Z | 2023-05-07T09:18:47Z | null | coyotte508 |
huggingface/transformers.js | 100 | Whisper on webGPU? | Somewhat related to [this thread](https://github.com/xenova/transformers.js/issues/20).
Is it within scope to implement a webGPU accelerated version of Whisper?
Not sure if this helps, but there is a [C port for Whisper wirh CPU implementation](https://github.com/ggerganov/whisper.cpp), and as mentioned in [this... | https://github.com/huggingface/transformers.js/issues/100 | closed | [
"question"
] | 2023-04-25T09:34:10Z | 2024-10-18T13:30:07Z | null | sandorkonya |
huggingface/optimum | 1,002 | Add a README & log at export | ### Feature request
The logs of the ONNX export are insightful.
Moreover, it would be good to generate automatically a README/json containing:
* which params were used at export
* For decoders, how to use the obtained `.onnx` models, as it can be a bit involved for somebody who does not use the Optimum ORT integr... | https://github.com/huggingface/optimum/issues/1002 | open | [
"feature-request",
"onnx",
"tflite"
] | 2023-04-21T15:31:43Z | 2023-04-21T15:31:43Z | 0 | fxmarty |
huggingface/optimum | 999 | Remove attention mask creation for batch size = 1 when using SDPA | ### Feature request
Some pieces of transformers code are not useful when using SDPA with batch size = 1, for example:
https://github.com/huggingface/transformers/blob/874c7caf1966b1d0ee2749046703ada7a12ed797/src/transformers/models/gpt2/modeling_gpt2.py#L804-L822
https://github.com/huggingface/transformers/blob/87... | https://github.com/huggingface/optimum/issues/999 | closed | [
"feature-request",
"bettertransformer",
"Stale"
] | 2023-04-21T14:41:04Z | 2025-05-29T02:14:32Z | 1 | fxmarty |