| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/setfit | 423 | [Q] How to examine correct/wrong predictions in trainer.evaluate() | Hello,
After doing "metrics = trainer.evaluate()" as shown in the example code, is there a way to examine which rows in the evaluation data set were predicted correctly?
Thanks! | https://github.com/huggingface/setfit/issues/423 | closed | [
"question"
] | 2023-09-25T23:41:53Z | 2023-11-24T13:04:45Z | null | youngjin-lee |
huggingface/chat-ui | 461 | The custom endpoint response doesn't stream even though the endpoint is sending streaming content | @nsarrazin I'm transmitting the streaming response to the chat UI, but it displays all the content simultaneously rather than progressively streaming the text generation part. Can you help me address this issue?
Reference: #380 | https://github.com/huggingface/chat-ui/issues/461 | open | [
"support"
] | 2023-09-25T07:43:57Z | 2023-10-29T11:21:04Z | 2 | nandhaece07 |
huggingface/autotrain-advanced | 279 | How to run AutoTrain Advanced UI locally | How to run AutoTrain Advanced UI locally 😢 | https://github.com/huggingface/autotrain-advanced/issues/279 | closed | [] | 2023-09-25T07:25:51Z | 2024-04-09T03:20:17Z | null | LronDC |
huggingface/transformers.js | 328 | [Question] React.js serve sentence bert in browser keeps reporting models not found. | my code:
```javascript
export const useInitTransformers = () => {
const init = async () => {
// @ts-ignore
env.allowLocalModels = false;
extractor = await pipeline(
"feature-extraction",
"Xenova/all-mpnet-base-v2",
);
};
return { init };
};
```
I'm building a frontend ... | https://github.com/huggingface/transformers.js/issues/328 | closed | [
"question"
] | 2023-09-24T15:51:47Z | 2024-10-18T13:30:11Z | null | bianyuanop |
pytorch/tutorials | 2,569 | 💡 [REQUEST] - <title> | ### 🚀 Describe the improvement or the new tutorial
In the tutorial “A GENTLE INTRODUCTION TO TORCH.AUTOGRAD”, for the gradients of the error w.r.t. parameters (Q w.r.t. a), I think the result should be a 2x2 matrix, not a 2-d vector, according to matrix calculus.
### Existing tutorials on this topic
_No response_
#... | https://github.com/pytorch/tutorials/issues/2569 | closed | [
"question",
"core"
] | 2023-09-24T11:24:53Z | 2023-10-27T19:23:44Z | null | haoyunliang |
pytorch/vision | 7,987 | How to update RegionProposalNetwork loss function in Faster RCNN? | Excuse me if this question is stupid, but I can't seem to figure out how to do this…
I want to update the loss function of the RPN in FasterRCNN. See these lines [here](https://github.com/pytorch/vision/blob/beb4bb706b5e13009cb5d5586505c6d2896d184a/torchvision/models/detection/generalized_rcnn.py#L104-L105), which c... | https://github.com/pytorch/vision/issues/7987 | closed | [] | 2023-09-24T09:16:17Z | 2023-10-05T14:46:37Z | null | darian69 |
pytorch/pytorch | 109,958 | How to compile torch 2.0.1 version from source? | ### 🐛 Describe the bug
While I was using 'git clone --branch v2.0.1 https://github.com/pytorch/pytorch.git & python setup.py develop', the 'Building wheel torch-1.14.0a0+410ce96' version was being built.
### Versions
I also checked the version.txt, it shows '2.0.0a0' which should be the version in v2.0.1 tag bran... | https://github.com/pytorch/pytorch/issues/109958 | open | [
"oncall: releng",
"triaged"
] | 2023-09-24T00:53:04Z | 2023-09-25T11:01:11Z | null | tonylin52 |
huggingface/candle | 944 | Question: How to tokenize text for Llama? | Hello everybody,
How can I tokenize text to use with Llama? I want to fine-tune Llama on my custom data, so how can I tokenize from a String and then detokenize the logits into a String?
I have looked at the Llama example for how to detokenize, but cannot find any clear documentation on how the implementation actuall... | https://github.com/huggingface/candle/issues/944 | closed | [] | 2023-09-23T18:19:56Z | 2023-09-23T23:01:13Z | null | EricLBuehler |
huggingface/transformers.js | 327 | Calling pipeline returns `undefined`. What are possible reasons? | The repository if you need it ▶▶▶ [China Cups](https://github.com/piscopancer/china-cups)
## Next 13.5 / server-side approach
Just started digging into your library. Sorry for stupidity.
### `src/app/api/translate/route.ts` 👇
```ts
import { NextRequest, NextResponse } from 'next/server'
import { PipelineSi... | https://github.com/huggingface/transformers.js/issues/327 | closed | [
"question"
] | 2023-09-23T15:57:24Z | 2023-09-24T06:55:08Z | null | piscopancer |
pytorch/TensorRT | 2,340 | ❓ [Question] Why import torch_tensorrt set log level to info automatically? | ## ❓ Question
The default log level of python is warning.
Why does importing torch_tensorrt set the log level to info automatically?
How can I set the log level back to warning?
```
import logging
import torch_tensorrt
logging.info("INFO")
logging.warning("WARNING")
logging.error("ERROR")
```
stderr outputs:
```
... | https://github.com/pytorch/TensorRT/issues/2340 | open | [
"question",
"No Activity"
] | 2023-09-23T13:51:10Z | 2024-01-01T00:02:44Z | null | KindRoach |
huggingface/optimum | 1,410 | Export TrOCR to ONNX | I was trying to export my fine-tuned TrOCR model to ONNX using the following command. I didn't get any errors, but in the onnx folder only the encoder model is saved.
```
!python -m transformers.onnx --model=model_path --feature=vision2seq-lm onnx/ --atol 1e-2
```
So, regarding this, I have 2 questions.
1. How to save decoder... | https://github.com/huggingface/optimum/issues/1410 | closed | [
"onnx"
] | 2023-09-23T09:19:50Z | 2024-10-15T16:21:52Z | 2 | VallabhMahajan1 |
pytorch/pytorch | 109,880 | [FSDP ]How to convert sharded_state_dict files into full_state_dict offline without distributed process | ### 🚀 The feature, motivation and pitch
Currently, if I use FSDP with 128 gpus and save checkpoints with sharded_state_dict to avoid gathering the full_state_dict on rank0 for saving, there is no way to obtain the full_state_dict ckpt offline.
The only way to obtain full_state_dict is to launch the exact 128GPU d... | https://github.com/pytorch/pytorch/issues/109880 | closed | [
"oncall: distributed",
"triaged",
"module: fsdp"
] | 2023-09-22T13:44:11Z | 2024-05-16T01:16:12Z | null | nxphi47 |
pytorch/tutorials | 2,566 | [BUG] - Per sample gradients using function transforms not working for RNN | ### Add Link
Hello!
I'm working on a optimization algorithm that requires computing the per sample gradients. Assuming the batch size is $N$ and the number of model parameters is $M$, I want to calculate $\partial \log p(\mathbf{x}^{(i)};\theta)/\partial \theta_j$, which is an $N \times M$ matrix. I found the [[PER-S... | https://github.com/pytorch/tutorials/issues/2566 | closed | [
"question"
] | 2023-09-22T02:15:18Z | 2023-10-26T16:03:36Z | null | bnuliujing |
huggingface/chat-ui | 459 | Chats Stop generation button is broken? | whenever I'm using the Chat UI on hf.co/chat, and I press the stop generation button it deletes both the prompt and the response? | https://github.com/huggingface/chat-ui/issues/459 | open | [
"support"
] | 2023-09-21T19:38:38Z | 2023-10-08T00:44:44Z | 4 | VatsaDev |
huggingface/chat-ui | 457 | Custom Models breaking Chat-ui | Setting a custom model in .env.local is now breaking chat-ui for me. @jackielii @nsarrazin
If I start mongo and then run ```npm run dev``` with a .env.local file including only the mongo url, there is no issue.
Then I add the following:
```
MODELS=`[
{
"name": "OpenAssistant/oasst-sft-4-pythia-12b-ep... | https://github.com/huggingface/chat-ui/issues/457 | closed | [
"support"
] | 2023-09-21T11:12:42Z | 2023-09-21T16:03:30Z | 10 | RonanKMcGovern |
huggingface/datasets | 6,252 | exif_transpose not done to Image (PIL problem) | ### Feature request
I noticed that some of my images loaded using PIL have some metadata related to exif that can rotate them when loading.
Since dataset.features.Image uses PIL for loading, the loaded image may be rotated (width and height will be inverted); thus for tasks such as object detection and layoutLM this ca...
"enhancement"
] | 2023-09-21T08:11:46Z | 2024-03-19T15:29:43Z | 2 | rhajou |
pytorch/TensorRT | 2,335 | ❓ [Question] Bert lost a lot of accuracy when using fp16 | ## ❓ Question
BERT Text Classification model run in fp16 gets huge different result compared to fp32
## What you have already tried
<!-- A clear and concise description of what you have already done. -->
## Environment
> Build information about Torch-TensorRT can be found by turning on debug messages
... | https://github.com/pytorch/TensorRT/issues/2335 | closed | [
"question",
"No Activity"
] | 2023-09-21T07:50:12Z | 2024-05-07T06:37:23Z | null | HenryYuen128 |
huggingface/optimum | 1,401 | BUG: running python file called onnx.py causes circular errors. | ### System Info
```shell
latest optimum, python 3.10, linux cpu.
```
### Who can help?
@JingyaHuang, @echarlaix, @michaelbenayoun
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as ... | https://github.com/huggingface/optimum/issues/1401 | open | [
"bug"
] | 2023-09-21T04:12:49Z | 2023-10-05T14:32:40Z | 1 | gidzr |
huggingface/diffusers | 5,124 | How to fine tune checkpoint .safetensor | ### Describe the bug
I tried fine-tuning a model from a checkpoint (i.e. https://civitai.com/models/119202/talmendoxl-sdxl-uncensored-full-model). I converted the checkpoint to diffusers format using this library:
https://github.com/waifu-diffusion/sdxl-ckpt-converter/
The converted model works fine for inference a...
"bug",
"stale"
] | 2023-09-20T22:45:38Z | 2023-11-22T15:06:19Z | null | EnricoBeltramo |
pytorch/text | 2,205 | Declaring _MapStyleDataset inside function makes it unpicklable | ## 🐛 Bug
**Describe the bug**
When trying to use a Dataset that was converted to map-style using `data.functional.to_map_style_dataset`, I encountered the following error message:
> ...
> File "/usr/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
> ForkingPickler(file, protocol).dump(obj)
>... | https://github.com/pytorch/text/issues/2205 | open | [] | 2023-09-20T12:27:34Z | 2023-09-20T12:27:34Z | 0 | AnthoJack |
huggingface/diffusers | 5,118 | how to use controlnet's reference_only function with diffusers? | ### Model/Pipeline/Scheduler description
Can anyone help me to understand how to use controlnet's reference_only function with diffusers?
### Open source status
- [ ] The model implementation is available
- [ ] The model weights are available (Only relevant if addition is not a scheduler).
### Provide useful links fo... | https://github.com/huggingface/diffusers/issues/5118 | closed | [
"stale"
] | 2023-09-20T10:17:53Z | 2023-11-08T15:07:34Z | null | sudip550 |
pytorch/TensorRT | 2,327 | ❓ [Question] dynamic engines & interpolation align_corners=True | ## ❓ Question
<!-- Your question -->
## What you have already tried
I used the latest Docker image with tag 23.08-py3. When converting a model that does interpolation with align_corners=True and dynamic input, I got the error below.
```
RuntimeError: [Error thrown at core/conversion/converters/impl/interpolate.cpp:412] E... | https://github.com/pytorch/TensorRT/issues/2327 | open | [
"question",
"component: converters"
] | 2023-09-20T07:25:34Z | 2023-11-30T10:57:37Z | null | ArtemisZGL |
huggingface/transformers.js | 321 | [Question] Image Embeddings for ViT | Is it possible to get image embeddings using Xenova/vit-base-patch16-224-in21k model? We use feature_extractor to get embeddings for sentences. Can we use feature_extractor to get image embeddings?
```js
const model_id = "Xenova/vit-base-patch16-224-in21k";
const image = await RawImage.read("https://huggingface.co/... | https://github.com/huggingface/transformers.js/issues/321 | closed | [
"question"
] | 2023-09-20T01:22:08Z | 2024-01-13T01:25:03Z | null | hadminh |
huggingface/optimum | 1,395 | TensorrtExecutionProvider documentation | ### System Info
```shell
main, docs
```
### Who can help?
@fxmarty
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction... | https://github.com/huggingface/optimum/issues/1395 | open | [
"documentation",
"onnxruntime"
] | 2023-09-19T09:06:17Z | 2023-09-19T09:57:26Z | 1 | IlyasMoutawwakil |
huggingface/transformers.js | 317 | How to use xenova/transformers in VSCode Extension | Hey guys! I am trying to use xenova/transformers in CodeStory, we roll a vscode extension as well and I am hitting issues with trying to get the import working, here's every flavor of importing the library which I have tried to date.
```
const TransformersApi = Function('return import("@xenova/transformers")')();
... | https://github.com/huggingface/transformers.js/issues/317 | open | [
"question"
] | 2023-09-19T01:35:21Z | 2024-07-27T20:36:37Z | null | theskcd |
huggingface/candle | 894 | How to fine-tune Llama? | Hello everybody,
I am trying to fine-tune the Llama model, but cannot load the safetensors file. I have modified the training loop for debugging and development:
```rust
pub fn run(args: &crate::TrainingCmd, common_args: &crate::Args) -> Result<()> {
let config_path = match &args.config {
Some(config... | https://github.com/huggingface/candle/issues/894 | closed | [] | 2023-09-18T22:18:04Z | 2023-09-21T10:05:57Z | null | EricLBuehler |
huggingface/candle | 891 | How to do fine-tuning? | Hello everybody,
I was looking through the Candle examples and cannot seem to find an example of fine-tuning for Llama. It appears the only example present is for training from scratch. How should I fine-tune a pretrained model on my own data? Or, more generally, how should I fine-tune a model that is loaded from a ...
huggingface/transformers | 26,218 | How to manually set the seed of randomsampler generator when training using transformers trainer | ### System Info
I used a [script](https://github.com/huggingface/transformers/blob/v4.33.0/examples/pytorch/language-modeling/run_clm.py) to continue pre-training the llama2 model. In the second epoch, the loss began to explode, so I chose to reload the checkpoint to continue training, but the loss changes were comp... | https://github.com/huggingface/transformers/issues/26218 | closed | [] | 2023-09-18T14:19:11Z | 2023-11-20T08:05:37Z | null | young-chao |
pytorch/tutorials | 2,563 | Multiple GPU example limited to one GPU | https://github.com/pytorch/tutorials/blob/646c8b6368e4f43acc808e0ddddc569153d6a30f/beginner_source/blitz/data_parallel_tutorial.py#L60
Isn't this line limiting the example to **one** GPU no matter how many GPUs are available?
cc @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen | https://github.com/pytorch/tutorials/issues/2563 | closed | [
"question",
"easy",
"docathon-h2-2023"
] | 2023-09-18T13:13:55Z | 2023-11-06T17:51:57Z | null | 9cpluss |
huggingface/transformers.js | 313 | [Question] How to use remote models for automatic-speech-recognition | I have an html file that is
```
<!DOCTYPE html>
<html>
<body>
<script type="module">
import { pipeline,env } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.0';
env.allowLocalModels = false;
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/whisper-tiny.en');
... | https://github.com/huggingface/transformers.js/issues/313 | closed | [
"question"
] | 2023-09-18T04:56:52Z | 2023-09-18T05:19:00Z | null | LehuyH |
huggingface/candle | 883 | Question: How to properly use VarBuilder? | Hello everybody,
I am working on implementing LoRA and want to use the VarBuilder system. However, when I try to get a tensor with get_with_hints, I get a CannotFindTensor Err. To create the Tensor, I do:
```rust
vb.pp("a").get_with_hints(
...lora specific shape...
"weight",
...lora specific hints...
)
```
However, th... | https://github.com/huggingface/candle/issues/883 | closed | [] | 2023-09-17T20:40:27Z | 2023-09-17T21:02:24Z | null | EricLBuehler |
pytorch/xla | 5,599 | Stubs or wheels for other OSes/architectures | ## ❓ Questions and Help
I'm new to torch/xla. One development pattern which I use, and which I expect to be common, is to write software on one system (eg M-series Mac laptop) which is intended to be run elsewhere. Project docs for torch/xla regarding installation specify downloading a wheel which is Linux x86 specific... | https://github.com/pytorch/xla/issues/5599 | closed | [
"question"
] | 2023-09-17T19:03:47Z | 2025-04-29T13:22:51Z | null | abeppu |
huggingface/transformers.js | 310 | How to load model from the static folder path in nextjs or react or vanilla js? | <!-- QUESTION GOES HERE -->
| https://github.com/huggingface/transformers.js/issues/310 | closed | [
"question"
] | 2023-09-17T14:13:57Z | 2023-09-27T08:36:29Z | null | adnankarim |
huggingface/safetensors | 360 | The default file format used when loading the model? | I guess that huggingface loads .safetensor files by default when loading models. Is this mandatory? Can I choose to load files in. bin format? (Because I only downloaded weights in bin format, and it reported an error “ could not find a file in safeTensor format”). I do not find related infomation in docs.
Thanks for ... | https://github.com/huggingface/safetensors/issues/360 | closed | [] | 2023-09-15T14:56:13Z | 2023-09-19T10:34:57Z | 1 | Kong-Aobo |
huggingface/diffusers | 5,055 | How to download config.json if it is not in the root directory. | Is there any way to download vae for a model where config.json is not in the root directory?
```python
vae = AutoencoderKL.from_pretrained("redstonehero/kl-f8-anime2")
```
For example, as shown above, there is no problem if config.json exists in the root directory, but if it does not exist, an error will occur... | https://github.com/huggingface/diffusers/issues/5055 | closed | [] | 2023-09-15T11:37:47Z | 2023-09-16T00:15:58Z | null | suzukimain |
pytorch/torchx | 766 | Is this repository no longer maintained? | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
Torch elastic redirects to this repository but it doesn't seem very active... | https://github.com/meta-pytorch/torchx/issues/766 | closed | [] | 2023-09-15T10:37:43Z | 2023-09-15T22:03:01Z | 4 | ccharest93 |
huggingface/transformers.js | 305 | [Question] Can I work with Peft models through the API? | Let's say I have the following code in Python. How would I translate that to js?
````
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "samwit/bloom-7b1-lora-tagger"
config = PeftConfig.from_pretrained(peft_model_id)
model = Aut... | https://github.com/huggingface/transformers.js/issues/305 | open | [
"question"
] | 2023-09-14T21:02:59Z | 2023-09-16T00:16:03Z | null | chrisfel-dev |
pytorch/TensorRT | 2,320 | ❓ [Question] How to use C++ bindings for torch tensorrt with CMake? | ## ❓ Question
I would like to know how to use the examples provided [here](https://github.com/pytorch/TensorRT/tree/v1.4.0/examples/torchtrt_runtime_example) with CMake. The instructions seem to indicate only how to use it with a makefile. CMake is not able to find `torchtrt`, exactly as described in #1207, but unfo... | https://github.com/pytorch/TensorRT/issues/2320 | closed | [
"question",
"No Activity"
] | 2023-09-14T18:42:13Z | 2023-12-28T22:10:34Z | null | janblumenkamp |
pytorch/TensorRT | 2,319 | ❓ [Question] How do I load the torch tensorRT model on multiple gpus | ## ❓ Question
In [TorchServe](https://github.com/pytorch/serve), we have this concept of workers. In a multi-GPU node, we can assign each GPU to a worker.
I am noticing that tensorRT model is getting loaded on GPU 0 even though we specify the correct GPU ID
for each worker.```torch.jit.load(model_pt_path, map... | https://github.com/pytorch/TensorRT/issues/2319 | closed | [
"question",
"component: runtime",
"bug: triaged [verified]"
] | 2023-09-14T18:41:36Z | 2023-09-27T19:55:28Z | null | agunapal |
huggingface/diffusers | 5,042 | How to give number of inference steps to Wuerstchen prior pipeline | **this below working with default DEFAULT_STAGE_C_TIMESTEPS but it always generates with exactly 29 number of prior inference steps**
```
prior_output = prior_pipeline(
prompt=prompt,
height=height,
width=width,
num_inference_steps=prior_num_inference_steps,
timesteps=DEF... | https://github.com/huggingface/diffusers/issues/5042 | closed | [
"bug"
] | 2023-09-14T15:21:31Z | 2023-09-20T07:41:19Z | null | FurkanGozukara |
huggingface/chat-ui | 440 | Web Search not working | I have been having this issue where it just searches something but then never shows me the answer; it shows max tokens.
I just keep seeing this:
first I see the links of the resources,
but then it does nothing at all.
.
... | https://github.com/pytorch/TensorRT/issues/2318 | closed | [
"question",
"No Activity"
] | 2023-09-14T09:30:20Z | 2024-01-01T00:02:46Z | null | VictorIOVI |
huggingface/diffusers | 5,032 | How to unfuse_lora only the first one after I have added multiple lora? | base.load_lora_weights("models/safetensors/SDXL/国风插画SDXL.safetensors")
base.fuse_lora(lora_scale=.7)
base.load_lora_weights("models/safetensors/SDXL/sd_xl_offset_example-lora_1.0.safetensors")
base.fuse_lora(lora_scale=.8)
Now, when I execute unfuse_lora(), only the most recent one has been unfused.
so,how to un... | https://github.com/huggingface/diffusers/issues/5032 | closed | [
"stale"
] | 2023-09-14T08:10:46Z | 2023-10-30T15:06:34Z | null | yanchaoguo |
pytorch/kineto | 804 | Will PyTorch Profiler TensorBoard Plugin continue to evolve? It seems that it cannot support PyTorch 2.0 | https://github.com/pytorch/kineto/issues/804 | closed | [
"question",
"plugin"
] | 2023-09-14T02:21:09Z | 2023-12-28T16:44:59Z | null | BadTrasher | |
huggingface/optimum | 1,384 | Documentation Request: Table or heuristic for Ortmodel Method to Encoder/Decoder to .onnx File to Task | ### Feature request
Hi there
Could you provide either a table (where explicit rules apply - see attached image), or a heuristic, so I can tell which ML models, optimised file types, with which tasks, apply to which inference methods and inference tasks?
The example table below will help to clarify, and isn't ... | https://github.com/huggingface/optimum/issues/1384 | closed | [
"Stale"
] | 2023-09-14T01:45:38Z | 2025-04-24T02:11:24Z | 4 | gidzr |
pytorch/rl | 1,522 | [BUG] It's not clear how to call an advantage module with batched envs and pixel observations. | ## Describe the bug
When you get a tensordict rollout of shape `(N_envs, N_steps, C, H, W)` out of a collector and you want to apply an advantage module that starts with `conv2d` layers:
1. directly applying the module will crash with the `conv2d` layer complaining about the input size e.g. `RuntimeError: Expected ... | https://github.com/pytorch/rl/issues/1522 | open | [
"bug"
] | 2023-09-13T21:04:29Z | 2024-03-27T16:37:49Z | null | skandermoalla |
huggingface/optimum | 1,379 | Can't use bettertransformer to train vit? | ### System Info
```shell
Traceback (most recent call last):
File "test_bettertransformer_vit.py", line 95, in <module>
main()
File "test_bettertransformer_vit.py", line 92, in main
test_train_time()
File "test_bettertransformer_vit.py", line 86, in test_train_time
out_vit = model(pixel_values).... | https://github.com/huggingface/optimum/issues/1379 | closed | [
"bug"
] | 2023-09-13T12:49:53Z | 2025-02-20T08:38:26Z | 1 | lijiaoyang |
pytorch/examples | 1,190 | main.py: TensorBoard in case of Multi-processing Distributed Data Parallel Training | Dear developers
It is so great that you've provided the examples/imagenet/main.py script, which looks amazing.
I'm looking at how to set up _Multi-processing Distributed Data Parallel Training_, for instance 8 GPUs on a single node, but I can also use multi-node multi-GPU. I must say that I have never had so great infra...
huggingface/text-generation-inference | 1,015 | how to run text-generation-benchmark with a local tokenizer | The command I run in Docker is
```
text-generation-benchmark --tokenizer-name /data/checkpoint-5600/
```
The error log is
```
2023-09-12T11:22:01.245495Z INFO text_generation_benchmark: benchmark/src/main.rs:132: Loading tokenizer
2023-09-12T11:22:01.245966Z INFO text_generation_benchmark: benchmark/src/... | https://github.com/huggingface/text-generation-inference/issues/1015 | closed | [
"Stale"
] | 2023-09-12T12:10:41Z | 2024-06-07T09:39:32Z | null | jessiewiswjc |
huggingface/autotrain-advanced | 260 | How to create instruction dataset (Q&A) for fine-tuning from PDFs? | https://github.com/huggingface/autotrain-advanced/issues/260 | closed | [] | 2023-09-12T02:54:07Z | 2023-12-18T15:31:13Z | null | mahimairaja | |
huggingface/transformers.js | 295 | [Question] Issue with deploying model to Vercel using NextJS and tRPC | Hi I'm trying to deploy my model to Vercel via NextJS and tRPC and have the .cache folder generated using the postinstall script
```
// @ts-check
let fs = require("fs-extra");
let path = require("path");
async function copyXenovaToLocalModules() {
const paths = [["../../../node_modules/@xenova", "../node_m... | https://github.com/huggingface/transformers.js/issues/295 | closed | [
"question"
] | 2023-09-11T11:13:11Z | 2023-09-12T15:23:17Z | null | arnabtarwani |
huggingface/transformers.js | 291 | [Question] Using transformers.js inside an Obsidian Plugin | I'm trying to run transformers.js inside of Obsidian but running into some errors:
<img width="698" alt="Screenshot 2023-09-10 at 3 05 43 PM" src="https://github.com/xenova/transformers.js/assets/11430621/a6b4b83e-6a1e-44bb-9a46-c3966d058146">
This code is triggering the issues:
```js
class MyClassificationPipe... | https://github.com/huggingface/transformers.js/issues/291 | open | [
"question"
] | 2023-09-10T22:12:07Z | 2024-04-30T13:52:06Z | null | benjaminshafii |
huggingface/candle | 807 | How to use the kv_cache? | Hi, how would I use the kv_cache? Let's say I want a chat-like type of thing; how would I save the kv_cache and load it so that all the tokens won't have to be computed again? | https://github.com/huggingface/candle/issues/807 | closed | [] | 2023-09-10T21:39:31Z | 2025-11-22T23:18:58Z | null | soupslurpr |
huggingface/transformers | 26,061 | How to perform batch inference? | ### Feature request
I want to pass a list of texts to model.generate.
text = "hey there"
inputs = tokenizer(text, return_tensors="pt").to(0)
out = model.generate(**inputs, max_new_tokens=184)
print(tokenizer.decode(out[0], skip_special_tokens=True))
### Motivation
I want to do batch inference.
### Y... | https://github.com/huggingface/transformers/issues/26061 | closed | [] | 2023-09-08T20:59:37Z | 2023-10-23T16:04:20Z | null | ryanshrott |
pytorch/vision | 7,947 | Why is the image shape different between Image.open and torchvision.io.read_image? | ### 🐛 Describe the bug
EXIF image:

I have a JPEG image above with EXIF information and I tried to load this image into pytorch for augmentation.
1. try with opencv
```
import cv2
img = cv2.imread("1.jpg")
print(i... | https://github.com/pytorch/vision/issues/7947 | closed | [
"question"
] | 2023-09-08T10:17:45Z | 2023-09-25T09:40:25Z | null | kero-ly |
pytorch/tutorials | 2,554 | Autograd - M factor missing in Matrix Vector Multiplication? | In [this](https://github.com/pytorch/tutorials/blob/main/beginner_source/blitz/autograd_tutorial.py) tutorial, once the vector v is multiplied by the Jacobian, shouldn't there be an additional factor of M in the results?
cc @albanD @sekyondaMeta @svekars @carljparker @NicolasHug @kit1980 @subramen | https://github.com/pytorch/tutorials/issues/2554 | closed | [
"question",
"core",
"medium"
] | 2023-09-08T08:51:18Z | 2023-11-02T19:30:44Z | null | sudz123 |
huggingface/text-generation-inference | 998 | How to insert a custom stop symbol, like </s>? | ### Feature request
nothing
### Motivation
nothing
### Your contribution
nothing | https://github.com/huggingface/text-generation-inference/issues/998 | closed | [] | 2023-09-08T07:06:08Z | 2023-09-08T07:13:38Z | null | babytdream |
huggingface/safetensors | 355 | Safe tensors cannot be easily freed! | ### System Info
Hi,
I am using the safetensors for loading Falcon-180B model. I am loading the ckpts one by one on CPU, and then try to remove the tensors by simply calling `del` function. However, I am seeing that CPU memory keeps increasing until it runs out of memory and system crashes (I am also calling `gc.co... | https://github.com/huggingface/safetensors/issues/355 | closed | [
"Stale"
] | 2023-09-07T22:13:15Z | 2024-08-30T10:22:01Z | 4 | RezaYazdaniAminabadi |
huggingface/transformers.js | 285 | The generate API always returns the same number of tokens as output no matter what min_tokens is | Here is the code I am trying
```js
import { pipeline } from '@xenova/transformers';
import { env } from '@xenova/transformers';
let generator = await pipeline('text2text-generation', 'Xenova/LaMini-Flan-T5-783M');
let output = await generator('write a blog on Kubernetes?', {
max_new_tokens: 512,min_new_toke... | https://github.com/huggingface/transformers.js/issues/285 | closed | [
"bug"
] | 2023-09-07T13:30:39Z | 2023-09-17T21:57:14Z | null | allthingssecurity |
huggingface/chat-ui | 430 | Server does not support event stream content error for custom endpoints | Has anyone faced an issue such as "Server does not support event stream content" when parsing the custom endpoint results?
What is the solution for this error?
In order to reproduce the issue,
User enter prompts saying "how are you" -> call goes to custom endpoint -> Endpoint returns response as string -> er... | https://github.com/huggingface/chat-ui/issues/430 | closed | [] | 2023-09-07T10:01:18Z | 2023-09-15T00:01:56Z | 3 | nandhaece07 |
huggingface/sentence-transformers | 2,300 | How to convert an embedding vector to text? | I use the script below to convert text to embeddings
```
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(text)
```
But how to convert embeddings back to text? | https://github.com/huggingface/sentence-transformers/issues/2300 | closed | [] | 2023-09-07T09:19:22Z | 2025-09-01T11:44:34Z | null | chengzhen123 |
huggingface/transformers.js | 283 | [Question] Model type for tt/ee not found, assuming encoder-only architecture | Reporting this as requested by the warning message, but as a question because I'm not entirely sure if it's a bug:

Here's the code I ran:
```js
let quantized = false; // change to `true` for a much smaller ... | https://github.com/huggingface/transformers.js/issues/283 | closed | [
"question"
] | 2023-09-07T05:01:34Z | 2023-09-08T13:17:07Z | null | josephrocca |
huggingface/safetensors | 354 | Is it possible to append to tensors along a primary axis? | ### Feature request
it would be really cool to be able to append to a safetensor file so you can continue to add data along, say, a batch dimension
### Motivation
for logging data during train runs that can be visualized from an external tool. something like a live application that lazily loads the saved data. this ... | https://github.com/huggingface/safetensors/issues/354 | closed | [
"Stale"
] | 2023-09-06T17:54:56Z | 2023-12-11T01:48:44Z | 2 | verbiiyo |
huggingface/huggingface_hub | 1,643 | We couldn't connect to 'https://huggingface.co/' to load this model and it looks like distilbert-base-uncased is not the path to a directory containing a config.json file. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mod... | ### System Info
Hello, I have been using hugging face transformers with a lot of success. I have been able to create many successful fine-tuned pre-trained text classification models using various HF transformers and have been using HF integration with SageMaker in a SageMaker conda_pytorch_310 notebook.
my co... | https://github.com/huggingface/huggingface_hub/issues/1643 | closed | [] | 2023-09-06T17:18:45Z | 2023-09-07T15:51:12Z | null | a-rhodes-vcu |
huggingface/setfit | 417 | Passing multiple evaluation metrics to SetFitTrainer | Hi there, after reading the docs I find that one can easily get the f1 score or accuracy by passing the respective string as the `metric` argument to the trainer. However, how can I get both or even other metrics, such as f1_per_class?
Thanks :) | https://github.com/huggingface/setfit/issues/417 | closed | [
"question"
] | 2023-09-06T11:38:08Z | 2023-11-24T13:31:08Z | null | fhamborg |
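On the question above: recent SetFit versions also accept a callable for the `metric` argument, so a single function can return several scores at once (check your installed version; the callable signature and the metric names below are assumptions, not confirmed API). A dependency-free sketch of such a multi-metric function:

```python
def multi_metric(y_pred, y_true):
    # Accuracy plus per-class F1, computed with plain Python.
    labels = sorted(set(y_true) | set(y_pred))
    correct = sum(p == t for p, t in zip(y_pred, y_true))
    out = {"accuracy": correct / len(y_true)}
    for lbl in labels:
        tp = sum(p == lbl and t == lbl for p, t in zip(y_pred, y_true))
        fp = sum(p == lbl and t != lbl for p, t in zip(y_pred, y_true))
        fn = sum(p != lbl and t == lbl for p, t in zip(y_pred, y_true))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        out[f"f1_{lbl}"] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return out

# Toy check on hand-made predictions and labels
scores = multi_metric([0, 1, 1, 0], [0, 1, 0, 0])
print(scores)
```

If your SetFit version supports a callable metric, something like `SetFitTrainer(..., metric=multi_metric)` would then surface all of these from `trainer.evaluate()`.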
huggingface/optimum | 1,357 | [RFC] MusicGen `.to_bettertransformer()` integration | ### Feature request
Add support for MusicGen Better Transformer integration. MusicGen is composed of three sub-models:
1. Text encoder: maps the text inputs to a sequence of hidden-state representations. The pre-trained MusicGen models use a frozen text encoder from either T5 or Flan-T5
2. MusicGen decoder: a lang... | https://github.com/huggingface/optimum/issues/1357 | closed | [] | 2023-09-06T10:25:50Z | 2024-01-10T17:31:44Z | 1 | sanchit-gandhi |
pytorch/serve | 2,569 | Failure in loading Deepspeed large model example | ### 🐛 Describe the bug
I am trying to follow the example to perform inference with the OPT-30B model according to this example: https://github.com/pytorch/serve/tree/master/examples/large_models/deepspeed
However, as specified in the [model-config.yaml](https://github.com/pytorch/serve/blob/master/examples/large_m... | https://github.com/pytorch/serve/issues/2569 | open | [
"question",
"triaged",
"example"
] | 2023-09-05T23:35:46Z | 2023-09-11T17:35:14Z | null | sachanub |
huggingface/diffusers | 4,906 | How to check whether the image is flagged as inappropriate automated? | Is there a way to know whether the generated image (without seeing it) was flagged as inappropriate? | https://github.com/huggingface/diffusers/issues/4906 | closed | [] | 2023-09-05T17:51:07Z | 2023-09-07T05:49:46Z | null | sarmientoj24 |
huggingface/diffusers | 4,905 | How to convert pretrained SDXL .safetensors model to diffusers folder format | As SDXL is gaining adoption, more and more community based models pop up that that are just saved as a .safetensors file. E.g the popular Realistic Vision: https://civitai.com/models/139562?modelVersionId=154590
When running train_dreambooth_lora_sdxl.py, the training script expects the diffusers folder format to ac... | https://github.com/huggingface/diffusers/issues/4905 | closed | [] | 2023-09-05T17:01:27Z | 2023-09-06T09:55:54Z | null | agcty |
huggingface/transformers.js | 280 | [Question] How to run multiple pipeline or multiple modal? | <!-- QUESTION GOES HERE -->
I am trying to transcribe from audio source and need to do multi language translation. I had tried transcribing using Xenova/whisper- and and take text input and feed in to "Xenova/m2m100_418M" modal but due to multiple pipeline it's failed. Is there any way to achieve
this? | https://github.com/huggingface/transformers.js/issues/280 | closed | [
"question"
] | 2023-09-05T11:33:44Z | 2023-11-01T11:32:15Z | null | sundarshahi |
huggingface/optimum | 1,346 | BetterTransfomer Support for the GPTBigCode model | ### Feature request
is it possible to support GPTBigCode with BetterTransformer?
https://huggingface.co/docs/transformers/model_doc/gpt_bigcode
### Motivation
A very popular Decoder model for Code.
### Your contribution
hope you can achieve it. Thanks. | https://github.com/huggingface/optimum/issues/1346 | closed | [] | 2023-09-04T16:52:56Z | 2023-09-08T14:51:17Z | 5 | amarazad |
pytorch/TensorRT | 2,284 | ❓ [Question] Timeline for TensorRT 9.0 support | ## ❓ Question
What is the timeline to support TensorRT 9.0 ?
## What you have already tried
Using Nvidia's 9.0 TensorRT [release](https://github.com/NVIDIA/TensorRT/tree/release/9.0) is incompatible with the latest version of torch-tensorrt (which requires TensorRT 8.6).
| https://github.com/pytorch/TensorRT/issues/2284 | closed | [
"question"
] | 2023-09-04T07:26:02Z | 2023-09-06T16:56:33Z | null | tdeboissiere |
pytorch/serve | 2,564 | [Docs] More information regarding text generation & LLM inference | ### 📚 The doc issue
I am new to TorchServe and was looking for some features that I need to be able to consider using TorchServe for LLM text generation.
Today, there are a couple inference serving solutions out there, including [text-generation-inference](https://github.com/huggingface/text-generation-inference) ... | https://github.com/pytorch/serve/issues/2564 | open | [
"documentation",
"question",
"llm"
] | 2023-09-03T17:40:16Z | 2023-09-05T17:45:08Z | null | jaywonchung |
huggingface/chat-ui | 426 | `stream` is not supported for this model | Hello Experts,
Trying to run https://github.com/huggingface/chat-ui by providing models like EleutherAI/pythia-1b, gpt2-large. With all these models, there is this consitent error
{"error":["Error in `stream`: `stream` is not supported for this model"]}
Although I can see that hosted inference API for these models ar... | https://github.com/huggingface/chat-ui/issues/426 | open | [
"question",
"models"
] | 2023-09-02T05:30:47Z | 2023-12-24T16:39:21Z | null | newUserForTesting |
huggingface/diffusers | 4,871 | How to run "StableDiffusionXLPipeline.from_single_file"? | I got an error when I ran the following code; it fails on the line "pipe = StableDiffusionXLPipeline." How can I solve it?
notes:
I don't have a model refiner, I just want to run a model with a DIffuser XL
```
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import t... | https://github.com/huggingface/diffusers/issues/4871 | closed | [] | 2023-09-01T22:42:25Z | 2023-09-09T03:35:53Z | null | Damarcreative |
huggingface/optimum | 1,334 | Enable CLI export of decoder-only models without present outputs | ### Feature request
Currently `optimum-cli export onnx` only supports exporting text-generation models with present outputs (`--task text-generation`) or with past+present outputs (``--task text-generation-with-past`). It would be useful to be able to export a variant without any caching structures if they will not ... | https://github.com/huggingface/optimum/issues/1334 | closed | [] | 2023-09-01T15:56:27Z | 2023-09-13T11:43:36Z | 3 | mgoin |
huggingface/transformers.js | 274 | [Question] How to convert to ONNX a fine-tuned model | Hi, we're playing with this library to see if it can be useful for our project. I find it very easy and well done (congratulations).
The idea is not to use it directly as a frontend library but via node.js.
We've tried scripting a model directly from HF (google/flan-t5-small) and it worked but we're having trouble... | https://github.com/huggingface/transformers.js/issues/274 | open | [
"question"
] | 2023-09-01T15:27:21Z | 2023-09-01T16:12:12Z | null | mrddter |
huggingface/datasets | 6,203 | Support loading from a DVC remote repository | ### Feature request
Adding support for loading a file from a DVC repository, tracked remotely on a SCM.
### Motivation
DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible thr... | https://github.com/huggingface/datasets/issues/6203 | closed | [
"enhancement"
] | 2023-09-01T14:04:52Z | 2023-09-15T15:11:27Z | 4 | bilelomrani1 |
huggingface/optimum | 1,328 | Documentation for OpenVINO missing half() | ### System Info
```shell
N/A
```
### Who can help?
@echarlaix
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction (min... | https://github.com/huggingface/optimum/issues/1328 | closed | [
"bug"
] | 2023-08-31T20:44:28Z | 2023-08-31T20:46:34Z | 1 | ngaloppo |
huggingface/autotrain-advanced | 249 | How to save model locally after sft | I am wondering how to save model locally after sft | https://github.com/huggingface/autotrain-advanced/issues/249 | closed | [] | 2023-08-31T14:59:04Z | 2023-08-31T17:01:44Z | null | Diego0511 |
huggingface/chat-ui | 425 | Is it possible to modify it so that .env.local environment variables are set at runtime? | Currently for every different deployment of Chat-UI it is required to rebuild the Docker image with different .env.local environment variables. Is it theoretically possible to have it so that 1 image can be used for all deployments, but with different secrets passed at runtime? What environment variables and for what r... | https://github.com/huggingface/chat-ui/issues/425 | open | [
"enhancement",
"back",
"hacktoberfest"
] | 2023-08-31T12:55:17Z | 2024-03-14T20:05:38Z | 4 | martinkozle |
huggingface/text-generation-inference | 959 | How to enter the docker image to modify the environment | ### System Info
docker image: ghcr.io/huggingface/text-generation-inference:1.0.2
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [X] My own modifications
### Reproduction
I want to enter the image to modify the environment,like: tiktoken.
`docker run -it ... | https://github.com/huggingface/text-generation-inference/issues/959 | closed | [] | 2023-08-31T11:14:13Z | 2023-08-31T20:12:55Z | null | Romaosir |
huggingface/safetensors | 352 | Attempt to convert `PygmalionAI/pygmalion-2.7b` to `safetensors` | ### System Info
- `transformers` version: 4.32.1
- Platform: Linux-5.15.0-1039-gcp-x86_64-with-glibc2.31
- Python version: 3.9.5
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu118 (True)
- Tensorflow ... | https://github.com/huggingface/safetensors/issues/352 | closed | [
"Stale"
] | 2023-08-31T10:25:19Z | 2023-12-11T01:48:45Z | 2 | JulesBelveze |
huggingface/autotrain-advanced | 246 | how to load the fine-tuned model in the local? | hi
thanks for your super convenient package; it makes it easier for newcomers like me to fine-tune a new model. However, as a newcomer, I don't really know how to load my fine-tuned model and apply it.
I was fine-tuning in Google Colab and downloaded the model to my PC, but I don't know how to load it.
thz bro | https://github.com/huggingface/autotrain-advanced/issues/246 | closed | [] | 2023-08-31T08:15:11Z | 2023-12-18T15:31:11Z | null | kennyluke1023 |
huggingface/diffusers | 4,849 | how to use multiple GPUs to train textual inversion? |
I train the textual inversion fine tuning cat toy example from [here](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion)
my env:
diffusers: 0.20.0
torch: 1.12.1+cu113
accelerate: 0.22.0
train script, as follow:
```
CUDA_VISIBLE_DEVICES="0,1,2,3" python -u textual_inversion.py ... | https://github.com/huggingface/diffusers/issues/4849 | closed | [] | 2023-08-31T02:56:39Z | 2023-09-11T01:07:49Z | null | Adorablepet |
pytorch/xla | 5,525 | Query bazel deps of XLAC.so? | ## ❓ Questions and Help
I'm trying to see bazel dependencies of `//:_XLAC.so` target by running the following command (as described in [bazel guide](https://bazel.build/query/guide))
```
bazel query "deps(//:_XLAC.so)"
```
It shows me the following errors:
```bash
ERROR: An error occurred during the fetch of rep... | https://github.com/pytorch/xla/issues/5525 | open | [
"question",
"build"
] | 2023-08-30T21:27:58Z | 2025-04-30T12:34:57Z | null | apivovarov |
huggingface/chat-ui | 423 | AI response appears without user message, then both appear after refresh. | I was experimenting with my own back-end and was wanting to get a feel for the interface. Here is what my code looks like:
```py
import json
import random
from fastapi import FastAPI, Request
from fastapi.responses import Response, StreamingResponse
app = FastAPI()
async def yielder():
yield "data:" +... | https://github.com/huggingface/chat-ui/issues/423 | closed | [] | 2023-08-30T19:04:14Z | 2023-09-13T19:44:23Z | 5 | konst-aa |
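For context on the snippet above: chat-ui parses text-generation-inference-style server-sent events, where each chunk is a `data:` line carrying a JSON token payload. A minimal sketch of serializing one such event follows; the field names are assumptions modeled on TGI's streaming format, not an authoritative schema.

```python
import json

def sse_event(token_text, generated_text=None, special=False):
    # One streamed token, framed as an SSE "data:" line ending in a blank line.
    payload = {
        "token": {"text": token_text, "special": special},
        # The full text is conventionally attached only to the final event.
        "generated_text": generated_text,
    }
    return "data:" + json.dumps(payload) + "\n\n"

print(sse_event("Hello"))
```

A streaming endpoint would yield one such string per generated token, with the last event carrying the accumulated `generated_text`.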
huggingface/datasets | 6,195 | Force to reuse cache at given path | ### Describe the bug
I have run the official example of MLM like:
```bash
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name togethercomputer/RedPajama-Data-1T \
--dataset_config_name arxiv \
--per_device_train_batch_size 10 \
--preprocessing_num_workers 20 ... | https://github.com/huggingface/datasets/issues/6195 | closed | [] | 2023-08-30T18:44:54Z | 2023-11-03T10:14:21Z | 2 | Luosuu |
huggingface/trl | 713 | How to use custom evaluate function with multi-gpu deepspeed | I am trying to use `deepspeed` multi-gpu training with `SFTTrainer` for a hh-rlhf. My modified trainer looks something like this
```python
class SFTCustomEvalTrainer(SFTTrainer):
def evaluate(
self,
eval_dataset = None,
ignore_keys = None,
metric_key_prefix: ... | https://github.com/huggingface/trl/issues/713 | closed | [] | 2023-08-30T17:33:40Z | 2023-11-10T15:05:23Z | null | abaheti95 |
huggingface/optimum | 1,323 | Optimisation and Quantisation for Translation models / tasks | ### Feature request
Currently, the opimisation and quantisation functions look for mode.onnx in a folder, and will perform opt and quant on those files. When exporting a translation targeted ONNX, multiple files for encoding and decoding, and these can't be optimised or quantised.
I've tried a hacky approach to ch... | https://github.com/huggingface/optimum/issues/1323 | closed | [] | 2023-08-30T06:36:17Z | 2023-09-29T00:47:39Z | 2 | gidzr |
huggingface/datasets | 6,193 | Dataset loading script method does not work with .pyc file | ### Describe the bug
The huggingface dataset library specifically looks for ‘.py’ file while loading the dataset using loading script approach and it does not work with ‘.pyc’ file.
While deploying in production, it becomes an issue when we are restricted to use only .pyc files. Is there any work around for this ?
#... | https://github.com/huggingface/datasets/issues/6193 | open | [] | 2023-08-29T19:35:06Z | 2023-08-31T19:47:29Z | 3 | riteshkumarumassedu |
huggingface/transformers.js | 270 | [Question] How to stop warning log | I am using NodeJS to serve a translation model.
There are so many warning logs during translation processing. How do I stop this?
`2023-08-29 23:04:32.061 node[3167:31841] 2023-08-29 23:04:32.061977 [W:onnxruntime:, graph.cc:3490 CleanUnusedInitializersAndNodeArgs] Removing initializer '/model/decoder/layers.2/encoder_att... | https://github.com/huggingface/transformers.js/issues/270 | open | [
"question"
] | 2023-08-29T16:08:41Z | 2025-08-02T15:48:45Z | null | tuannguyen90 |
huggingface/chat-ui | 420 | Error: ENOSPC: System limit for number of file watchers reached | Error: ENOSPC: System limit for number of file watchers reached, watch '/home/alvyn/chat-ui/vite.config.ts'
at FSWatcher.<computed> (node:internal/fs/watchers:247:19)
at Object.watch (node:fs:2418:34)
at createFsWatchInstance (file:///home/alvyn/chat-ui/node_modules/vite/dist/node/chunks/dep-e8f070e8.js:... | https://github.com/huggingface/chat-ui/issues/420 | closed | [
"support"
] | 2023-08-29T14:54:49Z | 2023-09-20T15:11:26Z | 2 | alvynabranches |
huggingface/transformers.js | 268 | [Question] Chunks from transcription always empty text | This example works fine:

But ATM I am sending Float32 to the worker here (i also confirm the audio is valid by playing it back)
https://github.com/quantuminformation/coherency/blob/main/components/audio-recorder... | https://github.com/huggingface/transformers.js/issues/268 | open | [
"question"
] | 2023-08-29T13:49:00Z | 2023-11-04T19:48:30Z | null | quantuminformation |
huggingface/diffusers | 4,831 | How to preview the image during generation,any demo for gradio? | How to preview the image during generation,any demo for gradio? | https://github.com/huggingface/diffusers/issues/4831 | closed | [] | 2023-08-29T13:32:07Z | 2023-08-30T15:31:31Z | null | wodsoe |
huggingface/transformers.js | 267 | [Question] multilingual-e5-* models don't work with pipeline | I just noticed that the `Xenova/multilingual-e5-*` model family doesn't work in the transformers.js pipeline for feature-extraction with your (@xenova) onnx versions on HF.
My code throws an error.
```Javascript
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.5.4';
async function... | https://github.com/huggingface/transformers.js/issues/267 | closed | [
"question"
] | 2023-08-29T12:39:26Z | 2023-08-30T12:05:02Z | null | do-me |
pytorch/xla | 5,510 | Kaggle Pytorch/XLA notebooks. How to import torch_xla? | I tried to use Kaggle [Pytorch/XLA notebooks](https://www.kaggle.com/code/aivovarov/pytorch-xla-2-0-on-kaggle/edit) with "Pin to original env" and "Always use the latest env" (in notebook options).
- pin to original env (2023-04-04_ uses python 3.7 , pytorch 1.13.0-cpu
- the latest env uses python 3.10, pytorch 2.0.... | https://github.com/pytorch/xla/issues/5510 | open | [
"question"
] | 2023-08-28T20:15:19Z | 2025-04-29T13:52:29Z | null | apivovarov |
huggingface/transformers | 25,803 | [Model] How to evaluate Idefics Model's ability with in context examples? | Hi the recent release of Idefics-9/80B-Instruct model is superbly promising!
We would like to evaluate them on a customized benchmark with in-context examples. May I ask how I should arrange the prompt template, especially for the `instruct` version?
We had some problems previously when evaluating the model on sin... | https://github.com/huggingface/transformers/issues/25803 | closed | [] | 2023-08-28T19:39:02Z | 2023-10-11T08:06:48Z | null | Luodian |