Column types: repo (string, 147 classes) | number (int64) | title (string) | body (string) | url (string) | state (string, 2 classes) | labels (list) | created_at, updated_at (timestamp[ns, UTC]) | comments (int64, nullable) | user (string)

| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 7,813 | I feel confused about this TODO issue. how to pass timesteps as tensors? | https://github.com/huggingface/diffusers/blob/235d34cf567e78bf958344d3132bb018a8580295/src/diffusers/models/unets/unet_2d_condition.py#L918
| https://github.com/huggingface/diffusers/issues/7813 | closed | ["stale"] | 2024-04-29T03:46:21Z | 2024-11-23T00:19:17Z | null | ghost |
pytorch/torchchat | 544 | [DOCS, TESTS] quantization option table & quantization option table testing | can we pin down the details for this, because this update is too generous and doesn't represent the swiss cheese that is the support matrix?
I seem to recall some operators didn't have the full set of group sizes - the group sizes are just an enumeration of powers of 2, did we test them? (I can't say the other tabl... | https://github.com/pytorch/torchchat/issues/544 | closed | [] | 2024-04-29T03:37:26Z | 2024-05-12T22:58:14Z | 2 | mikekgfb |
pytorch/torchchat | 543 | [PAPERCUTS] error message repeated ad nauseam | I get it -- maybe the error is `aten::_weight_int4pack_mm(Tensor self, Tensor mat2, int qGroupSize, Tensor qScaleAndZeros) -> Tensor' to its out variant with error: 'SchemaKind.out variant of operator aten::_weight_int4pack_mm can't be found. We've found the schemas of all the overloads: ['aten::_weight_int4pack_mm(Ten... | https://github.com/pytorch/torchchat/issues/543 | closed | [] | 2024-04-29T03:22:01Z | 2024-08-30T15:19:47Z | 1 | mikekgfb |
pytorch/torchchat | 542 | linear:int4 issues - RuntimeError: Missing out variants: {'aten::_weight_int4pack_mm'} | ```
(py311) mikekg@mikekg-mbp torchchat % python export.py --checkpoint-path ${MODEL_PATH} --temperature 0 --quantize '{"linear:int4": {"groupsize": 128}}' --output-pte mode.pte
[...]
Traceback (most recent call last):
File "/Users/mikekg/qops/torchchat/export.py", line 111, in <module>
main(args)
File ... | https://github.com/pytorch/torchchat/issues/542 | open | [] | 2024-04-29T03:03:40Z | 2024-07-30T17:36:20Z | 0 | mikekgfb |
pytorch/serve | 3,120 | If micro_batch_size of micro-batch is set to 1, then model inference is still batch processing? | ### 📚 The doc issue
I set the batchSize of the registered model to 10, and then set the micro_batch_size to 1. So for model inference, will it wait for 10 requests to complete preprocessing in parallel before aggregating them for inference?
### Suggest a potential alternative/fix
_No response_ | https://github.com/pytorch/serve/issues/3120 | open | [] | 2024-04-29T02:59:58Z | 2024-04-29T18:48:28Z | 1 | pengxin233 |
pytorch/torchchat | 533 | [FEATURE REQUEST] 8b weight quantization on ET |
What is the best we can do for int8 channel-wise quantization in XNNPACK (and elsewhere in ET) today? I see ATM we use `F.linear(x, weight.to(dtype=x.dtype)) * scales` as the implementation in [ET examples](https://www.internalfb.com/code/fbsource/[7e7c1690e5ac43a50e5e17e41321005d126e3faf]/fbcode/executorch/examples/mode... | https://github.com/pytorch/torchchat/issues/533 | closed | [] | 2024-04-28T17:02:49Z | 2024-07-21T22:14:01Z | 7 | mikekgfb |
huggingface/datasets | 6,846 | Unimaginable super slow iteration | ### Describe the bug
Assuming there is a dataset with 52000 sentences, each with a length of 500, it takes 20 seconds to extract a sentence from the dataset…? Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
n... | https://github.com/huggingface/datasets/issues/6846 | closed | [] | 2024-04-28T05:24:14Z | 2024-05-06T08:30:03Z | 1 | rangehow |
pytorch/torchchat | 528 | [DOCS] runner documentation |
1 - Add llama2/3 options to docs/runner from https://github.com/pytorch/torchchat/pull/486
2 - Also does the file need a name change because it covers both build and run for the runners?
3 - Do we have the necessary documentation - how to build the tokenizer.bin?
That we have to use a different tokenizer... | https://github.com/pytorch/torchchat/issues/528 | closed | [] | 2024-04-27T22:24:30Z | 2024-07-21T21:38:37Z | 1 | mikekgfb |
pytorch/torchchat | 526 | [Better Engineering] Is no KV cache still a thing? |
I put the code there originally, but... wondering whether running models without KV cache is still a thing?
We don't really offer a way to build it without KV Cache...
https://github.com/pytorch/torchchat/blame/e26c5289453ccac7f4b600babcb40e30634bdeb2/runner/run.cpp#L175-L185
```
#ifndef __KV_CACHE__
// @l... | https://github.com/pytorch/torchchat/issues/526 | closed | [] | 2024-04-27T22:07:53Z | 2024-04-28T14:30:48Z | 0 | mikekgfb |
huggingface/lerobot | 112 | Do we want to use `transformers`? | I'd really go against establishing transformers as a dependency of lerobot and importing their whole library just to use the `PretrainedConfig` (or even other components). I think in this case it's very overkill and wouldn't necessarily fit our needs right now. The class is ~1000 lines of code - which we can copy into ... | https://github.com/huggingface/lerobot/issues/112 | closed | ["question"] | 2024-04-27T17:24:20Z | 2024-04-30T11:59:25Z | null | qgallouedec |
pytorch/tutorials | 2,849 | Transformer tutorial multiplying with sqrt(d_model) | https://github.com/pytorch/tutorials/blob/5e772fa2bf406598103e61e628a0ca0b8e471bfa/beginner_source/translation_transformer.py#L135
src = self.embedding(src) * math.sqrt(self.d_model)
shouldn't this be
src = self.embedding(src) / math.sqrt(self.d_model)
at least that is the impression I got when reading the "... | https://github.com/pytorch/tutorials/issues/2849 | closed | ["easy", "docathon-h1-2024"] | 2024-04-27T07:45:10Z | 2024-06-11T09:15:26Z | 3 | RogerJL |
pytorch/TensorRT | 2,782 | ❓ [Question] Unexpected exception _Map_base::at during PTQ | ## ❓ Question
I am attempting to execute [PTQ](https://pytorch.org/TensorRT/user_guide/ptq.html). During the compiling process, I get the following exception:
```
DEBUG: [Torch-TensorRT TorchScript Conversion Context] - Finalize: %142 : Tensor = aten::matmul(%x, %143) # /fsx_home/homes/srdecny/meaning/vocoder/h... | https://github.com/pytorch/TensorRT/issues/2782 | closed | ["question"] | 2024-04-26T18:29:58Z | 2025-03-27T12:42:10Z | null | srdecny |
pytorch/xla | 6,979 | Support non-traceable Custom Ops | ## 🚀 Feature
`torch.export` supports exporting blackbox custom ops; however, we fail to export them to StableHLO using the `exported_program_to_stablehlo` API
https://pytorch.org/tutorials/intermediate/torch_export_tutorial.html#custom-ops
## Motivation
If we have non-traceable Python code in the custom ops, we can... | https://github.com/pytorch/xla/issues/6979 | closed | ["stablehlo"] | 2024-04-26T16:53:16Z | 2024-09-03T04:13:05Z | 4 | thong3le |
huggingface/evaluate | 582 | How to pass generation_kwargs to the TextGeneration evaluator ? | How can I pass the generation_kwargs to TextGeneration evaluator ? | https://github.com/huggingface/evaluate/issues/582 | open | [] | 2024-04-25T16:09:46Z | 2024-04-25T16:09:46Z | null | swarnava112 |
huggingface/chat-ui | 1,074 | 503 error | Hello, I was trying to install the chat-ui
I searched for any documentation on how to handle that on my VPS.
error 500 after build and not working with https although allow_insecure=false | https://github.com/huggingface/chat-ui/issues/1074 | closed | ["support"] | 2024-04-25T15:34:07Z | 2024-04-27T14:58:45Z | 1 | abdalladorrah |
huggingface/chat-ui | 1,073 | Support for Llama-3-8B-Instruct model | hi,
The model meta-llama/Meta-Llama-3-8B-Instruct is unlisted; not sure when it will be supported?
https://github.com/huggingface/chat-ui/blob/3d83131e5d03e8942f9978bf595a7caca5e2b3cd/.env.template#L229
thanks. | https://github.com/huggingface/chat-ui/issues/1073 | open | ["question", "models", "huggingchat"] | 2024-04-25T14:03:35Z | 2024-04-30T05:47:05Z | null | cszhz |
huggingface/chat-ui | 1,072 | [v0.8.3] serper, serpstack API, local web search not working | ## Context
I have serper.dev API key, serpstack API key and I have put it correctly in my `.env.local` file.
<img width="478" alt="image" src="https://github.com/huggingface/chat-ui/assets/31769894/5082893a-7ecd-4ab5-9cb9-059875118dcd">
## Issue
However, even if I enable Web Search, it still does not reach ... | https://github.com/huggingface/chat-ui/issues/1072 | closed | ["support"] | 2024-04-25T13:24:40Z | 2024-05-09T16:28:15Z | 14 | adhishthite |
huggingface/diffusers | 7,775 | How to input gradio settings in Python | Hi.
I use **realisticStockPhoto_v20** on Fooocus with **sdxl_film_photography_style** lora and I really like the results.
Fooocus and other gradio implementations come with settings inputs that I want to utilize in Python as well. In particular, if this is my code:
```
device = "cuda"
model_path = "weights/reali... | https://github.com/huggingface/diffusers/issues/7775 | closed | [] | 2024-04-25T08:43:20Z | 2024-11-20T00:07:26Z | null | levoz92 |
huggingface/chat-ui | 1,069 | CohereForAI ChatTemplate | Now that there is official support for tgi in CohereForAI/c4ai-command-r-v01. How to use the chat template found in the tokenizer config for the ui. Or alternatively, is it possible to add in PROMPTS.md the correct template for cohere? | https://github.com/huggingface/chat-ui/issues/1069 | open | [] | 2024-04-25T05:45:35Z | 2024-04-25T05:45:35Z | 0 | yanivshimoni89 |
huggingface/transformers.js | 727 | Preferred citation of Transformers.js | ### Question
Love the package, and am using it in research - I am wondering, does there exist a preferred citation format for the package to cite it in papers? | https://github.com/huggingface/transformers.js/issues/727 | open | ["question"] | 2024-04-24T23:07:20Z | 2024-04-24T23:21:13Z | null | ludgerpaehler |
pytorch/pytorch | 124,887 | How to catch NCCL collective timeout in Python | ## Issue description
Currently, there are several error handling modes ([link](https://github.com/pytorch/pytorch/blob/bc117898f18e8a698b00823f57c19b2d874b93ba/torch/csrc/distributed/c10d/ProcessGroupNCCL.hpp#L114-L126)) for when NCCL collectives timeout. These error handling modes can be set via `TORCH_NCCL_ASYNC_ERR... | https://github.com/pytorch/pytorch/issues/124887 | closed | ["needs reproduction", "oncall: distributed"] | 2024-04-24T22:27:43Z | 2024-05-01T06:16:25Z | null | gkroiz |
huggingface/diarizers | 4 | How to save the finetuned model as a .bin file? | Hi,
I finetuned the pyannote-segmentation model for my usecase but it is saved as a model.safetensors file. Can I convert it to a pytorch_model.bin file? I am using whisperx to create speaker-aware transcripts and .safetensors isn't working with that library. Thanks! | https://github.com/huggingface/diarizers/issues/4 | closed | [] | 2024-04-24T20:50:19Z | 2024-04-30T21:02:32Z | null | anuragrawal2024 |
pytorch/torchchat | 460 | First generated token not being displayed in chat mode sometimes. | What is your system prompt?
I am superman
What is your prompt?
How can i save the world?
, up, and away! As Superman, you're uniquely equipped
Seems like 'up' in up, up, and away is being lost. This happens with most responses. | https://github.com/pytorch/torchchat/issues/460 | closed | [] | 2024-04-24T19:00:40Z | 2024-04-24T22:13:06Z | 0 | JacobSzwejbka |
pytorch/executorch | 3,303 | How can I convert llama3 safetensors to the pth file needed to use with executorch? | Fine-tunes of Llama3 usually only have safetensors uploaded. In order to compile a Llama3 model following the tutorial, I need the original pth checkpoint file.
Is there a way to convert the safetensors to the checkpoint file? | https://github.com/pytorch/executorch/issues/3303 | closed | ["enhancement", "help wanted", "high priority", "triage review"] | 2024-04-24T14:20:17Z | 2024-05-30T03:29:23Z | null | l3utterfly |
huggingface/transformers.js | 725 | How to choose a language's dialect when using `automatic-speech-recognition` pipeline? | ### Question
Hi, so I was originally using the transformers library (Python version) in my backend, but when refactoring my application for scale, it made more sense to move my implementation of Whisper from the backend to the frontend (for my specific use case). So I was thrilled when I saw that transformers.js supp... | https://github.com/huggingface/transformers.js/issues/725 | closed | ["question"] | 2024-04-24T09:44:38Z | 2025-11-06T20:36:01Z | null | jquintanilla4 |
huggingface/text-embeddings-inference | 248 | how to support gpu version 10.1 rather than 12.2 | ### Feature request
how to support gpu version 10.1 rather than 12.2
### Motivation
how to support gpu version 10.1 rather than 12.2
### Your contribution
how to support gpu version 10.1 rather than 12.2 | https://github.com/huggingface/text-embeddings-inference/issues/248 | closed | [] | 2024-04-24T08:49:45Z | 2024-04-26T13:02:44Z | null | fanqiangwei |
huggingface/diffusers | 7,766 | IP-Adapter FaceID PLus How to use questions | https://github.com/huggingface/diffusers/blob/9ef43f38d43217f690e222a4ce0239c6a24af981/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L492
## error msg:
pipe.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
AttributeError: 'list' obje... | https://github.com/huggingface/diffusers/issues/7766 | closed | [] | 2024-04-24T07:56:38Z | 2024-11-20T00:02:30Z | null | Honey-666 |
huggingface/peft | 1,673 | How to set Lora_dropout=0 when loading trained peft model for inference? | ### System Info
peft==0.10.0
transformers==4.39.3
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
```pytho... | https://github.com/huggingface/peft/issues/1673 | closed | [] | 2024-04-24T07:47:19Z | 2024-05-10T02:22:17Z | null | flyliu2017 |
pytorch/torchchat | 450 | [Feature Request] Support for delegate information in torchchat | @lucylq can you please add the delegate summary info you added to ET's llama2/export_llama_lib to export_et.py?
Can you add a line or two about XNNPACK delegate (probably just a link to some text on the ET website?) and how to interpret the operator stats in docs/ADVANCED-USERS.md as well?
Thanks so much!
cc: @... | https://github.com/pytorch/torchchat/issues/450 | closed | ["enhancement"] | 2024-04-24T06:19:02Z | 2024-04-30T00:29:06Z | 0 | mikekgfb |
pytorch/vision | 8,394 | Run all torchvision models in one script. | ### 🚀 The feature
Is there a test script that can run models?
### Motivation, pitch
Hi, I am testing a model migration script from CUDA to SYCL and I would like to test it on the torchvision model set. I would like to know: do we have a test script that can run all models in torchvision? like run.py [code](https:... | https://github.com/pytorch/vision/issues/8394 | closed | [] | 2024-04-24T01:39:23Z | 2024-04-29T10:18:17Z | 1 | leizhenyuan |
pytorch/torchchat | 430 | [Feature Request] centralize measurement code |
@malfet said in https://github.com/pytorch/torchchat/pull/426
This code is repeated thrice in this PR. Can we have something like
```
with report_block_time("Time to load model"):
model = _load_model(builder_args, only_config=True)
device_sync(device=builder_args.device)
```
Might be a good comp... | https://github.com/pytorch/torchchat/issues/430 | closed | ["enhancement"] | 2024-04-23T22:26:16Z | 2024-05-12T21:32:58Z | 0 | mikekgfb |
huggingface/optimum | 1,826 | Phi3 support | ### Feature request
Microsoft's new phi3 model, in particular the 128K context mini model, is not supported by Optimum export.
Error is:
"ValueError: Trying to export a phi3 model, that is a custom or unsupported architecture, but no custom export configuration was passed as `custom_export_configs`. Please refer t... | https://github.com/huggingface/optimum/issues/1826 | closed | [] | 2024-04-23T15:54:21Z | 2024-05-24T13:53:08Z | 4 | martinlyons |
huggingface/datasets | 6,830 | Add a doc page for the convert_to_parquet CLI | Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova | https://github.com/huggingface/datasets/issues/6830 | closed | [
"documentation"
] | 2024-04-23T09:49:04Z | 2024-04-25T10:44:11Z | 0 | severo |
pytorch/serve | 3,103 | How to pass parameters from preprocessing to postprocessing when using micro-batch operations | ### 📚 The doc issue
I have a variable that is obtained by parsing the image data in pre-processing, but it is not an input to the model. I want to pass it to post-processing and return it together with the results, i.e., I would like to know how to pass it from pre-processing to post-processing.
### Suggest a potential alternative... | https://github.com/pytorch/serve/issues/3103 | closed | ["triaged"] | 2024-04-23T03:17:05Z | 2024-04-29T02:49:49Z | null | pengxin233 |
huggingface/transformers.js | 723 | 404 when trying Qwen in V3 | ### Question
This is probably just because V3 is a work in progress, but I wanted to make sure.
When trying to run Qwen 1.5 - 0.5B it works with the V2 script, but when swapping to V3 I get a 404 not found.
```
type not specified for model. Using the default dtype: q8.
GET https://huggingface.co/Xenova/Qwen1.5... | https://github.com/huggingface/transformers.js/issues/723 | open | ["question"] | 2024-04-22T19:14:17Z | 2024-05-28T08:26:09Z | null | flatsiedatsie |
huggingface/diffusers | 7,740 | How to get config of single_file | Hi,
Is there any way to get the equivalent of model_index.json from a single_file? | https://github.com/huggingface/diffusers/issues/7740 | closed | [] | 2024-04-22T14:00:21Z | 2024-04-22T23:26:50Z | null | suzukimain |
pytorch/torchchat | 372 | [Release] Documentation is sparse. | What does "the following models are supported" mean? Ostensibly you can load other models like language llama, as long as you have a params.json and they fit into the architectural parameters?
the preamble explains it supports "Android (Devices that support XNNPACK)" - how do I know that as a user?
"Supporting... | https://github.com/pytorch/torchchat/issues/372 | closed | [] | 2024-04-22T08:17:04Z | 2024-04-25T18:47:09Z | 1 | mikekgfb |
pytorch/torchchat | 364 | [Release][documentation] Docs Regression: documentation for export_et / install_et broken | From chat:
@iseeyuan
> Separate question: When I tried python torchchat.py export stories15M --output-pte-path stories15M.pte, I got Export with executorch requested but ExecuTorch could not be loaded.
If I run the culprit line, from export_et import export_model as export_model_et, I got this stack, [P121961472... | https://github.com/pytorch/torchchat/issues/364 | closed | [] | 2024-04-22T04:01:22Z | 2024-04-24T02:32:50Z | 1 | mikekgfb |
pytorch/torchchat | 357 | runner-et build documentation broken |
The runner build information in our documentation is in even worse shape than the ci.
@shoumikhin
> anyhow just followed the readme and then tried that cmake command, got [P1219498869 (https://www.internalfb.com/intern/paste/P1219498869/)
```
cmake -S ./runner-et -B et-build/cmake-out -G Ninja
-- Using ET... | https://github.com/pytorch/torchchat/issues/357 | closed | [] | 2024-04-21T21:37:26Z | 2024-05-12T21:38:59Z | 4 | mikekgfb |
pytorch/torchchat | 356 | runner, runner-et and runner-aoti documentation | Add a description of the runner/run.cpp
highlight that it's only a few lines of C++ code that need to be different for PyTorch AOTI and PyTorch ET.
Might also check how many lines of llama2.c we avoid having to write by autogenerating llama.{pte,so}
maybe @shoumikhin and Hansong (@cbilgin can you put the right git... | https://github.com/pytorch/torchchat/issues/356 | closed | [] | 2024-04-21T21:30:59Z | 2024-04-25T07:57:47Z | 2 | mikekgfb |
pytorch/torchchat | 354 | [Feature Request] add Dr. CI (when this repository goes public) | Now that we're a real pytorch project and in the pytorch repo, can we have @pytorch-bot build the same summaries for pytorch/torchchat as it does for pytorch/pytorch? I find those exceedingly helpful to navigate.
https://github.com/pytorch/pytorch/pull/124570#issuecomment-2068152908
🔗 Helpful Links
🧪 See art... | https://github.com/pytorch/torchchat/issues/354 | closed | ["enhancement"] | 2024-04-21T21:01:47Z | 2024-05-13T17:29:28Z | 4 | mikekgfb |
pytorch/torchchat | 347 | [Release] Seems like we get a bit of a garbage output? | Maybe this has to do with how we leverage the start and end tokens for prompt and response, but I feel like I'm getting garbage output?
Steps to reproduce:
1. Run `python torchchat.py chat stories15M`
2. Enter `Can you tell me about your day?` as the prompt
3. I then see the following result
```
What is your pr... | https://github.com/pytorch/torchchat/issues/347 | closed | [] | 2024-04-21T18:19:45Z | 2024-04-22T21:13:17Z | 1 | orionr |
pytorch/torchchat | 346 | [Release] Chat only responds to one line of text? | I would expect chat to be interactive, but it isn't for me right now.
Steps to reproduce:
1. Run `python torchchat.py chat stories15M`
2. Enter some text like "Hello"
3. Notice that you get a response, but then the command exits
Expected:
1. I'd be able to continue chatting with the model until I hit Ctrl-C o... | https://github.com/pytorch/torchchat/issues/346 | closed | [] | 2024-04-21T18:16:01Z | 2024-04-25T07:58:45Z | 2 | orionr |
pytorch/torchchat | 345 | [Feature request] Allow for GPU and MPS as defaults on machines that support it? | Given that we won't see good performance without GPU enabled for machines that support CUDA, should we make sure we select `gpu`, `mps` and then `cpu` in that order for `chat` and `generate` commands?
Is this potentially a blocker for full launch?
cc @malfet @mikekgfb @dbort @byjlw | https://github.com/pytorch/torchchat/issues/345 | closed | ["enhancement"] | 2024-04-21T17:54:06Z | 2024-04-30T06:31:55Z | 2 | orionr |
pytorch/torchchat | 344 | [Resolve] Force requirements.txt or README.md to install PyTorch nightlies? | Given that we won't see good performance with the release version of PyTorch, should we update requirements.txt and/or README.md to have people install nightlies?
Is this potentially a blocker for full launch?
cc @malfet @mikekgfb @dbort @byjlw | https://github.com/pytorch/torchchat/issues/344 | closed | [] | 2024-04-21T17:52:19Z | 2024-04-22T13:58:30Z | 4 | orionr |
pytorch/torchchat | 336 | [Mitigated, pending confirmation/closure] Review update documentation for GPTQ | https://github.com/pytorch/torchchat/edit/main/docs/quantization.md
Please update the documentation to include all necessary options and information to use GPTQ with eager execution and export .
cc: @jerryzh168 @HDCharles | https://github.com/pytorch/torchchat/issues/336 | closed | [] | 2024-04-21T08:19:38Z | 2024-04-25T17:13:46Z | 0 | mikekgfb |
huggingface/diffusers | 7,724 | RuntimeError: Error(s) in loading state_dict for AutoencoderKL: Missing Keys! How to solve? | ### Describe the bug
I am trying to get a LoRA to run locally on my computer by using this code: https://github.com/hollowstrawberry/kohya-colab and changing it to a local format. When I get to the loading of the models, it gives an error. It seems that the AutoEncoder model has changed, but I do not know how to adjust... | https://github.com/huggingface/diffusers/issues/7724 | closed | ["bug"] | 2024-04-19T13:27:17Z | 2024-04-22T08:45:24Z | null | veraburg |
huggingface/optimum | 1,821 | Idefics2 Support in Optimum for ONNX export | ### Feature request
With reference to the new Idefics2 model: https://huggingface.co/HuggingFaceM4/idefics2-8b
I would like to export it to ONNX which is currently not possible.
Please enable conversion support. Current Error with pip install transformers via GIT
```
Traceback (most recent call last):
File "... | https://github.com/huggingface/optimum/issues/1821 | open | [
"feature-request",
"onnx"
] | 2024-04-19T07:12:41Z | 2025-02-18T19:25:11Z | 8 | gtx-cyber |
pytorch/pytorch | 124,452 | How to use system cuda/cudnn | ### 🚀 The feature, motivation and pitch
I have a machine with a cuda/cudnn-compatible rocm device.
```
$ nvcc --version
HIPHSA: Author SUGON
HIP version: 5.4.23453
Cuda compilation tools, release 11.8, V11.8.89
clang version 15.0.0 (http://10.15.3.7/dcutoolkit/driverruntime/llvm-project.git 1be90618e508074abc... | https://github.com/pytorch/pytorch/issues/124452 | closed | [] | 2024-04-19T03:23:41Z | 2024-04-19T15:13:57Z | null | fancyerii |
huggingface/alignment-handbook | 158 | How to work with local data | I downloaded a dataset from hf. I want to load it locally, but it still tries to download it from hf and place it into the cache.
How can I use the local one I already downloaded?
Thank you. | https://github.com/huggingface/alignment-handbook/issues/158 | open | [] | 2024-04-18T10:26:14Z | 2024-05-14T11:20:55Z | null | pretidav |
huggingface/optimum-quanto | 182 | Can I use quanto on AMD GPU? | Does quanto work with AMD GPUs ? | https://github.com/huggingface/optimum-quanto/issues/182 | closed | ["question", "Stale"] | 2024-04-18T03:06:54Z | 2024-05-25T01:49:56Z | null | catsled |
huggingface/accelerate | 2,680 | How to get pytorch_model.bin from checkpoint files without zero_to_fp32.py | | https://github.com/huggingface/accelerate/issues/2680 | closed | [] | 2024-04-17T11:30:32Z | 2024-04-18T22:40:14Z | null | lipiji |
huggingface/datasets | 6,819 | Give more details in `DataFilesNotFoundError` when getting the config names | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (support... | https://github.com/huggingface/datasets/issues/6819 | open | [
"enhancement"
] | 2024-04-17T11:19:47Z | 2024-04-17T11:19:47Z | 0 | severo |
pytorch/vision | 8,382 | Regarding IMAGENET1K_V1 and IMAGENET1K_V2 weights | ### 🐛 Describe the bug
I found a very strange "bug" while I was trying to find similiar instances in a vector database of pictures. The model I used is ResNet50. The problem occurs only when using the` IMAGENET1K_V2` weights, but does not appear when using the legacy `V1` weights (referring to https://pytorch.org/b... | https://github.com/pytorch/vision/issues/8382 | open | [] | 2024-04-17T09:30:50Z | 2024-04-17T09:33:44Z | 0 | asusdisciple |
pytorch/TensorRT | 2,759 | ❓ [Question] How should the CMakeLists look like for running .ts files in C++? | ## ❓ Question
I am trying to load a .ts model in C++ on Jetson Orin NX. I am running on this container: https://github.com/dusty-nv/jetson-containers/tree/master/packages/pytorch/torch_tensorrt, version r35.3.1.
```
#include <torch/script.h> // One-stop header.
#include <torch_tensorrt/torch_tensorrt.h>
... | https://github.com/pytorch/TensorRT/issues/2759 | closed | ["question"] | 2024-04-17T09:15:23Z | 2024-04-24T05:39:27Z | null | DmytroIvakhnenkov |
huggingface/optimum | 1,818 | Request for ONNX Export Support for Blip Model in Optimum | Hi Team,
I hope this message finds you well.
I've encountered an issue while attempting to export Blip model into the ONNX format using Optimum. I have used below command.
`! optimum-cli export onnx -m Salesforce/blip-itm-base-coco --task feature-extraction blip_onnx`
It appears that Optimum currently l... | https://github.com/huggingface/optimum/issues/1818 | open | ["feature-request", "question", "onnx"] | 2024-04-17T08:55:45Z | 2024-10-14T12:26:36Z | null | n9s8a |
huggingface/transformers.js | 715 | How to unload/destroy a pipeline? | ### Question
I tried to find how to unload a pipeline to free up memory in the documentation, but couldn't find a mention of how to do that properly.
Is there a proper way to "unload" a pipeline?
I'd be happy to add the answer to the documentation. | https://github.com/huggingface/transformers.js/issues/715 | closed | ["question"] | 2024-04-16T09:02:05Z | 2024-05-29T09:32:23Z | null | flatsiedatsie |
pytorch/torchchat | 211 | [Feature request] Support more GGUF tensor formats | Today we support parsing for F16, F32, Q4_0, and Q6_K GGUF tensors (see gguf_util.py). We'd like to add support for more GGUF quantization formats in https://github.com/ggerganov/llama.cpp/blob/master/ggml-quants.c.
Adding support for a new format should be straightforward, using Q4_0 and Q6_K as guides.
For Q4_... | https://github.com/pytorch/torchchat/issues/211 | open | ["enhancement"] | 2024-04-16T01:57:25Z | 2024-04-25T18:13:44Z | 0 | metascroy |
pytorch/pytorch | 124,090 | Fakeifying a non-leaf subclass where inner tensor is noncontiguous incorrectly produces contiguous tensor. | Minified repro from internal:
```
def test_dtensor_tensor_is_not_autograd_leaf_but_local_is_noncontiguous(self):
# Temporarily ignore setUp(), and use rank3 graphs during tracing
dist.destroy_process_group()
fake_store = FakeStore()
dist.init_process_group(
"fake... | https://github.com/pytorch/pytorch/issues/124090 | closed | [
"high priority",
"triaged",
"oncall: pt2",
"module: pt2-dispatcher"
] | 2024-04-15T19:11:01Z | 2024-05-01T21:56:06Z | null | bdhirsh |
huggingface/transformers.js | 714 | Reproducing model conversions | ### Question
I'm trying to reproduce the conversion of `phi-1_5_dev` to better understand the process. I'm running into a few bugs / issues along the way that I thought it'd be helpful to document.
The model [`@Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev) states:
> https://huggingface.co/sus... | https://github.com/huggingface/transformers.js/issues/714 | open | ["question"] | 2024-04-15T15:02:33Z | 2024-05-10T14:26:00Z | null | thekevinscott |
huggingface/sentence-transformers | 2,594 | What is the maximum number of sentences that a fast cluster can cluster? | What is the maximum number of sentences that a fast cluster can cluster? When I cluster 2 million sentences, the cluster gets killed. | https://github.com/huggingface/sentence-transformers/issues/2594 | open | [] | 2024-04-15T09:55:06Z | 2024-04-15T09:55:06Z | null | BinhMinhs10 |
huggingface/dataset-viewer | 2,721 | Help dataset owner to chose between configs and splits? | See https://huggingface.slack.com/archives/C039P47V1L5/p1713172703779839
> Am I correct in assuming that if you specify a "config" in a dataset, only the given config is downloaded, but if you specify a split, all splits for that config are downloaded? I came across it when using facebook's belebele (https://hugging... | https://github.com/huggingface/dataset-viewer/issues/2721 | open | ["question", "P2"] | 2024-04-15T09:51:43Z | 2024-05-24T15:17:51Z | null | severo |
pytorch/serve | 3,086 | How to modify torchserve’s Python runtime from 3.8.0 to 3.10 | ### 📚 The doc issue
My handler uses the syntax of Python 3.10, but the log shows Python runtime: 3.8.0, causing the model to fail to run. I would like to ask how to convert its environment to Python 3.10. I have introduced the dependencies of the Python 3.10 version into the corresponding dockerfile.
### Suggest a po... | https://github.com/pytorch/serve/issues/3086 | closed | ["triaged"] | 2024-04-15T05:39:53Z | 2024-04-23T17:26:08Z | null | pengxin233 |
huggingface/diffusers | 7,676 | How to determine the type of file, such as checkpoint, etc. | Hello.
Is there some kind of script that determines the type of file "checkpoint", "LORA", "textual_inversion", etc.? | https://github.com/huggingface/diffusers/issues/7676 | closed | [] | 2024-04-14T23:58:08Z | 2024-04-15T02:50:43Z | null | suzukimain |
huggingface/diffusers | 7,670 | How to use IDDPM in diffusers ? | The code base is here:
https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py | https://github.com/huggingface/diffusers/issues/7670 | closed | ["should-move-to-discussion"] | 2024-04-14T12:30:34Z | 2024-11-20T00:17:18Z | null | jiarenyf |
pytorch/torchchat | 174 | core dump in ci |
We get quite repeatable core dumps with a segmentation fault, e.g., here https://github.com/pytorch/torchat/actions/runs/8676531709/job/23791140949?pr=171
/home/runner/work/_temp/aa3d75e7-8cff-4789-ba8a-71b211235396.sh: line 4: 2369 Segmentation fault (core dumped) python generate.py --dtype ${DTYPE} --check... | https://github.com/pytorch/torchchat/issues/174 | closed | [] | 2024-04-14T07:39:12Z | 2024-04-25T08:07:14Z | 2 | mikekgfb |
huggingface/transformers.js | 713 | Help understanding logits and model vocabs | ### Question
I'm trying to write a custom `LogitsProcessor` and have some questions. For reference, I'm using [`Xenova/phi-1_5_dev`](https://huggingface.co/Xenova/phi-1_5_dev). I'm trying to implement a custom logic for white or blacklisting tokens, but running into difficulties understanding how to interpret token ... | https://github.com/huggingface/transformers.js/issues/713 | open | ["question"] | 2024-04-13T21:06:14Z | 2024-04-14T15:17:43Z | null | thekevinscott |
pytorch/audio | 3,773 | DEVICE AV-ASR WITH EMFORMER RNN-T tutorial : avsr not found | ### 🐛 Describe the bug
Hi, I am trying the device av-asr tutorial (https://pytorch.org/audio/stable/tutorials/device_avsr.html). When I try to run the code in the tutorial, it shows "no module named avsr" when executing the following code:
`from avsr.data_prep.detectors.mediapipe.detector import LandmarksDetec... | https://github.com/pytorch/audio/issues/3773 | closed | [] | 2024-04-13T14:31:19Z | 2024-04-13T14:37:11Z | 0 | sfcgta4794 |
huggingface/lighteval | 155 | How to run 30b plus model with lighteval when accelerate launch failed? OOM | CUDA Memory OOM when I launch an evaluation for 30b model using lighteval.
What's the correct config for it?
huggingface/transformers | 30,213 | Mamba: which tokenizer has been saved and how to use it? | ### System Info
Hardware independent.
### Who can help?
@ArthurZucker
I described the doubts in the link below around 1 month ago, but maybe model-hub discussions are not so active, so I post it here as a repo issue. Please, let me know where to discuss it :)
https://huggingface.co/state-spaces/mamba-2.8b-hf/... | https://github.com/huggingface/transformers/issues/30213 | closed | [] | 2024-04-12T11:28:17Z | 2024-05-17T13:13:12Z | null | javiermcebrian |
huggingface/sentence-transformers | 2,587 | Implementing Embedding Quantization for Dynamic Serving Contexts | I'm currently exploring embedding quantization strategies to enhance storage and computation efficiency while maintaining high accuracy. Specifically, I'm looking at integrating these strategies with Infinity (https://github.com/michaelfeil/infinity/discussions/198), a high-throughput, low-latency REST API for serving ... | https://github.com/huggingface/sentence-transformers/issues/2587 | open | [
"question"
] | 2024-04-11T11:03:23Z | 2024-04-12T07:28:48Z | null | Nookbe |
huggingface/diffusers | 7,636 | how to use the controlnet sdxl tile model in diffusers | ### Describe the bug
I want to use [this model](https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic_V1) to make my slightly blurry photos clear, so I found this model.
I follow the code [here](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile), but as the model mentioned above is XL not... | https://github.com/huggingface/diffusers/issues/7636 | closed | ["bug", "stale"] | 2024-04-11T03:20:42Z | 2024-06-29T13:26:58Z | null | xinli2008 |
huggingface/optimum-quanto | 161 | Question: any plan to formally support smooth quantization and make it more general | Awesome work!
I noticed there is smooth quant implemented under [external](https://github.com/huggingface/quanto/tree/main/external/smoothquant). Currently, its implementation seems to be model-specific; we can only apply smooth on special `Linear`.
However, in general, the smooth can be applied on any `Linear` ... | https://github.com/huggingface/optimum-quanto/issues/161 | closed | ["question", "Stale"] | 2024-04-11T02:45:31Z | 2024-05-18T01:49:52Z | null | yiliu30 |
pytorch/xla | 6,916 | SPMD + Dynamo | ## ❓ Questions and Help
Is there a way to get SPMD working with Dynamo/`torch.compile` to reduce the overhead of Pytorch re-tracing the module every time it gets called? | https://github.com/pytorch/xla/issues/6916 | closed | [] | 2024-04-11T01:50:44Z | 2024-04-12T19:50:56Z | 4 | BitPhinix |
pytorch/vision | 8,372 | Nightly build flaky pytorch/vision / conda-py3_11-cpu builds | ### 🐛 Describe the bug
Flaky issue on pytorch/vision / conda-py3_11-cpu builds. Has been happening for a while now.
Most likely due to corrupt worker environment:
```
+ __conda_exe run -p /Users/ec2-user/runner/_work/_temp/pytorch_pkg_helpers_8521283920_smoke python3 pytorch/vision/test/smoke_test.py
+ /opt/... | https://github.com/pytorch/vision/issues/8372 | open | [] | 2024-04-10T15:48:12Z | 2024-04-10T15:49:09Z | 1 | atalman |
pytorch/serve | 3,078 | Serve multiple models with both CPU and GPU | Hi guys, I have a question: Can I serve several models (about 5 - 6 models) using both CPU and GPU inference? | https://github.com/pytorch/serve/issues/3078 | open | ["question", "triaged"] | 2024-04-10T15:03:35Z | 2025-01-12T06:29:51Z | null | hungtrieu07 |
huggingface/accelerate | 2,647 | How to use deepspeed with dynamic batch? | ### System Info
```Shell
- `Accelerate` version: 0.29.1
- Platform: Linux-5.19.0-46-generic-x86_64-with-glibc2.35
- `accelerate` bash location: /home/yuchao/miniconda3/envs/TorchTTS/bin/accelerate
- Python version: 3.10.13
- Numpy version: 1.23.5
- PyTorch version (GPU?): 2.2.2+cu118 (True)
- PyTorch XPU availab... | https://github.com/huggingface/accelerate/issues/2647 | closed | [] | 2024-04-10T09:09:53Z | 2025-05-11T15:07:27Z | null | npuichigo |
huggingface/transformers.js | 690 | Is top-level await necessary in the v3 branch? | ### Question
I saw the excellent performance of WebGPU, so I tried to install xenova/transformers.js#v3 as a dependency in my project.
I found that v3 uses the top-level await syntax. If I can't restrict users to using the latest browser version, I have to make it compatible (using `vite-plugin-top-level-await` o... | https://github.com/huggingface/transformers.js/issues/690 | closed | ["question"] | 2024-04-10T08:49:32Z | 2024-04-11T17:18:42Z | null | ceynri |
huggingface/optimum-quanto | 158 | How does quanto support int8 conv2d and linear? | Hi, I looked into the code and didn't find any CUDA kernel related to conv2d and linear. How did you implement the CUDA backend for conv2d/linear? Thanks | https://github.com/huggingface/optimum-quanto/issues/158 | closed | ["question"] | 2024-04-10T05:41:43Z | 2024-04-11T09:26:35Z | null | zhexinli |
huggingface/transformers.js | 689 | Abort the audio recognition process | ### Question
Hello! How can I stop the audio file recognition process while keeping the model loaded? If I terminate the worker, I have to reload the model to start recognizing a new audio file. I need either functionality to be able to send a pipeline command to stop the recognition process, or the abil... | https://github.com/huggingface/transformers.js/issues/689 | open | ["question"] | 2024-04-10T02:51:37Z | 2024-04-20T06:09:11Z | null | innoware11 |
huggingface/transformers | 30,154 | Question about how to write code for trainer and dataset for multi-gpu | ### System Info
- Platform: Linux-5.15.0-1026-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Task... | https://github.com/huggingface/transformers/issues/30154 | closed | [] | 2024-04-10T00:08:00Z | 2024-04-10T22:57:53Z | null | zch-cc |
huggingface/accelerate | 2,643 | How to use gather_for_metrics for object detection models? | ### Reproduction
I used the `gather_for_metrics` function as follows:
```python
predictions, ground_truths = accelerator.gather_for_metrics((predictions, ground_truths))
```
And I've got the error:
```
accelerate.utils.operations.DistributedOperationException: Impossible to apply the desired operation due to... | https://github.com/huggingface/accelerate/issues/2643 | closed | [] | 2024-04-09T23:15:20Z | 2024-04-30T07:48:36Z | null | yann-rdgz |
pytorch/torchx | 875 | Fix Nightly push permissions | ## ❓ Questions and Help
### Please note that this issue tracker is not a help form and this issue will be closed.
Before submitting, please ensure you have gone through our
[documentation](https://pytorch.org/torchx).
### Question
<!-- your question here -->
Is it possible to fix the nightly push perm... | https://github.com/meta-pytorch/torchx/issues/875 | closed | [] | 2024-04-09T19:38:04Z | 2024-04-10T18:26:16Z | 6 | ryxli |
huggingface/candle | 2,033 | How to use CUDA as the backend in `candle-wasm-examples/llama2-c` ? | How to use CUDA as the backend in `candle-wasm-examples/llama2-c` ?
In `candle-wasm-examples/llama2-c`, I do some changes shown below.
```diff
--- a/candle-wasm-examples/llama2-c/Cargo.toml
+++ b/candle-wasm-examples/llama2-c/Cargo.toml
@@ -9,7 +9,7 @@ categories.workspace = true
license.workspace = true
... | https://github.com/huggingface/candle/issues/2033 | closed | [] | 2024-04-09T16:16:55Z | 2024-04-12T08:26:24Z | null | wzzju |
huggingface/optimum | 1,804 | advice for simple onnxruntime script for ORTModelForVision2Seq (or separate encoder/decoder) | I am trying to use implement this [class ](https://github.com/huggingface/optimum/blob/69af5dbab133f2e0ae892721759825d06f6cb3b7/optimum/onnxruntime/modeling_seq2seq.py#L1832) in C++ because unfortunately I didn't find any C++ implementation for this.
Therefore, my current approach is to revert this class and the au... | https://github.com/huggingface/optimum/issues/1804 | open | ["question", "onnxruntime"] | 2024-04-09T15:14:40Z | 2024-10-14T12:41:15Z | null | eduardatmadenn |
huggingface/chat-ui | 997 | Community Assistants | Hi, I've looked through all the possible issues but I didn't find what I was looking for.
On self-hosted is the option to have the community assistants such as the ones on https://huggingface.co/chat/ not available? I've also noticed that when I create Assistants on my side they do not show up on community tabs eit... | https://github.com/huggingface/chat-ui/issues/997 | closed | ["help wanted", "assistants"] | 2024-04-09T12:44:49Z | 2024-04-23T06:09:47Z | 2 | Coinficient |
huggingface/evaluate | 570 | [Question] How to have no preset values sent into `.compute()` | We've a use-case https://huggingface.co/spaces/alvations/llm_harness_mistral_arc/blob/main/llm_harness_mistral_arc.py
where default feature input types for `evaluate.Metric` is nothing and we get something like this in our `llm_harness_mistral_arc/llm_harness_mistral_arc.py`
```python
import evaluate
import dat... | https://github.com/huggingface/evaluate/issues/570 | open | [] | 2024-04-08T22:58:41Z | 2024-04-08T23:54:42Z | null | alvations |
huggingface/transformers | 30,122 | What is the default multi-GPU training type? | ### System Info
NA
### Who can help?
@ArthurZucker , @younesbelkada
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
... | https://github.com/huggingface/transformers/issues/30122 | closed | [] | 2024-04-08T11:45:59Z | 2024-05-10T10:35:41Z | null | RonanKMcGovern |
huggingface/optimum | 1,798 | Issue Report: Unable to Export Qwen Model to ONNX Format in Optimum | ### System Info
```shell
Optimum Version: 1.18.0
Python Version: 3.8
Platform: Windows, x86_64
```
### Who can help?
@michaelbenayoun @JingyaHuang @echarlaix
I am writing to report an issue I encountered while attempting to export a Qwen model to ONNX format using Optimum.
Error message:
" ValueError: Tryin... | https://github.com/huggingface/optimum/issues/1798 | open | [
"bug"
] | 2024-04-08T11:36:09Z | 2024-04-08T11:36:09Z | 0 | Harini-Vemula-2382 |
huggingface/chat-ui | 986 | Github actions won't push built docker images on releases | We currently have a [github actions workflow](https://github.com/huggingface/chat-ui/blob/main/.github/workflows/build-image.yml) that builds an image on every push to `main` and tags it with `latest` and the commit id. [(see here)](https://github.com/huggingface/chat-ui/pkgs/container/chat-ui/versions)
The workflow... | https://github.com/huggingface/chat-ui/issues/986 | closed | ["help wanted", "CI/CD"] | 2024-04-08T07:51:13Z | 2024-04-08T11:27:42Z | 2 | nsarrazin |
huggingface/candle | 2,025 | How to specify which graphics card to run a task on in a server with multiple graphics cards? | | https://github.com/huggingface/candle/issues/2025 | closed | [] | 2024-04-07T10:48:35Z | 2024-04-07T11:05:52Z | null | lijingrs |
pytorch/torchchat | 77 | [Feature request] Need a format for test reports and how we might track them? | Maybe we build a table, with something like
| Model. | Target tested | Platform tested (*) | submitter | test date | link to test transcript |
|--|--|--|--|--|--|
| stories15M | generate, AOTI CPU | Ubuntu x86 24.04 | mikekgfb | 2024-04-06 | [test transcript](https://github.com/pytorch-labs/llama-fast/actions... | https://github.com/pytorch/torchchat/issues/77 | open | ["enhancement"] | 2024-04-07T06:04:24Z | 2024-04-25T18:14:04Z | 0 | mikekgfb |
pytorch/torchchat | 70 | [Usability] Clean installation and first example steps in README to standardize on stories15M? | Looking great! However, I went through the README steps on a new M1 and hit a few issues. It would be ideal if we can make this a clean list of commands that a person could cut and paste all the way through. Here are some thoughts:
Can we move "The model definition (and much more!) is adopted from gpt-fast, so we su... | https://github.com/pytorch/torchchat/issues/70 | closed | [] | 2024-04-06T22:13:18Z | 2024-04-20T01:35:39Z | 6 | orionr |
huggingface/text-embeddings-inference | 229 | Question: How to add a prefix to the underlying server | I've managed to run the text embeddings inference perfectly using the already built docker images, and I'm trying to make it available to our internal components.
Right now they're sharing the following behavior:
Myhost.com/modelname/v1/embeddings
I was wondering if it is possible to add this "model name" as a prefix inside ... | https://github.com/huggingface/text-embeddings-inference/issues/229 | closed | [] | 2024-04-06T17:29:59Z | 2024-04-08T09:14:40Z | null | Ryojikn |
pytorch/torchchat | 69 | [Feature request] Torchchat performance comparison to gpt-fast | At present, llama-fast is 2x slower than gpt-fast when run out of the box. The root cause is we default to fp32 rather than bf16 (reducing our peak perf potential in a major way).
I changed the default to fp32 because some mobile targets do not support FP16 (let alone bfloat16), so this was the least common de... | https://github.com/pytorch/torchchat/issues/69 | closed | ["enhancement"] | 2024-04-06T16:36:03Z | 2024-05-12T21:36:56Z | 3 | mikekgfb |
huggingface/transformers.js | 685 | Transformers.js seems to need an internet connection when it shouldn't? (Error: no available backend found.) | ### Question
What is the recommended way to get Transformers.js to work even when, later on, there is no internet connection?
Is it using a service worker? Or are there other (perhaps hidden) settings for managing caching of files?
I'm assuming here that the `Error: no available backend found` error message is r... | https://github.com/huggingface/transformers.js/issues/685 | open | ["question"] | 2024-04-06T12:40:15Z | 2024-09-03T01:22:15Z | null | flatsiedatsie |
huggingface/trl | 1,510 | [question] how to apply model parallelism to solve cuda memory error | hi team. I am using the SFT and PPO code to train my model, link https://github.com/huggingface/trl/tree/main/examples/scripts.
Due to the long context length and 7B-level model size, I am facing a CUDA memory issue on my single GPU.
Is there any straightforward manner to utilize multiple gpus on my server to train th... | https://github.com/huggingface/trl/issues/1510 | closed | [] | 2024-04-06T02:09:36Z | 2024-05-06T17:02:35Z | null | yanan1116 |
pytorch/tutorials | 2,827 | Misleading example for per-sample gradient | In the example of per-sample gradient, the following line can be misleading since the `predictions` of a net are logits:
https://github.com/pytorch/tutorials/blob/08a61b7cae9d00312d0029b1f86a248ec1253a83/intermediate_source/per_sample_grads.py#L49
The correct way should be:
``` python
return F.nll_loss(F.log_s... | https://github.com/pytorch/tutorials/issues/2827 | closed | [] | 2024-04-06T00:27:51Z | 2024-04-24T17:52:48Z | 3 | mingfeisun |
huggingface/dataset-viewer | 2,667 | Rename datasets-server to dataset-viewer in infra internals? | Follow-up to #2650.
Is it necessary? Not urgent in any case.
Some elements to review:
- [ ] https://github.com/huggingface/infra
- [ ] https://github.com/huggingface/infra-deployments
- [ ] docker image tags (https://hub.docker.com/r/huggingface/datasets-server-services-search -> https://hub.docker.com/r/huggi... | https://github.com/huggingface/dataset-viewer/issues/2667 | closed | ["question", "P2"] | 2024-04-05T16:53:34Z | 2024-04-08T09:26:14Z | null | severo |
huggingface/dataset-viewer | 2,666 | Change API URL to dataset-viewer.huggingface.co? | Follow-up to https://github.com/huggingface/dataset-viewer/issues/2650
Should we do it?
- https://github.com/huggingface/dataset-viewer/issues/2650#issuecomment-2040217875
- https://github.com/huggingface/moon-landing/pull/9520#issuecomment-2040220911
If we change it, we would have to update:
- moon-landing
-... | https://github.com/huggingface/dataset-viewer/issues/2666 | closed | ["question", "P2"] | 2024-04-05T16:49:13Z | 2024-04-08T09:24:43Z | null | severo |