| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/xla | 7,255 | [RFC] torch_xla2 dynamo integration | # Dynamo backend for torchxla2
## Goal
Have a dynamo backend backed by torch_xla2.
The users should be able to do the following:
```python
m = model ...
m_compiled = torch.compile(m, backend='torch_xla2_compile') # backend name TBD
result = m_compiled(*inputs)
```
The above should run on TPU will l... | https://github.com/pytorch/xla/issues/7255 | open | [
"dynamo",
"RFC",
"torchxla2"
] | 2024-06-12T17:31:23Z | 2025-11-12T19:14:04Z | 7 | qihqi |
huggingface/chat-ui | 1,277 | Difficulties with chat-ui prompt to text-generation-webui openai api endpoint | Hello,
I'm trying my best to get the huggingface ```chat-ui``` working with the API endpoint of ```text-generation-webui```.
I would be really happy if I could get a hint what I am doing wrong.
Here is a reverse proxied test instance: https://chat-ui-test.pischem.com/
I can't get my prompt that I input into... | https://github.com/huggingface/chat-ui/issues/1277 | closed | [
"support"
] | 2024-06-12T14:18:12Z | 2025-01-30T18:46:22Z | 7 | Monviech |
huggingface/chat-ui | 1,275 | Feature Request - support for session sharing, archiving, and collaboration | AFAIK, HuggingChat (HC) currently has no support for session sharing, archiving, and collaboration. At least, neither the HC server nor my GitHub (GH) searching found anything like this. So, if this doesn't exist, please consider how it could be implemented. For example, if I wanted to publish an HC session, maybe I co... | https://github.com/huggingface/chat-ui/issues/1275 | open | [
"question"
] | 2024-06-12T11:35:31Z | 2024-06-14T05:24:08Z | null | RichMorin |
huggingface/lerobot | 263 | Seeking advice on how to choose between ACT and DP algorithms | Hello,
Thank you very much for the work you have done in bringing together the current excellent imitation learning collections for convenient use. Regarding the ACT algorithm and DP algorithm, besides the basic differences in the algorithms themselves, how should one choose between them for different tasks? Do they... | https://github.com/huggingface/lerobot/issues/263 | closed | [
"question"
] | 2024-06-12T07:45:39Z | 2024-06-19T14:02:43Z | null | le-wei |
pytorch/xla | 7,253 | [RFC] PyTorch/XLA eager mode as default | # Context
## Objective
In this RFC I will talk about the roadmap to enable eager mode as the default computation mode for PyTorch/XLA users and how to enable graph compilation in this mode.
## Background
PyTorch/XLA has been using tracing mode as the default mode since the project started. All of the to... | https://github.com/pytorch/xla/issues/7253 | open | [
"usability",
"RFC",
"eager"
] | 2024-06-12T03:40:12Z | 2025-11-09T19:39:21Z | 5 | JackCaoG |
pytorch/executorch | 3,939 | How can I use the generated pte file to process my own data and predict the results? | auto train_loader = torch::data::make_data_loader(
SWaTegLoader("/dataset/train.csv", 100, 10, "train"),
batch_size=256,
torch::data::DataLoaderOptions().workers(0).shuffle(true)
);
Is this correct? Then how do we process the data with the model?
for (auto& batch : *train_loader) {
... | https://github.com/pytorch/executorch/issues/3939 | closed | [
"need-user-input"
] | 2024-06-11T22:22:13Z | 2025-02-05T17:44:36Z | null | tayloryoung-o |
huggingface/dataset-viewer | 2,899 | Standardize access to metrics and healthcheck | In some apps, the metrics and healthcheck are public:
- https://datasets-server.huggingface.co/admin/metrics
- https://datasets-server.huggingface.co/sse/metrics
- https://datasets-server.huggingface.co/sse/healthcheck
- https://datasets-server.huggingface.co/healthcheck
- On others, it's forbidden or not found:... | https://github.com/huggingface/dataset-viewer/issues/2899 | open | [
"question",
"infra",
"P2"
] | 2024-06-11T14:39:10Z | 2024-07-11T15:38:17Z | null | AndreaFrancis |
huggingface/lerobot | 261 | Which low-cost robot with teleoperation to test the library? | Firstly, thank you for all the work. At my company we would like to obtain results on real robots from this repository. However, the original setups are either quite expensive (around ~30k for Aloha) or require reconstruction for the UMI interface from Columbia via 3D printing, which would be time-consuming considering... | https://github.com/huggingface/lerobot/issues/261 | closed | [
"question"
] | 2024-06-11T13:21:32Z | 2024-07-23T07:55:15Z | null | RochMollero |
pytorch/pytorch | 128,414 | How to enable XNNPACK instead of NNPACK/MKLDNN in Windows? | ### 🚀 The feature, motivation and pitch
I'm trying to compile PyTorch for Windows on ARM64 device. I've got one workable version, but NNPACK/MKLDNN doesn't work in ARM64 windows. May I know how to enable XNNPACK as the default 'PACK' to improve the performance?
Thanks in advance!
### Alternatives
_No response_
##... | https://github.com/pytorch/pytorch/issues/128414 | open | [
"module: windows",
"triaged",
"module: xnnpack",
"module: arm"
] | 2024-06-11T12:53:01Z | 2024-09-04T10:33:25Z | null | zhanweiw |
huggingface/diarizers | 11 | How can I save the model locally before pushing it to the Hub? | https://github.com/huggingface/diarizers/issues/11 | closed | [] | 2024-06-11T06:37:45Z | 2024-06-13T16:24:19Z | null | ma-mohsen | |
huggingface/parler-tts | 68 | How to predict after finetuning? There is no config.json in the checkpoint dir. | https://github.com/huggingface/parler-tts/issues/68 | open | [] | 2024-06-11T03:30:04Z | 2024-06-17T01:57:04Z | null | lyt719 | |
pytorch/data | 1,271 | Returning tensor instead of dict for state_dict causes failure | ### 🐛 Describe the bug
```
class TensorStateDataset(torch.utils.data.IterableDataset, Stateful, Iterator):
def __init__(self, length):
self.length = length
self.i = 0
def __iter__(self):
return self
def __next__(self):
if self.i >= self.length:
... | https://github.com/meta-pytorch/data/issues/1271 | closed | [
"bug",
"stateful_dataloader"
] | 2024-06-10T23:49:43Z | 2024-06-13T19:16:27Z | 2 | gokulavasan |
pytorch/tutorials | 2,926 | 💡 [REQUEST] - New recipe tutorial on calculating layer output dimensions | ### 🚀 Describe the improvement or the new tutorial
This tutorial will help users understand how to transition from convolutional and pooling layers to linear layers in their models.
Learning objectives:
- How to manually calculate the output dimensions after applying a convolution or pooling layer
- How to print... | https://github.com/pytorch/tutorials/issues/2926 | closed | [] | 2024-06-10T23:01:44Z | 2025-04-16T20:08:34Z | 2 | loganthomas |
pytorch/tutorials | 2,925 | 💡 [REQUEST] - New recipe tutorial on implementing a Keras progress bar | ### 🚀 Describe the improvement or the new tutorial
This tutorial will help users better understand how to implement a Keras progress bar in PyTorch.
- How to implement with a traditional train/test loop
- How to implement with a train loop with validation data
### Existing tutorials on this topic
_No res... | https://github.com/pytorch/tutorials/issues/2925 | closed | [] | 2024-06-10T22:59:38Z | 2025-04-16T20:08:41Z | 0 | loganthomas |
pytorch/tutorials | 2,924 | 💡 [REQUEST] - New recipe tutorial on accessing model parameters | ### 🚀 Describe the improvement or the new tutorial
This tutorial will help beginners understand how to access and make sense of model parameters, collect trainable parameters, and use `torchinfo.summary()`.
Learning objectives:
- How to inspect a model's parameters using ``.parameters()`` and ``.named_paramete... | https://github.com/pytorch/tutorials/issues/2924 | open | [] | 2024-06-10T22:56:58Z | 2024-06-10T23:01:48Z | 0 | loganthomas |
pytorch/xla | 7,232 | How to convert hlo.pb to hlo text? | ## ❓ Questions and Help
### How to convert hlo.pb to hlo_text in the torch_xla ecosystem?
In JAX we can do the following:
```python
from jax.lib.xla_bridge import xla_client
fname = "model.hlo.pb"
with open(fname, mode="rb") as f:
comp = xla_client.XlaComputation(f.read())
print(comp.as_hlo_text())
```
... | https://github.com/pytorch/xla/issues/7232 | closed | [
"question"
] | 2024-06-10T20:50:31Z | 2025-06-05T01:49:49Z | null | apivovarov |
huggingface/transformers.js | 802 | Long running transcription using webgpu-whisper | ### Question
Noob question - the [webgpu-whisper](https://github.com/xenova/transformers.js/tree/v3/examples/webgpu-whisper) demo does real-time transcription; however, it doesn't build out a full transcript from the start, i.e., 2 mins into transcription, the first few transcribed lines disappear.
Transcript at tim... | https://github.com/huggingface/transformers.js/issues/802 | open | [
"question"
] | 2024-06-10T16:44:01Z | 2025-05-30T05:52:37Z | null | iamhitarth |
huggingface/sentence-transformers | 2,738 | How is `max_length` taken into account compared to the model's setting | What happens under the hood if I set max_length greater than the model's max_length?
It seems to work, but are inputs truncated, or do you apply RoPE extension? | https://github.com/huggingface/sentence-transformers/issues/2738 | open | [] | 2024-06-09T15:59:09Z | 2024-06-10T06:45:49Z | null | l4b4r4b4b4 |
huggingface/datasets | 6,961 | Manual downloads should count as downloads | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
Th... | https://github.com/huggingface/datasets/issues/6961 | open | [
"enhancement"
] | 2024-06-09T04:52:06Z | 2024-06-13T16:05:00Z | 1 | umarbutler |
huggingface/diffusers | 8,439 | How to use EDM2 model with diffusers? | model safetensors: https://huggingface.co/RedRocket/Fluffyrock-Unbound/blob/main/Fluffyrock-Unbound-v1-1.safetensors
yaml: https://huggingface.co/RedRocket/Fluffyrock-Unbound/raw/main/Fluffyrock-Unbound-v1-1.yaml
colab demo:
https://colab.research.google.com/drive/1LSGvjWXNVjs6Tthcpf0F5VwuTFJ_d-oB
results:
... | https://github.com/huggingface/diffusers/issues/8439 | open | [
"stale"
] | 2024-06-09T03:39:05Z | 2024-09-14T15:10:19Z | null | s9anus98a |
huggingface/transformers | 31,323 | Language modeling examples do not show how to do multi-gpu training / fine-tuning | ### System Info
- `transformers` version: 4.41.2
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.9.18
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.2
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tenso... | https://github.com/huggingface/transformers/issues/31323 | closed | [
"Documentation"
] | 2024-06-07T18:49:35Z | 2024-12-02T08:11:31Z | null | csiefer2 |
huggingface/candle | 2,258 | How to Implement New Operators Using CUDA Host Functions Along with Thrust and CUB Libraries | As stated, the CUDA code in the candle-kernels repository seems to only contain kernel functions. When I want to implement new operators (such as nonzero), it seems I'm only able to use Rust for higher-level functionality, which means I cannot utilize the device_vector from Thrust or the flagged APIs from CUB. This pos... | https://github.com/huggingface/candle/issues/2258 | open | [] | 2024-06-07T16:52:44Z | 2024-06-09T15:56:36Z | null | chenwanqq |
huggingface/text-generation-inference | 2,035 | What is TGI's graceful shutdown behavior? | When SIGKILL arrives,
- does TGI process all pending inputs?
- does TGI block incoming inputs?
I saw a PR that adds graceful shutdown but it did not specify the exact program behavior. | https://github.com/huggingface/text-generation-inference/issues/2035 | closed | [] | 2024-06-07T06:24:00Z | 2024-06-07T08:08:51Z | null | seongminp |
huggingface/tokenizers | 1,549 | How to use `TokenizerBuilder`? | I expected `TokenizerBuilder` to produce a `Tokenizer` from the `build()` result, but instead `Tokenizer` wraps `TokenizerImpl`.
No problem, I see that it impl `From<TokenizerImpl> for Tokenizer`, but it's attempting to do quite a bit more for some reason? Meanwhile I cannot use `Tokenizer(unwrapped_build_result_her... | https://github.com/huggingface/tokenizers/issues/1549 | closed | [
"Stale"
] | 2024-06-07T01:18:07Z | 2024-07-20T01:52:03Z | null | polarathene |
huggingface/transformers.js | 796 | No performance gain on using WebGPU | ### Question
I want to use the model: https://huggingface.co/Xenova/clip-vit-large-patch14 with WebGPU for quick inference in the browser. I ran the WebGPU benchmark to observe the performance increase and indeed it showed a ~7x improvement in speed on my device.
But when I run the clip model linked above, there's ... | https://github.com/huggingface/transformers.js/issues/796 | closed | [
"question"
] | 2024-06-06T20:16:07Z | 2024-06-09T01:44:17Z | null | mr-sarthakgupta |
huggingface/optimum | 1,895 | Lift upper version limit of transformers for habana | ### Feature request
optimum currently limits transformers to `>= 4.38.0, < 4.39.0`. @regisss bumped the upper version limit in PR #1851 a month ago. Is there any technical reason to limit the upper version to `< 4.39`? Other dependencies allow for more recent versions. For example, neuronx allows `< 4.42.0`, see #1881...
pytorch/xla | 7,203 | [RFC] PR Cherrypicking Process After a Release Branch Cut | ## 🚀 Feature
In this RFC, we propose the policy aiming to guide the decision-making process for determining whether Pull Requests (PRs) should be cherry-picked onto a release branch after the release branch has been cut. The goal is to maintain the stability and predictability of releases while addressing critical ... | https://github.com/pytorch/xla/issues/7203 | open | [
"RFC"
] | 2024-06-05T22:19:07Z | 2025-09-11T23:04:41Z | 2 | lsy323 |
huggingface/peft | 1,829 | How to change to PEFT model dynamically? | python==3.7.12
PEFT==0.3.0
@BenjaminBossan
I fine-tune the eleventh transformer layer of BERT as below:
```python
target_modules = []
target_modules.append("11.attention.self.query")
target_modules.append("11.attention.self.value")
lora_config = LoraConfig(
r = self.args.lora_rank,
lora_alpha = self.... | https://github.com/huggingface/peft/issues/1829 | closed | [] | 2024-06-05T13:24:40Z | 2024-06-06T00:37:06Z | null | whr819987540 |
pytorch/xla | 7,196 | Distributed spmd training with multiple compilations | ## โ Questions and Help
When starting gpu spmd training with `torchrun`, why does it need to be compiled once per machine? Although the resulting graph is the same. Is there any way to avoid it | https://github.com/pytorch/xla/issues/7196 | closed | [
"question"
] | 2024-06-05T08:46:55Z | 2025-04-07T13:32:17Z | null | mars1248 |
pytorch/torchchat | 857 | [Feature Request]: Continuous batching | Does torchchat plan to support asynchronous requests and continuous batching?
To get higher tokens/second by making efficient use of compute, continuous batching is a common strategy that is used.
We could specify the `batch_size` `n` as a parameter and `torchchat` behind the scene would send `n` number of prom... | https://github.com/pytorch/torchchat/issues/857 | closed | [] | 2024-06-05T02:22:36Z | 2024-06-14T09:21:53Z | 1 | agunapal |
huggingface/transformers.js | 792 | Feature request: YOLO-World/Grounding DINO (Zero shot object detection) | ### Question
Hi!
I'm trying out some of the zero shot capabilities and I've been working with the owlv2 but I was wondering, is support for yolo-world and grounding Dino coming? They seem to be faster than owlv2.
Thanks! | https://github.com/huggingface/transformers.js/issues/792 | open | [
"question"
] | 2024-06-04T21:39:18Z | 2024-06-24T07:04:27Z | null | rogueturnip |
pytorch/xla | 7,191 | How do I know which pytorch parameter corresponds to which parameter in hlo ir | ## ❓ Questions and Help
I am dumping the optimized HLO IR and designing a new backend. There are some parameters and their corresponding shapes in the IR file. But I don't know which parameter corresponds to which module in the defined PyTorch model. Is there a way to get the mapping details of the model's input(weights a... | https://github.com/pytorch/xla/issues/7191 | closed | [
"question"
] | 2024-06-04T18:32:56Z | 2025-04-07T13:33:10Z | null | yao-jz |
huggingface/transformers.js | 791 | env.allowLocalModels and env.allowRemoteModels | ### Question
When I set env.allowLocalModels = true and look at the env object I see both
env.allowLocalModels and env.allowRemoteModels set to true. Does this mean that it will look for models locally first and then if not found go to the remoteHost? | https://github.com/huggingface/transformers.js/issues/791 | open | [
"question"
] | 2024-06-04T17:07:38Z | 2024-09-15T14:00:48Z | null | mram0509 |
pytorch/xla | 7,189 | Add example for training small LLM | ## 📚 Documentation
Create an example on how to train a small LLM.
Add it to the examples directory here:
https://github.com/pytorch/xla/tree/master/examples
| https://github.com/pytorch/xla/issues/7189 | open | [
"docathon-h1-2024",
"advanced"
] | 2024-06-04T16:42:54Z | 2024-06-19T01:14:21Z | 4 | alchemicduncan |
pytorch/xla | 7,185 | Try running inference on an ARM CPU | ## 📚 Documentation
Install the CPU PJRT plugin from the instructions here:
https://github.com/pytorch/xla/blob/master/plugins/cpu/README.md
Next, try getting a model to run on an ARM CPU; if it works, create a tutorial on how to get it running.
| https://github.com/pytorch/xla/issues/7185 | open | [
"docathon-h1-2024",
"advanced"
] | 2024-06-04T16:40:13Z | 2024-06-17T17:59:07Z | 4 | alchemicduncan |
pytorch/xla | 7,183 | Create a distributed and single device example | ## 📚 Documentation
Select a model of your own to train. Then create an example of both running it on a single device, and running it on a distributed device of your choice.
Add both training examples that you came up with to the examples directory: https://github.com/pytorch/xla/tree/master/examples | https://github.com/pytorch/xla/issues/7183 | open | [
"docathon-h1-2024",
"advanced"
] | 2024-06-04T16:38:24Z | 2025-06-08T02:04:27Z | 1 | alchemicduncan |
pytorch/xla | 7,182 | Try running Resnet example on GPU | ## 📚 Documentation
Try running the Resnet training example on a GPU: https://github.com/pytorch/xla/blob/master/examples/train_resnet_base.py
If it works add a section about how to do it to the GPU instructions here: https://github.com/pytorch/xla/blob/master/docs/gpu.md
| https://github.com/pytorch/xla/issues/7182 | closed | [
"docathon-h1-2024",
"medium"
] | 2024-06-04T16:37:36Z | 2024-06-11T18:37:09Z | 1 | alchemicduncan |
pytorch/xla | 7,180 | Adding a new arg to a PyTorch op | ## ❓ Questions and Help
I'm trying to add a new (optional) argument to the `cumsum` operator in PyTorch - a boolean arg `full` which prepends a 0 to the beginning of the returned tensor. I'd appreciate some help to figure out how to get XLA to build with this change, and what the update process should look like (co... | https://github.com/pytorch/xla/issues/7180 | closed | [] | 2024-06-04T16:35:37Z | 2024-06-10T16:47:49Z | 0 | davidberard98 |
huggingface/diffusers | 8,400 | How can we load a LoRA into a model from a single file? | pipe.load_lora_weights("lora/aesthetic_anime_v1s.safetensors")
File "Z:\software\python11\Lib\site-packages\diffusers\loaders\lora.py", line 1230, in load_lora_weights
raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
pipe.load_lora_weigh... | https://github.com/huggingface/diffusers/issues/8400 | closed | [] | 2024-06-04T13:54:56Z | 2024-06-04T15:53:32Z | null | xalteropsx |
huggingface/datasets | 6,953 | Remove canonical datasets from docs | Remove canonical datasets from docs, now that we no longer have canonical datasets. | https://github.com/huggingface/datasets/issues/6953 | closed | [
"documentation"
] | 2024-06-04T12:09:03Z | 2024-07-01T11:31:25Z | 1 | albertvillanova |
pytorch/ao | 320 | Saving autoquant quantization plan | First of all, thank you for the great library! It makes quantization really easy.
Is it possible to run autoquant once and later apply the same quantization plan again? Or would I need to manually look at logs right now to see what autoquant came up with so I can apply the same quantization later?
// I see the... | https://github.com/pytorch/ao/issues/320 | closed | [
"question"
] | 2024-06-04T11:10:41Z | 2024-06-07T10:45:07Z | null | RobinKa |
huggingface/datasets | 6,951 | load_dataset() should load all subsets, if no specific subset is specified | ### Feature request
Currently load_dataset() is forcing users to specify a subset. Example
`from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")`
```---------------------------------------------------------------------------
ValueError Traceback (most recen... | https://github.com/huggingface/datasets/issues/6951 | closed | [
"enhancement"
] | 2024-06-04T11:02:33Z | 2024-11-26T08:32:18Z | 5 | windmaple |
huggingface/datasets | 6,950 | `Dataset.with_format` behaves inconsistently with documentation | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of ... | https://github.com/huggingface/datasets/issues/6950 | closed | [
"documentation"
] | 2024-06-04T09:18:32Z | 2024-06-25T08:05:49Z | 2 | iansheng |
huggingface/sentence-transformers | 2,708 | What is the training order in the multi-task learning example? | hello. In the case of multi-task learning in the example below, what is the learning order? The example below is taken from https://www.sbert.net/examples/training/quora_duplicate_questions/README.html.
Regarding the dataset below, I know that the learning results are good if you learn mnrl after learning the cl da... | https://github.com/huggingface/sentence-transformers/issues/2708 | closed | [] | 2024-06-04T07:42:37Z | 2024-06-04T08:29:30Z | null | daegonYu |
pytorch/xla | 7,177 | Why not register low precision autocast for scaled dot product attention? | ## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
MultiHeadAttention cannot run in auto mixed precision mode.
Steps to reproduce the behavior:
```python
import torch
import torch.nn as nn
import torch_xla
import torch_xla.core.xla_model as xm
xla_device = xm.xla_device()
embe... | https://github.com/pytorch/xla/issues/7177 | closed | [] | 2024-06-04T06:17:53Z | 2024-06-17T02:58:42Z | 2 | ghost |
huggingface/datasets | 6,949 | load_dataset error | ### Describe the bug
Why does the program get stuck when I use the load_dataset method, and why is it still stuck after loading for several hours? In fact, my json file is only 21 MB, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Data... | https://github.com/huggingface/datasets/issues/6949 | closed | [] | 2024-06-04T01:24:45Z | 2024-07-01T11:33:46Z | 2 | frederichen01 |
huggingface/transformers.js | 789 | Can I use Xenova/Phi-3-mini-4k-instruct model server side? | ### Question
Hey there! I'm trying to run Xenova/Phi-3-mini-4k-instruct model using transformers.js 2.17.2 on the server in my Node.js project, but I get an error saying that Phi-3 is not supported. Can I make it work somehow? Any ideas appreciated | https://github.com/huggingface/transformers.js/issues/789 | closed | [
"question"
] | 2024-06-03T18:43:20Z | 2024-06-04T04:57:42Z | null | StepanKukharskiy |
pytorch/serve | 3,172 | Two-way authentication/Mutual SSL in gRPC | ### 🚀 The feature
Torchserve currently supports SSL for gRPC but only one-way authentication. Can we make it two-way?
### Motivation, pitch
More security
### Alternatives
reverse proxy like nginx is an option i think
### Additional context
_No response_ | https://github.com/pytorch/serve/issues/3172 | open | [
"enhancement"
] | 2024-06-03T14:58:07Z | 2024-06-03T17:37:53Z | 0 | MohamedAliRashad |
huggingface/datasets | 6,947 | FileNotFoundError: error when loading C4 dataset | ### Describe the bug
Can't load C4 datasets.
When I switch the datasets package to 2.12.2 I get: raise datasets.utils.info_utils.ExpectedMoreSplits: {'train'}
How can I fix this?
### Steps to reproduce the bug
1.from datasets import load_dataset
2.dataset = load_dataset('allenai/c4', data_files={'validat... | https://github.com/huggingface/datasets/issues/6947 | closed | [] | 2024-06-03T13:06:33Z | 2024-06-25T06:21:28Z | 15 | W-215 |
huggingface/dataset-viewer | 2,878 | Remove or increase the 5GB limit? | The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits.
Note that we "show" all the rows for parquet-native datasets (i.e., we can access the rows randomly, i.e., we have pagination).
Sh... | https://github.com/huggingface/dataset-viewer/issues/2878 | closed | [
"question",
"feature request"
] | 2024-06-03T08:55:08Z | 2024-07-22T11:32:49Z | null | severo |
huggingface/transformers | 31,195 | How to get back the input time series after using PatchTSTForPretraining? | ### System Info
-
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My model is Patch... | https://github.com/huggingface/transformers/issues/31195 | closed | [] | 2024-06-03T06:44:31Z | 2024-10-26T07:44:56Z | null | nikhilajoshy |
huggingface/optimum | 1,885 | onnx optimum ORTOptimizer inference runs slower than setfit.export_onnx runtime.InferenceSession inference | ### System Info
Hi,
I did a test between onnx optimum export + ORTOptimizer inference vs. setfit.export_onnx + onnxruntime.InferenceSession.
It seems that onnx optimum ORTOptimizer inference runs slower than setfit.export_onnx runtime.InferenceSession inference.
Any idea why that is?
I also chang... | https://github.com/huggingface/optimum/issues/1885 | open | [
"bug"
] | 2024-06-02T22:34:37Z | 2024-06-08T03:02:40Z | 1 | geraldstanje |
huggingface/chat-ui | 1,241 | How to deploy to Vercel | Hi,
I am currently having trouble deploying to Vercel; I am getting a 404 NOT FOUND error. I think I am using the wrong build command or the wrong default directory. Can someone please help?

Tha... | https://github.com/huggingface/chat-ui/issues/1241 | open | [
"support"
] | 2024-06-02T10:05:45Z | 2025-01-10T17:00:37Z | null | haydenkong |
huggingface/transformers.js | 788 | Is it possible to use transformers.js to implement audio source separation tasks? | ### Question
Hello, I have a beginner's question.
I want to implement the task of removing the human voice from the audio in the video and retaining the background sound in the browser. The idea is to load the model for audio source separation related to transformers.js to achieve the separation of the background s... | https://github.com/huggingface/transformers.js/issues/788 | open | [
"question"
] | 2024-06-02T04:00:55Z | 2024-12-26T06:05:26Z | null | asasas234 |
huggingface/lerobot | 238 | How to use on WSL? Cannot visualize | How to use on WSL? Cannot visualize | https://github.com/huggingface/lerobot/issues/238 | closed | [
"simulation"
] | 2024-06-02T03:58:44Z | 2025-10-08T08:25:31Z | null | jackylee1 |
huggingface/chat-ui | 1,236 | No Setup Deploy: Multiple models supported? | How can I make **multiple models** available on Chat UI using **No Setup Deploy**?
## Further Details
The form (see below) seems to only allow one model.
<details><summary>Form</summary>
<p>
<img width="661" alt="image" src="https://github.com/huggingface/chat-ui/assets/14152377/e5595c34-b5c5-4c09-8b83-d5a... | https://github.com/huggingface/chat-ui/issues/1236 | open | [
"enhancement",
"docker"
] | 2024-06-01T11:41:22Z | 2024-06-03T07:55:12Z | 1 | rodrigobdz |
huggingface/optimum | 1,884 | Add support for porting CLIPVisionModelWithProjection | ### Feature request
Currently there is no support for porting CLIPVisionModelWithProjection class models from the transformers library to onnx through optimum. I'd like to add support for the same, for which we'd need to change the optimum/exporters/onnx/model_configs.py file. I'd like to request you to help guide me ... | https://github.com/huggingface/optimum/issues/1884 | open | [
"feature-request",
"onnx"
] | 2024-05-31T22:25:45Z | 2024-10-09T07:56:28Z | 0 | mr-sarthakgupta |
huggingface/datasets | 6,940 | Enable Sharding to Equal Sized Shards | ### Feature request
Add an option when sharding a dataset to have all shards the same size. It would be good to provide both options: by duplication and by truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining sha... | https://github.com/huggingface/datasets/issues/6940 | open | [
"enhancement"
] | 2024-05-31T21:55:50Z | 2024-06-01T07:34:12Z | 0 | yuvalkirstain |
pytorch/tutorials | 2,894 | ~PyTorch Docathon H1 2024!~ | ### **PyTorch Docathon H1 2024!**
Hooray! It's this time of the year again and we are excited for you to participate in the PyTorch docathon. We have the following repositories participating:
- [pytorch/pytorch](https://github.com/pytorch/pytorch)
- [pytorch/tutorials](https://github.com/pytorch/tutorials)
- ... | https://github.com/pytorch/tutorials/issues/2894 | closed | [
"docathon-h1-2024"
] | 2024-05-31T16:25:09Z | 2024-07-15T18:38:28Z | 0 | sekyondaMeta |
pytorch/examples | 1,264 | reference of weight initialization for llama2 model | first of all, thank you for supporting native TP for torch.
i just have been reading your TP tutorial code and found [the initialization detail](https://github.com/pytorch/examples/blob/main/distributed/tensor_parallelism/llama2_model.py#L316-L319) is different from the pytorch default parameterization (kaming init).
... | https://github.com/pytorch/examples/issues/1264 | closed | [] | 2024-05-31T03:18:46Z | 2024-05-31T04:18:26Z | 1 | SeunghyunSEO |
pytorch/examples | 1,263 | `local_rank` or `rank` for multi-node FSDP | I am wondering for multi-node FSDP, does `local_rank` and `rank` have any obvious difference here?
I think I understand that `local_rank` is the rank within a node.
I see in a few places it looks like `local_rank` is specifically used
For example
https://github.com/pytorch/examples/blob/main/distributed/FSDP/... | https://github.com/pytorch/examples/issues/1263 | open | [] | 2024-05-30T19:47:21Z | 2024-05-30T19:47:21Z | 0 | Emerald01 |
huggingface/chat-ui | 1,225 | SyntaxError: JSON5: invalid character 'u' at 1:1 | Where can I find out more about the following error? Is there an issue with the existing template?
## Reproduction Steps
1. Deploy [Chat UI using default template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) with `MONGO_URL` set to `mongodb+srv://<USER_SECRET>:<PASSWORD_SECRET>@<CLUSTE... | https://github.com/huggingface/chat-ui/issues/1225 | open | [
"docker"
] | 2024-05-30T11:07:36Z | 2025-01-16T22:54:08Z | 8 | rodrigobdz |
huggingface/chat-ui | 1,221 | 500 Internal Server Error with chat-ui | I executed an inference server with the address http://192.168.0.185:7777/generate_stream using text-generation-inference (TGI) v.2.0.4. When executing commands with curl, the inference results are responding normally. For ease of use, I am going to use chat-ui. Below is the .env.local file's content of chat-ui.
`... | https://github.com/huggingface/chat-ui/issues/1221 | closed | [
"support"
] | 2024-05-30T00:35:58Z | 2024-05-31T00:19:49Z | 4 | leemgs |
huggingface/transformers.js | 785 | Using AutoModel, AutoTokenizer with distilbert models | ### Question
Does transformers.js have a function to get the label after getting the logits? How to get the labels from the inference output?
let tokenizer = await AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');
let model = await AutoModel.from_pretrained('distilbert-base-uncased-... | https://github.com/huggingface/transformers.js/issues/785 | open | [
"question"
] | 2024-05-29T20:35:17Z | 2024-05-30T11:09:17Z | null | mram0509 |
huggingface/chat-ui | 1,220 | A few questions about the Cloudflare integration | Howdy 👋,
Working on a corresponding page for this in the [Cloudflare docs](https://developers.cloudflare.com/workers-ai/) and had a few [questions that I need answered](https://github.com/cloudflare/cloudflare-docs/pull/14488#issuecomment-2101481990) in this PR.
## Questions
1. If I'm reading [this line](htt... | https://github.com/huggingface/chat-ui/issues/1220 | closed | [
"documentation"
] | 2024-05-29T19:11:14Z | 2024-06-20T12:53:52Z | 3 | kodster28 |
huggingface/transformers.js | 784 | Shouldn't this work? #v3 | ### Question
### Issue with Transformer.js v3 and WebGPU
#### Description
Yesterday I installed `transformer.js` with the "v3" branch to test the new features with WebGPU, but I get an error.
#### Error Message
```
@xenova_transformers.js?v=3b2ad0ed:24861 Uncaught (in promise)
Error: This pipeline is not yet... | https://github.com/huggingface/transformers.js/issues/784 | open | [
"question"
] | 2024-05-29T13:36:52Z | 2024-05-29T14:59:49Z | null | kalix127 |
pytorch/xla | 7,139 | Setting FrontEnd attributes for CC ops replica groups in the HLO | ## 🚀 Feature
<!-- A clear and concise description of the feature proposal -->
The metadata of the CC operation needs to have an extra field/key, indicating whether the replica groups are represented directly with all the ids or encoded in some other manner, expanded into actual ids downstream into the stack. These w... | https://github.com/pytorch/xla/issues/7139 | closed | [
"enhancement",
"distributed"
] | 2024-05-29T12:47:47Z | 2025-04-07T13:55:20Z | 2 | amithrm |
pytorch/vision | 8,450 | Let `v2.functional.gaussian_blur` backprop through `sigma` parameter | the v1 version of `gaussian_blur` allows to backprop through sigma
(example taken from https://github.com/pytorch/vision/issues/8401)
```
import torch
from torchvision.transforms.functional import gaussian_blur
device = "cuda"
device = "cpu"
k = 15
s = torch.tensor(0.3 * ((5 - 1) * 0.5 - 1) + 0.8, requires_... | https://github.com/pytorch/vision/issues/8450 | closed | [] | 2024-05-29T12:45:21Z | 2024-07-29T15:45:14Z | 3 | NicolasHug |
huggingface/datasets | 6,930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'valid... | https://github.com/huggingface/datasets/issues/6930 | open | [] | 2024-05-29T12:40:05Z | 2024-07-23T06:25:24Z | 2 | Polarisamoon |
huggingface/datasets | 6,929 | Avoid downloading the whole dataset when only README.me has been touched on hub. | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current behaviour of the load_dataset function is triggered whenever a change of the hash o... | https://github.com/huggingface/datasets/issues/6929 | open | [
"enhancement"
] | 2024-05-29T10:36:06Z | 2024-05-29T20:51:56Z | 2 | zinc75 |
huggingface/candle | 2,226 | How to load LoRA adapter along with the GGUF model? | Hello all,
I have recently managed to convert the flan-t5 base model to GGUF #2215 . But I also have multiple LoRA adapters trained for different tasks.
@EricLBuehler @LaurentMazare So I wish to know if there is a way to also load single/multiple LoRA adapters along with the GGUF model. I am currently running an... | https://github.com/huggingface/candle/issues/2226 | open | [] | 2024-05-29T06:03:10Z | 2024-06-05T03:34:14Z | null | niranjanakella |
pytorch/pytorch | 127,320 | [While_loop] How to use layer like `torch.nn.BatchNorm2d` with while_loop? | ### 🐛 Describe the bug
Hi, I'm trying to support `while_loop` with `DispatchKey.XLA`;
when I try linear and MNIST with torch, code would be dispatched to `DispatchKey.CompositeExplicitAutograd` to use pure python while, and finish;
my local example code for MNIST:
```python
import torch
from torch._higher_... | https://github.com/pytorch/pytorch/issues/127320 | closed | [
"triaged",
"module: xla",
"oncall: pt2",
"module: higher order operators",
"module: pt2-dispatcher"
] | 2024-05-28T18:37:15Z | 2024-05-29T22:42:57Z | null | ManfeiBai |
huggingface/transformers.js | 781 | Progress callback for Moondream? | ### Question
While implementing Moondream (from the excellent example) I stumbled upon a few questions.
- How can I implement a callback while Moondream is generating tokens? A normal progressCallback didn't work.
```
self.model.generate({
...text_inputs,
...vision_inputs,
do_sample: false,
max_new_t... | https://github.com/huggingface/transformers.js/issues/781 | closed | [
"question"
] | 2024-05-28T14:07:07Z | 2024-06-03T18:49:10Z | null | flatsiedatsie |
huggingface/competitions | 29 | How to notify awardees or contact participants? | The competition just shows the participants' id.
So, how to contact them via email to inform them of the award requirements and request additional personal information? | https://github.com/huggingface/competitions/issues/29 | closed | [] | 2024-05-28T08:11:38Z | 2024-06-09T07:03:25Z | null | shangfenghuang |
huggingface/datatrove | 196 | How to deduplicate multiple datasets? | fineweb offer a deduplication demo for one dump. If want to deduplicate more dumps, should I merge dumps before deduplication ?
| https://github.com/huggingface/datatrove/issues/196 | closed | [] | 2024-05-28T03:00:31Z | 2024-06-07T07:25:45Z | null | canghaiyunfan |
huggingface/chat-ui | 1,183 | Prompt template for WizardLM-2-8x22B? | What is the prompt template for `WizardLM-2-8x22B` in the `.env.local`?
When setting it to the default one: `<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}`
the g... | https://github.com/huggingface/chat-ui/issues/1183 | open | [
"support",
"models"
] | 2024-05-27T14:28:47Z | 2024-07-29T15:27:25Z | 3 | Arche151 |
huggingface/chat-ui | 1,178 | Improve Domain Search Results for Assistants | The domain search for assistants is a great idea, but the current implementation is not really useful if the domains are less likely to be top results like Wikipedia.
This seems to happen because the web is searched first, and the domain filter is applied afterward. This method can easily result in zero parseable results...
"question",
"websearch"
] | 2024-05-27T10:33:22Z | 2024-05-31T11:02:11Z | null | lueschow |
huggingface/datatrove | 195 | What is the difference between tasks and workers? | What is the difference between tasks and workers? What is the definition of a task, and how is the number of tasks determined?
| https://github.com/huggingface/datatrove/issues/195 | closed | [] | 2024-05-27T06:32:25Z | 2024-05-27T07:08:11Z | null | canghaiyunfan |
huggingface/transformers.js | 778 | Pipeline execution time with 'image-classification' pipeline | ### Question
While calling the 'image-classification' pipeline we pass the image URL, so it does a fetch of the image. Will the time taken to process the image include the download time of the image? If the network is slow, this may impact the pipeline performance. Is there a way to use an image that's already ... | https://github.com/huggingface/transformers.js/issues/778 | open | [
"question"
] | 2024-05-26T20:15:21Z | 2024-05-27T04:14:52Z | null | mram0509 |
huggingface/transformers | 31,039 | What if past_key_values is in model_kwargs but is None | https://github.com/huggingface/transformers/blob/4c6c45ba138202f42582b5cea98126af87195a95/src/transformers/generation/utils.py#L1317
This line fails for me when past_key_values is in model_kwargs but is None. Line 1321 raises an error.
Could you advise?
Thank you | https://github.com/huggingface/transformers/issues/31039 | closed | [] | 2024-05-26T07:58:18Z | 2024-06-10T06:32:23Z | null | estelleafl |
huggingface/chat-ui | 1,174 | Unable to deploy space with chatUI, getting error ** Failed to connect to 127.0.0.1 port 8080 after 0 ms** | Hi guys, so I am trying to deploy a space with the chatui template and the **abacusai/Smaug-Llama-3-70B-Instruct** model, but I am getting the following error again and again in the container logs.
`
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in... | https://github.com/huggingface/chat-ui/issues/1174 | open | [
"support",
"docker"
] | 2024-05-26T07:05:12Z | 2025-06-27T10:30:24Z | 5 | starlord263 |
huggingface/optimum | 1,876 | Unable to generate question-answering model for Llama and there is also no list of what are the supported models for question-answering | ### Feature request
Hi, I received this error:
ValueError: Asked to export a llama model for the task question-answering, but the Optimum ONNX exporter only supports the tasks feature-extraction, feature-extraction-with-past, text-generation, text-generation-with-past, text-classification for llama. Please use a su... | https://github.com/huggingface/optimum/issues/1876 | open | [
"bug",
"onnx"
] | 2024-05-26T06:10:47Z | 2024-10-09T07:57:24Z | null | customautosys |
huggingface/transformers.js | 776 | How to point to a specific model path in order to use compressed models? (brotli) | ### Question
Hi,
I just can't find the configuration to point to a specific model file path to use .onnx.br instead of .onnx for example.
I can run the model (distilbert-base-cased-distilled-squad) offline without any issue and it works. But I want to deploy it compressed using brotli. All I can see in the con... | https://github.com/huggingface/transformers.js/issues/776 | open | [
"question"
] | 2024-05-24T18:31:12Z | 2024-05-25T10:24:25Z | null | KamilCSPS |
huggingface/chat-ui | 1,169 | Help debugging "Sorry, something went wrong. Please try again." | I am a developer working on extending this project. Sometimes I get this error "Sorry, something went wrong. Please try again." I can't figure out how to debug it when it happens. What I want is for it to display the full error somehow, like with a console.log. Is there some way to do that? Or is the error saved in the... | https://github.com/huggingface/chat-ui/issues/1169 | closed | [] | 2024-05-24T18:30:08Z | 2024-06-17T12:47:03Z | 1 | loganlebanoff |
pytorch/pytorch | 127,075 | What is the processing principle when the complex64 input tensor contains nan or inf for addition? | ### 🐛 Describe the bug
>>> import torch
>>> a = torch.tensor(complex(3, float('nan')))
>>> torch.add(a,a)
tensor(nan+nanj)
The rule for adding complex numbers is to add the real and imaginary parts separately.
In the above example, why is the real part nan instead of 6?
How to deal with nan/inf in the outpu... | https://github.com/pytorch/pytorch/issues/127075 | open | [
"triaged",
"module: complex"
] | 2024-05-24T09:55:35Z | 2024-05-27T03:59:52Z | null | liying-1997 |
pytorch/torchchat | 847 | Figure out how to leverage kernels in torchao | For quantized linear a lot of the kernels will be living in torchao: https://github.com/pytorch/ao/tree/main/torchao/csrc
We need to figure out how to use these kernels in torchchat/executorch.
| https://github.com/pytorch/torchchat/issues/847 | closed | [] | 2024-05-23T19:04:48Z | 2024-07-21T21:53:58Z | null | larryliu0820 |
pytorch/xla | 7,103 | Why does my 3-layer linear graph need to output two Transposes? | ## ❓ Questions and Help
torch_xla is the latest version.
This is my code:
```
import torch
import torch_xla
import torch_xla.runtime as xr
import torch_xla.core.xla_model as xm
import torch_xla.experimental.xla_sharding as xs
from torch_xla.experimental.xla_sharding import Mesh
from torch_xla.amp import autocas... | https://github.com/pytorch/xla/issues/7103 | closed | [
"question"
] | 2024-05-23T08:54:02Z | 2025-04-07T13:59:14Z | null | mars1248 |
pytorch/xla | 7,102 | Problem with mesh shape in HybridMesh on TPU | ## ❓ Questions and Help
I received an error when trying to create an SPMD mesh in a Kaggle notebook while following [Huggingface optimum-tpu](https://github.com/huggingface/optimum-tpu/blob/695ee84d657d9ed2761fcf481685afad0e849a90/examples/language-modeling/run_clm.py#L484)
```
import os
import numpy as np
import torch_xla
import... | https://github.com/pytorch/xla/issues/7102 | closed | [
"question",
"distributed",
"xla:tpu"
] | 2024-05-23T06:39:44Z | 2025-04-17T13:33:19Z | null | hiwamk |
huggingface/datasets | 6,916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | ### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my hugging face dataset repository, it is split into a testing and training set. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have a unsplit dataset
```python
Dataset({ featur... | https://github.com/huggingface/datasets/issues/6916 | closed | [] | 2024-05-22T23:52:15Z | 2024-05-23T00:07:53Z | 0 | jetlime |
pytorch/vision | 8,437 | Add mobilenetv4 support and pretrained models? | ### 🚀 The feature
Google has published the mobilenetv4 model. When will pytorch support it and release the pre-trained model?
### Motivation, pitch
I very much hope to use the latest lightweight backbone
### Alternatives
_No response_
### Additional context
_No response_ | https://github.com/pytorch/vision/issues/8437 | closed | [] | 2024-05-22T06:16:00Z | 2024-06-14T02:01:20Z | 5 | LiYufengzz |
huggingface/peft | 1,750 | How to finetune embeddings and LM head as a single layer when they are tied? | I am looking to LoRA-finetune models like Gemma, which have tied embeddings.
But, I would also like to have the shared embeddings as trainable (the common embedding table corresponding to both input and output embeddings of the network).
How do I achieve this?
---
_Note:_ Passing both `["embed_tokens","lm_he... | https://github.com/huggingface/peft/issues/1750 | closed | [] | 2024-05-21T18:32:07Z | 2025-08-12T11:54:09Z | null | GokulNC |
pytorch/audio | 3,797 | RTSP with StreamReader | Does torchaudio supports RTSP streams? I've been using with RTMP but when running RTSP streams is always crashes, mainly reporting that "threads" argument passed to FFMPEG is not supported.
Using FFMPEG 6.0

| https://github.com/pytorch/audio/issues/3797 | closed | [] | 2024-05-21T14:55:21Z | 2024-05-21T15:59:40Z | 0 | pedromoraesh |
huggingface/blog | 2,078 | Idefics2's perceiver: how to set the attention mask to None? | I set the attention mask to None, but the model doesn't learn well; my inputs aren't padded, so I don't want an attention mask. How do I resolve this?
I also tried adding an all-ones attention mask, but the result was also much worse. | https://github.com/huggingface/blog/issues/2078 | open | [] | 2024-05-21T07:38:57Z | 2024-05-21T07:38:57Z | null | lucasjinreal |
huggingface/peft | 1,749 | How to fine-tune LoRA with HQQ? | ### Feature request
How to fine-tune LoRA with HQQ?
### Motivation
How to fine-tune LoRA with HQQ?
### Your contribution
How to fine-tune LoRA with HQQ? | https://github.com/huggingface/peft/issues/1749 | closed | [] | 2024-05-21T02:56:18Z | 2024-06-29T15:03:18Z | null | NickyDark1 |
huggingface/trl | 1,650 | how to save v_head | currently, I use `ppo_trainer.save_pretrained` to save a model that is still in training, because the machine I used is rather unstable, and I would often need to resume retraining should it be interrupted. When I resume the training I got the following warning:
```
WARNING:root:A <class 'peft.peft_model.PeftModelFor... | https://github.com/huggingface/trl/issues/1650 | closed | [] | 2024-05-20T17:06:00Z | 2025-04-11T10:14:36Z | null | zyzhang1130 |
pytorch/torchchat | 837 | Cannot build mobile android app in unit test - due to licensing question in build process? | https://github.com/pytorch/torchchat/actions/runs/9161687849/job/25187114502?pr=831
January 16, 2019
---------------------------------------
Accept? (y/N): Skipping following packages as the license is not accepted:
Google APIs Intel x86_64 Atom System Image
The following packages can not be installed since their licenses or those of the packages they depend on were not accepted:
  system-images;android-34;google_apis;x86_64
[=======================================] 100% Computing updates...
+ avdmanager list avd
+ grep -q torchchat
+ avdmanager create avd --name torchchat --package 'system-images;android-34;google_apis;x86_64'
Loading local repository...
[=========                              ] 25% Loading local repository...
[=========                              ] 25% Fetch remote repository...
[=======================================] 100% Fetch remote repository...
Error: Package path is not valid. Valid system image paths are:
null | https://github.com/pytorch/torchchat/issues/837 | closed | [] | 2024-05-20T17:01:29Z | 2024-08-20T18:26:20Z | 0 | mikekgfb |
huggingface/chat-ui | 1,153 | Can we use Hugging Face Chat with a Custom Server | Requirement:
I have a custom API which takes in the inputs queries and passes it through a RAG pipeline and finally to llm and returns the result.
Question is, can I integrate it with Chat-UI (utilizing just chat-ui frontend and my custom backend). If yes, is there any documentation around it. As per what I unde... | https://github.com/huggingface/chat-ui/issues/1153 | closed | [] | 2024-05-20T16:44:01Z | 2024-09-03T07:52:18Z | 9 | snps-ravinu |
huggingface/nanotron | 176 | Where is the "nanotron format" defined? | I see that any(?) hf model can be converted to nanotron format with this [script](https://github.com/huggingface/nanotron/blob/main/examples/llama/convert_hf_to_nanotron.py).
Is there documentation describing this format?
Can any model that may be loaded with AutoModelForCausalLM be converted to nanotron format f... | https://github.com/huggingface/nanotron/issues/176 | closed | [] | 2024-05-20T13:54:52Z | 2024-05-21T17:22:50Z | null | RonanKMcGovern |
huggingface/chat-ui | 1,151 | Can I change localhost to a remote IP? | I am running Chat-UI locally, but I want to change localhost to an IP; I am unable to find this configuration in the code. Can anyone help? | https://github.com/huggingface/chat-ui/issues/1151 | closed | [] | 2024-05-20T05:34:23Z | 2024-05-20T07:01:30Z | 1 | snps-ravinu |
huggingface/candle | 2,197 | How to slice a tensor? | tch has the function `slice` that returns a tensor slice. Is there a corresponding function for candle? | https://github.com/huggingface/candle/issues/2197 | closed | [] | 2024-05-20T00:55:08Z | 2024-05-20T01:46:58Z | null | Gadersd |