| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
pytorch/executorch | 1,101 | How to visualize the qte model? | Hi,
I am now working on ExecuTorch. I want to see the model architecture of qte, which would make debugging easier for us.
However, I cannot find a visualization tool. Netron does not support the qte format now.
Could executorch support visualizing the qte format model?
Besides, I wonder whether the export function will ... | https://github.com/pytorch/executorch/issues/1101 | closed | [
"need-user-input"
] | 2023-10-26T12:52:41Z | 2023-10-27T13:48:47Z | null | liang1232018 |
huggingface/diffusers | 5,538 | Why is the pipeline_stable_diffusion_upscale.py file not using the encoder-decoder latent? | ### Describe the bug
There is no training script for pipeline_stable_diffusion_upscale.py because the authors chose not to utilize the latent domain for the Super-resolution task. Additionally, the U-Net implemented in pipeline_stable_diffusion_upscale.py only accepts 7 channels. How is this achieved?
### Reproductio... | https://github.com/huggingface/diffusers/issues/5538 | closed | [
"question",
"stale"
] | 2023-10-26T10:47:10Z | 2023-12-08T15:05:44Z | null | AnasHXH |
huggingface/chat-ui | 534 | Login issue with Google OpenID | I set up google OpenID for my chatUI. I have set the scope to openId and ./auth/userinfo.profile in OAuth Consent Screen. I tried to log the data shared by google to the app and it was the following
{
sub: '****',
picture: 'https://lh3.googleusercontent.com/****',
email: 'shagun@****',
email_verified: t... | https://github.com/huggingface/chat-ui/issues/534 | closed | [] | 2023-10-26T10:00:05Z | 2023-10-26T10:49:36Z | 3 | shagunhexo |
pytorch/TensorRT | 2,415 | ❓ [Question] Examples not working in nvcr.io/nvidia/pytorch:23.09-py3. | ## ❓ Question
I am within the `nvcr.io/nvidia/pytorch:23.09-py3` container. Trying out some snippets from:
https://youtu.be/eGDMJ3MY4zk?si=MhkbgwAPVQSFZEha.
Both JIT and AoT examples failed. For JIT, it complained that "tensorrt" backend isn't available, for AoT, it complained that "The user code is using a fea... | https://github.com/pytorch/TensorRT/issues/2415 | closed | [
"question"
] | 2023-10-26T09:53:16Z | 2025-11-24T17:42:35Z | null | sayakpaul |
huggingface/candle | 1,185 | Question: How to create a Var from MmapedSafetensors | Hello everybody,
I was wondering how to create a Var instance from an `MMapedSafetensors` `TensorView`. I have tried using `candle_core::Var::from_slice(tensor.data(), tensor.shape(), &device)?`, but I get the error:
`Error: Shape mismatch, got buffer of size 90177536 which is compatible with shape [11008, 4096]`... | https://github.com/huggingface/candle/issues/1185 | closed | [] | 2023-10-26T09:41:37Z | 2023-10-26T11:26:29Z | null | EricLBuehler |
huggingface/datasets | 6,353 | load_dataset save_to_disk load_from_disk error | ### Describe the bug
datasets version: 2.10.1
I `load_dataset` and `save_to_disk` successfully on Windows 10 (**and I `load_from_disk(/LLM/data/wiki)` successfully on Windows 10**), and I copy the dataset `/LLM/data/wiki`
into a ubuntu system, but when I `load_from_disk(/LLM/data/wiki)` on ubuntu, something weird ha... | https://github.com/huggingface/datasets/issues/6353 | closed | [] | 2023-10-26T03:47:06Z | 2024-04-03T05:31:01Z | 5 | brisker |
huggingface/text-embeddings-inference | 43 | How to add custom python file for pretrained model on TEI server? | ### System Info
I am pretty new to this space. Please help.
I have made a python file with pre-trained model, which generates embeddings. What I want is to -
1. Create a docker image of Python file
2. Run it on TEI server?
How can we do this?
### Information
- [ ] Docker
- [ ] The CLI dire... | https://github.com/huggingface/text-embeddings-inference/issues/43 | open | [] | 2023-10-25T16:09:52Z | 2023-10-25T17:57:46Z | null | cken21 |
huggingface/llm-vscode | 100 | How to generate the response from locally hosted end point in vscode? | Hi,
I managed to plug the llm-vscode extension to point to the locally running endpoint. Now when I select content like the below:
# function to sum 2 numbers in python
then Cmd+shif+a > llm: show code attribution
My local endpoint invokes and give the relevant response as well in below format
`{
... | https://github.com/huggingface/llm-vscode/issues/100 | open | [
"stale"
] | 2023-10-25T15:55:40Z | 2023-11-25T01:46:01Z | null | dkaus1 |
huggingface/tokenizers | 1,375 | Question: what is the add_special_tokens parameter of Tokenizer::encode? | As stated above, what does the parameter add_special_tokens do? Does it add bos/eos tokens? Thanks! | https://github.com/huggingface/tokenizers/issues/1375 | closed | [] | 2023-10-25T09:55:55Z | 2023-10-25T18:43:54Z | null | EricLBuehler |
huggingface/candle | 1,173 | Question: what is the add_special_tokens parameter of Tokenizer::encode? | As stated above, what does the parameter add_special_tokens do? Does it add bos/eos tokens? Thanks! | https://github.com/huggingface/candle/issues/1173 | closed | [] | 2023-10-25T09:30:01Z | 2023-10-25T09:55:42Z | null | EricLBuehler |
huggingface/dataset-viewer | 2,009 | Are URLs in rows response sanitized? | see https://github.com/huggingface/moon-landing/pull/7798#discussion_r1369813236 (internal)
> Is "src" validated / sanitized?
> if not there is a potential XSS exploit here (you can inject javascript code in an image src)
> Are S3 object names sanitized? If no, it should be the case in dataset-server side | https://github.com/huggingface/dataset-viewer/issues/2009 | closed | [
"question",
"security",
"P1"
] | 2023-10-24T15:10:29Z | 2023-11-21T15:39:13Z | null | severo |
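The XSS concern raised above is usually addressed by allow-listing URL schemes and HTML-escaping before interpolation into an attribute. A minimal stdlib sketch of the idea (illustrative only, not the dataset-server's actual implementation):

```python
import html
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"http", "https"}

def safe_src(url: str) -> str:
    """Return an HTML-escaped URL, or raise if the scheme is not allow-listed."""
    scheme = urlparse(url).scheme.lower()
    if scheme not in ALLOWED_SCHEMES:
        raise ValueError(f"disallowed scheme: {scheme!r}")
    return html.escape(url, quote=True)

ok = safe_src("https://datasets-server.huggingface.co/assets/img.png?a=1&b=2")
# The ampersand is escaped so it cannot break out of the attribute.
assert "&amp;" in ok

try:
    safe_src("javascript:alert(1)")  # classic img-src XSS payload
    blocked = False
except ValueError:
    blocked = True
assert blocked
```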
huggingface/chat-ui | 528 | Websearch error in proxy | I'm developing in a proxy environment, I'm guessing it's because **websearch module can't import the model(Xenova/gte-small) from huggingface.**
I don't want to use websearch, but it tries to load the gte-small model anyway, and I get an error.
```
11:36:36 AM [vite] Error when evaluating SSR module /src/lib/serve... | https://github.com/huggingface/chat-ui/issues/528 | closed | [
"enhancement",
"support",
"websearch"
] | 2023-10-24T03:53:25Z | 2023-11-15T15:44:01Z | 6 | calycekr |
huggingface/candle | 1,165 | How do I raise 2 to the power of a tensor? | How do I write:
```python
x = 2 ** (y * z)
```
Where `y` is an integer and `z` is a tensor?
I tried to use `powf`, but it only works with float arguments. | https://github.com/huggingface/candle/issues/1165 | closed | [] | 2023-10-23T22:13:28Z | 2023-10-24T04:28:23Z | null | laptou |
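When a framework's `powf` only accepts a float exponent (tensor ** float, not float ** tensor), the usual workaround is the identity 2^x = exp(x · ln 2), which needs only elementwise multiply and exp. Shown here with NumPy standing in for the tensor library:

```python
import numpy as np

y = 3                          # integer scalar
z = np.array([0.0, 1.0, 2.0])  # the "tensor"

# 2 ** (y * z) rewritten as exp((y * z) * ln 2).
direct = 2.0 ** (y * z)
via_exp = np.exp((y * z) * np.log(2.0))

assert np.allclose(direct, via_exp)
```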
huggingface/candle | 1,163 | how to modify the contents of a Tensor? | what is the `candle` equivalent of this?
```python
t[2, :] *= 2;
``` | https://github.com/huggingface/candle/issues/1163 | closed | [] | 2023-10-23T19:58:50Z | 2023-10-24T04:28:10Z | null | laptou |
huggingface/transformers.js | 367 | [Question] How to include ort-wasm-simd.wasm with the bundle? | How can I include ort-wasm-simd.wasm with the bundle? I'm using this on an app that needs to be able to run offline, so I'd like to package this with the lib. I'm also running this on web worker, so that file gets requested 1+n times per user session when the worker starts.
<img width="725" alt="image" src="https://gi... | https://github.com/huggingface/transformers.js/issues/367 | closed | [
"question"
] | 2023-10-23T04:54:16Z | 2023-10-26T08:27:28Z | null | mjp0 |
pytorch/torchx | 782 | Workspace patch is applied only on role[0] image | ## ❓ Questions and Help
Per https://github.com/pytorch/torchx/blob/main/torchx/runner/api.py#L362-L370, we assume that patch needs to be applied only for a single role. Effectively assumes that:
1. role0 is the only image that needs to be updated
2. workspace is mapped to image of role0.
This issue has surfa... | https://github.com/meta-pytorch/torchx/issues/782 | open | [
"enhancement",
"question"
] | 2023-10-22T23:26:32Z | 2023-10-23T19:56:21Z | 5 | kurman |
huggingface/autotrain-advanced | 310 | How to determine the LMTrainingType? Chat or generic mode? | It is said that there are two modes (chat and generic), but I cannot find a way to choose between them. | https://github.com/huggingface/autotrain-advanced/issues/310 | closed | [] | 2023-10-21T14:28:59Z | 2023-11-26T04:31:08Z | null | qiaoqiaoLF |
huggingface/datasets | 6,324 | Conversion to Arrow fails due to wrong type heuristic | ### Describe the bug
I have a list of dictionaries with valid/JSON-serializable values.
One key is the denominator for a paragraph. In 99.9% of cases it's a number, but there are some occurrences of '1a', '2b' and so on.
If trying to convert this list to a dataset with `Dataset.from_list()`, I always get
`ArrowI... | https://github.com/huggingface/datasets/issues/6324 | closed | [] | 2023-10-20T23:20:58Z | 2023-10-23T20:52:57Z | 2 | jphme |
huggingface/transformers.js | 365 | [Question] Headers not defined | Hi friends!
Neither headers nor fetch seems to be getting resolved.. trying to run this on a nodejs application...
file:///home/rajesh/code/ai/js/invoice/node_modules/@xenova/transformers/src/utils/hub.js:201
return fetch(urlOrPath, { headers });
^
TypeError: fetch is not a function
... | https://github.com/huggingface/transformers.js/issues/365 | closed | [
"question"
] | 2023-10-20T16:29:28Z | 2023-11-22T06:15:35Z | null | trilloc |
huggingface/sentence-transformers | 2,335 | How to get individual token embeddings of a sentence from sentence transformers | How to get individual token embeddings of a sentence from sentence transformers | https://github.com/huggingface/sentence-transformers/issues/2335 | closed | [] | 2023-10-20T06:49:00Z | 2023-12-18T16:21:32Z | null | pradeepdev-1995 |
huggingface/safetensors | 371 | Non-blocking `save_file` | ### Feature request
Add the option to make calls to `safetensors.*.save_file` non-blocking to allow execution to continue while large tensors / models are being saved.
### Motivation
I'm writing a script a bulk compute embeddings however I am getting poor GPU utilisation due to time spent saving to disk with `safete... | https://github.com/huggingface/safetensors/issues/371 | closed | [
"Stale"
] | 2023-10-20T05:42:47Z | 2023-12-11T01:48:39Z | 1 | vvvm23 |
huggingface/huggingface_hub | 1,767 | Request: discerning what the default model is when using `InferenceClient` without a `model` | When doing something like the below:
```python
client = InferenceClient() # NOTE: no model specified
client.feature_extraction("hi")
```
It would be cool to know what model is being used behind the scenes. How can one figure this out programmatically?
I am thinking there may be a need for a new `Inference... | https://github.com/huggingface/huggingface_hub/issues/1767 | closed | [
"enhancement",
"good first issue"
] | 2023-10-19T20:56:53Z | 2023-11-08T13:47:14Z | null | jamesbraza |
huggingface/diffusers | 5,457 | What is the function of `attention_mask` in `get_attention_scores`? | What is the function of `attention_mask` in `get_attention_scores`? I guess it is used to ignore some values when calculating the attention map.
I can not find a example in diffusers library that actually use this `attention_mask`. Could you provide an example on how to use it?
https://github.com/huggingface/diffusers/bl... | https://github.com/huggingface/diffusers/issues/5457 | closed | [
"stale"
] | 2023-10-19T18:14:38Z | 2023-11-28T15:05:41Z | null | g-jing |
pytorch/tutorials | 2,610 | [BUG] When I use FSDP, because of the flattened parameters I always hit some errors | ### Add Link
When I use FSDP, because of the flattened parameters, I always hit some errors,
for example:
`
RuntimeError: mat2 must be a matrix, got 1-D tensor
`
and
`
RuntimeError: weight should have at least three dimensions
`
They always occur with some flattened model weights, such as conv, linear, etc.
H... | https://github.com/pytorch/tutorials/issues/2610 | closed | [
H... | https://github.com/pytorch/tutorials/issues/2610 | closed | [
"bug",
"distributed"
] | 2023-10-19T14:18:09Z | 2025-05-12T15:33:13Z | 4 | sqzhang-lazy |
huggingface/accelerate | 2,068 | How to use cpu_offload function, attach_align_device_hook function, | attach_align_device_hook is called in the cpu_offload function. How is skip_keys used in attach_align_device_hook ?
def attach_align_device_hook(
module: torch.nn.Module,
execution_device: Optional[torch.device] = None,
offload: bool = False,
weights_map: Optional[Mapping] = None,
offload_buff... | https://github.com/huggingface/accelerate/issues/2068 | closed | [] | 2023-10-19T10:25:07Z | 2023-11-26T15:06:04Z | null | LeonNerd |
huggingface/accelerate | 2,067 | how to automatically load state dict from memory to a multi-gpu device? | ``` Python
config_dict = AutoConfig.from_pretrained(model_config, device_map="auto")
model = AutoModelForCausalLM.from_config(config_dict)
raw_state_dict = torch.load(args.model_path, map_location="cpu")
state_dict = convert_ckpt(raw_state_dict)
model.load_sta... | https://github.com/huggingface/accelerate/issues/2067 | closed | [] | 2023-10-19T05:57:39Z | 2023-12-22T15:06:31Z | null | tlogn |
huggingface/accelerate | 2,064 | How to use `gather_for_metrics()` with decoder-generated strings to compute rouge score? | I am fine-tuning an encoder-decoder model and during the validation step, using the `.generate` method to generate tokens from the decoder that are subsequently decoded into strings (in this case classes). These generations are occurring across 8 GPUs and I am using Accelerate to manage the distribution.
My hope was... | https://github.com/huggingface/accelerate/issues/2064 | closed | [
"solved"
] | 2023-10-18T19:25:29Z | 2023-12-25T15:07:03Z | null | plamb-viso |
huggingface/transformers.js | 364 | [Question] Error in getModelJSON with React | Hey, I am trying to transcribe audio to speech using transformers.js. I tried two ways
1. https://huggingface.co/docs/transformers.js/api/pipelines#pipelinesautomaticspeechrecognitionpipeline
2. https://huggingface.co/docs/transformers.js/tutorials/react
But seem to get an error like this... |
huggingface/setfit | 432 | | I thought that the result would be reproducible because SetFitTrainer() has a default random seed in its constructor, but found that it was not the case. SetFitTrainer source code indicates t... | https://github.com/huggingface/setfit/issues/432 | closed | [] | 2023-10-17T23:47:46Z | 2023-12-06T13:19:54Z | null | youngjin-lee |
huggingface/chat-ui | 519 | .env.local prepromt env variable with multi lines | Hi
I have a preprompt which is basically a two-shot inference prompt, a very long text (around 1,200 lines) that I want to add as a preprompt, but the .env file does not allow a multi-line text as a variable.
any idea how to handle this? | https://github.com/huggingface/chat-ui/issues/519 | open | [] | 2023-10-17T18:34:30Z | 2023-11-07T13:11:21Z | 6 | RachelShalom |
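For what it's worth, dotenv-style parsers (including the one Vite/SvelteKit uses) generally accept multi-line values when the value is wrapped in double quotes, or `\n` escapes inside a single-line quoted value; exact support depends on the dotenv version chat-ui ships, and the variable name below is just a placeholder:

```shell
# .env.local — multi-line value in double quotes (dotenv >= v15 syntax)
PREPROMPT="First line of the preprompt.
Second line.
Third line."

# Alternative: one physical line with newlines written as \n
PREPROMPT_ONE_LINE="First line.\nSecond line.\nThird line."
```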
pytorch/xla | 5,709 | How can I debug the OpenXLA source code in PyTorch/XLA? | I built PyTorch and PyTorch/XLA and installed them on my computer. I can debug PyTorch/XLA, but I don't know how to debug the OpenXLA source code.
The compilation of XLA depends on OpenXLA. The compiled OpenXLA source code can be seen here: xla/build/temp.linux-x86_64-cpython-310/bazel-xla/external. How should I set...
"question",
"openxla"
] | 2023-10-17T12:02:34Z | 2025-04-29T13:07:15Z | null | ckfgihub |
huggingface/optimum | 1,459 | nougat to onnx | ### Feature request
I would like to do the transformation of the [nougat](https://huggingface.co/facebook/nougat-base) model to onnx, is it possible to do it through optimum?
### Motivation
Nougat is a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs... | https://github.com/huggingface/optimum/issues/1459 | closed | [] | 2023-10-17T10:03:15Z | 2024-08-27T06:16:17Z | 3 | arvisioncode |
pytorch/vision | 8,050 | Any plans to implement the functions in opencv? | ### 🚀 The feature
Expect an implementation of some of the apis available in opencv (e.g. cv2.findContours(), cv2.connectedComponents(), ...)
### Motivation, pitch
Just want torchvision to be able to do these things faster using gpus, and make these api faster.
### Alternatives
_No response_
### Additional contex... | https://github.com/pytorch/vision/issues/8050 | open | [] | 2023-10-17T07:59:40Z | 2023-10-18T18:24:54Z | 1 | mortal-Zero |
huggingface/diffusers | 5,416 | How to correctly implement a class-conditional model | Hi, I'd like to implement a DDPM that is class-conditioned, but not conditioned on anything else (no text), using `UNet2DConditionModel`. I'm training from scratch.
I'm calling the model with `noise_pred = model(noisy_images, timesteps, class_labels=class_labels, return_dict=False)[0]`, but I get the error `UNet2D... | https://github.com/huggingface/diffusers/issues/5416 | closed | [] | 2023-10-16T20:53:41Z | 2023-10-16T21:02:39Z | null | nickk124 |
huggingface/chat-ui | 511 | ChatUI on HuggingFace Spaces errors out with PermissionError: [Errno 13] Permission denied | When I try following the below two tutorials I hit the same error, where the container code tries to create a directory and fails due to permission issues on the host
tutorials:
1. https://huggingface.co/docs/hub/spaces-sdks-docker-chatui#chatui-on-spaces
2. https://huggingface.co/blog/Llama2-for-non-engineers
... | https://github.com/huggingface/chat-ui/issues/511 | open | [
"support",
"spaces"
] | 2023-10-16T08:29:06Z | 2023-12-17T02:58:52Z | 3 | Skrelan |
huggingface/candle | 1,105 | How to run a model in Fp16? | EDIT: Never mind, see below comment | https://github.com/huggingface/candle/issues/1105 | closed | [] | 2023-10-16T03:32:16Z | 2023-10-18T19:40:54Z | null | joeyballentine |
huggingface/candle | 1,104 | How to load .pth file weights? | I've been experimenting with candle and re-implementing ESRGAN in it. I ended up needing to convert a couple .pth files I have into .safetensors format in python in order to load them into the VarBuilder. I saw on the docs you say this supports loading pytorch weights directly though, but there does not seem to be an e... | https://github.com/huggingface/candle/issues/1104 | open | [] | 2023-10-16T03:29:53Z | 2023-10-19T22:01:42Z | null | joeyballentine |
huggingface/datasets | 6,303 | Parquet uploads off-by-one naming scheme | ### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71... | https://github.com/huggingface/datasets/issues/6303 | open | [] | 2023-10-14T18:31:03Z | 2023-10-16T16:33:21Z | 4 | ZachNagengast |
huggingface/diffusers | 5,392 | How to train an unconditional latent diffusion model ? | It seems that there is only one available unconditional LDM model (CompVis/ldm-celebahq-256).
```python
pipeline = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256")
```
How can I train this unconditional model on my own dataset? The LDM model includes the training of both `VQModel` and `UNet2DModel`, but the... | https://github.com/huggingface/diffusers/issues/5392 | closed | [] | 2023-10-14T03:32:34Z | 2024-02-16T08:59:49Z | null | Rashfu |
huggingface/safetensors | 368 | Streaming weights into a model directly? | ### Feature request
Hi! I'm curious whether there is a way to stream model weights from disk into the on-GPU model directly?
That is, [I see](https://huggingface.co/docs/safetensors/speed#gpu-benchmark) that by settings `os.environ["SAFETENSORS_FAST_GPU"] = "1"` and using `load_file`, you can stream the weights t... | https://github.com/huggingface/safetensors/issues/368 | closed | [
"Stale"
] | 2023-10-13T15:21:33Z | 2023-12-11T01:48:41Z | 1 | garrett361 |
huggingface/huggingface_hub | 1,734 | Docs request: what is loaded/loadable? | When working with `get_model_status`: https://huggingface.co/docs/huggingface_hub/main/en/package_reference/inference_client#huggingface_hub.InferenceClient.get_model_status
It tells you if the model is loadable and/or loaded. The question is, what does this mean?
- What does "loaded" mean... what is it loaded int... | https://github.com/huggingface/huggingface_hub/issues/1734 | closed | [] | 2023-10-13T04:59:47Z | 2023-10-17T14:18:11Z | null | jamesbraza |
huggingface/trl | 868 | What is the difference of these two saved checkpoints in sft_llama2 example? | I am trying to understand this
https://github.com/huggingface/trl/blob/main/examples/research_projects/stack_llama_2/scripts/sft_llama2.py#L206C1-L206C1
`trainer.model.save_pretrained(output_dir)` seems to already save the base+LoRA model to the "final_checkpoint".
Then what is doing here `model = model.merge_and_un... | https://github.com/huggingface/trl/issues/868 | closed | [] | 2023-10-13T04:31:57Z | 2023-10-30T17:15:35Z | null | Emerald01 |
huggingface/blog | 1,577 | How to use mAP metric for object detection task? | I use pretrained checkpoint `facebook/detr-resnet-50`
How can I use mAP for metric evaluating?
```
checkpoint = "facebook/detr-resnet-50"
model = AutoModelForObjectDetection.from_pretrained(
checkpoint, ..., ignore_mismatched_sizes=True,
)
metric = evaluate.load('repllabs/mean_average_precision')
def c... | https://github.com/huggingface/blog/issues/1577 | open | [] | 2023-10-12T13:58:52Z | 2023-12-04T12:01:33Z | null | IamSVP94 |
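Whatever metric backend the snippet above plugs into `compute_metrics`, mAP is built from IoU-thresholded box matching. A dependency-free IoU helper for `[x0, y0, x1, y1]` boxes (illustrative, not the `evaluate` implementation):

```python
def box_iou(a, b):
    """IoU of two boxes in [x0, y0, x1, y1] format."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

assert box_iou([0, 0, 2, 2], [0, 0, 2, 2]) == 1.0               # perfect match
assert box_iou([0, 0, 2, 2], [2, 2, 4, 4]) == 0.0               # disjoint
assert abs(box_iou([0, 0, 2, 2], [1, 0, 3, 2]) - 1 / 3) < 1e-9  # partial overlap
```

A prediction counts as a true positive when its IoU with a ground-truth box of the same class exceeds the threshold (0.5, or 0.5:0.95 averaged for COCO-style mAP).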
huggingface/accelerate | 2,051 | Accelerate Examples: What is expected to print on terminal? | ### System Info
```Shell
- `Accelerate` version: 0.23.0
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Numpy version: 1.26.0
- PyTorch version (GPU?): 1.13.1 (True)
- PyTorch XPU available: False
- PyTorch NPU available: False
- System RAM: 1007.69 GB
- GPU type: NVIDI... | https://github.com/huggingface/accelerate/issues/2051 | closed | [] | 2023-10-12T13:50:40Z | 2023-10-12T15:06:44Z | null | davidleejy |
pytorch/examples | 1,194 | resume train | When I try to resume ImageNet training, this happens. How do I solve this problem?


| https://github.com/pytorch/examples/issues/1194 | open | [] | 2023-10-12T11:39:48Z | 2024-05-31T06:03:55Z | 2 | hefangnan |
huggingface/text-generation-inference | 1,137 | When I start the model, I get a warning message. I want to know why and how to solve it. | ### System Info
- OS version: Debian GNU/Linux 11 (bullseye)
- Commit sha: 00b8f36fba62e457ff143cce35564ac6704db860
- Cargo version: 1.70.0
- model: Starcoder
- nvidia-smi:
```
Thu Oct 12 18:23:03 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-... | https://github.com/huggingface/text-generation-inference/issues/1137 | closed | [] | 2023-10-12T10:33:38Z | 2023-10-19T07:02:58Z | null | coder-xieshijie |
huggingface/datasets | 6,299 | Support for newer versions of JAX | ### Feature request
Hi,
I like your idea of adapting the datasets library to be usable with JAX. Thank you for that.
However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX <= 0.3... It is very cumbersome!
What is the rationale for such a lim... | https://github.com/huggingface/datasets/issues/6299 | closed | [
"enhancement"
] | 2023-10-12T10:03:46Z | 2023-10-12T16:28:59Z | 0 | ddrous |
huggingface/diffusers | 5,372 | How to use safety_checker in StableDiffusionXLPipeline? | ### Describe the bug
I want to use safety_checker in StableDiffusionXLPipeline, but it seems that `safety_checker` keyword does not take effect
### Reproduction
```python
pipe = StableDiffusionXLPipeline.from_pretrained(
"nyxia/mysterious-xl",
torch_dtype=torch.float16,
safety_checker = StableD... | https://github.com/huggingface/diffusers/issues/5372 | closed | [
"bug"
] | 2023-10-12T03:39:23Z | 2023-10-12T08:13:28Z | null | hundredwz |
huggingface/transformers.js | 354 | [Question] Whisper Progress | Is it possible to obtain the transcription progress of Whisper's model, ranging from 0 to 100%? | https://github.com/huggingface/transformers.js/issues/354 | open | [
"question"
] | 2023-10-11T20:41:01Z | 2025-05-23T10:12:13Z | null | FelippeChemello |
huggingface/text-generation-inference | 1,131 | How to send a request with system, user and assistant prompts? | How do I send a request prompt (system, user, or assistant) like ChatGPT, where we can specify which of the 3 categories the prompt belongs to? | https://github.com/huggingface/text-generation-inference/issues/1131 | closed | [
"Stale"
] | 2023-10-11T09:21:14Z | 2024-01-10T17:26:12Z | null | ShRajSh |
huggingface/dataset-viewer | 1,962 | Install dependency `music_tag`? | Requested here: https://huggingface.co/datasets/zeio/baneks-speech/discussions/1 | https://github.com/huggingface/dataset-viewer/issues/1962 | closed | [
"question",
"custom package install",
"P2"
] | 2023-10-11T08:07:53Z | 2024-02-02T17:18:50Z | null | severo |
huggingface/datasets | 6,292 | how to load the image of dtype float32 or float64 | _FEATURES = datasets.Features(
{
"image": datasets.Image(),
"text": datasets.Value("string"),
},
)
    The datasets builder seems to only support uint8 data. How can I load float-dtype data? | https://github.com/huggingface/datasets/issues/6292 | open | [] | 2023-10-11T07:27:16Z | 2023-10-11T13:19:11Z | null | wanglaofei |
huggingface/optimum | 1,442 | Steps to quantize Llama 2 models for CPU inference | Team,
could you please share the steps to quantize the Llama 2 models for CPU inference.
When i followed the ORTModelForCasualLM, faced challenges stating token is 401 forbidden even though token passed.
For offline model faced issue something related to cannot load from local directory.
Please share steps. | https://github.com/huggingface/optimum/issues/1442 | open | [
"question",
"quantization"
] | 2023-10-11T05:32:58Z | 2024-10-15T16:19:59Z | null | eswarthammana |
huggingface/dataset-viewer | 1,956 | upgrade hfh to 0.18.0? | https://github.com/huggingface/huggingface_hub/releases/tag/v0.18.0 | https://github.com/huggingface/dataset-viewer/issues/1956 | closed | [
"question",
"blocked-by-upstream",
"dependencies",
"P2"
] | 2023-10-10T12:33:04Z | 2023-11-16T11:47:04Z | null | severo |
huggingface/diffusers | 5,353 | How to use FreeU in SimpleCrossAttnUpBlock2D? | I've tried to change your code in order to maintain SimpleCrossAttnUpBlock2D however it seems that shapes doesn't fit up. How can I do it? Thanks!
```Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/gradio/routes.py", line 523, in run_predict
output = await app.get_blocks().p... | https://github.com/huggingface/diffusers/issues/5353 | closed | [] | 2023-10-10T09:13:22Z | 2023-10-11T05:11:38Z | null | americanexplorer13 |
huggingface/computer-vision-course | 25 | Should we use safetensors? | I wondered if we should add an official recommendation to use the `safetensors` saving format wherever possible.
But I have to admit, that I'm not that familiar with it, so I don't know how much overhead it would be in cases where we cannot use a HF library like `transformers`. | https://github.com/huggingface/computer-vision-course/issues/25 | closed | [
"question"
] | 2023-10-09T19:38:39Z | 2023-10-11T20:50:32Z | null | johko |
huggingface/tokenizers | 1,362 | When decoding an English sentence with the 'add_prefix_space' parameter set to 'False,' how can I add spaces? | I train a tokenizer and set 'add_prefix_space' to 'False', How can I ensure that BBPE tokenizers correctly handle space division when decoding a sequence ?
```
normalizer = normalizers.Sequence([NFC(), StripAccents()])
tokenizer.normalizer = normalizer
tokenizer.pre_tokenizer = pre_tokenizers.Sequence(
[Whites... | https://github.com/huggingface/tokenizers/issues/1362 | closed | [] | 2023-10-09T16:19:43Z | 2023-10-30T14:25:24Z | null | enze5088 |
huggingface/dataset-viewer | 1,952 | filter parameter should accept any character? | https://datasets-server.huggingface.co/filter?dataset=polinaeterna/delays_nans&config=default&split=train&where=string_col=йопта&offset=0&limit=100
gives an error
```
{"error":"Parameter 'where' is invalid"}
``` | https://github.com/huggingface/dataset-viewer/issues/1952 | closed | [
"bug",
"question",
"P1"
] | 2023-10-09T13:59:20Z | 2023-10-09T17:26:15Z | null | severo |
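Independent of whether the server ultimately accepts raw Cyrillic in `where`, percent-encoding the value is the safe way to put it in a query string. A stdlib sketch of building that request URL:

```python
from urllib.parse import quote, unquote

where = "string_col=йопта"
encoded = quote(where, safe="=")  # keep '=' readable; non-ASCII becomes UTF-8 percent-escapes

url = (
    "https://datasets-server.huggingface.co/filter"
    "?dataset=polinaeterna/delays_nans&config=default&split=train"
    f"&where={encoded}&offset=0&limit=100"
)

assert "йопта" not in url            # only ASCII remains in the URL
assert unquote(encoded) == where     # the server can decode it back
```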
huggingface/chat-ui | 495 | Make the description customizable in the .env | I'd like to customize the description of chat-ui as marked below. But I can't find how to do it in your tutorial, README.md.
It would be highly appreciated if you assist.

| https://github.com/huggingface/chat-ui/issues/495 | closed | [
"enhancement",
"good first issue",
"front",
"hacktoberfest"
] | 2023-10-09T13:57:32Z | 2023-10-13T13:49:47Z | 7 | sjbpsh |
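This request was closed via the hacktoberfest label, and current chat-ui exposes app-branding knobs in `.env.local`; the exact variable names below are an assumption and may not exist in older checkouts:

```shell
# .env.local — app branding (variable names assumed from current chat-ui;
# PUBLIC_APP_DESCRIPTION may be absent in checkouts predating this issue)
PUBLIC_APP_NAME=MyChat
PUBLIC_APP_DESCRIPTION="An in-house assistant for our team"
```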
huggingface/datasets | 6,287 | map() not recognizing "text" | ### Describe the bug
The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads:
`
ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)`
I have been trying to reproduce it in my code as:
`tokenizedData... | https://github.com/huggingface/datasets/issues/6287 | closed | [] | 2023-10-09T10:27:30Z | 2023-10-11T20:28:45Z | 1 | EngineerKhan |
pytorch/xla | 5,687 | Profiling an XLA program through the step_trace API, but the result cannot be opened in TensorBoard | ## ❓ Questions and Help
TensorBoard reports this error: Failed to load libcupti (is it installed and accessible?)
But I think libcupti loads successfully. Using the command below, I get the correct load info:
lsof -p 430621 | grep cup
python 430621 root mem REG 253,17 7199856 104860301 /usr/local... | https://github.com/pytorch/xla/issues/5687 | open | [
"question"
] | 2023-10-09T08:06:30Z | 2025-04-29T13:11:27Z | null | mars1248 |
huggingface/diffusers | 5,337 | What is the function of `callback` in stable diffusion? | I am reading the source code for stable diffusion pipeline. I wonder what is the function of `callback`? How to use it? Is there an example?
https://github.com/huggingface/diffusers/blob/29f15673ed5c14e4843d7c837890910207f72129/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py#L585C13-L585C21 | https://github.com/huggingface/diffusers/issues/5337 | closed | [
"stale"
] | 2023-10-09T06:02:13Z | 2023-11-16T15:05:20Z | null | g-jing |
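In that era of diffusers, `callback` was invoked every `callback_steps` denoising steps with `(step_index, timestep, latents)`, typically to log progress or capture intermediate latents. A framework-free sketch of the same pattern, with a fake loop standing in for the pipeline:

```python
# Mimics how the pipeline drives `callback(i, t, latents)` every
# `callback_steps` iterations of the denoising loop (no diffusers needed).
def run_denoising_loop(num_steps, callback=None, callback_steps=1):
    latents = 0.0
    for i, t in enumerate(reversed(range(num_steps))):  # fake timestep schedule
        latents += 1.0                                   # fake denoise update
        if callback is not None and i % callback_steps == 0:
            callback(i, t, latents)
    return latents

seen = []
run_denoising_loop(10, callback=lambda i, t, lat: seen.append((i, t)), callback_steps=5)

# The callback fires at steps 0 and 5 only.
assert [s[0] for s in seen] == [0, 5]
```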
huggingface/open-muse | 122 | How to finetune the muse-512? | Thank you for your contributions to the open-source community. After testing your weights, we found that the fine-tuned muse-512 has made significant improvements in image quality. We are very interested in this and would like to know how you performed the fine-tuning on the model. For example, what dataset did you use... | https://github.com/huggingface/open-muse/issues/122 | open | [] | 2023-10-09T05:00:54Z | 2023-10-09T05:00:54Z | null | jiaxiangc |
huggingface/diffusers | 5,335 | How to deploy locally, since the Chinese government has blocked Hugging Face? | ### Describe the bug
I have all the model ckpt/safetensors files, but it still tries to connect to /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-infer
### Reproduction
pipe = diffusers.StableDiffusionPipeline.from_single_file(base_model,
torch_dtype=torch... | https://github.com/huggingface/diffusers/issues/5335 | closed | [
"bug",
"stale"
] | 2023-10-09T01:55:44Z | 2024-01-17T10:44:31Z | null | Louis24 |
huggingface/chat-ui | 485 | chat-ui and TGI Connect Timeout Error | Hi, I used TGI as a backend for Llama 2. When I put the TGI endpoint in chat-ui (TGI and chat-ui are on the same machine), it cannot connect. Would you give me some suggestions? Thank you!
TGI works well.
```shell
curl http://127.0.0.1:8081/generate_stream \
-X POST \
-d '{"inputs":"What is Deep Learning?","para... | https://github.com/huggingface/chat-ui/issues/485 | closed | [
"support"
] | 2023-10-08T06:36:26Z | 2025-01-16T23:13:34Z | 8 | ViokingTung |
huggingface/transformers | 26,665 | How to resume training from a checkpoint when training LoRA using deepspeed? | ### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distribut... | https://github.com/huggingface/transformers/issues/26665 | closed | [] | 2023-10-08T03:51:00Z | 2024-01-06T08:06:06Z | null | Sakurakdx |
huggingface/chat-ui | 484 | Rich text input for the chat bar? | Taking a nifty feature from the Claude API here: models on HuggingChat, and most models used with Chat UI, can process and fluently speak Markdown.
It's pretty easy to take something like Remarkable and render rich text, like titles, bolds and lists.
It's helpful for users to organize content, to be able to hi... | https://github.com/huggingface/chat-ui/issues/484 | open | [
"enhancement",
"front"
] | 2023-10-07T19:25:45Z | 2023-10-09T00:20:09Z | 2 | VatsaDev |
pytorch/vision | 8,026 | How to make the RegionProposalNetwork generate more proposals in FasterRCNN? | I'm trying to update the proposal loss function of MaskRCNN to increase the recall. I'm trying to do this by adding a positive weight to the BCE function.
How I create my proposal loss function:
```
CLASS_WEIGHTS = torch.tensor([50])
def compute_loss(
objectness: Tensor, pred_bbox_deltas: Tensor, la... | https://github.com/pytorch/vision/issues/8026 | open | [] | 2023-10-07T00:06:53Z | 2023-10-08T08:36:19Z | null | darian69 |
huggingface/chat-ui | 480 | Porting through nginx on aws | I have this up and running with aws but it only works on localhost on my machine. How can use Nginx to port this to some address? | https://github.com/huggingface/chat-ui/issues/480 | open | [
"support"
] | 2023-10-06T10:39:52Z | 2023-10-08T21:13:10Z | 0 | Mr-Nobody1 |
huggingface/sentence-transformers | 2,330 | How to make predictions in NLI | I can't make predictions for the NLI task when running based on the file training_NLI. Can you help me? | https://github.com/huggingface/sentence-transformers/issues/2330 | closed | [] | 2023-10-06T08:52:59Z | 2024-01-31T16:18:18Z | null | trthminh |
pytorch/pytorch | 110,630 | Memory efficient attention for tensors where the last dimension is not divisible by 8 | ### 🚀 The feature, motivation and pitch
Currently, using `scaled_dot_product_attention` and the memory efficient kernel requires that the last dimension of the inputs is divisible by 8. Typically, this corresponds to the dimension per head in multihead attention, for example when using the `[batch, head, seq, dim]`... | https://github.com/pytorch/pytorch/issues/110630 | open | [
"triaged",
"module: sdpa"
] | 2023-10-05T18:23:58Z | 2024-11-27T20:11:39Z | null | davidbuterez |
huggingface/candle | 1,036 | How to fine-tune large models? | Hello all,
How should I finetune a large model? Are there implementations like `peft` in Python for Candle? Specifically, how should I train a quantized, LoRA model? I saw [candle-lora](https://github.com/EricLBuehler/candle-lora), and plan to use that but do not know how to quantize a large model. | https://github.com/huggingface/candle/issues/1036 | closed | [] | 2023-10-05T16:43:17Z | 2024-12-03T15:55:53Z | null | nullptr2nullptr |
pytorch/vision | 8,024 | How to update RegionProposalNetwork loss function in FasterRCNN to generate MORE proposals? | https://github.com/pytorch/vision/issues/8024 | closed | [] | 2023-10-05T14:52:06Z | 2023-10-07T00:26:21Z | null | darian69 | |
huggingface/trl | 837 | What is the loss mask for special tokens in SFTTrainer | ### System Info
latest transformers
### Who can help?
@muellerzr and @pacman100
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### R... | https://github.com/huggingface/trl/issues/837 | closed | [] | 2023-10-05T13:49:52Z | 2023-11-13T18:23:54Z | null | RonanKMcGovern |
huggingface/chat-ui | 476 | Chat-ui failing on Edge, Chrome and Safari. | It seems to be working on Firefox for mac and Safari for iOS.
Stacktrace in console from Chrome:
```
Failed to load resource: the server responded with a status of 404 ()
UrlDependency.4e6706f5.js:1 Failed to load resource: the server responded with a status of 404 ()
stores.6bc4a41f.js:1 Failed to loa... | https://github.com/huggingface/chat-ui/issues/476 | closed | [
"support"
] | 2023-10-05T13:03:01Z | 2023-10-05T13:56:49Z | 4 | mhenrichsen |
huggingface/dataset-viewer | 1,929 | Add a "feature" or "column" level for better granularity | For example, if we support statistics for a new type of column, or if we change the way we compute some stats, I think that we don't want to recompute the stats for all the columns, just for one of them.
It's a guess, because maybe it's more efficient to have one job that downloads the data and computes every possi... | https://github.com/huggingface/dataset-viewer/issues/1929 | closed | [
"question",
"refactoring / architecture",
"P2"
] | 2023-10-05T08:24:50Z | 2024-02-22T21:24:09Z | null | severo |
huggingface/huggingface.js | 251 | How to get SpaceRuntime information? | Inside the hub library, I can see that there's `SpaceRuntime`, which specifies the hardware requirements. `SpaceRuntime` is defined inside `ApiSpaceInfo`.
But it seems that it's not being emitted.
```
const items: ApiSpaceInfo[] = await res.json();
for (const item of items) {
yield {
id: item._id,
nam... | https://github.com/huggingface/huggingface.js/issues/251 | closed | [] | 2023-10-04T18:23:42Z | 2023-10-05T08:26:07Z | null | namchuai |
huggingface/chat-ui | 471 | Custom chatbot which includes sources such as PDFs, databases and a specific website only. | I have a chatbot which can query PDFs, a database, and a particular website in Python. How do I include, say, the quantized models, RAG sources and the retrieval logic in this chat UI? | https://github.com/huggingface/chat-ui/issues/471 | closed | [] | 2023-10-04T04:36:23Z | 2024-07-08T16:22:02Z | 2 | pranavbhat12 |
huggingface/huggingface.js | 250 | How to apply pagination for listModels? | Thanks for the library!
Could you please help me with how to apply pagination for the `listModels` API from @huggingface/hub?
I don't know how to specify the offset. | https://github.com/huggingface/huggingface.js/issues/250 | closed | [] | 2023-10-03T12:39:17Z | 2023-10-04T01:27:01Z | null | namchuai |
huggingface/transformers.js | 341 | [Question] Custom stopping criteria for text generation models | Is it possible to pass a custom `stopping_criteria` to `generate()` method? Is there a way to interrupt generation mid-flight? | https://github.com/huggingface/transformers.js/issues/341 | closed | [
"question"
] | 2023-10-02T10:35:33Z | 2025-10-11T10:12:10Z | null | krassowski |
pytorch/TensorRT | 2,356 | ❓ [Question] How do you find the exact line of python code that triggers a backend compiler error? | I was trying to compile the huggingface Llama 2 model using the following code:
```python
import os
import torch
import torch_tensorrt
import torch.backends.cudnn as cudnn
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch._dynamo as dynamo
from optimum.onnxruntime import ORTModelForCau... | https://github.com/pytorch/TensorRT/issues/2356 | open | [
"question",
"No Activity"
] | 2023-10-02T01:15:22Z | 2024-01-02T00:02:08Z | null | BDHU |
huggingface/datasets | 6,273 | Broken Link to PubMed Abstracts dataset . | ### Describe the bug
The link provided for the dataset is broken,
data_files =
[https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url)
The
### Steps to reproduce the bug
Steps to reproduce:
1) Head over to [https://huggingface.co/learn/nlp-course/chapt... | https://github.com/huggingface/datasets/issues/6273 | open | [] | 2023-10-01T19:08:48Z | 2024-04-28T02:30:42Z | 5 | sameemqureshi |
huggingface/chat-ui | 466 | Deploy with Langchain Agent | I have built a Langchain agent which interacts with a Vicuna model hosted with TGI, and the web UI is currently hosted with Gradio on Spaces. I'd like the UI to be more polished (like HuggingChat/ChatGPT), with persistence. I couldn't find any docs related to how to use a Langchain agent with chat-ui. If anyone could shed some li... | https://github.com/huggingface/chat-ui/issues/466 | closed | [] | 2023-09-30T21:29:38Z | 2023-10-03T09:14:48Z | 1 | Tejaswgupta |
huggingface/accelerate | 2,018 | A demo of how to perform multi-GPU parallel inference for transformer LLMs is needed | In the current demo, "[Distributed inference using Accelerate](https://huggingface.co/docs/accelerate/usage_guides/distributed_inference)", it is still not clear how to perform multi-GPU parallel inference for a transformer LLM. This gap in the demo has hindered not just me, but also many people in adopt...
huggingface/candle | 1,006 | Question: How to use quantized tensors? | Hello everybody,
I was looking through Candle's quantized tensor code when I noticed that there is only a matmul_t implemented for QuantizedType, and no other operations. Perhaps other operations could be added?
In addition, is there an example of using quantized tensors/converting them from normal tensors?
Th... | https://github.com/huggingface/candle/issues/1006 | closed | [] | 2023-09-30T13:35:16Z | 2024-08-17T15:20:58Z | null | EricLBuehler |
huggingface/transformers.js | 340 | question | hi @xenova, is there still any position as a JS/TS backend developer? Next week, 06 Oct, I will be free after finishing the senlife project I am working on for a UK client. This is the app that I built the backend for:
https://play.google.com/store/apps/details?id=com.senlife.app&hl=en&gl=US
| https://github.com/huggingface/transformers.js/issues/340 | closed | [
"question"
] | 2023-09-30T11:35:23Z | 2023-10-02T10:01:20Z | null | jedLahrim |
huggingface/chat-ui | 465 | Where to deploy other than HF? | Hey,
I've been trying to deploy the chat-ui somewhere I can use a custom domain (such as Vercel and Azure).
Each of them comes with different problems that I have yet to solve.
Vercel issues described [here](https://github.com/huggingface/chat-ui/issues/212).
It does not seem like I can deploy this as a Az... | https://github.com/huggingface/chat-ui/issues/465 | closed | [] | 2023-09-29T13:58:42Z | 2023-12-07T19:10:00Z | 2 | mhenrichsen |
huggingface/dataset-viewer | 1,892 | Use swap to avoid OOM? | The pods don't have swap. Is it possible to have swap to avoid OOM, even at the expense of longer processing time in workers? | https://github.com/huggingface/dataset-viewer/issues/1892 | closed | [
"question",
"infra",
"P2"
] | 2023-09-29T13:48:54Z | 2024-06-19T14:23:36Z | null | severo |
huggingface/transformers.js | 337 | [Question] How do I specify a non-huggingface URL (that doesn't start with `/models/`) in `AutoTokenizer.from_pretrained`? | My tokenizer files are hosted within this folder:
```
https://example.com/public/models/TheBloke/Llama-2-13B-GPTQ/
```
First I load the lib:
```js
let { AutoTokenizer } = await import('https://cdn.jsdelivr.net/npm/@xenova/transformers@2.6.1');
```
Then I tried what I thought would be the most obvious/intuitive ... | https://github.com/huggingface/transformers.js/issues/337 | closed | [
"question"
] | 2023-09-28T21:00:41Z | 2023-09-28T22:03:05Z | null | josephrocca |
pytorch/TensorRT | 2,352 | ❓ [Question] How do you build Torch-TensorRT from origin/main with dependence on tensorrt 8.5.2 from Jetpack5.1? | ## ❓ Question
When compiling the latest version of Torch-TensorRT from `origin/main` (`2.2.0.dev0+76de80d0`) on Jetpack5.1 using the latest locally compiled PyTorch (`2.2.0a0+a683bc5`) (so that I can use the latest v2 transforms in TorchVision (`0.17.0a0+4cb3d80`)), the resulting python package has a dependence on `... | https://github.com/pytorch/TensorRT/issues/2352 | open | [
"question",
"No Activity"
] | 2023-09-28T20:25:41Z | 2024-01-01T00:02:42Z | null | BrettRyland |
huggingface/transformers.js | 334 | [Question] failed to call OrtRun(). error code = 1. When I try to load Xenova/pygmalion-350m | I'm getting the error `failed to call OrtRun(). error code = 1.` when I try to load Xenova/pygmalion-350m. The error is as follows:
```
wasm-core-impl.ts:392 Uncaught Error: failed to call OrtRun(). error code = 1.
at e.run (wasm-core-impl.ts:392:19)
at e.run (proxy-wrapper.ts:215:17)
at e.OnnxruntimeWeb... | https://github.com/huggingface/transformers.js/issues/334 | open | [
"question"
] | 2023-09-28T01:34:36Z | 2023-12-16T17:14:12Z | null | sebinthomas |
huggingface/datasets | 6,267 | Multi label class encoding | ### Feature request
I have a multi-label dataset and I'd like to be able to class-encode the column and store the mapping directly in the features, just as I can with a single-label column. `class_encode_column` currently does not support multi-labels.
Here's an example of what I'd like to encode:
```
data = {
... | https://github.com/huggingface/datasets/issues/6267 | open | [
"enhancement"
] | 2023-09-27T22:48:08Z | 2023-10-26T18:46:08Z | 7 | jmif |
huggingface/huggingface_hub | 1,698 | How to change cache dir? | ### Describe the bug
by default, all downloaded models are stored in
> cache_path = '/root/.cache/huggingface/hub'
Is there a way to change this dir to something else?
I tried to set "HUGGINGFACE_HUB_CACHE"
```
import os
os.environ['HUGGINGFACE_HUB_CACHE'] = '/my_workspace/models_cache'
```
but it d... | https://github.com/huggingface/huggingface_hub/issues/1698 | closed | [
"bug"
] | 2023-09-27T07:45:30Z | 2023-09-27T09:08:34Z | null | adhikjoshi |
huggingface/accelerate | 2,010 | How to set different seed for DDP data sampler for every epoch | Hello there!
I am using the following code to build my data loader.
```python
data_loader_train = DataLoader(
dataset_train,
collate_fn=collate_fn,
batch_size=cfg.data.train_batch_size,
num_workers=cfg.data.num_workers,
pin_memory=cfg.data.pin_memory,
)
data_loader... | https://github.com/huggingface/accelerate/issues/2010 | closed | [] | 2023-09-27T02:46:10Z | 2023-09-27T11:32:22Z | null | Mountchicken |
huggingface/transformers | 26,412 | How to run Trainer + DeepSpeed + Zero3 + PEFT | ### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.24.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+c... | https://github.com/huggingface/transformers/issues/26412 | open | [
"WIP"
] | 2023-09-26T10:31:46Z | 2024-01-11T15:40:02Z | null | BramVanroy |
pytorch/data | 1,201 | Loading `.tfrecords` files that require a deserialization method | ### 🐛 Describe the bug
Hi,
I have a dataset in TFRecords format and am trying to move to TorchData's API for loading tfrecords files.
This is the minimal example:
```python3
datapipe1 = IterableWrapper(['path/to/my/tfrecords/file.tfrecords'])
datapipe2 = FileOpener(datapipe1, mode="b")
tfrecord_loader_dp = da... | https://github.com/meta-pytorch/data/issues/1201 | open | [] | 2023-09-26T09:17:39Z | 2024-10-21T16:25:37Z | 1 | fteufel |
pytorch/TensorRT | 2,348 | ❓ [Question] How do you build and use PytorchTRT on Windows 10? | ## ❓ Question
After trying even MSVC instead of Ninja, I was kind of able to generate some DLL files. The files are torchtrt.dll, torch_plugins.dll, torchtrt_runtimes.dll, torchtrtc.exe.
Now what do I do with these? I just assumed I should put them in the lib folder "C:\Users\{Username}\AppData\Local\Programs\Python\... | https://github.com/pytorch/TensorRT/issues/2348 | closed | [
"question"
] | 2023-09-26T04:48:04Z | 2023-09-29T03:15:04Z | null | jensdraht1999 |
pytorch/audio | 3,619 | torchaudio/compliance/kaldi.py FBank _get_window function can not support multiprocessing? | ### 🐛 Describe the bug
I use torchaudio 0.13.0+cu117 to compute Fbank features. Using it in a single thread is fine, but I want to use multiprocessing, like this:
```
p = multiprocessing.Pool(1)
xx = p.apply_async(audio_function, args=(audio_in,))
p.close()
p.join()
emb = xx.get()
```
The code hangs and gets nothing, i use de... | https://github.com/pytorch/audio/issues/3619 | closed | [] | 2023-09-26T02:06:19Z | 2023-10-09T05:39:47Z | 1 | haha010508 |