Dataset schema (column: type, observed range):
repo: string (147 distinct values)
number: int64 (1 to 172k)
title: string (2 to 476 chars)
body: string (0 to 5k chars)
url: string (39 to 70 chars)
state: string (2 distinct values)
labels: list (0 to 9 items)
created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
comments: int64 (0 to 58)
user: string (2 to 28 chars)
huggingface/unity-api
23
I need to specify text or text_target in text classification
I tried calling the API via huggingfaceapi.textclassification("some string", response => ...) but got the error "you need to specify text or text_target". Where can I specify that in my Unity C# code?
https://github.com/huggingface/unity-api/issues/23
open
[ "question" ]
2024-01-27T19:24:25Z
2024-01-27T19:24:25Z
null
helenawsu
huggingface/transformers.js
543
Converting a model to ONNX using the given script is hard (it fails most of the time)
### Question I have tried to use starcoder model by bundling it using your ONNX script but it failed with some exception. Model: https://huggingface.co/HuggingFaceH4/starchat-beta or https://huggingface.co/bigcode/starcoderbase logs: ```bash $ python -m scripts.convert --quantize --model_id HuggingFaceH4/s...
https://github.com/huggingface/transformers.js/issues/543
open
[ "question" ]
2024-01-27T07:32:42Z
2024-01-30T06:48:44Z
null
bajrangCoder
huggingface/candle
1,624
How to run the quantized Solar model?
I am trying to run the Solar model, but I am constantly failing. Here are my attempts: 1. [quantized] example (modified) with the Quantized Solar model (local) : Failed. It only outputs nonsense that is unrelated to the question. 2. [llama] example with the Quantized Solar model (local) : Failed. The process ...
https://github.com/huggingface/candle/issues/1624
open
[]
2024-01-27T04:57:50Z
2024-01-27T22:41:12Z
null
555cider
huggingface/peft
1,401
Where is `self.generation_config` coming from?
https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/src/peft/peft_model.py#L1136 The `self.generation_config` variable is not initialized in the model, and it is also not part of any class up the inheritance hierarchy. So I assume it is retrieved from the base model via the implemented `__getattr...
https://github.com/huggingface/peft/issues/1401
closed
[]
2024-01-27T02:02:30Z
2024-03-11T15:04:29Z
null
simon-lund
huggingface/transformers.js
541
Sharp on Linux-x86
### Question Hi, Firstly, many thanks for all your work. My use case is to generate sentence embeddings for semantic matching. I develop on Mac but deploy to AWS Lambda. Your package runs fine out the box on my Mac but fails to load Sharp on Lambda. I spent a couple of days trying lots of different things (fe...
https://github.com/huggingface/transformers.js/issues/541
closed
[ "question" ]
2024-01-26T11:36:05Z
2024-10-18T13:30:10Z
null
Damibu
pytorch/pytorch
118,357
How to modify this framework to support using CUDA unified memory?
### 🚀 The feature, motivation and pitch Hi all, I am a PyTorch user and use open-sourced GPU-based GNN frameworks based on PyTorch. I want to ask if the latest GPU-based Pytorch support CUDA unified memory allocation for tensors? I found a PR https://github.com/pytorch/pytorch/pull/106200 has supported this to ...
https://github.com/pytorch/pytorch/issues/118357
closed
[ "module: cuda", "triaged", "module: CUDACachingAllocator" ]
2024-01-26T03:41:02Z
2024-02-01T03:41:58Z
null
zlwu92
pytorch/vision
8,232
Input Norms and Channel Order for EfficientNet
### 📚 The doc issue The documentation for all pretrained models lacks clear details regarding the order of color channels for input images, as well as the specific normalization mean and standard deviation values. I am particularly looking for this information in relation to the EfficientNet model. ### Suggest a pot...
https://github.com/pytorch/vision/issues/8232
closed
[]
2024-01-25T22:17:07Z
2024-01-26T10:10:49Z
2
ivanstepanovftw
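For the record above: torchvision's multi-weight API exposes the expected preprocessing on the weights enum itself, so the channel order and normalization constants can be read off directly. A minimal sketch (assumes torchvision >= 0.13; the B0 variant is just an example):

```python
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

weights = EfficientNet_B0_Weights.IMAGENET1K_V1
model = efficientnet_b0(weights=weights)

# The preset encodes RGB channel order plus the ImageNet statistics:
# mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225].
preprocess = weights.transforms()
print(preprocess)
```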
huggingface/text-generation-inference
1,487
How to run Docker with a DPO model
### Discussed in https://github.com/huggingface/text-generation-inference/discussions/1481 Originally posted by **tamanna-mostafa**, January 24, 2024: 1. I fine-tuned the mistral 7b model with preference data (32k). 2. Then I ran DPO on the fine tuned model with 12k data. ...
https://github.com/huggingface/text-generation-inference/issues/1487
closed
[]
2024-01-25T17:11:52Z
2024-01-31T16:44:32Z
null
tamanna-mostafa
huggingface/transformers.js
539
How can I use this model?
### Question How can I use this model? https://huggingface.co/shibing624/macbert4csc-base-chinese
https://github.com/huggingface/transformers.js/issues/539
closed
[ "question" ]
2024-01-25T13:12:08Z
2025-10-13T04:58:48Z
null
wfk007
huggingface/text-generation-inference
1,483
How to debug text-generation-server with pdb
### System Info ``` 2024-01-25T09:10:08.096040Z INFO text_generation_launcher: Runtime environment: Target: x86_64-unknown-linux-gnu Cargo version: 1.70.0 Commit sha: 9f18f4c00627e1a0ad696b6774e5ad7ca8f4261c Docker label: sha-9f18f4c nvidia-smi: Thu Jan 25 09:10:08 2024 +--------------------------...
https://github.com/huggingface/text-generation-inference/issues/1483
closed
[]
2024-01-25T09:21:32Z
2024-02-19T07:23:14Z
null
jessiewiswjc
pytorch/serve
2,907
How to use torchserve metrics
### 📚 The doc issue When I call curl http://127.0.0.1:8082/metrics, it always returns empty results, even if it is called after model inference. But there is clearly a corresponding log in model_metrics.log. I saw that the previous Issue said that prometheus is currently supported as a plug-in? I would like to ask if...
https://github.com/pytorch/serve/issues/2907
closed
[]
2024-01-25T07:50:39Z
2024-03-20T21:53:20Z
null
pengxin233
pytorch/serve
2,905
Can I use multiple workers on a single GPU?
Thanks for your great project. I'm a newbie and this is my first experience using TorchServe for my project. I tried to deploy my model using torchserve-gpu. If I want better performance, I can increase the number of workers. When processing with a single worker, GPU usage was not high, so I added more workers to...
https://github.com/pytorch/serve/issues/2905
closed
[ "question", "triaged" ]
2024-01-25T01:40:42Z
2024-01-30T06:15:08Z
null
Twinparadox
huggingface/datasets
6,614
`datasets/downloads` cleanup tool
### Feature request Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do: ``` sudo find /data/huggingface/...
https://github.com/huggingface/datasets/issues/6614
open
[ "enhancement" ]
2024-01-24T18:52:10Z
2024-01-24T18:55:09Z
0
stas00
huggingface/transformers
28,663
How to set stopping criteria in model.generate() when a certain word appears
### Feature request Stopping criteria in model.generate() when a certain word appears. The word at which I need to stop generation is: [/SENTENCE]. But the model doesn't generate the word itself; instead, it generates the subwords [ [/,SEN,TE,NC,E] ]. The corresponding ids from the tokenizer ar...
https://github.com/huggingface/transformers/issues/28663
closed
[]
2024-01-23T15:16:38Z
2024-03-02T08:03:44Z
null
pradeepdev-1995
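Because the marker is split into several subwords, matching token ids is brittle; one robust pattern is to decode the continuation and check for the substring. A minimal sketch using the standard `transformers` StoppingCriteria API (the `[/SENTENCE]` marker comes from the issue; tokenizer, model, and inputs are assumed to exist):

```python
import torch
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnSubstring(StoppingCriteria):
    """Stop as soon as the decoded continuation contains the stop string,
    even when the model emits it as several subword tokens."""

    def __init__(self, tokenizer, stop_string: str, prompt_len: int):
        self.tokenizer = tokenizer
        self.stop_string = stop_string
        self.prompt_len = prompt_len  # number of prompt tokens to skip

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        text = self.tokenizer.decode(input_ids[0, self.prompt_len:])
        return self.stop_string in text

# usage sketch:
# criteria = StoppingCriteriaList(
#     [StopOnSubstring(tokenizer, "[/SENTENCE]", inputs["input_ids"].shape[1])]
# )
# model.generate(**inputs, stopping_criteria=criteria)
```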
pytorch/TensorRT
2,618
❓ [Question] How to compile a model with A16W8?
Hi Torch-TensorRT team: I'm wondering how I can compile a model with 8-bit weights but 16-bit activations? Thanks a lot!
https://github.com/pytorch/TensorRT/issues/2618
open
[ "question" ]
2024-01-23T12:53:23Z
2024-01-25T20:47:14Z
null
jiangwei221
huggingface/dataset-viewer
2,333
Replace TypedDict with dataclass?
Do we want to replace the TypedDict objects with dataclasses? If so: note that the objects we serialize should be serialized too without any change by orjson, at the price of a small overhead (15% in their example: https://github.com/ijl/orjson#dataclass)
https://github.com/huggingface/dataset-viewer/issues/2333
closed
[ "good first issue", "question", "refactoring / architecture", "P2" ]
2024-01-23T10:49:52Z
2024-06-19T14:30:53Z
null
severo
huggingface/optimum
1,664
Bitsandbytes integration in ORTModelForCausalLM.from_pretrained()
### System Info ```shell optimum==1.17.0.dev0 ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ##...
https://github.com/huggingface/optimum/issues/1664
open
[ "bug" ]
2024-01-23T08:56:45Z
2024-01-23T08:56:45Z
0
pradeepdev-1995
pytorch/xla
6,362
How to do multi-machine SPMD training?
## ❓ Questions and Help At present, I have single-machine SPMD training working, but I do not know how to run multi-machine SPMD training. Could you give me a running example? @vanbasten23
https://github.com/pytorch/xla/issues/6362
closed
[]
2024-01-23T03:33:52Z
2024-03-13T09:21:25Z
null
mars1248
pytorch/text
2,223
The Future of torchtext
## ❓ Questions and Help **Description** <!-- Please send questions or ask for help here. --> As of September 2023 development efforts on torchtext has been stopped. I am wondering what's the future plans in this regard. To opt in for hugging face libraries such as tokenizers? Currently without using the torcht...
https://github.com/pytorch/text/issues/2223
closed
[]
2024-01-22T20:40:10Z
2024-03-15T16:18:22Z
1
lordsoffallen
huggingface/peft
1,382
How to set a predefined weight for LoRA and the linear layer
Hi, thanks for your great work! I have a question: when adding LoRA to a linear layer, how do I set a predefined weight for the LoRA branch and the linear layer, instead of just 0.5 : 0.5?
https://github.com/huggingface/peft/issues/1382
closed
[]
2024-01-22T13:24:31Z
2024-02-06T08:37:49Z
null
quqxui
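For context, PEFT applies the LoRA branch as `W x + (lora_alpha / r) * B A x`, so the relative weight of the adapter versus the frozen linear layer is controlled through `lora_alpha` rather than a separate blend parameter. A minimal sketch (the target modules are illustrative):

```python
from peft import LoraConfig

# scaling = lora_alpha / r; here 4 / 8 = 0.5.
# Raising lora_alpha to 16 would double the adapter's contribution.
config = LoraConfig(r=8, lora_alpha=4, target_modules=["q_proj", "v_proj"])
```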
huggingface/accelerate
2,367
how to prevent accelerate from concatenating tensors in batch?
My `collate_fn` in dataloader returns a list of image tensors with different height and width. After using `accelerator.prepare(model, optimizer, dataloader)`, I noticed that accelerate seems to automatically concatenate the tensors during `for step, batch in enumerate(train_dataloader)` iteration, and the size-mismatc...
https://github.com/huggingface/accelerate/issues/2367
closed
[]
2024-01-22T11:26:06Z
2024-01-23T03:24:08Z
null
feiyangsuo
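One likely cause, if the lists survive the bare DataLoader but not the prepared one, is Accelerate's batch dispatching, which slices and re-concatenates tensor batches across processes. A hedged sketch of turning it off (the kwarg is per older Accelerate releases; newer versions move it into `DataLoaderConfiguration`):

```python
from accelerate import Accelerator

# With dispatch_batches=False each process iterates its own shard of the
# dataloader, so a collate_fn that returns plain lists is passed through as-is.
accelerator = Accelerator(dispatch_batches=False)
```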
pytorch/serve
2,899
How to call TorchServe via gRPC from Java
### 📚 The doc issue I want to use gRPC in a Java service to call TorchServe's model, but I haven't found any relevant documentation. ### Suggest a potential alternative/fix _No response_
https://github.com/pytorch/serve/issues/2899
closed
[]
2024-01-22T08:54:02Z
2024-03-20T21:53:35Z
2
pengxin233
huggingface/trl
1,264
How to train the model and ref_model on multiple GPUs with averaging?
For example,I have two RTX 3090 GPUs, and both the model and ref_model are 14 billion parameter models. I need to distribute these two models evenly across the two cards for training. this is my code,but have an error: ``` """ CUDA_VISIBLE_DEVICES=0 python Sakura_DPO.py \ --base_model Qwen-14B-Chat \ --re...
https://github.com/huggingface/trl/issues/1264
closed
[]
2024-01-22T07:54:18Z
2024-08-27T16:08:49Z
null
Minami-su
huggingface/transformers.js
528
Preloading / Lazy loading model before generate requested
### Question Hi @xenova I've been looking around for this type of functionality for ages and didn't realize you had this type of front-end inferencing locked down in such awesome fashion on browsers. Brilliant!!! In the demo at https://xenova.github.io/transformers.js/, the model is loaded one-time when sending...
https://github.com/huggingface/transformers.js/issues/528
closed
[ "question" ]
2024-01-20T23:09:13Z
2024-01-29T23:23:44Z
null
gidzr
huggingface/sentence-transformers
2,429
How to add additional special tokens when using CrossEncoder?
I am using a cross encoder. I would like to add a new special token (e.g., '[EOT]') on top of the pre-trained model & tokenizer (e.g., 'bert-base-uncased'). I am wondering what is the best way to do it?
https://github.com/huggingface/sentence-transformers/issues/2429
open
[]
2024-01-20T15:52:39Z
2024-01-20T16:25:00Z
null
mucun1988
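A minimal sketch of adding a special token to a CrossEncoder, assuming its usual `tokenizer` and `model` attributes (the base checkpoint matches the one named in the issue):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder("bert-base-uncased", num_labels=1)
model.tokenizer.add_special_tokens({"additional_special_tokens": ["[EOT]"]})
# Grow the embedding matrix so the new token id gets a (randomly initialized)
# vector; it will be learned during fine-tuning.
model.model.resize_token_embeddings(len(model.tokenizer))
```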
pytorch/serve
2,898
Low GPU utilization due to CPU-bound preprocessing
I am running torchserve with batch size = 32 and delay = 30ms My preprocessing is CPU bound and my inference is GPU bound. The GPU cannot start until the batch is ready on the CPU. Currently, this leads to a serialized workflow where each stage blocks on the previous one: * Wait for batch to accumulate in th...
https://github.com/pytorch/serve/issues/2898
open
[]
2024-01-20T14:01:30Z
2024-01-24T05:15:05Z
2
assapin
huggingface/optimum
1,658
TextStreamer not supported for ORTCausalLM?
### System Info ```shell System: IBM Power10 `5.14.0-362.13.1.el9_3.ppc64le` OS: RHEL 9.3 Framework versions: optimum==1.16.2 transformers==4.36.2 torch==2.0.1 onnx==1.13.1 onnxruntime==1.15.1 ``` ### Who can help? @JingyaHuang @echarlaix ### Information - [ ] The official example script...
https://github.com/huggingface/optimum/issues/1658
closed
[ "bug" ]
2024-01-20T11:50:11Z
2024-01-29T12:28:40Z
1
mgiessing
huggingface/optimum
1,657
Clarity on convert.py for exporting a model to ONNX (documentation issue)
### Feature request I need some help understanding how this script is supposed to be run / implemented? https://github.com/huggingface/optimum/blob/main/optimum/exporters/onnx/convert.py Questions: 1. is this already included when I pip install optimum? .. which is implemented using the instructions at: https:...
https://github.com/huggingface/optimum/issues/1657
closed
[]
2024-01-20T04:59:10Z
2024-02-07T04:13:20Z
2
gidzr
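For reference, `convert.py` is internal machinery rather than a user entry point; after `pip install optimum[exporters]` the supported routes are the `optimum-cli export onnx` command and the programmatic `main_export` helper. A hedged sketch of the latter (model id, output dir, and task are placeholders):

```python
from optimum.exporters.onnx import main_export

# Roughly equivalent to:
#   optimum-cli export onnx --model <model_id> --task text-classification onnx_out/
main_export(
    "distilbert-base-uncased-finetuned-sst-2-english",
    output="onnx_out",
    task="text-classification",
)
```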
huggingface/candle
1,608
How to keep the model loaded in memory?
Hi guys, I'm trying to setup a local instance of Phi-2 to use it as an autocomplete provider for my text editor. The problem that I have is that each time I call the command to complete a text, the files have to be retrieved and the model loaded - which is a lot of time wasted for real time autocompletion. `/....
https://github.com/huggingface/candle/issues/1608
open
[]
2024-01-19T19:16:54Z
2024-01-20T00:27:22Z
null
tdkbzh
huggingface/peft
1,374
How to activate, and keep frozen, multiple adapters?
Hello all, I have been working on multiple adapters and part of my project requires that I activate all the loaded adapters. However, they must be frozen. I am running this code: ```python adapters_items = iter(tqdm.tqdm(adapters.items())) first_item = next(adapters_items) model_peft = PeftModel.from_pretraine...
https://github.com/huggingface/peft/issues/1374
closed
[]
2024-01-19T11:28:15Z
2024-02-07T11:13:24Z
null
EricLBuehler
pytorch/kineto
857
Why is the PyTorch TensorBoard Profiler deprecated?
What is the reason to deprecate the PyTorch TensorBoard Profiler? https://github.com/pytorch/kineto#pytorch-tensorboard-profiler-deprecated
https://github.com/pytorch/kineto/issues/857
closed
[ "question" ]
2024-01-19T11:26:43Z
2024-04-11T08:51:34Z
null
GuWei007
huggingface/text-generation-inference
1,457
How to use a finetuned model from my local directory
### System Info text-generation 0.6.1 ### Information - [ ] Docker - [X] The CLI directly ### Tasks - [X] An officially supported command - [ ] My own modifications ### Reproduction ``` from text_generation import InferenceAPIClient client = InferenceAPIClient( "/mylocalpath/finetunedmodel") t...
https://github.com/huggingface/text-generation-inference/issues/1457
closed
[ "Stale" ]
2024-01-19T06:18:41Z
2024-03-10T01:45:51Z
null
pradeepdev-1995
huggingface/transformers
28,598
What is the correct input format when fine-tuning GPT-2 for text generation with batched input?
### System Info - `transformers` version: 4.33.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.3 - Accelerate version: 0.22.0 - Accelerate config: not found - PyTorch version (GPU?): 2.0.1+cpu (False) - Tensorflow version (GPU?...
https://github.com/huggingface/transformers/issues/28598
closed
[]
2024-01-19T06:17:29Z
2024-01-22T01:49:43Z
null
minmie
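A minimal sketch of the usual batched setup for causal-LM fine-tuning of GPT-2: pad with EOS (GPT-2 ships without a pad token) and mask the padding out of the loss. The example texts are placeholders:

```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token  # GPT-2 has no pad token by default

batch = tok(
    ["a short example", "a noticeably longer second example"],
    padding=True,
    return_tensors="pt",
)
labels = batch["input_ids"].clone()
labels[batch["attention_mask"] == 0] = -100  # -100 is ignored by the LM loss
batch["labels"] = labels
```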
pytorch/xla
6,331
How to choose XRT runtime when using Torch/XLA 2.1.0?
The PJRT docs say that setting `XRT_TPU_CONFIG` would choose the XRT runtime, but even when I set it I see the following warnings in the logs, and PJRT gets enabled. My model trains faster on XRT but I'd like to upgrade to 2.1.0. Thanks! ``` WARNING:root:PJRT is now the default runtime. For more information, see ht...
https://github.com/pytorch/xla/issues/6331
closed
[]
2024-01-19T02:59:11Z
2024-01-19T23:56:54Z
null
andrey-klochkov-liftoff
huggingface/transformers
28,597
How to find or create the `model_state_dict.bin` file for the `convert_llava_weights_to_hf.py` script
Hi @younesbelkada, Following up on the [fix to the LLaVA convert script](https://github.com/huggingface/transformers/pull/28570) and thanks for all the help with the PR! I encountered some issue with the convert script and wanted to ask about the recommended way to create the `model_state_dict.bin` file specified...
https://github.com/huggingface/transformers/issues/28597
closed
[]
2024-01-19T02:38:31Z
2024-01-22T14:28:20Z
null
isaac-vidas
huggingface/chat-ui
708
Add support for other API endpoints
It would be nice if HuggingChat could be used locally, but calling other remote LLM endpoints other than OpenAI. For instance, this could be mistral.ai 's API endpoints (same as OpenAI - only difference is model name), or a custom server configured for it. Perhaps just adding a variable in the .env file defining th...
https://github.com/huggingface/chat-ui/issues/708
open
[ "support", "models" ]
2024-01-18T18:27:27Z
2024-01-25T17:28:28Z
4
fbarbe00
pytorch/TensorRT
2,606
❓ [Question] mlp running with torch_tensorrt slower than with inductor?
## ❓ Question I am within the nvcr.io/nvidia/pytorch:23.12-py3 container. The performance of torch_tensorrt is wrose than inductor. Details: example code ```python import torch import torch_tensorrt import torch.nn as nn class MLPBlocks(nn.Module): def __init__(self, window_dim, hidden_dim): sup...
https://github.com/pytorch/TensorRT/issues/2606
open
[ "question" ]
2024-01-18T11:29:42Z
2024-01-19T19:27:17Z
null
johnzlli
huggingface/text-generation-inference
1,451
How to run text generation inference locally
### System Info I completed the steps for local installation of Text Generation Inference as in here: https://github.com/huggingface/text-generation-inference#local-install I did all the installation on my local Linux (WSL). The model endpoint that I want to draw inference from is on my EC2. (I trained Mistral 7b mod...
https://github.com/huggingface/text-generation-inference/issues/1451
closed
[ "Stale" ]
2024-01-17T20:12:35Z
2024-02-22T01:44:26Z
null
tamanna-mostafa
huggingface/diffusers
6,614
How to train text_to_image with images at a resolution of 512x768?
I want to finetune SD 1.5 with 50k images, all at a resolution of 512x768. But I got an error like this: `train_text_to_image.py: error: argument --resolution: invalid int value: '[512,768]'` So, how do I train text_to_image with images at a resolution of 512x768?
https://github.com/huggingface/diffusers/issues/6614
closed
[]
2024-01-17T13:51:16Z
2024-01-25T14:28:01Z
null
lingxuan630
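The stock script's `--resolution` flag parses a single int and crops square, so a fixed non-square size means editing the preprocessing in `train_text_to_image.py`. A hedged sketch of the kind of change involved (torchvision expects `(H, W)`; whether 512x768 means H=768, W=512 here is an assumption):

```python
from torchvision import transforms

# Replace the script's square Resize/CenterCrop with a fixed (H, W) resize.
train_transforms = transforms.Compose([
    transforms.Resize((768, 512), interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
```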
huggingface/accelerate
2,347
How to load a model onto specified GPU devices?
I'm trying a large model LLaVA1.5. I know that if I set the parameter `device_map='auto'` in `LlavaMPTForCausalLM.from_pretrained`, the model will be loaded on all visible GPUs (FSDP). Now I hope to load LLaVA1.5 on some of the visible GPUs, still in the FSDP mode, and automatically decide device_map like `device...
https://github.com/huggingface/accelerate/issues/2347
closed
[]
2024-01-17T09:23:04Z
2024-02-26T15:06:36Z
null
davidluciolu
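A hedged sketch of pinning a sharded load to a subset of visible GPUs via `max_memory` (note that `device_map="auto"` gives layer-wise placement, not FSDP; the model id and memory budgets are placeholders):

```python
from transformers import AutoModelForCausalLM

# Only GPUs 0 and 1 receive weight shards; devices omitted from
# max_memory get nothing, with overflow spilling to CPU.
model = AutoModelForCausalLM.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    device_map="auto",
    max_memory={0: "20GiB", 1: "20GiB", "cpu": "64GiB"},
)
```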
huggingface/transformers
28,546
How to use fp32 and QLoRA to fine-tune models
### System Info I'm using transformers version 4.32.0 and I want to fine-tune the Qwen/Qwen-VL-Chat-Int4 model, but my 1080ti GPU doesn't support fp16. When I want to use "training_args.fp16 = False" to modify the parameters, the error "dataclasses.FrozenInstanceError: cannot assign to field fp16" will be reported. I ...
https://github.com/huggingface/transformers/issues/28546
closed
[]
2024-01-17T07:16:11Z
2024-02-26T08:04:39Z
null
guoyunqingyue
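`TrainingArguments` behaves as a frozen dataclass after `__init__`, which is exactly what raises `FrozenInstanceError` on assignment; the fix is to pass the flags at construction time. A minimal sketch:

```python
from transformers import TrainingArguments

# Set precision flags in the constructor instead of assigning afterwards.
args = TrainingArguments(output_dir="out", fp16=False, bf16=False)
```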
pytorch/pytorch
117,602
If I use torch.compile to compile the whole graph with my own compiler, how do I manage memory in that compiler?
### 🐛 Describe the bug If I use torch.compile to compile the whole graph with my own compiler, then in the forward stage: 1. If I enable memory reuse in the forward pass, how does the backward pass get the activations to compute the gradients? Is there an example in PyTorch? 2. If I disable memory reuse and enable some op fusion, A o...
https://github.com/pytorch/pytorch/issues/117602
closed
[ "oncall: pt2" ]
2024-01-17T02:23:18Z
2024-01-19T17:55:06Z
null
mollon650
huggingface/sentence-transformers
2,416
How to specify class weights in model training?
I have a very imbalanced training dataset. Is there a way to specify class weights (e.g., class 0: 0.1, class 1: 1) for cross-encoder training?
https://github.com/huggingface/sentence-transformers/issues/2416
closed
[]
2024-01-16T21:00:27Z
2024-01-20T15:49:54Z
null
mucun1988
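CrossEncoder's `fit` accepts a custom `loss_fct`, so class weighting can be expressed through `pos_weight` on `BCEWithLogitsLoss`. A hedged sketch (the base checkpoint and the 10x weight are illustrative; the dataloader is assumed to exist):

```python
import torch
from sentence_transformers import CrossEncoder

model = CrossEncoder("bert-base-uncased", num_labels=1)
# pos_weight > 1 upweights the rare positive class in the loss.
weighted_loss = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([10.0]))
# model.fit(train_dataloader=train_dataloader, loss_fct=weighted_loss, epochs=1)
```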
huggingface/chat-ui
697
Add streaming support for SageMaker endpoints
Would be nice to have support for streaming tokens from sagemaker. here are some ressources from my conversation with @philschmid ### Code sample (Python Code) ``` body = {"inputs": "what is life", "parameters": {"max_new_tokens":400}} resp = smr.invoke_endpoint_with_response_stream(EndpointName=endpoint_name, B...
https://github.com/huggingface/chat-ui/issues/697
open
[ "enhancement", "back" ]
2024-01-16T10:59:47Z
2024-01-16T11:00:32Z
0
nsarrazin
huggingface/transformers.js
522
Is it possible to fine-tune the hosted pretrained models?
### Question Hello, If we have a large dataset in our domain, can we use it to fine-tune the hosted pretrained models(for example: Xenova/nllb-200-distilled-600M) with optimum? or is it possible to convert our own translation Pytorch model to ONNX which can be compatible with transformer.js?
https://github.com/huggingface/transformers.js/issues/522
open
[ "question" ]
2024-01-16T03:55:39Z
2024-01-16T12:54:53Z
null
lhohoz
huggingface/datasets
6,594
IterableDataset sharding logic needs improvement
### Describe the bug The sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic with significant performance traps and inconsistencies wrt to distributed train processes vs worker processes. Splitting across num_workers (per train process loader processes) and...
https://github.com/huggingface/datasets/issues/6594
open
[]
2024-01-15T22:22:36Z
2025-11-10T14:55:20Z
7
rwightman
pytorch/pytorch
117,490
What is the next plan of FP8 support in PyTorch?
### 🚀 The feature, motivation and pitch Now PyTorch only supports FP8 data type conversion without scaling. The accuracy is not that good. What is the plan of FP8 support in PyTorch? Will FP8 DelayedScaling from TransformerEngine be taken into account? Thanks! ### Alternatives _No response_ ### Additional conte...
https://github.com/pytorch/pytorch/issues/117490
closed
[ "module: docs", "oncall: quantization", "triaged", "actionable", "module: floatx (formerly float8)" ]
2024-01-15T10:02:37Z
2024-01-26T01:48:45Z
null
yanbing-j
huggingface/alignment-handbook
103
Does QLoRA DPO training support a reference model?
Hello! Thanks for your awesome work! I ran into an issue when running DPO with QLoRA. I notice there is a setting: ``` if model_args.use_peft is True: ref_model = None ref_model_kwargs = None ``` I also notice that `use_peft` is set to true only in config_qlora.yaml. This means if we use qlora to...
https://github.com/huggingface/alignment-handbook/issues/103
open
[]
2024-01-15T09:22:32Z
2024-01-15T09:27:08Z
0
Harry-mic
huggingface/swift-coreml-diffusers
91
How to import a new .safetensors model?
How can I import a safetensors-formatted model into the diffusers app? I tried copying the safetensors file to the folder loaded by the dropdown menu. But when I relaunch the app, it doesn't show the new model in the menu.
https://github.com/huggingface/swift-coreml-diffusers/issues/91
open
[]
2024-01-15T08:24:53Z
2024-07-07T09:03:27Z
null
mcandre
huggingface/candle
1,585
Extension request: How to construct a Tensor from an n-dimensional Vec
How do I best create a Tensor from a &Vec<Vec<u8>> type? Everything above 1D is quite hard to manage for index-based value setting.
https://github.com/huggingface/candle/issues/1585
closed
[]
2024-01-14T17:46:57Z
2025-11-23T20:22:09Z
null
BDUG
huggingface/nanotron
21
Save checkpoint before terminating the training run
Why don't we save a model checkpoint before terminating the training run? [[link]](https://github.com/huggingface/nanotron/blob/fd99571e3769cb1876d5c9d698b512e85a6e4896/src/nanotron/trainer.py#L429) <img width="769" alt="image" src="https://github.com/huggingface/nanotron/assets/22252984/9eb78431-4df9-4795-8ac7-6947...
https://github.com/huggingface/nanotron/issues/21
closed
[ "question" ]
2024-01-13T11:28:20Z
2024-01-13T11:28:54Z
null
xrsrke
huggingface/accelerate
2,331
How to share non-tensor data between processes?
I am running a training on 2 GPUs on the same machine. I need a way to share some float values and maybe dicts between the two processes. I saw that there is a `gather` method, but this only works for tensors. Is there any way to do inter-process communication that is not directly related to the training? EDIT: W...
https://github.com/huggingface/accelerate/issues/2331
closed
[]
2024-01-12T19:13:27Z
2024-01-16T11:36:34Z
null
simonhessner
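Accelerate ships a picklable-object counterpart to `gather` for exactly this case. A minimal sketch (`gather_object` is available in recent Accelerate releases; the payload is illustrative):

```python
from accelerate import Accelerator
from accelerate.utils import gather_object

accelerator = Accelerator()
local_info = {"rank": accelerator.process_index, "score": 0.5}

# Any picklable object works; one list entry arrives per process.
all_info = gather_object([local_info])
if accelerator.is_main_process:
    print(all_info)
```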
huggingface/transformers
28,476
How to avoid peak RAM usage when loading a model to the GPU
### System Info - `transformers` version: 4.36.2 - Platform: Linux-5.10.201-191.748.amzn2.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.20.2 - Safetensors version: 0.4.1 - Accelerate version: 0.26.0 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (True) - T...
https://github.com/huggingface/transformers/issues/28476
closed
[]
2024-01-12T11:39:52Z
2024-02-12T08:08:17Z
null
JoanFM
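A minimal sketch of the usual low-RAM loading path, streaming checkpoint shards straight to the target device instead of materializing a full fp32 copy in host memory (the model id is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,   # skip the fp32 intermediate copy in RAM
    low_cpu_mem_usage=True,      # load weights shard by shard
    device_map={"": 0},          # place everything on GPU 0
)
```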
huggingface/datasets
6,584
np.fromfile not supported
How to do np.fromfile to use it like np.load ```python def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs): import numpy as np if hasattr(filepath_or_buffer, "read"): return np.fromfile(filepath_or_buffer, *args, **kwargs) else: ...
https://github.com/huggingface/datasets/issues/6584
open
[]
2024-01-12T09:46:17Z
2024-01-15T05:20:50Z
6
d710055071
pytorch/audio
3,725
Resampling at arbitrary time steps
### 🚀 The feature Currently, `torchaudio.functional.resample` can only resample at regular time points, and the period is determined by `orig_freq` and `new_freq`. Is it possible to resample at arbitrary time steps? So rather than specifying a resampling ratio, we specify an array of time steps. ### Motivation, pi...
https://github.com/pytorch/audio/issues/3725
open
[]
2024-01-12T09:20:10Z
2024-01-16T18:52:40Z
5
pfeatherstone
huggingface/distil-whisper
73
I want to confirm how the knowledge distillation is implemented
I don't quite understand how knowledge distillation is implemented here. Whisper is trained autoregressively on 680,000 hours of unlabeled data. According to the content of the fourth section of the paper, our model is trained on 21,170 hours of data with pseudo-labels generated by Whisper, with the first and 32nd...
https://github.com/huggingface/distil-whisper/issues/73
open
[]
2024-01-12T07:43:21Z
2024-01-17T16:57:31Z
null
hxypqr
huggingface/transformers.js
516
How to access attentions matrix for MarianMT?
### Question Hey, I've been trying to access the attentions output by the MarianMT like so (please excuse the unorthodox config argument, tidying up is next on my todo list): ``` const model_name = "Xenova/opus-mt-en-fr"; const tokenizer = await MarianTokenizer.from_pretrained(model_name, { config: { ...
https://github.com/huggingface/transformers.js/issues/516
open
[ "question" ]
2024-01-11T20:16:42Z
2024-01-15T08:21:17Z
null
DaveTJones
huggingface/text-generation-inference
1,437
How to run text-generation-benchmark without the graph and get the output data into a csv file or a json file?
### Feature request text-generation-benchmark has been an amazing tool for understanding the model deployments better. Is there a way where we can run this without generating the graph and get the results in a csv format? ### Motivation Motivation is that we want to use this tool with another program which gets the ...
https://github.com/huggingface/text-generation-inference/issues/1437
closed
[ "Stale" ]
2024-01-11T15:33:37Z
2024-02-17T01:44:18Z
null
pranavthombare
huggingface/transformers.js
515
ONNX optimisations for edge deployment
### Question Hello, I'm exploring if I can extract any more performance from my deployment of transformers.js. Appreciate the answer to this is nuanced and best answered by profiling, but would value opinions of experts that have walked this path before using this lib. In my specific use case I know that I will a...
https://github.com/huggingface/transformers.js/issues/515
closed
[ "question" ]
2024-01-11T13:49:59Z
2025-10-13T04:59:32Z
null
georgedavies019
pytorch/serve
2,894
How can I implement batch inference in my model?
### 📚 The doc issue I read the docs, and I see this sentence: > The frontend then tries to aggregate the batch-size number of requests and send it to the backend. How does it work? In my case, my batch_size is 4 and max_batch_delay is 5000. I sent 2 requests simultaneously to torchserve, but in my handler l...
https://github.com/pytorch/serve/issues/2894
closed
[]
2024-01-11T10:39:58Z
2024-01-12T05:28:13Z
5
steelONIONknight
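For reference, the frontend waits up to `max_batch_delay` ms and then forwards however many requests have arrived, so with 2 requests in 5000 ms the handler sees a list of length 2, not 4, and must return one response per entry. A hedged handler sketch:

```python
from ts.torch_handler.base_handler import BaseHandler

class MyHandler(BaseHandler):
    def preprocess(self, requests):
        # One element per aggregated request; two requests within
        # max_batch_delay means len(requests) == 2.
        return [req.get("data") or req.get("body") for req in requests]

    def postprocess(self, inference_output):
        # Must return exactly one response per request in the batch.
        return list(inference_output)
```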
huggingface/alignment-handbook
98
Is QLoRA better than finetuning?
The results reported in https://github.com/huggingface/alignment-handbook/pull/88 suggest that QLoRA is better for both SFT and DPO. Is this accurate, and have people seen this happen in any other settings?
https://github.com/huggingface/alignment-handbook/issues/98
open
[]
2024-01-10T21:04:11Z
2024-01-10T21:04:11Z
0
normster
huggingface/transformers.js
514
Is it possible to use adapters from the hub?
### Question Hi, would it be possible to use adapters on top of a model using the js library?
https://github.com/huggingface/transformers.js/issues/514
open
[ "question" ]
2024-01-10T20:57:03Z
2024-01-11T16:01:11Z
null
vabatta
huggingface/setfit
468
How effective is it to use your own pre-trained ST model based on an NLI dataset?
Hi! I'm interested in using SetFit to classify text extracted from hotel reviews (Booking, TripAdvisor, etc.), but I would like to add domain knowledge to my Sentence Transformers body. For example, this [paper](https://arxiv.org/abs/2202.01924) uses a Sentence Transformers model trained on a custom NLI dataset (RNLI for ...
https://github.com/huggingface/setfit/issues/468
closed
[]
2024-01-10T19:25:09Z
2024-02-09T14:55:46Z
null
azaismarc
huggingface/transformers.js
512
What do you all think about having a "Transformers.js Community" in Hugging Face?
### Question After checking how [MLX Community on Hugging Face](https://huggingface.co/mlx-community) is working, I thought it could be a good idea to have one for Transformers.js. One of the key benefits of a community is "multiple curators": anyone in the community would have the ability to edit the repositories,...
https://github.com/huggingface/transformers.js/issues/512
closed
[ "question" ]
2024-01-10T16:03:51Z
2025-05-10T21:06:54Z
null
felladrin
huggingface/candle
1,552
How to pass the attention_mask to the Bert model in the examples?
I am trying to run `shibing624/text2vec-base-chinese` with candle, and the encoder returns `input_ids`, `attention_mask`, `token_id_types`, but there are only two params of BertModel in candle. https://github.com/huggingface/candle/blob/main/candle-examples/examples/bert/main.rs#L170 ```python from transformers ...
https://github.com/huggingface/candle/issues/1552
closed
[]
2024-01-10T11:57:55Z
2024-01-10T12:38:54Z
null
lz1998
huggingface/sentence-transformers
2,400
New release of library?
I was wondering when you will be releasing a new version of the library that includes the latest changes in the main branch? We are eagerly awaiting one in order to consume the fix for this issue https://github.com/UKPLab/sentence-transformers/issues/1800
https://github.com/huggingface/sentence-transformers/issues/2400
closed
[ "question" ]
2024-01-09T20:42:53Z
2024-01-29T10:00:33Z
null
vineetsajuTR
pytorch/serve
2,892
Setting log level of handler
### 📚 The doc issue I need to set the logging level of the handler to debug; I want to see all the logs (including torch's). The docs don't mention much beyond setting the log level for TorchServe itself (the log4j ones). I tried setting the config inside the handler, but it didn't work ```python logging.basicConfig(leve...
https://github.com/pytorch/serve/issues/2892
closed
[ "question", "triaged" ]
2024-01-09T18:04:49Z
2024-06-07T21:39:33Z
null
hariom-qure
huggingface/peft
1,334
When using the inject_adapter_in_model method to inject adapters directly into a PyTorch model, how do we merge the LoRA weights into the base model at inference time?
https://github.com/huggingface/peft/issues/1334
closed
[]
2024-01-09T12:30:52Z
2024-02-17T15:03:59Z
null
mikiyukio
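With `inject_adapter_in_model` there is no `PeftModel` wrapper, and thus no `merge_and_unload`, but each injected layer still exposes `merge()`. A hedged sketch assuming a recent PEFT layout (the helper name is our own):

```python
from peft.tuners.lora import LoraLayer

def merge_injected_lora(model):
    # Fold the B @ A update into each wrapped linear's weight for inference.
    for module in model.modules():
        if isinstance(module, LoraLayer):
            module.merge()
    return model
```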
huggingface/datasets
6,570
No online docs for 2.16 release
We do not have the online docs for the latest minor release 2.16 (2.16.0 nor 2.16.1). In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index ![Screenshot from 2024-01-09 08-43-08](https://github.com/huggingface/datasets/assets/8515462/83613222-867f-41f4-8833-7a4a765...
https://github.com/huggingface/datasets/issues/6570
closed
[ "bug", "documentation" ]
2024-01-09T07:43:30Z
2024-01-09T16:45:50Z
7
albertvillanova
pytorch/xla
6,274
Inconsistent behaviour with `xm.xrt_world_size()` and/or `xm.get_xla_supported_devices()`
## 🐛 Bug I noticed that when I execute some code (see further below) on a TPU VM v3-8 (inside a Python venv 3.10.12 + torch 2.1.2+cu121 + torch_xla 2.1.0) uncommenting each time either the `xm.xrt_world_size()` part (**Output 1**) or `xm.get_xla_supported_devices()` (**Output 2**) or none of them - both commented ...
https://github.com/pytorch/xla/issues/6274
closed
[ "question", "distributed" ]
2024-01-09T05:45:42Z
2025-04-23T14:42:27Z
null
h-sellak
huggingface/text-generation-inference
1,415
How to use local Medusa head?
It is said that Medusa can significantly accelerate inference speed. During my attempts to utilize it, I have observed that it does not support the use of local Medusa config and head. The code fragment I discovered that pertains to this functionality is as follows, which I have modified. However, I do not comprehend t...
https://github.com/huggingface/text-generation-inference/issues/1415
closed
[]
2024-01-09T03:22:47Z
2024-01-10T17:36:23Z
null
eurus-ch
huggingface/transformers
28,388
How to use an efficient encoder as a shared EncoderDecoderModel?
### Feature request Efficient encoders like DistilBERT, ALBERT, or ELECTRA aren't supported as the decoder of the EncoderDecoderModel, so they can't be shared as encoder and decoder. ### Motivation Warm-starting shared models is a powerful way to build transformer models, yet the efficient models can't be used. ### Yo...
https://github.com/huggingface/transformers/issues/28388
open
[ "Feature request" ]
2024-01-08T11:43:05Z
2024-01-08T12:35:24Z
null
Bachstelze
pytorch/kineto
854
Is Kineto planning to support backend extensions?
Hello, there is 'PrivateUse1' in pytorch to support backend integration. Will Kineto provide similar features?
https://github.com/pytorch/kineto/issues/854
closed
[ "question" ]
2024-01-08T03:19:53Z
2024-04-23T15:21:34Z
null
fwenguang
huggingface/alignment-handbook
92
Is there any way I can use learning-rate warm-up during training?
I am using this repo for: 1. Continual pre-training 2. SFT 3. DPO For stage 1, I want to use a learning-rate warm-up.
https://github.com/huggingface/alignment-handbook/issues/92
closed
[]
2024-01-07T21:07:25Z
2024-01-10T06:48:52Z
1
shamanez
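The handbook's training scripts build on `TrainingArguments`, where warm-up is a first-class knob. A minimal sketch (the values are illustrative):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    learning_rate=2e-5,
    warmup_ratio=0.1,            # or warmup_steps=500 for an absolute count
    lr_scheduler_type="cosine",  # schedule applied after the warm-up phase
)
```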
huggingface/alignment-handbook
91
How to use DPO without flash-attention
Is there a flash-attention-free version?
https://github.com/huggingface/alignment-handbook/issues/91
open
[]
2024-01-07T16:27:08Z
2024-02-06T19:51:38Z
null
Fu-Dayuan
huggingface/accelerate
2,312
Seeking help: how to make DeepSpeed ZeRO stage 3 work with a quantized model?
Hi, I would like to run DPO training on my 2 A6000 (48GB) GPUs based on this project (https://github.com/allenai/open-instruct). Specifically, the model was based on QLoRA and the reference model was a quantized one. I would like to use DeepSpeed ZeRO stage 3 to reduce training time. During the t...
https://github.com/huggingface/accelerate/issues/2312
closed
[]
2024-01-07T09:44:28Z
2024-01-11T11:01:31Z
null
grayground
huggingface/datasets
6,565
`drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader
### Describe the bug Scenario: - Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't ha...
https://github.com/huggingface/datasets/issues/6565
closed
[]
2024-01-07T02:46:50Z
2025-03-08T09:46:05Z
2
naba89
huggingface/transformers.js
505
How do I use WebGL as executionProvider?
### Question ```js export const executionProviders = [ // 'webgpu', 'wasm' ]; ``` I looked at src/backends/onnx.js and noticed that there was no webgl in the executionProviders. Is there a way to use WebGL as executionProvider?
https://github.com/huggingface/transformers.js/issues/505
closed
[ "question" ]
2024-01-06T19:16:36Z
2024-10-18T13:30:09Z
null
kwaroran
pytorch/executorch
1,548
How to implement the "aten.mul.Scalar" for Qualcomm backend
The second arg of "aten.mul.Scalar" is a const scalar value, such as float 0.5f. The define_tensor/define_scalar/define_value functions of NodeVisitor take the arg "node" as input, but how can I define a node like torch.fx.Node for a const scalar value?
https://github.com/pytorch/executorch/issues/1548
closed
[ "partner: qualcomm", "triaged" ]
2024-01-06T09:12:19Z
2024-01-09T02:18:37Z
null
czy2014hust
pytorch/pytorch
116,922
How to adapt `at::scaled_dot_product_attention`'s routing logic for a third-party CUDA-like device?
https://github.com/pytorch/pytorch/blob/f24bba1624a8bb5c920833b18fc6162db084ca09/aten/src/ATen/native/transformers/attention.cpp#L635-L642 Now I am adapting `at::scaled_dot_product_attention` to a specific type of CUDA-like device and have encountered a problem. In `at::scaled_dot_product_attention`, it will choose a pa...
https://github.com/pytorch/pytorch/issues/116922
closed
[]
2024-01-06T07:28:43Z
2024-01-15T02:13:30Z
null
drslark
huggingface/diffusers
6,474
How to use xformers
Maybe this is a relatively basic question, but what has always puzzled me is how xformers runs when running SD. Or is it used for acceleration by default after installing the library? Thanks to everyone who answers.
https://github.com/huggingface/diffusers/issues/6474
closed
[]
2024-01-06T03:34:16Z
2024-01-11T03:38:19Z
null
babyta
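Installing xformers alone changes nothing; diffusers pipelines opt in explicitly per pipeline. A minimal sketch (the SD 1.5 checkpoint matches the discussion above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Memory-efficient attention must be enabled explicitly; it is not automatic.
pipe.enable_xformers_memory_efficient_attention()
```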
pytorch/serve
2,890
Difference between `Custom handler with module level entry point` and `Custom handler with class level entry point`
### 📚 The doc issue # Not an issue What is the difference between `Custom handler with module level entry point` and `Custom handler with class level entry point`? Can you give me any examples? Thanks for help ### Suggest a potential alternative/fix _No response_
https://github.com/pytorch/serve/issues/2890
closed
[ "question", "triaged" ]
2024-01-05T20:45:25Z
2024-01-25T05:07:51Z
null
IonBoleac
huggingface/datasets
6,561
Document YAML configuration with "data_dir"
See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference
https://github.com/huggingface/datasets/issues/6561
open
[ "documentation" ]
2024-01-05T14:03:33Z
2025-08-07T14:57:58Z
6
severo
pytorch/TensorRT
2,579
❓ [Question] Support for layers with Custom C++ and CUDA Extensions
## ❓ Question Support for layers with Custom C++ and CUDA Extensions ## What you have already tried Can I convert the LLTM class in directory `cuda` of https://github.com/pytorch/extension-cpp (below) into a tensorrt engine through Torch-TensorRT? I tried the code below: ```lltm.py import math from torch imp...
https://github.com/pytorch/TensorRT/issues/2579
closed
[ "question" ]
2024-01-05T07:25:23Z
2024-01-15T06:22:05Z
null
Siyeong-Lee
pytorch/TensorRT
2,577
Can somebody please give a clear explanation of how to install torch-tensorrt on Windows?
## ❓ Question Hello, I've encountered problems installing torch-tensorrt on Windows 10. No matter what I try or how many sources I consult, there is no clear explanation of how to do everything. The documentation is vague, and because I am used to working with python code, which does everything for you, that is...
https://github.com/pytorch/TensorRT/issues/2577
closed
[ "question" ]
2024-01-05T02:52:01Z
2025-12-02T18:12:43Z
null
ninono12345
huggingface/sentence-transformers
2,397
Does finetuning a cross-encoder yield prediction labels and not similarity scores?
Hi, this is less of a coding issue and more of a conceptual question. I have binary labels for similarity and dissimilarity while training a cross-encoder, so it's a binary classification task. The pretrained cross-encoder has a float score, most of the time around .5. After finetuning, the models only predict a deci...
https://github.com/huggingface/sentence-transformers/issues/2397
closed
[ "question" ]
2024-01-04T21:01:44Z
2024-01-09T17:53:17Z
null
FDSRashid
huggingface/text-generation-inference
1,403
How to load llama-2 through the Client
### System Info Hi there, text_generation.__version__ = 0.6.0 ### Information - [ ] Docker - [X] The CLI directly ### Tasks - [ ] An officially supported command - [ ] My own modifications ### Reproduction I am trying to load llama-2 model thru Client ``` from text_generation import Client model_endpoin...
https://github.com/huggingface/text-generation-inference/issues/1403
closed
[]
2024-01-04T17:25:59Z
2024-01-05T16:01:56Z
null
yanan1116
huggingface/transformers
28,343
How to log a custom value?
I want to add some info to `{'loss': 2.5234, 'learning_rate': 1.0344827586206896e-06, 'epoch': 0.0}`. How can I do that? For example: `{'loss': 2.5234, 'learning_rate': 1.0344827586206896e-06, 'epoch': 0.0, 'version': 'v1'}`
https://github.com/huggingface/transformers/issues/28343
closed
[]
2024-01-04T12:28:43Z
2024-01-07T13:07:22Z
null
xmy0916
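A minimal sketch of one way to do this by overriding `Trainer.log` (the extra field is illustrative):

```python
from transformers import Trainer

class TaggedTrainer(Trainer):
    def log(self, logs, *args, **kwargs):
        logs["version"] = "v1"  # injected into every logged record
        super().log(logs, *args, **kwargs)
```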
huggingface/transformers.js
499
An error occurred during model execution: "RangeError: offset is out of bounds".
### Question Hello - having an issue getting this code to run in the browser. Using `Xenova/TinyLlama-1.1B-Chat-v1.0` on `"@xenova/transformers": "^2.13.2"` It runs perfectly in node. ```ts import { pipeline } from '@xenova/transformers'; console.log('Loading model...'); const generator = await pipeline('...
https://github.com/huggingface/transformers.js/issues/499
closed
[ "question" ]
2024-01-03T19:55:45Z
2024-10-18T13:30:09Z
null
wesbos
huggingface/transformers.js
497
Cross Encoder
### Question I'm trying to run this pre-trained Cross Encoder model ([MS Marco TinyBERT](https://huggingface.co/cross-encoder/ms-marco-TinyBERT-L-2-v2)) not available in Transformers.js. I've managed to convert it using the handy script, and I'm successfully running it with the "feature-extraction" task: ```js co...
https://github.com/huggingface/transformers.js/issues/497
closed
[ "question" ]
2024-01-03T16:24:37Z
2024-03-01T00:11:31Z
null
achrafash
huggingface/autotrain-advanced
448
What is the difference between autotrain and kohya_ss?
What is the difference between autotrain and kohya_ss?
https://github.com/huggingface/autotrain-advanced/issues/448
closed
[ "stale" ]
2024-01-03T16:18:58Z
2024-01-22T15:01:45Z
null
loboere
pytorch/executorch
1,527
How to build qnn_executor_runner for linux-gcc9.3?
My requirements are that I want to compile the model on x86 host and run the inference on linux device using Qualcomm AI Engine, e.g. SA8295. So how to build `qnn_executor_runner` for linux-gcc9.3 not android? thanks~ the libQnnHtp.so is different in qnn. ``` $ find . -name libQnnHtp.so ./lib/aarch64-oe-linux-gcc9...
https://github.com/pytorch/executorch/issues/1527
closed
[ "partner: qualcomm", "triaged" ]
2024-01-03T09:04:08Z
2024-01-29T07:49:12Z
null
huangzhiyuan
huggingface/optimum
1,622
device set bug
### System Info ```shell optimum 1.16.1 ``` ### Who can help? @philschmid ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details b...
https://github.com/huggingface/optimum/issues/1622
open
[ "bug" ]
2024-01-03T09:01:16Z
2024-01-09T10:17:45Z
1
Yuang-Deng
pytorch/pytorch
116,687
How to install pytorch on
https://github.com/pytorch/pytorch/issues/116687
closed
[]
2024-01-03T08:12:33Z
2024-01-03T08:42:38Z
null
Joseph513shen
huggingface/transformers.js
494
Is in-browser inference slower than Node inference, and is that to be expected?
### Question I noticed that I get much higher performance when I run inference in Node vs in the browser (latest Chrome, M2 Mac). Is that generally to be expected? For context, I'm creating embeddings for chunks of text using the gte-small model. Thank you!
https://github.com/huggingface/transformers.js/issues/494
closed
[ "question" ]
2024-01-03T04:26:47Z
2024-08-27T23:53:36Z
null
carlojoerges
huggingface/optimum
1,621
Cannot convert sentence transformer model properly
### System Info ```shell Optimum Version = 1.16.1 ``` ### Who can help? @michaelbenayoun @fxmarty ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own tas...
https://github.com/huggingface/optimum/issues/1621
closed
[ "bug" ]
2024-01-02T12:08:07Z
2024-01-12T15:26:21Z
4
leodalcin
huggingface/alignment-handbook
87
How can I configure `loss_type`?
I want to change the **loss_type** to KTO or something else to test, but I can't. Please show me how. Thank you.
https://github.com/huggingface/alignment-handbook/issues/87
closed
[]
2024-01-02T11:54:34Z
2024-01-10T13:41:19Z
2
hahuyhoang411
pytorch/examples
1,208
add examples/siamese_network with triplet loss example
<!-- Thank you for suggesting an idea to improve pytorch/examples Please fill in as much of the template below as you're able. --> ## Is your feature request related to a problem? Please describe. Can you please provide an example of Siamese network training / testing with triplet loss such that it can be used...
https://github.com/pytorch/examples/issues/1208
open
[]
2024-01-01T19:19:35Z
2024-01-01T19:19:35Z
0
pax7
huggingface/datasets
6,548
Skip if a dataset has issues
### Describe the bug Hello everyone, I'm using **load_datasets** from **huggingface** to download the datasets, and I'm facing an issue: the download starts but reaches some state and then fails with the following error: Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10...
https://github.com/huggingface/datasets/issues/6548
open
[]
2023-12-31T12:41:26Z
2024-01-02T10:33:17Z
1
hadianasliwa
huggingface/transformers.js
491
Running tests locally fail
### Question When I git clone to my Mac, and run tests, I get a lot of errors: ``` ● Models › Loading different architecture types › gpt2 (GPT2Model) Could not locate file: "https://huggingface.co/gpt2/resolve/main/tokenizer_config.json". 239 | 240 | const message = ERROR_MAPPING[statu...
https://github.com/huggingface/transformers.js/issues/491
closed
[ "question" ]
2023-12-30T02:12:35Z
2024-10-18T13:30:11Z
null
sroussey