| repo | number | title | body | url | state | labels | created_at | updated_at | comments | user |
|---|---|---|---|---|---|---|---|---|---|---|
huggingface/diffusers | 8,700 | [PAG] add `StableDiffusionXLControlNetPAGImg2ImgPipeline` | We recently integrated PAG into diffusers! See the PR here: https://github.com/huggingface/diffusers/pull/7944
Does anyone want to add a `StableDiffusionXLControlNetPAGImg2ImgPipeline`?
1. You should put it under the [pag folder](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/pag)
2. yo... | https://github.com/huggingface/diffusers/issues/8700 | closed | [
"good first issue",
"help wanted",
"contributions-welcome"
] | 2024-06-25T18:52:18Z | 2024-08-21T17:24:23Z | 6 | yiyixuxu |
huggingface/sentence-transformers | 2,779 | what is the default tokenizer when "No sentence-transformers model found with name"? | I'm trying to use the sentence-transformer dangvantuan/sentence-camembert-large model and I'm getting a "no model found" error. This error is probably because some Sentence-Transformers-specific files (modules.json and config_sentence_transformers.json) are missing from their Hugging Face repository.
But then, Sentence Transformer... | https://github.com/huggingface/sentence-transformers/issues/2779 | closed | [] | 2024-06-25T15:17:58Z | 2024-07-05T10:42:27Z | null | Hortatori |
huggingface/accelerate | 2,891 | How to set a custom Config in python code using Accelerate? | Hello everyone!
Could you please advise how to replace the console command for setting a config
```
accelerate launch --config_file {path/to/config/my_config_file.yaml} {script_name.py} {--arg1} {--arg2}
```
with code in the Python file script_name.py?
I am expecting something like the following functionality... | https://github.com/huggingface/accelerate/issues/2891 | closed | [] | 2024-06-25T11:56:10Z | 2024-10-07T15:08:01Z | null | konstantinator |
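The YAML options in the Accelerate question above generally map onto keyword arguments of `Accelerator` itself. A minimal sketch, assuming a config that only sets mixed precision and gradient accumulation (the live `Accelerator` call is left commented so the snippet has no hard dependency on `accelerate`):

```python
# Options that would otherwise live in my_config_file.yaml, expressed as
# keyword arguments for Accelerator inside script_name.py. This covers
# runtime options only; multi-process launch settings still need
# `accelerate launch` or accelerate.notebook_launcher.
accelerator_kwargs = {
    "mixed_precision": "fp16",         # YAML: mixed_precision: fp16
    "gradient_accumulation_steps": 2,  # YAML: gradient_accumulation_steps: 2
}

# from accelerate import Accelerator  # requires `pip install accelerate`
# accelerator = Accelerator(**accelerator_kwargs)
```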
huggingface/diffusers | 8,693 | SD3 + SDXL refine to fix "lying on grass": how to do this in a diffusers Colab workflow? | This is the ComfyUI workflow:

How can I do this in a diffusers Colab workflow? | https://github.com/huggingface/diffusers/issues/8693 | closed | [
"stale"
] | 2024-06-25T07:30:55Z | 2024-09-23T11:37:25Z | null | s9anus98a |
huggingface/text-generation-inference | 2,113 | how to launch a service using downloaded model weights? | ### System Info
I have downloaded model weights of bge-models, and I want to launch a model service using TGI, the command is :
```
model=/storage/nfs2/ModelHub/embedding/BAAI/bge-small-zh-v1.5
revision=refs/pr/5
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
... | https://github.com/huggingface/text-generation-inference/issues/2113 | closed | [] | 2024-06-25T03:18:14Z | 2024-06-28T03:50:10Z | null | chenchunhui97 |
huggingface/chat-ui | 1,302 | Assistant feature: Send user query as part of template variable GET request | Trying to integrate RAG as an assistant. Thinking of using a template variable that makes a GET request (with the prompt as the request body), to get the relevant documents as context. Is this possible (i.e. there is a special variable in the system prompt page for the user query), or is there a better way of doing thi... | https://github.com/huggingface/chat-ui/issues/1302 | closed | [] | 2024-06-24T22:27:02Z | 2025-01-02T12:09:23Z | 2 | ethayu |
huggingface/diffusers | 8,683 | Why do Diffusers schedulers produce lower quality outputs compared to ComfyUI? | ### Discussed in https://github.com/huggingface/diffusers/discussions/8682
<sup>Originally posted by **nducthang** June 24, 2024</sup>
Hi,
I'm encountering an issue when comparing the quality of ComfyUI and Diffusers. I've noticed that the output of Diffusers is consistently lower than ComfyUI in many cases, des... | https://github.com/huggingface/diffusers/issues/8683 | closed | [] | 2024-06-24T14:37:19Z | 2024-06-25T06:06:12Z | 20 | nducthang |
huggingface/alignment-handbook | 174 | Question about torch_dtype when running run_orpo.py | I have been using `run_orpo.py` with my personal data successfully. However, as I use it, I have a question.
When I look at the code for `run_orpo.py`, I see that there is a code to match torch_dtype to the dtype of the pretrained model. However, when I actually train and save the model, even if the pretrained model... | https://github.com/huggingface/alignment-handbook/issues/174 | closed | [] | 2024-06-23T08:28:02Z | 2024-07-30T05:05:03Z | 6 | sylee96 |
huggingface/diffusers | 8,666 | Attention API changes, no documentation? | How can I see your previous changes to attention?
You have renamed the `_slice_size`, `_sliced_attention`, and `_attention` attributes of attention.
What are the alternatives for using them? | https://github.com/huggingface/diffusers/issues/8666 | closed | [] | 2024-06-23T07:08:58Z | 2024-06-23T11:31:47Z | 4 | xalteropsx |
huggingface/transformers.js | 819 | Blog on walkthrough with transformers js | ### Question
Hey, So I am writing this blog part of sharing knowledge in a blog series called Running AI/ML in the client. I am using transformer js example walkthrough in this part to validate some concepts. Can I get some feedback before it goes live? How do we connect? | https://github.com/huggingface/transformers.js/issues/819 | closed | [
"question"
] | 2024-06-23T06:06:42Z | 2024-06-27T19:10:05Z | null | ArijitCloud |
huggingface/trl | 1,763 | What is the difference between PPOv2Trainer and PPOTrainer? | What is the difference between PPOv2Trainer and PPOTrainer? And in trl\examples\scripts\ppo\ppo.py and trl\examples\scripts\ppo.py , there are two dpo.py files, can you tell me what is different between them? | https://github.com/huggingface/trl/issues/1763 | closed | [] | 2024-06-22T14:48:38Z | 2024-08-24T09:25:52Z | null | mst272 |
huggingface/diffusers | 8,649 | SD3 - num_images_per_prompt no longer honoured (throws error) | ### Describe the bug
With models prior to SD3, the parameter num_images_per_prompt is honoured, enabling generation of several images per prompt. With sd3-medium an error is generated.
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
Not... | https://github.com/huggingface/diffusers/issues/8649 | closed | [
"bug"
] | 2024-06-20T11:28:22Z | 2024-06-29T13:05:28Z | 4 | zagglez |
huggingface/transformers.js | 814 | Consultation on the use of the library with chatbot models | ### Question
Hello, Greetings Vladimir, programmer in a web environment with PHP, JS, AJAX, first I apologize for my English, my native language is Latin Spanish, I am not very good at writing it, I have used a translator, I wanted to consult, how can I use this interesting and useful tool, to be able to create a chat... | https://github.com/huggingface/transformers.js/issues/814 | open | [
"question"
] | 2024-06-20T03:24:34Z | 2024-07-29T10:47:24Z | null | mate07 |
huggingface/optimum | 1,912 | Could you provide the official onnx model of Qwen-VL-Chat(-Int4)? | ### Feature request
Qwen-VL-Chat(-Int4) is a useful image-to-text model.
### Motivation
An image-to-text LMM like Qwen-VL-Chat(-Int4) is very useful.
### Your contribution
Not yet. | https://github.com/huggingface/optimum/issues/1912 | open | [
"feature-request",
"quantization"
] | 2024-06-19T08:43:58Z | 2024-10-09T07:52:54Z | 0 | yzq1990 |
huggingface/diffusers | 8,626 | More thorough guidance for multiple IP adapter images/masks and a single IP Adapter | ### Describe the bug
I'm trying to use a single IP adapter with multiple IP adapter images and masks. This section of the docs gives an example of how I could do that: https://huggingface.co/docs/diffusers/v0.29.0/en/using-diffusers/ip_adapter#ip-adapter-masking
The docs provide the following code:
```python
fr... | https://github.com/huggingface/diffusers/issues/8626 | closed | [
"bug",
"stale"
] | 2024-06-18T18:06:37Z | 2024-09-23T11:36:10Z | 11 | chrismaltais |
huggingface/datasets | 6,979 | How can I load partial parquet files only? | I have a HUGE dataset, about 14 TB; I am unable to download all the parquet files. I just want to take about 100 of them.
dataset = load_dataset("xx/", data_files="data/train-001*-of-00314.parquet")
How can I use just shards 000 - 100 out of all 00314?
I searched the whole net and didn't find a solution; **this is stupid if the... | https://github.com/huggingface/datasets/issues/6979 | closed | [] | 2024-06-18T15:44:16Z | 2024-06-21T17:09:32Z | 12 | lucasjinreal |
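Assuming the shards follow the usual `train-00000-of-00314.parquet` naming, one way to answer the partial-download question above is to pass an explicit file list as `data_files`; `load_dataset` then fetches only those files. A sketch (the live call is commented out to avoid the download):

```python
# Names of the first 100 of the 314 train shards (assumed naming convention:
# train-00000-of-00314.parquet ... train-00313-of-00314.parquet).
data_files = [f"data/train-{i:05d}-of-00314.parquet" for i in range(100)]

# from datasets import load_dataset  # requires `pip install datasets`
# dataset = load_dataset("xx/", data_files=data_files)  # fetches only these shards
```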
huggingface/pytorch-image-models | 2,211 | How to Replicate Official Model Accuracy | Based on the accuracy provided by the official source, how can one replicate and train these models?
For example, for mobilenetv4_hybrid_large.e600_r384_in1k with a top-1 accuracy of 84.266
where can one find the training hyperparameters such as epochs, scheduler, warmup epochs, learning rate, batch size, and ot... | https://github.com/huggingface/pytorch-image-models/issues/2211 | closed | [
"enhancement"
] | 2024-06-18T05:30:59Z | 2024-06-24T23:36:45Z | null | usergxx |
huggingface/chat-ui | 1,290 | ERROR: Exception in ASGI application | Hello everyone, I have the following problem when using Huggingface ChatUI with FastChat. How can I change the configuration? Use npm to start development mode.
Thanks
```
MODELS=`[
{
"name": "Infinirc-7b-Llama2",
"id": "Infinirc-7b-Llama2",
"model": "Infinirc-7b-Llama2",
"parameters": {
... | https://github.com/huggingface/chat-ui/issues/1290 | open | [
"support"
] | 2024-06-18T02:07:50Z | 2024-06-23T13:26:59Z | 1 | rickychen-infinirc |
huggingface/autotrain-advanced | 684 | Where is the fine-tuned model output? | I’m new to using AutoTrain on Hugging Face and I encountered an issue during my first attempt at fine-tuning a model. I have a free account, because I want to see whether I can get something to work before I start paying for training. Here’s a summary of what I did and the problem I’m facing:
Training Configuration:
... | https://github.com/huggingface/autotrain-advanced/issues/684 | closed | [] | 2024-06-17T23:01:53Z | 2024-06-22T03:49:27Z | null | RonPisaturo |
huggingface/transformers | 31,453 | How to build and evaluate a vanilla transformer? | ### Model description
"Attention Is All You Need" is a landmark 2017 research paper authored by eight scientists working at Google, responsible for expanding 2014 attention mechanisms proposed by Bahdanau et al. into a new deep learning architecture known as the transformer with an encoder, cross-attention, and a deco... | https://github.com/huggingface/transformers/issues/31453 | closed | [] | 2024-06-17T17:17:11Z | 2024-11-04T13:56:06Z | null | Bachstelze |
huggingface/parler-tts | 74 | How to handle Flan-T5 when I want to fine-tune based on Mini v0.1 but not from scratch? Flan-T5 cannot handle my language. | https://github.com/huggingface/parler-tts/issues/74 | open | [] | 2024-06-17T06:39:24Z | 2024-06-17T06:39:24Z | null | lyt719 | |
huggingface/candle | 2,269 | How to select which GPU to use | We are working with the stable diffusion example. How do we select which GPU device on our system to use for the rendering?
thanks. | https://github.com/huggingface/candle/issues/2269 | open | [] | 2024-06-16T19:53:18Z | 2024-06-21T19:29:31Z | null | donkey-donkey |
huggingface/chat-ui | 1,283 | SELF_SIGNED_CERT_IN_CHAIN | I am experiencing this error. I'm on a corporate VPN and I tried turning it off and still the same error. The TLS reject is set to false as well.
SELF_SIGNED_CERT_IN_CHAIN
71.61
npm error errno SELF_SIGNED_CERT_IN_CHAIN
71.61
npm error request to https://registry.npmjs.org/failed, reason: self-signed certificate... | https://github.com/huggingface/chat-ui/issues/1283 | open | [
"support"
] | 2024-06-14T04:03:48Z | 2024-06-17T06:50:29Z | 2 | solanki-aman |
huggingface/diffusers | 8,527 | How to add ControlNet in SD3! | I currently use an inpainting ControlNet in SDXL because it uses a UNet, which easily supports ControlNet. And I am curious about how to add ControlNet in SD3 with its transformer model structure. | https://github.com/huggingface/diffusers/issues/8527 | closed | [] | 2024-06-13T10:14:38Z | 2024-08-24T04:20:28Z | null | appleyang123 |
huggingface/lerobot | 266 | Question - how to handle additional sensory input | Hi guys, sorry to bother you again :wink:
and thanks for your work, I'm very excited by Lerobot!
I'm currently collecting some teleop data where the robot has tactile sensors on the fingertips, as well as a FT sensor on the wrist and I was wondering how I would integrate this best into a Lerobot Dataset.
One... | https://github.com/huggingface/lerobot/issues/266 | closed | [
"question",
"dataset",
"stale"
] | 2024-06-13T08:39:26Z | 2025-10-23T02:29:29Z | null | tlpss |
huggingface/nanotron | 196 | how to run benchmark tests | Hi,
I can build this project with your commands, but there is no "pyaottriton" when running benchmark tests like benchmark_forward.py or benchmark_backward.py.
anything I missed?
Thanks | https://github.com/huggingface/nanotron/issues/196 | closed | [] | 2024-06-13T08:31:06Z | 2024-06-13T08:38:24Z | null | jinsong-mao |
huggingface/chat-ui | 1,277 | Difficulties with chat-ui promp to text-generation-webui openai api endpoint | Hello,
I'm trying my best to get the Hugging Face `chat-ui` working with the API endpoint of `text-generation-webui`.
I would be really happy if I could get a hint what I am doing wrong.
Here is a reverse proxied test instance: https://chat-ui-test.pischem.com/
I can't get my prompt that I input into... | https://github.com/huggingface/chat-ui/issues/1277 | closed | [
"support"
] | 2024-06-12T14:18:12Z | 2025-01-30T18:46:22Z | 7 | Monviech |
huggingface/chat-ui | 1,275 | Feature Request - support for session sharing, archiving, and collaboration | AFAIK, HuggingChat (HC) currently has no support for session sharing, archiving, and collaboration. At least, neither the HC server nor my GitHub (GH) searching found anything like this. So, if this doesn't exist, please consider how it could be implemented. For example, if I wanted to publish an HC session, maybe I co... | https://github.com/huggingface/chat-ui/issues/1275 | open | [
"question"
] | 2024-06-12T11:35:31Z | 2024-06-14T05:24:08Z | null | RichMorin |
huggingface/lerobot | 263 | Seeking advice on how to choose between ACT and DP algorithms | Hello,
Thank you very much for the work you have done in bringing together the current excellent imitation learning collections for convenient use. Regarding the ACT algorithm and DP algorithm, besides the basic differences in the algorithms themselves, how should one choose between them for different tasks? Do they... | https://github.com/huggingface/lerobot/issues/263 | closed | [
"question"
] | 2024-06-12T07:45:39Z | 2024-06-19T14:02:43Z | null | le-wei |
huggingface/dataset-viewer | 2,899 | Standardize access to metrics and healthcheck | In some apps, the metrics and healthcheck are public:
- https://datasets-server.huggingface.co/admin/metrics
- https://datasets-server.huggingface.co/sse/metrics
- https://datasets-server.huggingface.co/sse/healthcheck
- https://datasets-server.huggingface.co/healthcheck
- On others, it’s forbidden or not found:... | https://github.com/huggingface/dataset-viewer/issues/2899 | open | [
"question",
"infra",
"P2"
] | 2024-06-11T14:39:10Z | 2024-07-11T15:38:17Z | null | AndreaFrancis |
huggingface/lerobot | 261 | Which low cost robot with teleoperation to test the library ? | Firstly, thank you for all the work. At my company we would like to obtain results on real robots from this repository. However, the original setups are either quite expensive (around ~30k for Aloha) or require reconstruction for the UMI interface from Colombia via 3D printing, which would be time-consuming considering... | https://github.com/huggingface/lerobot/issues/261 | closed | [
"question"
] | 2024-06-11T13:21:32Z | 2024-07-23T07:55:15Z | null | RochMollero |
huggingface/diarizers | 11 | How can I save the model locally before pushing it to the Hub?! | https://github.com/huggingface/diarizers/issues/11 | closed | [] | 2024-06-11T06:37:45Z | 2024-06-13T16:24:19Z | null | ma-mohsen | |
huggingface/parler-tts | 68 | How to predict after fine-tuning? There is no config.json in the checkpoint dir. | https://github.com/huggingface/parler-tts/issues/68 | open | [] | 2024-06-11T03:30:04Z | 2024-06-17T01:57:04Z | null | lyt719 | |
huggingface/transformers.js | 802 | Long running transcription using webgpu-whisper | ### Question
Noob question - the [webgpu-whisper](https://github.com/xenova/transformers.js/tree/v3/examples/webgpu-whisper) demo does real time transcription, however it doesn't build out a full transcript from the start ie. 2 mins into transcription, the first few transcribed lines disappear.
Transcript at tim... | https://github.com/huggingface/transformers.js/issues/802 | open | [
"question"
] | 2024-06-10T16:44:01Z | 2025-05-30T05:52:37Z | null | iamhitarth |
huggingface/sentence-transformers | 2,738 | How is `max_length` taken into account compared to the model's setting | What happens under the hood if I set max_length greater than the model's max_length?
It seems to work, but are inputs truncated, or do you apply RoPE extension? | https://github.com/huggingface/sentence-transformers/issues/2738 | open | [] | 2024-06-09T15:59:09Z | 2024-06-10T06:45:49Z | null | l4b4r4b4b4 |
huggingface/datasets | 6,961 | Manual downloads should count as downloads | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
Th... | https://github.com/huggingface/datasets/issues/6961 | open | [
"enhancement"
] | 2024-06-09T04:52:06Z | 2024-06-13T16:05:00Z | 1 | umarbutler |
huggingface/diffusers | 8,439 | How to use EDM2 model with diffusers? | model safetensors: https://huggingface.co/RedRocket/Fluffyrock-Unbound/blob/main/Fluffyrock-Unbound-v1-1.safetensors
yaml: https://huggingface.co/RedRocket/Fluffyrock-Unbound/raw/main/Fluffyrock-Unbound-v1-1.yaml
colab demo:
https://colab.research.google.com/drive/1LSGvjWXNVjs6Tthcpf0F5VwuTFJ_d-oB
results:
... | https://github.com/huggingface/diffusers/issues/8439 | open | [
"stale"
] | 2024-06-09T03:39:05Z | 2024-09-14T15:10:19Z | null | s9anus98a |
huggingface/transformers | 31,323 | Language modeling examples do not show how to do multi-gpu training / fine-tuning | ### System Info
- `transformers` version: 4.41.2
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.9.18
- Huggingface_hub version: 0.23.3
- Safetensors version: 0.4.2
- Accelerate version: 0.31.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.2.1+cu121 (True)
- Tenso... | https://github.com/huggingface/transformers/issues/31323 | closed | [
"Documentation"
] | 2024-06-07T18:49:35Z | 2024-12-02T08:11:31Z | null | csiefer2 |
huggingface/candle | 2,258 | How to Implement New Operators Using CUDA Host Functions Along with Thrust and CUB Libraries | As stated, the CUDA code in the candle-kernels repository seems to only contain kernel functions. When I want to implement new operators (such as nonzero), it seems I'm only able to use Rust for higher-level functionality, which means I cannot utilize the device_vector from Thrust or the flagged APIs from CUB. This pos... | https://github.com/huggingface/candle/issues/2258 | open | [] | 2024-06-07T16:52:44Z | 2024-06-09T15:56:36Z | null | chenwanqq |
huggingface/text-generation-inference | 2,035 | What is TGI's graceful shutdown behavior? | When SIGKILL arrives,
- does TGI process all pending inputs?
- does TGI blocks incoming inputs?
I saw a PR that adds graceful shutdown but it did not specify the exact program behavior. | https://github.com/huggingface/text-generation-inference/issues/2035 | closed | [] | 2024-06-07T06:24:00Z | 2024-06-07T08:08:51Z | null | seongminp |
huggingface/tokenizers | 1,549 | How to use `TokenizerBuilder`? | I expected `TokenizerBuilder` to produce a `Tokenizer` from the `build()` result, but instead `Tokenizer` wraps `TokenizerImpl`.
No problem, I see that it impl `From<TokenizerImpl> for Tokenizer`, but it's attempting to do quite a bit more for some reason? Meanwhile I cannot use `Tokenizer(unwrapped_build_result_her... | https://github.com/huggingface/tokenizers/issues/1549 | closed | [
"Stale"
] | 2024-06-07T01:18:07Z | 2024-07-20T01:52:03Z | null | polarathene |
huggingface/transformers.js | 796 | No performance gain on using WebGPU | ### Question
I want to use the model: https://huggingface.co/Xenova/clip-vit-large-patch14 with WebGPU for quick inference in the browser. I ran the WebGPU benchmark to observe the performance increase and indeed it showed a ~7x improvement in speed on my device.
But when I run the clip model linked above, there's ... | https://github.com/huggingface/transformers.js/issues/796 | closed | [
"question"
] | 2024-06-06T20:16:07Z | 2024-06-09T01:44:17Z | null | mr-sarthakgupta |
huggingface/optimum | 1,895 | Lift upper version limit of transformers for habana | ### Feature request
Optimum currently limits transformers to `>= 4.38.0, < 4.39.0`. @regisss bumped the upper version limit in PR #1851 a month ago. Is there any technical reason to limit the upper version to `< 4.39`? Other dependencies allow for more recent versions. For example neuronx allows `< 4.42.0`, see #1881... | https://github.com/huggingface/optimum/issues/1895 | closed | [] | 2024-06-06T07:52:41Z | 2024-06-24T08:53:27Z | 4 | tiran |
huggingface/peft | 1,829 | How to change to PEFT model dynamically? | python==3.7.12
PEFT==0.3.0
@BenjaminBossan
I fine-tune the eleventh transformer of Bert as below:
```bash
target_modules = []
target_modules.append("11.attention.self.query")
target_modules.append("11.attention.self.value")
lora_config = LoraConfig(
r = self.args.lora_rank,
lora_alpha = self.... | https://github.com/huggingface/peft/issues/1829 | closed | [] | 2024-06-05T13:24:40Z | 2024-06-06T00:37:06Z | null | whr819987540 |
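One way to re-target the LoRA adapter above to a different layer dynamically is to build `target_modules` programmatically; a sketch with a hypothetical helper (the PEFT calls stay commented because the exact API differs across PEFT versions):

```python
def lora_target_modules(layer):
    """Hypothetical helper: target_modules for one BERT layer, following
    the module names used in the issue above."""
    return [
        f"{layer}.attention.self.query",
        f"{layer}.attention.self.value",
    ]

# from peft import LoraConfig, get_peft_model  # requires `pip install peft`
# lora_config = LoraConfig(r=8, lora_alpha=16,
#                          target_modules=lora_target_modules(11))
# model = get_peft_model(model, lora_config)
```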
huggingface/transformers.js | 792 | Feature request: YOLO-World/Grounding DINO (Zero shot object detection) | ### Question
Hi!
I'm trying out some of the zero shot capabilities and I've been working with the owlv2 but I was wondering, is support for yolo-world and grounding Dino coming? They seem to be faster than owlv2.
Thanks! | https://github.com/huggingface/transformers.js/issues/792 | open | [
"question"
] | 2024-06-04T21:39:18Z | 2024-06-24T07:04:27Z | null | rogueturnip |
huggingface/transformers.js | 791 | env.allowLocalModels and env.allowRemoteModels | ### Question
When I set env.allowLocalModels = true and look at the env object I see both
env.allowLocalModels and env.allowRemoteModels set to true. Does this mean that it will look for models locally first and then if not found go to the remoteHost? | https://github.com/huggingface/transformers.js/issues/791 | open | [
"question"
] | 2024-06-04T17:07:38Z | 2024-09-15T14:00:48Z | null | mram0509 |
huggingface/diffusers | 8,400 | How can we load a LoRA model from a single file? | ```
pipe.load_lora_weights("lora/aesthetic_anime_v1s.safetensors")
  File "Z:\software\python11\Lib\site-packages\diffusers\loaders\lora.py", line 1230, in load_lora_weights
    raise ValueError("PEFT backend is required for this method.")
ValueError: PEFT backend is required for this method.
pipe.load_lora_weigh...
``` | https://github.com/huggingface/diffusers/issues/8400 | closed | [] | 2024-06-04T13:54:56Z | 2024-06-04T15:53:32Z | null | xalteropsx |
huggingface/datasets | 6,953 | Remove canonical datasets from docs | Remove canonical datasets from docs, now that we no longer have canonical datasets. | https://github.com/huggingface/datasets/issues/6953 | closed | [
"documentation"
] | 2024-06-04T12:09:03Z | 2024-07-01T11:31:25Z | 1 | albertvillanova |
huggingface/datasets | 6,951 | load_dataset() should load all subsets, if no specific subset is specified | ### Feature request
Currently load_dataset() is forcing users to specify a subset. Example
```python
from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recen...
``` | https://github.com/huggingface/datasets/issues/6951 | closed | [
"enhancement"
] | 2024-06-04T11:02:33Z | 2024-11-26T08:32:18Z | 5 | windmaple |
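A workaround sketch for the subset question above: enumerate the configs, then load each one into a dict. The live calls are commented because they need the `datasets` package plus network access; the offline stub below just illustrates the dict-building pattern with hypothetical config names:

```python
# Live version (assumption: `datasets` installed, network available):
# from datasets import get_dataset_config_names, load_dataset
# configs = get_dataset_config_names("m-a-p/COIG-CQIA")
# dataset = {name: load_dataset("m-a-p/COIG-CQIA", name) for name in configs}

# Offline illustration of the same pattern with a stub loader:
def load_stub(repo, name):
    return f"<{repo}:{name}>"

configs = ["exam", "finance", "wiki"]  # hypothetical config names
dataset = {name: load_stub("m-a-p/COIG-CQIA", name) for name in configs}
```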
huggingface/datasets | 6,950 | `Dataset.with_format` behaves inconsistently with documentation | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of ... | https://github.com/huggingface/datasets/issues/6950 | closed | [
"documentation"
] | 2024-06-04T09:18:32Z | 2024-06-25T08:05:49Z | 2 | iansheng |
huggingface/sentence-transformers | 2,708 | What is the training order in the multi-task learning example? | hello. In the case of multi-task learning in the example below, what is the learning order? The example below is taken from https://www.sbert.net/examples/training/quora_duplicate_questions/README.html.
Regarding the dataset below, I know that the learning results are good if you learn mnrl after learning the cl da... | https://github.com/huggingface/sentence-transformers/issues/2708 | closed | [] | 2024-06-04T07:42:37Z | 2024-06-04T08:29:30Z | null | daegonYu |
huggingface/datasets | 6,949 | load_dataset error | ### Describe the bug
Why does the program get stuck when I use the load_dataset method, still stuck after loading for several hours? In fact, my JSON file is only 21 MB, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Data... | https://github.com/huggingface/datasets/issues/6949 | closed | [] | 2024-06-04T01:24:45Z | 2024-07-01T11:33:46Z | 2 | frederichen01 |
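If plain `json.load` succeeds on the file while `load_dataset` hangs, one workaround sketch is to parse the JSON manually and hand the rows to `Dataset.from_list` (assumption: the file is a JSON array of objects; the `datasets` call is commented so the snippet stays self-contained):

```python
import json
import os
import tempfile

# Stand-in for the user's JSON file (hypothetical content).
path = os.path.join(tempfile.mkdtemp(), "data.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump([{"text": "hello"}, {"text": "world"}], f)

# Parse with plain json, bypassing load_dataset entirely.
with open(path, "r", encoding="utf-8") as f:
    rows = json.load(f)

# from datasets import Dataset  # requires `pip install datasets`
# ds = Dataset.from_list(rows)  # builds an in-memory Dataset from the rows
```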
huggingface/transformers.js | 789 | Can I use Xenova/Phi-3-mini-4k-instruct model server side? | ### Question
Hey there! I’m trying to run Xenova/Phi-3-mini-4k-instruct model using transformers.js 2.17.2 on the server in my Node.js project, but I get an error saying that Phi-3 is not supported. Can I make it work somehow? Any ideas appreciated | https://github.com/huggingface/transformers.js/issues/789 | closed | [
"question"
] | 2024-06-03T18:43:20Z | 2024-06-04T04:57:42Z | null | StepanKukharskiy |
huggingface/datasets | 6,947 | FileNotFoundError:error when loading C4 dataset | ### Describe the bug
can't load c4 datasets
When I switch the datasets package to 2.12.2, I get `datasets.utils.info_utils.ExpectedMoreSplits: {'train'}`.
How can I fix this?
### Steps to reproduce the bug
1.from datasets import load_dataset
2.dataset = load_dataset('allenai/c4', data_files={'validat... | https://github.com/huggingface/datasets/issues/6947 | closed | [] | 2024-06-03T13:06:33Z | 2024-06-25T06:21:28Z | 15 | W-215 |
huggingface/dataset-viewer | 2,878 | Remove or increase the 5GB limit? | The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits.
Note that we "show" all the rows for parquet-native datasets (i.e., we can access the rows randomly, i.e., we have pagination).
Sh... | https://github.com/huggingface/dataset-viewer/issues/2878 | closed | [
"question",
"feature request"
] | 2024-06-03T08:55:08Z | 2024-07-22T11:32:49Z | null | severo |
huggingface/transformers | 31,195 | How to get back the input time series after using PatchTSTForPretraining? | ### System Info
-
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My model is Patch... | https://github.com/huggingface/transformers/issues/31195 | closed | [] | 2024-06-03T06:44:31Z | 2024-10-26T07:44:56Z | null | nikhilajoshy |
huggingface/optimum | 1,885 | onnx optimum ORTOptimizer inference runs slower than setfit.export_onnx runtime.InferenceSession inference | ### System Info
Hi,
I did a test between Optimum ONNX export + ORTOptimizer inference vs. setfit.export_onnx + onnxruntime.InferenceSession.
It seems that Optimum ORTOptimizer inference runs slower than setfit.export_onnx + onnxruntime.InferenceSession inference.
Any idea what the reason is?
i also chang... | https://github.com/huggingface/optimum/issues/1885 | open | [
"bug"
] | 2024-06-02T22:34:37Z | 2024-06-08T03:02:40Z | 1 | geraldstanje |
huggingface/chat-ui | 1,241 | 💻💻How to deploy to vercel | Hi,
I am currently having trouble deploying to Vercel; I am experiencing a 404 NOT FOUND error. I think I am using the wrong build command or the wrong default directory. Can someone please help?

Tha... | https://github.com/huggingface/chat-ui/issues/1241 | open | [
"support"
] | 2024-06-02T10:05:45Z | 2025-01-10T17:00:37Z | null | haydenkong |
huggingface/transformers.js | 788 | Is it possible to use transformers.js to implement audio source separation tasks? | ### Question
Hello, I have a beginner's question.
I want to implement the task of removing the human voice from the audio in the video and retaining the background sound in the browser. The idea is to load the model for audio source separation related to transformers.js to achieve the separation of the background s... | https://github.com/huggingface/transformers.js/issues/788 | open | [
"question"
] | 2024-06-02T04:00:55Z | 2024-12-26T06:05:26Z | null | asasas234 |
huggingface/lerobot | 238 | How to use on WSL, can not visualize | How to use on WSL? I can not visualize. | https://github.com/huggingface/lerobot/issues/238 | closed | [
"simulation"
] | 2024-06-02T03:58:44Z | 2025-10-08T08:25:31Z | null | jackylee1 |
huggingface/chat-ui | 1,236 | No Setup Deploy: Multiple models supported? | How can I make **multiple models** available on Chat UI using **No Setup Deploy**?
## Further Details
The form (see below) seems to only allow one model.
<details><summary>Form</summary>
<p>
<img width="661" alt="image" src="https://github.com/huggingface/chat-ui/assets/14152377/e5595c34-b5c5-4c09-8b83-d5a... | https://github.com/huggingface/chat-ui/issues/1236 | open | [
"enhancement",
"docker"
] | 2024-06-01T11:41:22Z | 2024-06-03T07:55:12Z | 1 | rodrigobdz |
huggingface/optimum | 1,884 | Add support for porting CLIPVisionModelWithProjection | ### Feature request
Currently there is no support for porting CLIPVisionModelWithProjection class models from the transformers library to ONNX through Optimum. I'd like to add support for this, for which we'd need to change the optimum/exporters/onnx/model_configs.py file. I'd like to request you to help me guide ... | https://github.com/huggingface/optimum/issues/1884 | open | [
"feature-request",
"onnx"
] | 2024-05-31T22:25:45Z | 2024-10-09T07:56:28Z | 0 | mr-sarthakgupta |
huggingface/datasets | 6,940 | Enable Sharding to Equal Sized Shards | ### Feature request
Add an option when sharding a dataset to make all shards the same size. It would be good to provide both options: duplication and truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining sha... | https://github.com/huggingface/datasets/issues/6940 | open | [
"enhancement"
] | 2024-05-31T21:55:50Z | 2024-06-01T07:34:12Z | 0 | yuvalkirstain |
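The equal-sized-shard behavior requested above can be sketched in plain Python. This is an illustrative helper only — the function name and `mode` values are invented for this sketch and are not part of the `datasets` API:

```python
def equal_shards(items, num_shards, mode="truncate"):
    """Split items into num_shards equally sized shards.

    mode="truncate": drop the trailing remainder so every shard has
    n // num_shards items.
    mode="duplicate": pad with items recycled from the start so every
    shard has ceil(n / num_shards) items.
    """
    n = len(items)
    if mode == "truncate":
        size = n // num_shards
        items = items[: size * num_shards]
    elif mode == "duplicate":
        size = -(-n // num_shards)  # ceiling division
        pad = size * num_shards - n
        items = items + items[:pad]
    else:
        raise ValueError(f"unknown mode: {mode}")
    return [items[i * size : (i + 1) * size] for i in range(num_shards)]
```

For 10 items in 3 shards, truncation yields three shards of 3 (one item dropped), while duplication yields three shards of 4 (two items repeated).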
huggingface/chat-ui | 1,225 | SyntaxError: JSON5: invalid character 'u' at 1:1 | Where can I find out more about the following error? Is there an issue with the existing template?
## Reproduction Steps
1. Deploy [Chat UI using default template](https://huggingface.co/new-space?template=huggingchat/chat-ui-template) with `MONGO_URL` set to `mongodb+srv://<USER_SECRET>:<PASSWORD_SECRET>@<CLUSTE... | https://github.com/huggingface/chat-ui/issues/1225 | open | [
"docker"
] | 2024-05-30T11:07:36Z | 2025-01-16T22:54:08Z | 8 | rodrigobdz |
huggingface/chat-ui | 1,221 | 500 Internal Server Error with chat-ui | I executed an inference server at the address http://192.168.0.185:7777/generate_stream using text-generation-inference (TGI) v2.0.4. When executing commands with curl, the inference responds normally. For ease of use, I am going to use chat-ui. Below is the content of chat-ui's .env.local file.
`... | https://github.com/huggingface/chat-ui/issues/1221 | closed | [
"support"
] | 2024-05-30T00:35:58Z | 2024-05-31T00:19:49Z | 4 | leemgs |
huggingface/transformers.js | 785 | Using AutoModel, AutoTokenizer with distilbert models | ### Question
Does transformers.js have a function to get the label after getting the logits? How to get the labels from the inference output?
let tokenizer = await AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english');
let model = await AutoModel.from_pretrained('distilbert-base-uncased-... | https://github.com/huggingface/transformers.js/issues/785 | open | [
"question"
] | 2024-05-29T20:35:17Z | 2024-05-30T11:09:17Z | null | mram0509 |
huggingface/chat-ui | 1,220 | A few questions about the Cloudflare integration | Howdy 👋 ,
Working on a corresponding page for this in the [Cloudflare docs](https://developers.cloudflare.com/workers-ai/) and had a few [questions that I need answered](https://github.com/cloudflare/cloudflare-docs/pull/14488#issuecomment-2101481990) in this PR.
## Questions
1. If I'm reading [this line](htt... | https://github.com/huggingface/chat-ui/issues/1220 | closed | [
"documentation"
] | 2024-05-29T19:11:14Z | 2024-06-20T12:53:52Z | 3 | kodster28 |
huggingface/transformers.js | 784 | Shouldn't this work? #v3 | ### Question
### Issue with Transformer.js v3 and WebGPU
#### Description
Yesterday I installed `transformer.js` with the "v3" branch to test the new features with WebGPU, but I get an error.
#### Error Message
```
@xenova_transformers.js?v=3b2ad0ed:24861 Uncaught (in promise)
Error: This pipeline is not yet... | https://github.com/huggingface/transformers.js/issues/784 | open | [
"question"
] | 2024-05-29T13:36:52Z | 2024-05-29T14:59:49Z | null | kalix127 |
huggingface/datasets | 6,930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'valid... | https://github.com/huggingface/datasets/issues/6930 | open | [] | 2024-05-29T12:40:05Z | 2024-07-23T06:25:24Z | 2 | Polarisamoon |
huggingface/datasets | 6,929 | Avoid downloading the whole dataset when only README.md has been touched on hub. | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current behaviour of the load_dataset function is triggered whenever a change of the hash o... | https://github.com/huggingface/datasets/issues/6929 | open | [
"enhancement"
] | 2024-05-29T10:36:06Z | 2024-05-29T20:51:56Z | 2 | zinc75 |
huggingface/candle | 2,226 | How to load LoRA adapter along with the GGUF model? | Hello all,
I have recently managed to convert the flan-t5 base model to GGUF #2215 . But I also have multiple LoRA adapters trained for different tasks.
@EricLBuehler @LaurentMazare So I wish to know if there is a way to also load single/multiple LoRA adapters along with the GGUF model. I am currently running an... | https://github.com/huggingface/candle/issues/2226 | open | [] | 2024-05-29T06:03:10Z | 2024-06-05T03:34:14Z | null | niranjanakella |
huggingface/transformers.js | 781 | Progress callback for Moondream? | ### Question
While implementing Moondream (from the excellent example) I stumbled upon a few questions.
- How can I implement a callback while Moondream is generating tokens? A normal progressCallback didn’t work?
```
self.model.generate({
...text_inputs,
...vision_inputs,
do_sample: false,
max_new_t... | https://github.com/huggingface/transformers.js/issues/781 | closed | [
"question"
] | 2024-05-28T14:07:07Z | 2024-06-03T18:49:10Z | null | flatsiedatsie |
huggingface/competitions | 29 | How to notify awardees or contact participants? | The competition just shows the participants' IDs.
So, how can I contact them via email to inform them of the award requirements and request additional personal information? | https://github.com/huggingface/competitions/issues/29 | closed | [] | 2024-05-28T08:11:38Z | 2024-06-09T07:03:25Z | null | shangfenghuang |
huggingface/datatrove | 196 | How to deduplicate multiple datasets? | fineweb offers a deduplication demo for one dump. If I want to deduplicate more dumps, should I merge the dumps before deduplication?
| https://github.com/huggingface/datatrove/issues/196 | closed | [] | 2024-05-28T03:00:31Z | 2024-06-07T07:25:45Z | null | canghaiyunfan |
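On the question above: deduplicating across several dumps does not require physically merging them first, as long as the signature store is shared across all dumps. The exact-hash sketch below only illustrates that shared-signature idea — it is not datatrove's actual MinHash pipeline:

```python
import hashlib

def dedup_across_dumps(dumps):
    """Exact-hash deduplication across several dumps.

    A single signature set is shared across all dumps, so documents
    repeated between dumps are dropped too -- without merging the dumps
    into one file first.
    """
    seen = set()
    kept = []
    for dump in dumps:
        for doc in dump:
            sig = hashlib.sha256(doc.encode("utf-8")).hexdigest()
            if sig not in seen:
                seen.add(sig)
                kept.append(doc)
    return kept
```

With two dumps `["a", "b"]` and `["b", "c"]`, the shared set drops the second `"b"` even though it lives in a different dump.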
huggingface/chat-ui | 1,183 | Prompt template for WizardLM-2-8x22B? | What is the prompt template for `WizardLM-2-8x22B` in the `.env.local`?
When setting it to the default one: `<s>{{#each messages}}{{#ifUser}}[INST] {{#if @first}}{{#if @root.preprompt}}{{@root.preprompt}}\n{{/if}}{{/if}}{{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}`
the g... | https://github.com/huggingface/chat-ui/issues/1183 | open | [
"support",
"models"
] | 2024-05-27T14:28:47Z | 2024-07-29T15:27:25Z | 3 | Arche151 |
huggingface/chat-ui | 1,178 | Improve Domain Search Results for Assistants | The domain search for assistants is a great idea, but the current implementation is not really useful if the domains are less likely to be top results like Wikipedia.
This seems to happen because the web is searched first, and the domain filter is applied afterward. This method can easily result in zero parseable results... | https://github.com/huggingface/chat-ui/issues/1178 | open | [
"question",
"websearch"
] | 2024-05-27T10:33:22Z | 2024-05-31T11:02:11Z | null | lueschow |
huggingface/datatrove | 195 | What is the difference between tasks and workers? | What is the difference between tasks and workers? What is the definition of a task, and how do I determine the number of tasks?
| https://github.com/huggingface/datatrove/issues/195 | closed | [] | 2024-05-27T06:32:25Z | 2024-05-27T07:08:11Z | null | canghaiyunfan |
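In pipelines of this kind, "tasks" are usually the independent units the work is split into (often one per input file or shard), while "workers" are how many of them run at once; when tasks outnumber workers, each worker runs several tasks in sequence. The round-robin sketch below is only an illustration of that distinction, not datatrove's actual scheduler:

```python
def assign_tasks(num_tasks, num_workers):
    """Round-robin assignment: task i goes to worker i % num_workers.

    More tasks than workers means each worker processes its tasks one
    after another; more workers than tasks leaves some workers idle.
    """
    assignment = {w: [] for w in range(num_workers)}
    for task_id in range(num_tasks):
        assignment[task_id % num_workers].append(task_id)
    return assignment
```

So 5 tasks on 2 workers gives worker 0 tasks {0, 2, 4} and worker 1 tasks {1, 3}.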
huggingface/transformers.js | 778 | Pipeline execution time with 'image-classification' pipeline | ### Question
While calling the 'image-classification' pipeline we pass the image URL, so this does a fetch of the image. Will the time taken to process the image include the download time of the image? If the network is slow, this may impact the pipeline performance. Is there a way to use an image that's already ... | https://github.com/huggingface/transformers.js/issues/778 | open | [
"question"
] | 2024-05-26T20:15:21Z | 2024-05-27T04:14:52Z | null | mram0509 |
huggingface/transformers | 31,039 | What if past_key_values is in model_kwargs but is None | https://github.com/huggingface/transformers/blob/4c6c45ba138202f42582b5cea98126af87195a95/src/transformers/generation/utils.py#L1317
This line fails for me when past_key_values is in model_kwargs but is None. Line 1321 raises an error.
Could you advise?
Thank you | https://github.com/huggingface/transformers/issues/31039 | closed | [] | 2024-05-26T07:58:18Z | 2024-06-10T06:32:23Z | null | estelleafl |
huggingface/chat-ui | 1,174 | Unable to deploy space with chatUI, getting error ** Failed to connect to 127.0.0.1 port 8080 after 0 ms** | Hi guys, so I am trying to deploy a space with the chat-ui template and the **abacusai/Smaug-Llama-3-70B-Instruct** model, but I am getting the following error again and again in the container logs.
`
curl: (7) Failed to connect to 127.0.0.1 port 8080 after 0 ms: Connection refused
Warning: Problem : connection refused. Will retry in... | https://github.com/huggingface/chat-ui/issues/1174 | open | [
"support",
"docker"
] | 2024-05-26T07:05:12Z | 2025-06-27T10:30:24Z | 5 | starlord263 |
huggingface/optimum | 1,876 | Unable to generate question-answering model for Llama and there is also no list of what are the supported models for question-answering | ### Feature request
Hi, I received this error:
ValueError: Asked to export a llama model for the task question-answering, but the Optimum ONNX exporter only supports the tasks feature-extraction, feature-extraction-with-past, text-generation, text-generation-with-past, text-classification for llama. Please use a su... | https://github.com/huggingface/optimum/issues/1876 | open | [
"bug",
"onnx"
] | 2024-05-26T06:10:47Z | 2024-10-09T07:57:24Z | null | customautosys |
huggingface/transformers.js | 776 | How to point to a specific model path in order to use compressed models? (brotli) | ### Question
Hi,
I just can't find the configuration to point to a specific model file path to use .onnx.br instead of .onnx for example.
I can run the model (distilbert-base-cased-distilled-squad) offline without any issue and it works. But I want to deploy it compressed using brotli. All I can see in the con... | https://github.com/huggingface/transformers.js/issues/776 | open | [
"question"
] | 2024-05-24T18:31:12Z | 2024-05-25T10:24:25Z | null | KamilCSPS |
huggingface/chat-ui | 1,169 | Help debugging "Sorry, something went wrong. Please try again." | I am a developer working on extending this project. Sometimes I get this error "Sorry, something went wrong. Please try again." I can't figure out how to debug it when it happens. What I want is for it to display the full error somehow, like with a console.log. Is there some way to do that? Or is the error saved in the... | https://github.com/huggingface/chat-ui/issues/1169 | closed | [] | 2024-05-24T18:30:08Z | 2024-06-17T12:47:03Z | 1 | loganlebanoff |
huggingface/datasets | 6,916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | ### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into a testing and training set. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have an unsplit dataset
```python
Dataset({ featur... | https://github.com/huggingface/datasets/issues/6916 | closed | [] | 2024-05-22T23:52:15Z | 2024-05-23T00:07:53Z | 0 | jetlime |
huggingface/peft | 1,750 | How to finetune embeddings and LM head as a single layer when they are tied? | I am looking to LoRA-finetune models like Gemma, which have tied embeddings.
But, I would also like to have the shared embeddings as trainable (the common embedding table corresponding to both input and output embeddings of the network).
How do I achieve this?
---
_Note:_ Passing both `["embed_tokens","lm_he... | https://github.com/huggingface/peft/issues/1750 | closed | [] | 2024-05-21T18:32:07Z | 2025-08-12T11:54:09Z | null | GokulNC |
huggingface/blog | 2,078 | Idefics2's perceiver: how to set attention mask to None? | I set attention mask to None, but the model doesn't learn well; my inputs weren't padded so I don't want an attention mask. How to resolve this?
I also tried adding an all-ones attention mask, but the result was also much worse. | https://github.com/huggingface/blog/issues/2078 | open | [] | 2024-05-21T07:38:57Z | 2024-05-21T07:38:57Z | null | lucasjinreal |
huggingface/peft | 1,749 | how to fine tune LoRA HQQ? | ### Feature request
how to fine tune LoRA to HQQ?
### Motivation
how to fine tune LoRA to HQQ?
### Your contribution
how to fine tune LoRA to HQQ? | https://github.com/huggingface/peft/issues/1749 | closed | [] | 2024-05-21T02:56:18Z | 2024-06-29T15:03:18Z | null | NickyDark1 |
huggingface/trl | 1,650 | how to save v_head | currently, I use `ppo_trainer.save_pretrained` to save a model that is still in training, because the machine I used is rather unstable, and I would often need to resume retraining should it be interrupted. When I resume the training I got the following warning:
```
WARNING:root:A <class 'peft.peft_model.PeftModelFor... | https://github.com/huggingface/trl/issues/1650 | closed | [] | 2024-05-20T17:06:00Z | 2025-04-11T10:14:36Z | null | zyzhang1130 |
huggingface/chat-ui | 1,153 | Can we use Hugging Face Chat with a Custom Server | Requirement:
I have a custom API which takes in the input queries, passes them through a RAG pipeline and finally to the LLM, and returns the result.
The question is, can I integrate it with Chat-UI (utilizing just the chat-ui frontend and my custom backend)? If yes, is there any documentation around it? As per what I unde... | https://github.com/huggingface/chat-ui/issues/1153 | closed | [] | 2024-05-20T16:44:01Z | 2024-09-03T07:52:18Z | 9 | snps-ravinu |
huggingface/nanotron | 176 | Where is the "nanotron format" defined? | I see that any(?) hf model can be converted to nanotron format with this [script](https://github.com/huggingface/nanotron/blob/main/examples/llama/convert_hf_to_nanotron.py).
Is there documentation describing this format?
Can any model that may be loaded with AutoModelForCausalLM be converted to nanotron format f... | https://github.com/huggingface/nanotron/issues/176 | closed | [] | 2024-05-20T13:54:52Z | 2024-05-21T17:22:50Z | null | RonanKMcGovern |
huggingface/chat-ui | 1,151 | Can I change localhost to remote IP? | I am running Chat-UI locally, but I want to change localhost to an IP; I am unable to find this configuration in the code. Can anyone help? | https://github.com/huggingface/chat-ui/issues/1151 | closed | [] | 2024-05-20T05:34:23Z | 2024-05-20T07:01:30Z | 1 | snps-ravinu |
huggingface/candle | 2,197 | How to slice a tensor? | tch has the function `slice` that returns a tensor slice. Is there a corresponding function for candle? | https://github.com/huggingface/candle/issues/2197 | closed | [] | 2024-05-20T00:55:08Z | 2024-05-20T01:46:58Z | null | Gadersd |
huggingface/tokenizers | 1,534 | How to allow the merging of consecutive newline tokens \n when training a byte-level bpe tokenizer? | Hello, I'm currently working on training a byte-level BPE tokenizer using the Huggingface tokenizers library. I've created a simple training script, a sample corpus, and provided the output produced by this script. My aim is to understand why consecutive newline tokens `\n` are not being merged into a single token `\n\... | https://github.com/huggingface/tokenizers/issues/1534 | open | [
"bug"
] | 2024-05-18T03:11:35Z | 2025-07-07T09:34:16Z | null | liuslnlp |
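One way to see why consecutive `\n` tokens may never merge: BPE can only learn a merge for a pair that co-occurs inside a single pre-token, so the pre-tokenizer decides whether `\n\n` is even a candidate merge. The pure-Python pair count below illustrates this; the two regexes are simplified stand-ins for real pre-tokenizers, not the `tokenizers` library's implementations:

```python
import re
from collections import Counter

def pair_counts(corpus, pretokenize):
    """Count adjacent symbol pairs inside each pre-token.

    A BPE trainer picks merges from exactly these counts, so a pair
    that never occurs inside one pre-token can never become a merge.
    """
    counts = Counter()
    for text in corpus:
        for token in pretokenize(text):
            symbols = list(token)
            for a, b in zip(symbols, symbols[1:]):
                counts[(a, b)] += 1
    return counts

corpus = ["a\n\n\nb"]
# A pre-tokenizer that keeps whitespace runs together...
keep_ws = lambda t: re.findall(r"\s+|\S+", t)
# ...versus one that isolates every single "\n".
split_nl = lambda t: [p for p in re.split(r"(\n)", t) if p]

assert pair_counts(corpus, keep_ws)[("\n", "\n")] == 2   # "\n\n" is mergeable
assert pair_counts(corpus, split_nl)[("\n", "\n")] == 0  # merge never learnable
```

If a tokenizer's pre-tokenization step splits newlines apart, no amount of training data containing `\n\n` will produce a merged `\n\n` token.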
huggingface/transformers | 30,886 | How to get the data seen by the model during training? | Hi! I haven't been able to find an answer to my question so opening an issue here. I'm fine-tuning the GPT-2 XL model using the trainer for 10 epochs and I'd like to save the data seen by the model during each epoch. More specifically, I want to save the data seen by the model every 242 steps. For instance, data seen f... | https://github.com/huggingface/transformers/issues/30886 | closed | [] | 2024-05-17T21:32:50Z | 2024-05-20T17:26:29Z | null | jaydeepborkar |
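One approach to the question above is to replay the sampler's ordering with the same seed and slice out the steps of interest. This sketch assumes a simple seeded per-epoch shuffle; the actual `Trainer` sampler may order data differently, so treat it as an illustration of the replay idea only:

```python
import random

def examples_seen(dataset_size, batch_size, seed, start_step, num_steps, epoch=0):
    """Replay a seeded per-epoch shuffle and return the dataset indices
    consumed between start_step and start_step + num_steps.

    With the same seed and ordering logic as training, this slice of the
    shuffled index list is exactly the data the model saw in those steps
    (e.g. every 242-step window).
    """
    order = list(range(dataset_size))
    random.Random(seed + epoch).shuffle(order)  # same seed => same order
    lo = start_step * batch_size
    hi = (start_step + num_steps) * batch_size
    return order[lo:hi]
```

Because the replay is deterministic, consecutive windows partition the epoch: steps 0-1 plus steps 1-2 cover the same indices as steps 0-2.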
huggingface/optimum | 1,859 | Improve inference time TrOCR | I have a fine-tuned TrOCR model, and I'm using
`from optimum.onnxruntime import ORTModelForVision2Seq`
how can I then make the inference faster when someone makes a request to an API endpoint? I am already using async for multiple requests. | https://github.com/huggingface/optimum/issues/1859 | closed | [
"question",
"inference",
"Stale"
] | 2024-05-16T13:31:53Z | 2024-12-18T02:06:21Z | null | CrasCris |
huggingface/chat-ui | 1,148 | Chat-ui Audit Logs | Hello,
Is there a way to log the username, session ID, conversation ID, and what question was sent, in some type of log in chat-ui? Or just the username and the question?
How can we accomplish this?
Thanks | https://github.com/huggingface/chat-ui/issues/1148 | open | [] | 2024-05-16T11:13:30Z | 2024-05-21T18:48:17Z | 5 | Neb2653 |
huggingface/diffusers | 7,957 | How to implement `IPAdapterAttnProcessor2_0` with xformers | I want to fine-tune an IP-Adapter model with xformers, but I did not find the xformers implementation corresponding to IPAdapterAttnProcessor2_0. I want to implement the attention processor in xformers; are the following two lines of code the only difference between the two versions?
In `XFormersAttnProcesso... | https://github.com/huggingface/diffusers/issues/7957 | closed | [] | 2024-05-16T08:54:07Z | 2024-05-23T13:03:42Z | null | JWargrave |
huggingface/OBELICS | 12 | How to use LDA for topic modeling | Thanks for your work again!
In the paper, the topic modeling of OBELICS is implemented using LDA, and I am wondering what specific LDA model was used, what settings were used to train the model, and most importantly, how the topics were derived from the keywords and weights (like using LLMs)? Thank you for answering... | https://github.com/huggingface/OBELICS/issues/12 | open | [] | 2024-05-16T03:56:29Z | 2024-06-11T16:27:12Z | null | jrryzh |
huggingface/transformers.js | 765 | Can you use all transformers models with transformers.js? | ### Question
Hi,
can you use [all transformers models ](https://huggingface.co/models?library=transformers&sort=trending)(which seem to be listed under the python library) also in transformers.js? If yes, how so? Just download and provide the local path? I'm working in nodejs right now.
For example I'd like to u... | https://github.com/huggingface/transformers.js/issues/765 | open | [
"question"
] | 2024-05-15T19:35:28Z | 2024-05-15T21:21:57Z | null | Sir-hennihau |
huggingface/datasets | 6,899 | List of dictionary features get standardized | ### Describe the bug
Hi, I'm trying to create an HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets librar... | https://github.com/huggingface/datasets/issues/6899 | open | [] | 2024-05-15T14:11:35Z | 2025-04-01T20:48:03Z | 2 | sohamparikh |