url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone null | comments list | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool | is_closed bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/39737 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39737/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39737/comments | https://api.github.com/repos/huggingface/transformers/issues/39737/events | https://github.com/huggingface/transformers/pull/39737 | 3,270,636,490 | PR_kwDOCUB6oc6hBNUX | 39,737 | Fix Cache.max_cache_len max value for Hybrid models | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-28T16:45:04 | 2025-07-29T15:12:51 | 2025-07-29T15:12:51 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39737",
"html_url": "https://github.com/huggingface/transformers/pull/39737",
"diff_url": "https://github.com/huggingface/transformers/pull/39737.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39737.patch",
"merged_at": "2025-07-29T15:12:51"
} | Ensure `max_cache_len` uses the maximum value. This is relevant when both sliding and full attention layers are used in a hybrid setting, e.g., Gemma models.
Fixes #39711
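The idea behind the fix can be sketched in isolation (illustrative values only; this is not the actual transformers code):

```python
# In a hybrid model, per-layer caches have different capacities, e.g.
# sliding-window layers vs. full-attention layers. The cache-wide
# max_cache_len should be the maximum across layers rather than the
# value of whichever layer happens to be inspected first.
layer_max_lens = [512, 512, 4096, 4096]  # hypothetical per-layer capacities
max_cache_len = max(layer_max_lens)
print(max_cache_len)  # 4096
```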
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39737/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39736 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39736/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39736/comments | https://api.github.com/repos/huggingface/transformers/issues/39736/events | https://github.com/huggingface/transformers/issues/39736 | 3,270,580,194 | I_kwDOCUB6oc7C8Rfi | 39,736 | losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) output logits and labels are not the batch same size | {
"login": "Lmmfff",
"id": 94843719,
"node_id": "U_kgDOBaczRw",
"avatar_url": "https://avatars.githubusercontent.com/u/94843719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Lmmfff",
"html_url": "https://github.com/Lmmfff",
"followers_url": "https://api.github.com/users/Lmmfff/followers",
"following_url": "https://api.github.com/users/Lmmfff/following{/other_user}",
"gists_url": "https://api.github.com/users/Lmmfff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Lmmfff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Lmmfff/subscriptions",
"organizations_url": "https://api.github.com/users/Lmmfff/orgs",
"repos_url": "https://api.github.com/users/Lmmfff/repos",
"events_url": "https://api.github.com/users/Lmmfff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Lmmfff/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-28T16:27:36 | 2025-09-06T08:02:31 | 2025-09-06T08:02:31 | NONE | null | null | null | null | ### System Info
When I trained with `Trainer` and defined my own `compute_metrics` (shown below), running it raised the error below. Printing at the error location showed that the first dimension of the `predictions` shape and of the `labels` shape arriving in `compute_metrics` did not match, so I went back to the `Trainer` source and found this in the `evaluation_loop` function:
```
losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
# print("logits", logits.shape)  # torch.Size([7, 5000, 3])
# print("labels", labels.shape)  # torch.Size([8, 5000])
```
The printed shapes were logits `torch.Size([7, 5000, 3])` and labels `torch.Size([8, 5000])`.
My collate_fn is normal. What's going on?
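The resulting `IndexError` can be reproduced in isolation. Below is a minimal NumPy sketch using the shapes from the printout above (the `is_expr` mask name mirrors the traceback and is purely illustrative):

```python
import numpy as np

# Gathered eval outputs with mismatched first dimensions, as in the report:
predictions = np.zeros((175, 5000, 3))  # logits, first axis shorter
labels = np.zeros((200, 5000))          # labels, full sample count

is_expr = labels.sum(axis=1) >= 0       # boolean mask of length 200
try:
    # A boolean index must have the same length as the axis it indexes
    # (200 here vs. an axis of length 175), so this raises IndexError.
    y_pred = predictions[is_expr, :, 1].flatten()
except IndexError as err:
    print(err)
```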
````
Traceback (most recent call last):
File "/ailab/user/wuhaoning/liumingfei/Spliceformer/finetune_Evo2/train.py", line 423, in <module>
main(args)
File "/ailab/user/wuhaoning/liumingfei/Spliceformer/finetune_Evo2/train.py", line 349, in main
trainer.train()
File "/ailab/user/wuhaoning/miniconda3/envs/evo2_env-py310/lib/python3.10/site-packages/transformers/trainer.py", line 2206, in train
return inner_training_loop(
File "/ailab/user/wuhaoning/miniconda3/envs/evo2_env-py310/lib/python3.10/site-packages/transformers/trainer.py", line 2623, in _inner_training_loop
self._maybe_log_save_evaluate(
File "/ailab/user/wuhaoning/miniconda3/envs/evo2_env-py310/lib/python3.10/site-packages/transformers/trainer.py", line 3096, in _maybe_log_save_evaluate
metrics = self._evaluate(trial, ignore_keys_for_eval)
File "/ailab/user/wuhaoning/miniconda3/envs/evo2_env-py310/lib/python3.10/site-packages/transformers/trainer.py", line 3045, in _evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/ailab/user/wuhaoning/miniconda3/envs/evo2_env-py310/lib/python3.10/site-packages/transformers/trainer.py", line 4198, in evaluate
output = eval_loop(
File "/ailab/user/wuhaoning/miniconda3/envs/evo2_env-py310/lib/python3.10/site-packages/transformers/trainer.py", line 4489, in evaluation_loop
metrics = self.compute_metrics(
File "/ailab/user/wuhaoning/liumingfei/Spliceformer/finetune_Evo2/train.py", line 203, in compute_metrics
Y_pred_acceptor = predictions[is_expr, :, 1].flatten()
IndexError: boolean index did not match indexed array along axis 0; size of axis is 175 but size of corresponding boolean axis is 200
````
````
def evaluation_loop(
self,
dataloader: DataLoader,
description: str,
prediction_loss_only: Optional[bool] = None,
ignore_keys: Optional[list[str]] = None,
metric_key_prefix: str = "eval",
) -> EvalLoopOutput:
"""
Prediction/evaluation loop, shared by `Trainer.evaluate()` and `Trainer.predict()`.
Works both with or without labels.
"""
args = self.args
prediction_loss_only = prediction_loss_only if prediction_loss_only is not None else args.prediction_loss_only
# if eval is called w/o train, handle model prep here
if self.is_deepspeed_enabled and self.deepspeed is None:
_, _ = deepspeed_init(self, num_training_steps=0, inference=True)
model = self._wrap_model(self.model, training=False, dataloader=dataloader)
if len(self.accelerator._models) == 0 and model is self.model:
start_time = time.time()
model = (
self.accelerator.prepare(model)
if self.is_deepspeed_enabled
or (self.is_fsdp_enabled and self.accelerator.mixed_precision != "fp8" and not self.args.torch_compile)
else self.accelerator.prepare_model(model, evaluation_mode=True)
)
self.model_preparation_time = round(time.time() - start_time, 4)
if self.is_fsdp_enabled:
self.model = model
# for the rest of this function `model` is the outside model, whether it was wrapped or not
if model is not self.model:
self.model_wrapped = model
# backward compatibility
if self.is_deepspeed_enabled:
self.deepspeed = self.model_wrapped
# if full fp16 or bf16 eval is wanted and this ``evaluation`` or ``predict`` isn't called
# while ``train`` is running, cast it to the right dtype first and then put on device
if not self.is_in_train:
if args.fp16_full_eval:
model = model.to(dtype=torch.float16, device=args.device)
elif args.bf16_full_eval:
model = model.to(dtype=torch.bfloat16, device=args.device)
batch_size = self.args.eval_batch_size
logger.info(f"\n***** Running {description} *****")
if has_length(dataloader):
logger.info(f" Num examples = {self.num_examples(dataloader)}")
else:
logger.info(" Num examples: Unknown")
logger.info(f" Batch size = {batch_size}")
model.eval()
if hasattr(self.optimizer, "eval") and callable(self.optimizer.eval):
self.optimizer.eval()
self.callback_handler.eval_dataloader = dataloader
# Do this before wrapping.
eval_dataset = getattr(dataloader, "dataset", None)
if args.past_index >= 0:
self._past = None
# Initialize containers
all_losses = EvalLoopContainer(self.args.eval_do_concat_batches, padding_index=-100)
all_preds = EvalLoopContainer(self.args.eval_do_concat_batches, padding_index=-100)
all_labels = EvalLoopContainer(self.args.eval_do_concat_batches, padding_index=-100)
all_inputs = EvalLoopContainer(self.args.eval_do_concat_batches, padding_index=-100)
metrics = None
eval_set_kwargs = {}
# Will be useful when we have an iterable dataset so don't know its length.
observed_num_examples = 0
# Main evaluation loop
for step, inputs in enumerate(dataloader):
# Update the observed num examples
observed_batch_size = find_batch_size(inputs)
if observed_batch_size is not None:
observed_num_examples += observed_batch_size
# For batch samplers, batch_size is not known by the dataloader in advance.
if batch_size is None:
batch_size = observed_batch_size
# Prediction step
losses, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
# print("logits",logits.shape)#([7, 5000, 3])
# print("labels",labels.shape)#([8, 5000])
main_input_name = getattr(self.model, "main_input_name", "input_ids")
inputs_decode = (
self._prepare_input(inputs[main_input_name]) if "inputs" in args.include_for_metrics else None
)
if is_torch_xla_available():
xm.mark_step()
# Update containers
if losses is not None:
losses = self.gather_function(losses.repeat(batch_size))
all_losses.add(losses)
if inputs_decode is not None:
inputs_decode = self.accelerator.pad_across_processes(inputs_decode, dim=1, pad_index=-100)
inputs_decode = self.gather_function(inputs_decode)
if not self.args.batch_eval_metrics or description == "Prediction":
all_inputs.add(inputs_decode)
if labels is not None:
# Pad labels here, preparing for preprocess_logits_for_metrics in next logits block.
labels = self.accelerator.pad_across_processes(labels, dim=1, pad_index=-100)
if logits is not None:
logits = self.accelerator.pad_across_processes(logits, dim=1, pad_index=-100)
if self.preprocess_logits_for_metrics is not None:
logits = self.preprocess_logits_for_metrics(logits, labels)
logits = self.gather_function(logits)
if not self.args.batch_eval_metrics or description == "Prediction":
all_preds.add(logits)
if labels is not None:
labels = self.gather_function(labels)
if not self.args.batch_eval_metrics or description == "Prediction":
all_labels.add(labels)
self.control = self.callback_handler.on_prediction_step(args, self.state, self.control)
if self.args.batch_eval_metrics:
if self.compute_metrics is not None and logits is not None and labels is not None:
is_last_step = self.accelerator.gradient_state.end_of_dataloader
batch_kwargs = {}
batch_kwargs["losses"] = losses if "loss" in args.include_for_metrics else None
batch_kwargs["inputs"] = inputs if "inputs" in args.include_for_metrics else None
metrics = self.compute_metrics(
EvalPrediction(predictions=logits, label_ids=labels, **batch_kwargs),
compute_result=is_last_step,
)
del losses, logits, labels, inputs
torch.cuda.empty_cache()
# Gather all tensors and put them back on the CPU if we have done enough accumulation steps.
elif args.eval_accumulation_steps is not None and (step + 1) % args.eval_accumulation_steps == 0:
all_losses.to_cpu_and_numpy()
all_preds.to_cpu_and_numpy()
all_labels.to_cpu_and_numpy()
all_inputs.to_cpu_and_numpy()
del losses, logits, labels, inputs
torch.cuda.empty_cache()
# After all calls to `.gather_function`, reset to `gather_for_metrics`:
self.gather_function = self.accelerator.gather_for_metrics
if args.past_index and hasattr(self, "_past"):
# Clean the state at the end of the evaluation loop
delattr(self, "_past")
# Gather all remaining tensors and put them back on the CPU
all_losses = all_losses.get_arrays()
all_preds = all_preds.get_arrays()
all_labels = all_labels.get_arrays()
all_inputs = all_inputs.get_arrays()
# Number of samples
if has_length(eval_dataset):
num_samples = len(eval_dataset)
# The instance check is weird and does not actually check for the type, but whether the dataset has the right
# methods. Therefore we need to make sure it also has the attribute.
elif isinstance(eval_dataset, IterableDatasetShard) and getattr(eval_dataset, "num_examples", 0) > 0:
num_samples = eval_dataset.num_examples
else:
if has_length(dataloader):
num_samples = self.num_examples(dataloader)
else: # both len(dataloader.dataset) and len(dataloader) fail
num_samples = observed_num_examples
if num_samples == 0 and observed_num_examples > 0:
num_samples = observed_num_examples
# Metrics!
if (
self.compute_metrics is not None
and all_preds is not None
and all_labels is not None
and not self.args.batch_eval_metrics
):
eval_set_kwargs["losses"] = all_losses if "loss" in args.include_for_metrics else None
eval_set_kwargs["inputs"] = all_inputs if "inputs" in args.include_for_metrics else None
metrics = self.compute_metrics(
EvalPrediction(predictions=all_preds, label_ids=all_labels, **eval_set_kwargs)
)
elif metrics is None:
metrics = {}
# To be JSON-serializable, we need to remove numpy types or zero-d tensors
metrics = denumpify_detensorize(metrics)
if isinstance(all_losses, list) and all_losses:
metrics[f"{metric_key_prefix}_loss"] = np.concatenate(all_losses).mean().item()
elif isinstance(all_losses, np.ndarray):
metrics[f"{metric_key_prefix}_loss"] = all_losses.mean().item()
if hasattr(self, "jit_compilation_time"):
metrics[f"{metric_key_prefix}_jit_compilation_time"] = self.jit_compilation_time
if hasattr(self, "model_preparation_time"):
metrics[f"{metric_key_prefix}_model_preparation_time"] = self.model_preparation_time
# Prefix all keys with metric_key_prefix + '_'
for key in list(metrics.keys()):
if not key.startswith(f"{metric_key_prefix}_"):
metrics[f"{metric_key_prefix}_{key}"] = metrics.pop(key)
return EvalLoopOutput(predictions=all_preds, label_ids=all_labels, metrics=metrics, num_samples=num_samples)
````
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
My model is Evo2 with a classification head attached on top.
### Expected behavior
Normally the predictions and the labels should have the same size along the first dimension, i.e. the batch size. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39736/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39735 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39735/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39735/comments | https://api.github.com/repos/huggingface/transformers/issues/39735/events | https://github.com/huggingface/transformers/pull/39735 | 3,270,543,574 | PR_kwDOCUB6oc6hA4iB | 39,735 | handle multimodal models with tp_plan on the text_config | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-28T16:17:35 | 2025-08-04T13:38:38 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39735",
"html_url": "https://github.com/huggingface/transformers/pull/39735",
"diff_url": "https://github.com/huggingface/transformers/pull/39735.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39735.patch",
"merged_at": null
} | # What does this PR do?
Affects models like Gemma3
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39735/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39735/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39734 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39734/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39734/comments | https://api.github.com/repos/huggingface/transformers/issues/39734/events | https://github.com/huggingface/transformers/pull/39734 | 3,270,535,378 | PR_kwDOCUB6oc6hA2sT | 39,734 | Update IMPORTANT_MODELS list | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-28T16:15:26 | 2025-07-29T10:34:58 | 2025-07-29T10:34:57 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39734",
"html_url": "https://github.com/huggingface/transformers/pull/39734",
"diff_url": "https://github.com/huggingface/transformers/pull/39734.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39734.patch",
"merged_at": "2025-07-29T10:34:57"
} | Updated according to internal discussion ☺️ | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39734/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39733 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39733/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39733/comments | https://api.github.com/repos/huggingface/transformers/issues/39733/events | https://github.com/huggingface/transformers/pull/39733 | 3,270,217,839 | PR_kwDOCUB6oc6g_wVg | 39,733 | Fix: add back base model plan | {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-28T14:46:18 | 2025-07-29T09:37:35 | 2025-07-29T09:37:34 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39733",
"html_url": "https://github.com/huggingface/transformers/pull/39733",
"diff_url": "https://github.com/huggingface/transformers/pull/39733.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39733.patch",
"merged_at": "2025-07-29T09:37:34"
} | null | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39733/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39732 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39732/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39732/comments | https://api.github.com/repos/huggingface/transformers/issues/39732/events | https://github.com/huggingface/transformers/pull/39732 | 3,270,097,425 | PR_kwDOCUB6oc6g_VoA | 39,732 | Fix Layer device placement in Caches | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-28T14:15:20 | 2025-07-28T14:37:12 | 2025-07-28T14:37:11 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39732",
"html_url": "https://github.com/huggingface/transformers/pull/39732",
"diff_url": "https://github.com/huggingface/transformers/pull/39732.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39732.patch",
"merged_at": "2025-07-28T14:37:11"
} | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/39730!
If the Layer were initialized on the correct device from the beginning, this would not be needed. However, as explained in the comment and as was the case before, sometimes `generate` does not initialize them correctly (this should be fixable in the future and would be the preferred way; I will investigate this week). | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39732/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39731/comments | https://api.github.com/repos/huggingface/transformers/issues/39731/events | https://github.com/huggingface/transformers/pull/39731 | 3,269,974,532 | PR_kwDOCUB6oc6g-6Vo | 39,731 | update `GemmaIntegrationTest::test_model_2b_bf16_dola` again | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-28T13:41:47 | 2025-07-29T09:42:56 | 2025-07-29T09:42:55 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39731",
"html_url": "https://github.com/huggingface/transformers/pull/39731",
"diff_url": "https://github.com/huggingface/transformers/pull/39731.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39731.patch",
"merged_at": "2025-07-29T09:42:55"
} | # What does this PR do?
The BC fix (#39636) invalidates a previous PR,
"Update `GemmaIntegrationTest::test_model_2b_bf16_dola`" (#39362),
and the original expected value works again. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39731/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39730/comments | https://api.github.com/repos/huggingface/transformers/issues/39730/events | https://github.com/huggingface/transformers/issues/39730 | 3,269,662,855 | I_kwDOCUB6oc7C4xiH | 39,730 | device mismatch error when using `SlidingWindowLayer`. | {
"login": "nuxlear",
"id": 16524489,
"node_id": "MDQ6VXNlcjE2NTI0NDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/16524489?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nuxlear",
"html_url": "https://github.com/nuxlear",
"followers_url": "https://api.github.com/users/nuxlear/followers",
"following_url": "https://api.github.com/users/nuxlear/following{/other_user}",
"gists_url": "https://api.github.com/users/nuxlear/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nuxlear/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nuxlear/subscriptions",
"organizations_url": "https://api.github.com/users/nuxlear/orgs",
"repos_url": "https://api.github.com/users/nuxlear/repos",
"events_url": "https://api.github.com/users/nuxlear/events{/privacy}",
"received_events_url": "https://api.github.com/users/nuxlear/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
},
{
"id": 6735206706,
"node_id": "LA_kwDOCUB6oc8AAAABkXMZMg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Cache",
"name": "Cache",
"color": "1EA506",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-28T12:16:23 | 2025-07-28T14:37:12 | 2025-07-28T14:37:12 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Linux-5.15.0-1083-gcp-x86_64-with-glibc2.31
- Python version: 3.12.10
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.3
- Accelerate version: 1.6.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: yes (`device_map="auto"` with accelerate)
- Using GPU in script?: yes
- GPU type: NVIDIA A100-SXM4-40GB
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
1. Run the quickstart example on EXAONE-4.0 using multiple GPUs (we use 2 x A100-40GB)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "LGAI-EXAONE/EXAONE-4.0-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="bfloat16",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# choose your prompt
prompt = "Explain how wonderful you are"
prompt = "Explica lo increíble que eres"
prompt = "너가 얼마나 대단한지 설명해 봐"
messages = [
{"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
)
output = model.generate(
input_ids.to(model.device),
max_new_tokens=128,
do_sample=False,
)
print(tokenizer.decode(output[0]))
```
2. It raises an error when indexing the KV cache in `SlidingWindowLayer`.
```
Traceback (most recent call last):
File "/home/junwon_hwang/transformers_release/quickstart.py", line 27, in <module>
output = model.generate(
^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 2633, in generate
result = self._sample(
^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/generation/utils.py", line 3614, in _sample
outputs = self(**model_inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/accelerate/hooks.py", line 176, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/utils/generic.py", line 961, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/models/exaone4/modeling_exaone4.py", line 491, in forward
outputs: BaseModelOutputWithPast = self.model(
^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/utils/generic.py", line 1069, in wrapper
outputs = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/models/exaone4/modeling_exaone4.py", line 404, in forward
hidden_states = decoder_layer(
^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/modeling_layers.py", line 94, in __call__
return super().__call__(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/accelerate/hooks.py", line 176, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/models/exaone4/modeling_exaone4.py", line 291, in forward
hidden_states, _ = self.self_attn(
^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/accelerate/hooks.py", line 176, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/models/exaone4/modeling_exaone4.py", line 230, in forward
key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/cache_utils.py", line 967, in _wrapped_update
key_tensors, value_tensors = fn(self, key_states, value_states, layer_idx, cache_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/cache_utils.py", line 1176, in update
return self.layers[layer_idx].update(key_states, value_states, cache_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/junwon_hwang/uv_env/exaone4/.venv/lib/python3.12/site-packages/transformers/cache_utils.py", line 362, in update
k_out_shifted = self.keys[:, :, indices]
~~~~~~~~~^^^^^^^^^^^^^^^
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:1)
```
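For illustration, a minimal sketch (hypothetical helper, not the actual transformers fix) of the device alignment that avoids this error: PyTorch's advanced indexing requires the index tensor to be on CPU or on the same device as the indexed tensor, so moving the indices onto the cache's device before indexing sidesteps the `RuntimeError`.

```python
import torch

def shift_window(keys: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
    # Align the index tensor's device with the cached keys before advanced
    # indexing; otherwise PyTorch raises "indices should be either on cpu
    # or on the same device as the indexed tensor".
    return keys[:, :, indices.to(keys.device)]

# Shapes follow the (batch, heads, seq, head_dim) cache layout.
keys = torch.zeros(1, 2, 8, 4)
shifted = shift_window(keys, torch.arange(1, 8))
print(tuple(shifted.shape))  # (1, 2, 7, 4)
```

This only demonstrates the indexing constraint; the real fix in transformers may instead ensure each cache layer is created on the right device up front.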
### Expected behavior
It should run successfully when the model is deployed across multiple devices.
I guess matching the devices before indexing would resolve this issue. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39730/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39729/comments | https://api.github.com/repos/huggingface/transformers/issues/39729/events | https://github.com/huggingface/transformers/pull/39729 | 3,269,555,304 | PR_kwDOCUB6oc6g9dWB | 39,729 | Export private symbols | {
"login": "cyyever",
"id": 17618148,
"node_id": "MDQ6VXNlcjE3NjE4MTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyyever",
"html_url": "https://github.com/cyyever",
"followers_url": "https://api.github.com/users/cyyever/followers",
"following_url": "https://api.github.com/users/cyyever/following{/other_user}",
"gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyyever/subscriptions",
"organizations_url": "https://api.github.com/users/cyyever/orgs",
"repos_url": "https://api.github.com/users/cyyever/repos",
"events_url": "https://api.github.com/users/cyyever/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyyever/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-28T11:44:25 | 2025-08-01T14:08:19 | 2025-08-01T12:36:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39729",
"html_url": "https://github.com/huggingface/transformers/pull/39729",
"diff_url": "https://github.com/huggingface/transformers/pull/39729.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39729.patch",
"merged_at": "2025-08-01T12:36:48"
} | # What does this PR do?
This is a follow-up to #37340: mypy and pyright consider redundant aliases to be public symbols, so it is necessary to re-export symbols explicitly to make them happy.
The changes were performed with vim regexes and copy-paste.
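For illustration, a runnable sketch (hypothetical package and symbol names) of the convention involved. At runtime both import styles bind the name, but under `no-implicit-reexport` rules mypy/pyright treat only the aliased form `import X as X` (or names listed in `__all__`) as public re-exports from an `__init__.py`.

```python
import sys
import types

# Build a tiny package in memory so both import styles can run side by side.
internal = types.ModuleType("mypkg._internal")
exec("class Cache: pass\nclass DynamicCache: pass", internal.__dict__)

pkg = types.ModuleType("mypkg")
sys.modules["mypkg"] = pkg
sys.modules["mypkg._internal"] = internal

exec(
    "from mypkg._internal import Cache as Cache\n"  # explicit re-export (public)
    "from mypkg._internal import DynamicCache\n",   # implicit (type checkers flag as private)
    pkg.__dict__,
)

# Runtime behavior is identical; the distinction only matters to static analysis.
print(pkg.Cache.__name__, pkg.DynamicCache.__name__)  # Cache DynamicCache
```

The PR applies the first form (or `__all__` entries) so type checkers stop flagging downstream `from transformers import ...` usages.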
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39729/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39728/comments | https://api.github.com/repos/huggingface/transformers/issues/39728/events | https://github.com/huggingface/transformers/pull/39728 | 3,269,488,624 | PR_kwDOCUB6oc6g9Ogb | 39,728 | Fix mamba regression | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-28T11:24:21 | 2025-07-29T10:44:29 | 2025-07-29T10:44:29 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39728",
"html_url": "https://github.com/huggingface/transformers/pull/39728",
"diff_url": "https://github.com/huggingface/transformers/pull/39728.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39728.patch",
"merged_at": "2025-07-29T10:44:29"
} | This fixes the sneaky regression introduced in #38086 causing loading errors for falcon_mamba:
```
"line": "tests/models/falcon_mamba/test_modeling_falcon_mamba.py::FalconMambaModelTest::test_model_from_pretrained",
"trace": "(line 2593) RuntimeError: Error(s) in loading state_dict for FalconMambaMixer:"
},
{
"line": "tests/models/falcon_mamba/test_modeling_falcon_mamba.py::FalconMambaIntegrationTests::test_batched_generation",
"trace": "(line 2593) RuntimeError: Error(s) in loading state_dict for FalconMambaMixer:"
},
{
"line": "tests/models/falcon_mamba/test_modeling_falcon_mamba.py::FalconMambaIntegrationTests::test_generation_4bit",
"trace": "(line 2593) RuntimeError: Error(s) in loading state_dict for FalconMambaMixer:"
},
{
"line": "tests/models/falcon_mamba/test_modeling_falcon_mamba.py::FalconMambaIntegrationTests::test_generation_fp16",
"trace": "(line 2593) RuntimeError: Error(s) in loading state_dict for FalconMambaMixer:"
},
{
"line": "tests/models/falcon_mamba/test_modeling_falcon_mamba.py::FalconMambaIntegrationTests::test_generation_torch_compile",
"trace": "(line 2593) RuntimeError: Error(s) in loading state_dict for FalconMambaMixer:"
```
The gist of the problem: modular forces the `super().__init__` call to be at the top of `FalconMambaConfig.__init__`. However, before the modular rewrite it was at the bottom, which was critical for the `intermediate_size` value from the config file to take effect.
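To illustrate why the call position matters, here is a self-contained sketch with hypothetical config classes (not the actual FalconMamba code): attributes set before `super().__init__()` are visible to computations the parent performs at init time, because method lookup on `self` dispatches to the subclass.

```python
class ParentConfig:
    def __init__(self, hidden_size=16):
        self.hidden_size = hidden_size
        # The parent derives a value at init time via a method the subclass
        # may override, so subclass state must already exist at this point.
        self.intermediate_size = self.compute_intermediate_size()

    def compute_intermediate_size(self):
        return self.hidden_size * 2


class ChildConfig(ParentConfig):
    def __init__(self, hidden_size=16, multiplier=4):
        # Setting subclass state BEFORE calling super() lets the parent's
        # init-time computation see the overridden behavior. If super()
        # were forced to the top, compute_intermediate_size would run
        # before `multiplier` exists and raise AttributeError.
        self.multiplier = multiplier
        super().__init__(hidden_size)

    def compute_intermediate_size(self):
        return self.hidden_size * self.multiplier


cfg = ChildConfig()
print(cfg.intermediate_size)  # 64
```

The real `FalconMambaConfig` details differ, but the ordering hazard is the same shape.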
Second bug fixed: `tests/models/mamba/test_modeling_mamba.py::MambaIntegrationTests::test_compile_mamba_cache` was failing due to a misplaced import. | {
"login": "manueldeprada",
"id": 6536835,
"node_id": "MDQ6VXNlcjY1MzY4MzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6536835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manueldeprada",
"html_url": "https://github.com/manueldeprada",
"followers_url": "https://api.github.com/users/manueldeprada/followers",
"following_url": "https://api.github.com/users/manueldeprada/following{/other_user}",
"gists_url": "https://api.github.com/users/manueldeprada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manueldeprada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manueldeprada/subscriptions",
"organizations_url": "https://api.github.com/users/manueldeprada/orgs",
"repos_url": "https://api.github.com/users/manueldeprada/repos",
"events_url": "https://api.github.com/users/manueldeprada/events{/privacy}",
"received_events_url": "https://api.github.com/users/manueldeprada/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39728/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39727/comments | https://api.github.com/repos/huggingface/transformers/issues/39727/events | https://github.com/huggingface/transformers/pull/39727 | 3,269,377,564 | PR_kwDOCUB6oc6g81cD | 39,727 | Super tiny update | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-28T10:55:48 | 2025-07-30T10:21:41 | 2025-07-30T10:21:41 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39727",
"html_url": "https://github.com/huggingface/transformers/pull/39727",
"diff_url": "https://github.com/huggingface/transformers/pull/39727.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39727.patch",
"merged_at": "2025-07-30T10:21:41"
} | # What does this PR do?
Calling the processor with video inputs raises the warning `Unused kwargs "images"` because we pass `images=None` to the video processor. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39727/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39726/comments | https://api.github.com/repos/huggingface/transformers/issues/39726/events | https://github.com/huggingface/transformers/pull/39726 | 3,269,236,136 | PR_kwDOCUB6oc6g8VGf | 39,726 | [qwen-vl] fix beam search with videos | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-28T10:22:29 | 2025-08-11T07:21:04 | 2025-08-11T07:21:04 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39726",
"html_url": "https://github.com/huggingface/transformers/pull/39726",
"diff_url": "https://github.com/huggingface/transformers/pull/39726.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39726.patch",
"merged_at": "2025-08-11T07:21:04"
} | # What does this PR do?
As per the title, the tests covered only image inputs, so we never noticed the bug. It has been there since the model's release.
Fixes https://github.com/huggingface/transformers/issues/39723
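For context, beam search duplicates every per-sample model kwarg `num_beams` times before decoding, and the crash came from list-valued visual kwargs (like `second_per_grid_ts`) being treated as tensors. A rough, framework-free sketch of the expected expansion behavior (the function name is illustrative, not the actual transformers internals):

```python
# Illustrative sketch only: shows how beam search must repeat per-sample
# kwargs num_beams times; names loosely mirror the transformers internals.
def expand_for_beams(model_kwargs, num_beams):
    """Repeat each per-sample entry num_beams times along the batch dim."""
    expanded = {}
    for key, value in model_kwargs.items():
        if isinstance(value, list):
            # list-valued kwargs (e.g. second_per_grid_ts) are repeated per sample
            expanded[key] = [v for v in value for _ in range(num_beams)]
        else:
            # scalars/flags pass through unchanged
            expanded[key] = value
    return expanded

print(expand_for_beams({"second_per_grid_ts": [0.5, 1.0]}, 2))
# {'second_per_grid_ts': [0.5, 0.5, 1.0, 1.0]}
```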
| {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39726/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39725/comments | https://api.github.com/repos/huggingface/transformers/issues/39725/events | https://github.com/huggingface/transformers/pull/39725 | 3,268,971,698 | PR_kwDOCUB6oc6g7ZFX | 39,725 | Remove all expired deprecation cycles | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-28T09:22:27 | 2025-07-28T13:43:43 | 2025-07-28T13:43:41 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39725",
"html_url": "https://github.com/huggingface/transformers/pull/39725",
"diff_url": "https://github.com/huggingface/transformers/pull/39725.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39725.patch",
"merged_at": "2025-07-28T13:43:41"
} | # What does this PR do?
As per the title. Time for cleanups!
cc @gante @qubvel @zucchini-nlp as well, since you started some of those deprecation cycles!
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39725/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39724/comments | https://api.github.com/repos/huggingface/transformers/issues/39724/events | https://github.com/huggingface/transformers/pull/39724 | 3,268,851,147 | PR_kwDOCUB6oc6g6-JF | 39,724 | Fix int4 quantized model cannot work with cpu | {
"login": "yuanwu2017",
"id": 34643241,
"node_id": "MDQ6VXNlcjM0NjQzMjQx",
"avatar_url": "https://avatars.githubusercontent.com/u/34643241?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuanwu2017",
"html_url": "https://github.com/yuanwu2017",
"followers_url": "https://api.github.com/users/yuanwu2017/followers",
"following_url": "https://api.github.com/users/yuanwu2017/following{/other_user}",
"gists_url": "https://api.github.com/users/yuanwu2017/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuanwu2017/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuanwu2017/subscriptions",
"organizations_url": "https://api.github.com/users/yuanwu2017/orgs",
"repos_url": "https://api.github.com/users/yuanwu2017/repos",
"events_url": "https://api.github.com/users/yuanwu2017/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuanwu2017/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-28T08:53:51 | 2025-08-07T15:24:01 | 2025-08-07T15:24:01 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39724",
"html_url": "https://github.com/huggingface/transformers/pull/39724",
"diff_url": "https://github.com/huggingface/transformers/pull/39724.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39724.patch",
"merged_at": "2025-08-07T15:24:01"
} | # What does this PR do?
The CPU currently supports int4 quantized models, so we should not prevent their execution.
```
import torch
from transformers import TorchAoConfig, AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import Int4WeightOnlyConfig
from torchao.dtypes import Int4CPULayout
quant_config = Int4WeightOnlyConfig(group_size=32, layout=Int4CPULayout())
quantization_config = TorchAoConfig(quant_type=quant_config)
# Load and quantize the model
quantized_model = AutoModelForCausalLM.from_pretrained(
"meta-llama/Llama-3.1-8B-Instruct",
torch_dtype="auto",
device_map="cpu",
quantization_config=quantization_config
)
# save the quantized model
output_dir = "llama-3-8b-torchao-int8"
#print(f"quantized_model:{quantized_model}")
quantized_model.save_pretrained(output_dir, safe_serialization=False)
# reload the quantized model
reloaded_model = AutoModelForCausalLM.from_pretrained(
output_dir,
device_map="auto",
torch_dtype=torch.bfloat16
)
#print(f"reloaded_model:{reloaded_model}")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
input_text = "What are we having for dinner?"
input_ids = tokenizer(input_text, return_tensors="pt")
# Move input to the same device as the model
if hasattr(reloaded_model, 'device'):
input_ids = input_ids.to(reloaded_model.device)
elif next(reloaded_model.parameters()).device != torch.device('cpu'):
input_ids = input_ids.to(next(reloaded_model.parameters()).device)
output = reloaded_model.generate(**input_ids, max_new_tokens=10)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Fix the following issue:
<img width="1314" height="415" alt="image" src="https://github.com/user-attachments/assets/680e93e8-6df3-4e64-9fcd-ad57234591a5" />
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39724/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39723/comments | https://api.github.com/repos/huggingface/transformers/issues/39723/events | https://github.com/huggingface/transformers/issues/39723 | 3,268,727,368 | I_kwDOCUB6oc7C1NJI | 39,723 | `num_beams` > 1 leads to exception for Qwen2.5VL (Qwen family or all VLM models?) | {
"login": "iglaweb",
"id": 3032604,
"node_id": "MDQ6VXNlcjMwMzI2MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3032604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iglaweb",
"html_url": "https://github.com/iglaweb",
"followers_url": "https://api.github.com/users/iglaweb/followers",
"following_url": "https://api.github.com/users/iglaweb/following{/other_user}",
"gists_url": "https://api.github.com/users/iglaweb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iglaweb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iglaweb/subscriptions",
"organizations_url": "https://api.github.com/users/iglaweb/orgs",
"repos_url": "https://api.github.com/users/iglaweb/repos",
"events_url": "https://api.github.com/users/iglaweb/events{/privacy}",
"received_events_url": "https://api.github.com/users/iglaweb/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-28T08:23:44 | 2025-08-31T17:52:46 | 2025-08-11T07:21:05 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.2
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.10.18
- Huggingface_hub version: 0.33.4
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.5.1+cu118 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA GeForce RTX 4080
### Who can help?
@amyeroberts @qubvel @zucchini-nlp
### Information
- [x] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
num_beams = 2
do_sample = False
max_token_length = 8192
video_path = 'some video file path'
# load model
model_id = 'Qwen/Qwen2.5-VL-7B-Instruct'
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
model_id,
torch_dtype=torch.float16,
device_map='auto',
)
processor = AutoProcessor.from_pretrained(model_id)
messages = []
messages.append({
"role": "user",
"content": [
{"type": "text", "text": 'Describe a video in detail.'},
{"type": "video", "path": video_path},
],
})
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, dtype=model.dtype)
if num_beams > 1:
inp_model_kwargs = {'num_beams': num_beams}
else:
inp_model_kwargs = {}
outputs = model.generate(
**inputs,
do_sample=do_sample,
max_new_tokens=max_token_length,
**inp_model_kwargs
# num_return_sequences=2,
)
# ...
```
Whenever I make `num_beams` greater than 1, I get the following exception:
```python
Traceback (most recent call last):
File "hf_vlm_run_exps_eval.py", line 750, in <module>
run_main()
File "hf_vlm_run_exps_eval.py", line 734, in run_main
answers_dict = exec_llm_on_segment_videos(
File "hf_vlm_run_exps_eval.py", line 473, in exec_llm_on_segment_videos
output_text = extract_answer_from_llm(
File "hf_vlm_run_exps_eval.py", line 517, in extract_answer_from_llm
output_text = hf_base_model_wrapper.run_model_single_inference(
File "hf_base_model_wrapper.py", line 94, in run_model_single_inference
outputs = model.generate(
File "C:\Users\User\.conda\envs\env_grounded_sam2\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "C:\Users\User\.conda\envs\env_grounded_sam2\lib\site-packages\transformers\generation\utils.py", line 2637, in generate
input_ids, model_kwargs = self._expand_inputs_for_generation(
File "C:\Users\User\.conda\envs\env_grounded_sam2\lib\site-packages\transformers\models\qwen2_5_vl\modeling_qwen2_5_vl.py", line 1695, in _expand_inputs_for_generation
model_kwargs = _expand_dict_for_generation_visual(model_kwargs)
File "C:\Users\User\.conda\envs\env_grounded_sam2\lib\site-packages\transformers\models\qwen2_5_vl\modeling_qwen2_5_vl.py", line 1672, in _expand_dict_for_generation_visual
raise TypeError(
TypeError: Expected value for key 'second_per_grid_ts' to be a list, but got <class 'torch.Tensor'> instead.
```
Same issue if I add `num_return_sequences=2` to `.generate`.
### Expected behavior
I'd expect the function to run successfully and return multiple (different) sequences. | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39723/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/39723/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39722/comments | https://api.github.com/repos/huggingface/transformers/issues/39722/events | https://github.com/huggingface/transformers/pull/39722 | 3,268,488,465 | PR_kwDOCUB6oc6g5u31 | 39,722 | [Feat] Adding Intern-S1 | {
"login": "hhaAndroid",
"id": 17425982,
"node_id": "MDQ6VXNlcjE3NDI1OTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/17425982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hhaAndroid",
"html_url": "https://github.com/hhaAndroid",
"followers_url": "https://api.github.com/users/hhaAndroid/followers",
"following_url": "https://api.github.com/users/hhaAndroid/following{/other_user}",
"gists_url": "https://api.github.com/users/hhaAndroid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hhaAndroid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hhaAndroid/subscriptions",
"organizations_url": "https://api.github.com/users/hhaAndroid/orgs",
"repos_url": "https://api.github.com/users/hhaAndroid/repos",
"events_url": "https://api.github.com/users/hhaAndroid/events{/privacy}",
"received_events_url": "https://api.github.com/users/hhaAndroid/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-28T07:20:05 | 2025-10-27T09:52:36 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39722",
"html_url": "https://github.com/huggingface/transformers/pull/39722",
"diff_url": "https://github.com/huggingface/transformers/pull/39722.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39722.patch",
"merged_at": null
} | # Adding Intern-S1
This PR adds support for the Intern-S1 models. Please visit https://huggingface.co/internlm/Intern-S1
## Features
- Strong performance across language and vision reasoning benchmarks, especially scientific tasks.
- Continuously pretrained on a massive 5T token dataset, with over 50% specialized scientific data, embedding deep domain expertise.
- Dynamic tokenizer enables native understanding of molecular formulas, protein sequences, and seismic signals.
## Usage
```python
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch
model_checkpoint = 'xxxx'
processor = AutoProcessor.from_pretrained(model_checkpoint)
model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map="auto", torch_dtype="auto")
messages = [
{
"role": "user",
"content": [
{"type": "image",
"url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
{"type": "text", "text": "Please describe the image shortly."},
],
}
]
inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True,
return_tensors="pt").to(model.device, dtype=torch.bfloat16)
generate_ids = model.generate(**inputs, max_new_tokens=32768)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(decoded_output)
```
## Progress
- [x] add modeling py
- [x] add tokenizer.py
- [x] add test
- [x] fix lint
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39722/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39722/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39721/comments | https://api.github.com/repos/huggingface/transformers/issues/39721/events | https://github.com/huggingface/transformers/issues/39721 | 3,268,411,294 | I_kwDOCUB6oc7Cz_-e | 39,721 | Support loading Qwen3 MoE GGUF | {
"login": "ctcanbol",
"id": 103742287,
"node_id": "U_kgDOBi77Tw",
"avatar_url": "https://avatars.githubusercontent.com/u/103742287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctcanbol",
"html_url": "https://github.com/ctcanbol",
"followers_url": "https://api.github.com/users/ctcanbol/followers",
"following_url": "https://api.github.com/users/ctcanbol/following{/other_user}",
"gists_url": "https://api.github.com/users/ctcanbol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctcanbol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctcanbol/subscriptions",
"organizations_url": "https://api.github.com/users/ctcanbol/orgs",
"repos_url": "https://api.github.com/users/ctcanbol/repos",
"events_url": "https://api.github.com/users/ctcanbol/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctcanbol/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | null | [] | 2025-07-28T06:55:57 | 2025-07-29T14:57:27 | 2025-07-29T14:57:27 | CONTRIBUTOR | null | null | null | null | ### Feature request
Currently, loading GGUF versions of Qwen3 MoE models raises a "GGUF model with architecture qwen3moe is not supported yet" error.
### Motivation
With this change, Qwen3 MoE GGUF models will run successfully.
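As a sketch of what this would look like for users once supported, here is a minimal helper following transformers' GGUF loading path (the `gguf_file` argument of `from_pretrained`). The repo id and filename in the usage comment are illustrative assumptions, not verified paths.

```python
# Hedged sketch: loading a Qwen3 MoE GGUF checkpoint once support lands.
def load_gguf_model(repo_id: str, gguf_file: str):
    """Load (and dequantize) a GGUF checkpoint via transformers."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
    model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)
    return tokenizer, model

# Usage (assumed repo id and filename):
# tok, model = load_gguf_model("Qwen/Qwen3-30B-A3B-GGUF", "qwen3-30b-a3b-q4_k_m.gguf")
```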
### Your contribution
This PR resolves this issue: #39638 | {
"login": "ctcanbol",
"id": 103742287,
"node_id": "U_kgDOBi77Tw",
"avatar_url": "https://avatars.githubusercontent.com/u/103742287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctcanbol",
"html_url": "https://github.com/ctcanbol",
"followers_url": "https://api.github.com/users/ctcanbol/followers",
"following_url": "https://api.github.com/users/ctcanbol/following{/other_user}",
"gists_url": "https://api.github.com/users/ctcanbol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctcanbol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctcanbol/subscriptions",
"organizations_url": "https://api.github.com/users/ctcanbol/orgs",
"repos_url": "https://api.github.com/users/ctcanbol/repos",
"events_url": "https://api.github.com/users/ctcanbol/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctcanbol/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39721/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39720/comments | https://api.github.com/repos/huggingface/transformers/issues/39720/events | https://github.com/huggingface/transformers/issues/39720 | 3,268,254,818 | I_kwDOCUB6oc7CzZxi | 39,720 | [transformers==4.54.0] FSDP1 forward misalignment after loading state dict | {
"login": "ETOgaosion",
"id": 57280232,
"node_id": "MDQ6VXNlcjU3MjgwMjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/57280232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ETOgaosion",
"html_url": "https://github.com/ETOgaosion",
"followers_url": "https://api.github.com/users/ETOgaosion/followers",
"following_url": "https://api.github.com/users/ETOgaosion/following{/other_user}",
"gists_url": "https://api.github.com/users/ETOgaosion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ETOgaosion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ETOgaosion/subscriptions",
"organizations_url": "https://api.github.com/users/ETOgaosion/orgs",
"repos_url": "https://api.github.com/users/ETOgaosion/repos",
"events_url": "https://api.github.com/users/ETOgaosion/events{/privacy}",
"received_events_url": "https://api.github.com/users/ETOgaosion/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-28T06:02:20 | 2025-10-12T08:03:17 | 2025-10-12T08:03:17 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Linux-5.10.135.bsk.6-amd64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.6.0+cu124 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA L20
### Who can help?
Pytorch: @ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
This bug is related to the [verl](https://github.com/volcengine/verl) CI: https://github.com/volcengine/verl/actions/runs/16546080190/job/46795356728?pr=2771
1. Run docker image:
```sh
git clone https://github.com/volcengine/verl.git && cd verl
docker network create --label afd56a github_network_e80bcea206fe468baa29dd406591e667
docker run --gpus all --network github_network_e80bcea206fe468baa29dd406591e667 --shm-size=10g -it -v $(pwd):/workspace/verl verlai/verl:app-verl0.4-sglang0.4.6.post5-vllm0.8.5-mcore0.12.2-te2.2 bash
```
2. Prepare the environment in container:
```sh
cd verl
pip3 install --no-deps -e .[test]
pip3 install --upgrade transformers
```
3. Use the script in `tests/special_distributed/test_fsdp_ckpt.py`:
```
STRATEGY=fsdp torchrun --nproc_per_node=8 tests/special_distributed/test_fsdp_ckpt.py
```
Alternatively, for more information, you can paste the following script into `test_fsdp_ckpt.py`.
The script's logic:
1. run FSDP forward, backward, and optimizer step (1 step)
2. export **state_dict1** from the current model
3. save all checkpoints (model state_dict, optimizer, learning rate scheduler)
4. run 1 step
5. run a model forward pass to get logits
6. load the checkpoints (model state_dict, optimizer, learning rate scheduler)
7. compare the loaded model state_dict with **state_dict1**: **same**
8. run 1 step, which should align with step 4
9. compare outputs: **misaligned**; compare model state_dicts: **misaligned**
10. run a model forward pass to get logits, which should align with step 5, but: **misaligned**
```py
import os
import shutil
import tempfile
import torch
import torch.distributed
from torch.distributed import init_device_mesh
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision, ShardingStrategy
from transformers import AutoModelForCausalLM, AutoTokenizer, Qwen2Config, LlamaConfig
from verl.utils.checkpoint.fsdp_checkpoint_manager import FSDPCheckpointManager
from verl.utils.distributed import initialize_global_process_group
from verl.utils.fsdp_utils import MixedPrecisionPolicy, apply_fsdp2
def test_fsdp_ckpt(strategy="fsdp"):
assert torch.cuda.device_count() >= 2, "need at least 2 gpus for test"
local_rank, rank, world_size = initialize_global_process_group()
device_mesh = init_device_mesh("cuda", mesh_shape=(world_size,), mesh_dim_names=("dp",))
model_name = "deepseek-ai/deepseek-coder-1.3b-instruct"
config = LlamaConfig(num_hidden_layers=1)
with torch.device("cuda"):
model = AutoModelForCausalLM.from_config(
config=config, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2"
)
model = model.to(device="cuda")
# Wrap model with FSDP
if strategy == "fsdp":
mixed_precision = MixedPrecision(
param_dtype=torch.bfloat16, reduce_dtype=torch.float32, buffer_dtype=torch.float32
)
model = FSDP(
model,
use_orig_params=False,
device_id=torch.cuda.current_device(),
sharding_strategy=ShardingStrategy.FULL_SHARD,
mixed_precision=mixed_precision,
device_mesh=device_mesh,
)
else:
mp_policy = MixedPrecisionPolicy(
param_dtype=torch.bfloat16, reduce_dtype=torch.float32, cast_forward_inputs=True
)
fsdp_kwargs = {
"mesh": device_mesh,
"mp_policy": mp_policy,
}
apply_fsdp2(model, fsdp_kwargs, {})
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9)
# Create checkpoint manager
tokenizer = AutoTokenizer.from_pretrained(model_name)
checkpoint_manager = FSDPCheckpointManager(
model=model, optimizer=optimizer, lr_scheduler=lr_scheduler, tokenizer=tokenizer
)
# Generate sample input
batch_size = 2
seq_len = 32
vocab_size = 32000
# First input for initial update
input_ids1 = torch.randint(0, vocab_size, (batch_size, seq_len), device="cuda")
attention_mask1 = torch.ones_like(input_ids1)
# Second input for verification
input_ids2 = torch.randint(0, vocab_size, (batch_size, seq_len), device="cuda")
attention_mask2 = torch.ones_like(input_ids2)
# Step 1: Initial update and save checkpoint
outputs1 = model(input_ids=input_ids1, attention_mask=attention_mask1)
loss1 = outputs1.logits.mean()
loss1.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
# Save checkpoint after first update
temp_dir = tempfile.mkdtemp()
# checkpoint_path = os.path.join(temp_dir, "checkpoint")
checkpoint_path = "checkpoint"
checkpoint_manager.save_checkpoint(local_path=checkpoint_path, hdfs_path=None, global_step=0)
state_dict1 = model.state_dict()
# Step 2: Second update and forward pass
outputs2 = model(input_ids=input_ids2, attention_mask=attention_mask2)
loss2 = outputs2.logits.mean()
loss2.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
state_dict2 = model.state_dict()
# Record logits after second update
with torch.no_grad():
logits_before_load = model(input_ids=input_ids2, attention_mask=attention_mask2).logits
# Step 3: Load checkpoint and repeat second update
checkpoint_manager.load_checkpoint(checkpoint_path)
load_state_dict = model.state_dict()
for k in load_state_dict:
print(f"testing {k}")
torch.testing.assert_close(load_state_dict[k], state_dict1[k])
# Repeat the second update with same input
outputs3 = model(input_ids=input_ids2, attention_mask=attention_mask2)
loss3 = outputs3.logits.mean()
loss3.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
state_dict3 = model.state_dict()
for k in state_dict3:
print(f"testing {k}")
torch.testing.assert_close(load_state_dict[k], state_dict2[k])
torch.testing.assert_close(outputs2, outputs3, atol=0.0, rtol=0.0)
# Record logits after loaded checkpoint and update
with torch.no_grad():
logits_after_load = model(input_ids=input_ids2, attention_mask=attention_mask2).logits
# Step 4: Verify outputs match
print(f'logits_before_load: {logits_before_load}, logits_after_load: {logits_after_load}')
torch.testing.assert_close(logits_before_load, logits_after_load, atol=0.0, rtol=0.0)
print("Checkpoint save/load test passed!")
# Cleanup
shutil.rmtree(temp_dir)
torch.distributed.barrier()
torch.distributed.destroy_process_group()
if __name__ == "__main__":
strategy = os.environ.get("STRATEGY", "fsdp")
test_fsdp_ckpt(strategy=strategy)
```
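One subtlety when comparing state dicts the way the script above does: `model.state_dict()` can return live references to the parameters, so a snapshot captured before an optimizer step may silently change afterwards. A hedged, framework-free sketch of taking a detached snapshot first (with real tensors you would use `v.detach().clone()` instead of `deepcopy`):

```python
import copy

def snapshot_state_dict(state_dict):
    """Deep-copy every entry so later in-place updates cannot mutate the snapshot."""
    return {k: copy.deepcopy(v) for k, v in state_dict.items()}

# Illustrative usage with plain lists standing in for tensors.
live = {"embed.weight": [1.0, 2.0]}
frozen = snapshot_state_dict(live)
live["embed.weight"][0] = 99.0    # simulate an in-place optimizer update
print(frozen["embed.weight"][0])  # prints 1.0 — the snapshot is unaffected
```

Taking such a snapshot before and after `save_checkpoint`/`load_checkpoint` helps rule out aliasing as the cause of a state_dict "misalign".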
You can observe that:
- the loaded model state_dict matches the saved one
- after a forward pass, `outputs` differs
- after a backward pass, model weights diverge starting from the embedding layer
- after an optimizer update, the output logits are completely different
- besides Qwen, DeepSeek models also exhibit this bug
### Expected behavior
After loading the checkpoint, every forward, backward, and optimizer update operation should behave exactly the same.
FSDP2 is OK if you execute:
```
STRATEGY=fsdp2 torchrun --nproc_per_node=8 tests/special_distributed/test_fsdp_ckpt.py
```
An older transformers version is also OK if you execute:
```
pip install transformers==4.53.3
STRATEGY=fsdp torchrun --nproc_per_node=8 tests/special_distributed/test_fsdp_ckpt.py
``` | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39720/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39719/comments | https://api.github.com/repos/huggingface/transformers/issues/39719/events | https://github.com/huggingface/transformers/issues/39719 | 3,267,873,442 | I_kwDOCUB6oc7Cx8qi | 39,719 | vllm 0.10.0 load baidu/ERNIE-4.5-300B-A47B-Base-PT error | {
"login": "lianghao6",
"id": 35061094,
"node_id": "MDQ6VXNlcjM1MDYxMDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/35061094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lianghao6",
"html_url": "https://github.com/lianghao6",
"followers_url": "https://api.github.com/users/lianghao6/followers",
"following_url": "https://api.github.com/users/lianghao6/following{/other_user}",
"gists_url": "https://api.github.com/users/lianghao6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lianghao6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lianghao6/subscriptions",
"organizations_url": "https://api.github.com/users/lianghao6/orgs",
"repos_url": "https://api.github.com/users/lianghao6/repos",
"events_url": "https://api.github.com/users/lianghao6/events{/privacy}",
"received_events_url": "https://api.github.com/users/lianghao6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-28T03:09:05 | 2025-07-28T06:03:57 | 2025-07-28T06:03:57 | NONE | null | null | null | null | ### System Info
CentOS 8
Python: 3.9
transformers: 4.54.0
vllm: 0.10.0
CUDA: 12.9
### Reproduction
launch cmd: `vllm serve $local_model_path --host ***** --port 8081 --dtype bfloat16 --pipeline-parallel-size 1 --tensor-parallel-size 8 --trust-remote-code --enable-chunked-prefill --served-model-name /mnt/hdfs/zw04mlnn01/checkpoint/llm_platform/model/baidu/ERNIE-4.5-300B-A47B-Base-PT/main --max-model-len 131072 --max-num-batched-tokens 2048 --max-num-seqs 256 --gpu-memory-utilization 0.9 --disable-custom-all-reduce`
The launch throws a missing-weights error, even though the model checkpoint itself does not contain these weights:
`
WorkerProc failed to start.
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] Traceback (most recent call last):
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] File "/usr/local/lib64/python3.9/site-packages/vllm/v1/executor/multiproc_executor.py", line 485, in worker_main
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] worker = WorkerProc(*args, **kwargs)
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] File "/usr/local/lib64/python3.9/site-packages/vllm/v1/executor/multiproc_executor.py", line 382, in __init__
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] self.worker.load_model()
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] File "/usr/local/lib64/python3.9/site-packages/vllm/v1/worker/gpu_worker.py", line 201, in load_model
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] self.model_runner.load_model(eep_scale_up=eep_scale_up)
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] File "/usr/local/lib64/python3.9/site-packages/vllm/v1/worker/gpu_model_runner.py", line 1876, in load_model
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] self.model = model_loader.load_model(
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] File "/usr/local/lib64/python3.9/site-packages/vllm/model_executor/model_loader/base_loader.py", line 49, in load_model
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] self.load_weights(model, model_config)
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] File "/usr/local/lib64/python3.9/site-packages/vllm/model_executor/model_loader/default_loader.py", line 271, in load_weights
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] raise ValueError("Following weights were not initialized from "
[1;36m(VllmWorker rank=3 pid=4867)[0;0m ERROR 07-26 18:47:24 [multiproc_executor.py:511] ValueError: Following weights were not initialized from checkpoint: {'model.layers.19.mlp.shared_experts.gate_up_proj.weight', 'model.layers.37.mlp.shared_experts.gate_up_proj.weight', 'model.layers.31.mlp.shared_experts.down_proj.weight', 'model.layers.44.mlp.shared_experts.gate_up_proj.weight', 'model.layers.41.mlp.shared_experts.down_proj.weight', 'model.layers.46.mlp.shared_experts.gate_up_proj.weight', 'model.layers.48.mlp.shared_experts.gate_up_proj.weight', 'model.layers.33.mlp.shared_experts.gate_up_proj.weight', 'model.layers.39.mlp.shared_experts.gate_up_proj.weight', 'model.layers.10.mlp.shared_experts.gate_up_proj.weight', 'model.layers.26.mlp.shared_experts.down_proj.weight', 'model.layers.11.mlp.shared_experts.gate_up_proj.weight', 'model.layers.47.mlp.shared_experts.down_proj.weight', 'model.layers.46.mlp.shared_experts.down_proj.weight', 'model.layers.6.mlp.shared_experts.down_proj.weight', 'model.layers.32.mlp.shared_experts.down_proj.weight', 'model.layers.43.mlp.shared_experts.gate_up_proj.weight', 'model.layers.27.mlp.shared_experts.gate_up_proj.weight', 'model.layers.17.mlp.shared_experts.gate_up_proj.weight', 'model.layers.19.mlp.shared_experts.down_proj.weight', 'model.layers.32.mlp.shared_experts.gate_up_proj.weight', 'model.layers.16.mlp.shared_experts.gate_up_proj.weight', 'model.layers.29.mlp.shared_experts.gate_up_proj.weight', 'model.layers.52.mlp.shared_experts.gate_up_proj.weight', 'model.layers.25.mlp.shared_experts.gate_up_proj.weight', 'model.layers.20.mlp.shared_experts.gate_up_proj.weight', 'model.layers.17.mlp.shared_experts.down_proj.weight', 'model.layers.50.mlp.shared_experts.down_proj.weight', 'model.layers.53.mlp.shared_experts.down_proj.weight', 'model.layers.45.mlp.shared_experts.down_proj.weight', 'model.layers.4.mlp.shared_experts.down_proj.weight', 'model.layers.3.mlp.shared_experts.down_proj.weight', 
'model.layers.12.mlp.shared_experts.down_proj.weight', 'model.layers.12.mlp.shared_experts.gate_up_proj.weight', 'model.layers.37.mlp.shared_experts.down_proj.weight', 'model.layers.51.mlp.shared_experts.down_proj.weight', 'model.layers.20.mlp.shared_experts.down_proj.weight', 'model.layers.34.mlp.shared_experts.down_proj.weight', 'model.layers.35.mlp.shared_experts.gate_up_proj.weight', 'model.layers.18.mlp.shared_experts.gate_up_proj.weight', 'model.layers.38.mlp.shared_experts.gate_up_proj.weight', 'model.layers.27.mlp.shared_experts.down_proj.weight', 'model.layers.38.mlp.shared_experts.down_proj.weight', 'model.layers.34.mlp.shared_experts.gate_up_proj.weight', 'model.layers.42.mlp.shared_experts.down_proj.weight', 'model.layers.23.mlp.shared_experts.gate_up_proj.weight', 'model.layers.5.mlp.shared_experts.down_proj.weight', 'model.layers.35.mlp.shared_experts.down_proj.weight', 'model.layers.23.mlp.shared_experts.down_proj.weight', 'model.layers.40.mlp.shared_experts.down_proj.weight', 'model.layers.45.mlp.shared_experts.gate_up_proj.weight', 'model.layers.21.mlp.shared_experts.down_proj.weight', 'model.layers.39.mlp.shared_experts.down_proj.weight', 'model.layers.14.mlp.shared_experts.gate_up_proj.weight', 'model.layers.48.mlp.shared_experts.down_proj.weight', 'model.layers.41.mlp.shared_experts.gate_up_proj.weight', 'model.layers.10.mlp.shared_experts.down_proj.weight', 'model.layers.22.mlp.shared_experts.gate_up_proj.weight', 'model.layers.30.mlp.shared_experts.down_proj.weight', 'model.layers.47.mlp.shared_experts.gate_up_proj.weight', 'model.layers.24.mlp.shared_experts.down_proj.weight', 'model.layers.13.mlp.shared_experts.down_proj.weight', 'model.layers.15.mlp.shared_experts.down_proj.weight', 'model.layers.25.mlp.shared_experts.down_proj.weight', 'model.layers.29.mlp.shared_experts.down_proj.weight', 'model.layers.22.mlp.shared_experts.down_proj.weight', 'model.layers.7.mlp.shared_experts.down_proj.weight', 
'model.layers.44.mlp.shared_experts.down_proj.weight', 'model.layers.52.mlp.shared_experts.down_proj.weight', 'model.layers.7.mlp.shared_experts.gate_up_proj.weight', 'model.layers.42.mlp.shared_experts.gate_up_proj.weight', 'model.layers.14.mlp.shared_experts.down_proj.weight', 'model.layers.8.mlp.shared_experts.gate_up_proj.weight', 'model.layers.51.mlp.shared_experts.gate_up_proj.weight', 'model.layers.5.mlp.shared_experts.gate_up_proj.weight', 'model.layers.30.mlp.shared_experts.gate_up_proj.weight', 'model.layers.36.mlp.shared_experts.down_proj.weight', 'model.layers.13.mlp.shared_experts.gate_up_proj.weight', 'model.layers.53.mlp.shared_experts.gate_up_proj.weight', 'model.layers.31.mlp.shared_experts.gate_up_proj.weight', 'model.layers.49.mlp.shared_experts.gate_up_proj.weight', 'model.layers.4.mlp.shared_experts.gate_up_proj.weight', 'model.layers.49.mlp.shared_experts.down_proj.weight', 'model.layers.24.mlp.shared_experts.gate_up_proj.weight', 'model.layers.11.mlp.shared_experts.down_proj.weight', 'model.layers.26.mlp.shared_experts.gate_up_proj.weight', 'model.layers.3.mlp.shared_experts.gate_up_proj.weight', 'model.layers.9.mlp.shared_experts.gate_up_proj.weight', 'model.layers.50.mlp.shared_experts.gate_up_proj.weight', 'model.layers.28.mlp.shared_experts.gate_up_proj.weight', 'model.layers.6.mlp.shared_experts.gate_up_proj.weight', 'model.layers.16.mlp.shared_experts.down_proj.weight', 'model.layers.15.mlp.shared_experts.gate_up_proj.weight', 'model.layers.9.mlp.shared_experts.down_proj.weight', 'model.layers.28.mlp.shared_experts.down_proj.weight', 'model.layers.8.mlp.shared_experts.down_proj.weight', 'model.layers.36.mlp.shared_experts.gate_up_proj.weight', 'model.layers.40.mlp.shared_experts.gate_up_proj.weight', 'model.layers.43.mlp.shared_experts.down_proj.weight', 'model.layers.33.mlp.shared_experts.down_proj.weight', 'model.layers.21.mlp.shared_experts.gate_up_proj.weight', 'model.layers.18.mlp.shared_experts.down_proj.weight'}
`
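To confirm which weight names the checkpoint actually contains, one hedged approach is to compare the expected names against the checkpoint's sharded safetensors index (the file name `model.safetensors.index.json` and its `weight_map` key follow the usual safetensors sharded layout; the local path is an assumption):

```python
import json

def missing_from_index(index_path, expected_names):
    """Return the expected weight names absent from a sharded safetensors
    index file, whose "weight_map" maps tensor name -> shard file."""
    with open(index_path) as f:
        weight_map = json.load(f)["weight_map"]
    return sorted(set(expected_names) - set(weight_map))

# hypothetical usage against the local model path:
# missing_from_index(f"{local_model_path}/model.safetensors.index.json",
#                    ["model.layers.19.mlp.shared_experts.gate_up_proj.weight"])
```

If the `shared_experts` names come back as missing here too, the mismatch is between what vLLM's model definition expects and what the checkpoint provides, not a corrupted download.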
### Expected behavior
Why does the loader report missing weights that are not part of the model checkpoint or configuration? | {
"login": "lianghao6",
"id": 35061094,
"node_id": "MDQ6VXNlcjM1MDYxMDk0",
"avatar_url": "https://avatars.githubusercontent.com/u/35061094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lianghao6",
"html_url": "https://github.com/lianghao6",
"followers_url": "https://api.github.com/users/lianghao6/followers",
"following_url": "https://api.github.com/users/lianghao6/following{/other_user}",
"gists_url": "https://api.github.com/users/lianghao6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lianghao6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lianghao6/subscriptions",
"organizations_url": "https://api.github.com/users/lianghao6/orgs",
"repos_url": "https://api.github.com/users/lianghao6/repos",
"events_url": "https://api.github.com/users/lianghao6/events{/privacy}",
"received_events_url": "https://api.github.com/users/lianghao6/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39719/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39718/comments | https://api.github.com/repos/huggingface/transformers/issues/39718/events | https://github.com/huggingface/transformers/pull/39718 | 3,267,758,399 | PR_kwDOCUB6oc6g3NyJ | 39,718 | Fix SigLIP2 documentation model/processor mismatch | {
"login": "killerdevildog",
"id": 31830590,
"node_id": "MDQ6VXNlcjMxODMwNTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/31830590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/killerdevildog",
"html_url": "https://github.com/killerdevildog",
"followers_url": "https://api.github.com/users/killerdevildog/followers",
"following_url": "https://api.github.com/users/killerdevildog/following{/other_user}",
"gists_url": "https://api.github.com/users/killerdevildog/gists{/gist_id}",
"starred_url": "https://api.github.com/users/killerdevildog/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/killerdevildog/subscriptions",
"organizations_url": "https://api.github.com/users/killerdevildog/orgs",
"repos_url": "https://api.github.com/users/killerdevildog/repos",
"events_url": "https://api.github.com/users/killerdevildog/events{/privacy}",
"received_events_url": "https://api.github.com/users/killerdevildog/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-28T02:11:36 | 2025-07-28T02:11:36 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39718",
"html_url": "https://github.com/huggingface/transformers/pull/39718",
"diff_url": "https://github.com/huggingface/transformers/pull/39718.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39718.patch",
"merged_at": null
} | - Change processor from siglip2-base-patch16-224 to siglip2-large-patch16-512 to match model
- Ensures quantization example works correctly
- Fixes issue #39692
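The core of the fix is simply that the processor must be loaded from the same checkpoint as the model; a hedged sketch (the hub id is taken from the PR description and assumed correct):

```python
MODEL_ID = "google/siglip2-large-patch16-512"  # assumed HF hub id

def paired_checkpoints(model_id: str) -> tuple[str, str]:
    """Return matching (model, processor) checkpoint ids; they must be identical."""
    return model_id, model_id

model_id, processor_id = paired_checkpoints(MODEL_ID)
# Not executed here — illustrative only:
# from transformers import AutoModel, AutoProcessor
# model = AutoModel.from_pretrained(model_id)
# processor = AutoProcessor.from_pretrained(processor_id)
```

Using a processor from a different checkpoint (e.g. a 224-pixel processor with a 512-pixel model) produces inputs of the wrong resolution, which is what broke the quantization example.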
# What does this PR do?
Documentation Update
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39718/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39717/comments | https://api.github.com/repos/huggingface/transformers/issues/39717/events | https://github.com/huggingface/transformers/pull/39717 | 3,267,523,618 | PR_kwDOCUB6oc6g2bGO | 39,717 | Fix eval thread fork bomb | {
"login": "JustinVanHeek",
"id": 28269485,
"node_id": "MDQ6VXNlcjI4MjY5NDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/28269485?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JustinVanHeek",
"html_url": "https://github.com/JustinVanHeek",
"followers_url": "https://api.github.com/users/JustinVanHeek/followers",
"following_url": "https://api.github.com/users/JustinVanHeek/following{/other_user}",
"gists_url": "https://api.github.com/users/JustinVanHeek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JustinVanHeek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JustinVanHeek/subscriptions",
"organizations_url": "https://api.github.com/users/JustinVanHeek/orgs",
"repos_url": "https://api.github.com/users/JustinVanHeek/repos",
"events_url": "https://api.github.com/users/JustinVanHeek/events{/privacy}",
"received_events_url": "https://api.github.com/users/JustinVanHeek/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-27T23:30:49 | 2025-08-05T10:51:06 | 2025-08-05T10:50:33 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39717",
"html_url": "https://github.com/huggingface/transformers/pull/39717",
"diff_url": "https://github.com/huggingface/transformers/pull/39717.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39717.patch",
"merged_at": "2025-08-05T10:50:32"
} | # What does this PR do?
Currently, a fork bomb is created because the accelerator prepares a new dataloader at each evaluation when `dataloader_persistent_workers=True`.
There was an earlier attempt to fix this issue (#29538), but the problem still exists. Using version 4.39.0 (the first release that included that fix) together with the most recent accelerate available at that time (0.27.2), I was still able to reproduce the fork bomb.
This PR makes a minor change to the original fix: it stores the dataloader *after* it has been prepared with the accelerator, rather than before.
The author of the original fix left a comment in the code specifically about storing the dataloader prior to preparation, because `accelerator.free_memory()` destroys the references. I was unable to reproduce that problem when storing the prepared dataloader (even when calling `accelerator.free_memory()` before each evaluation), but I wouldn't mind hearing from @muellerzr why they did that, in case I have missed something. At the same time, he left [this comment](https://github.com/huggingface/transformers/issues/28469#issuecomment-2040363029) on the GitHub issue suggesting he intended to make these same proposed changes.
Fixes #28469
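A minimal sketch of the idea behind the fix — cache the dataloader *after* it has been prepared, so `accelerator.prepare()` (and the persistent-worker spawning it triggers) runs once rather than at every evaluation. Class and attribute names here are illustrative, not the actual Trainer internals:

```python
class PreparedDataLoaderCache:
    """Prepare an eval dataloader once and reuse it across evaluations."""

    def __init__(self, accelerator):
        self.accelerator = accelerator
        self._prepared_eval_dataloader = None

    def get_eval_dataloader(self, raw_dataloader):
        # Preparing spawns persistent workers; doing it only once
        # keeps the worker count constant across repeated evaluations.
        if self._prepared_eval_dataloader is None:
            self._prepared_eval_dataloader = self.accelerator.prepare(raw_dataloader)
        return self._prepared_eval_dataloader
```

With the earlier fix, the *unprepared* dataloader was cached and `prepare()` still ran per evaluation; caching the prepared object is what stops the worker processes from multiplying.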
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@SunMarc @amyeroberts | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39717/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39717/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39716/comments | https://api.github.com/repos/huggingface/transformers/issues/39716/events | https://github.com/huggingface/transformers/issues/39716 | 3,267,382,333 | I_kwDOCUB6oc7CwEw9 | 39,716 | [rank0]: ValueError: Your setup doesn't support bf16/gpu. | {
"login": "kadirnar",
"id": 36204372,
"node_id": "MDQ6VXNlcjM2MjA0Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/36204372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kadirnar",
"html_url": "https://github.com/kadirnar",
"followers_url": "https://api.github.com/users/kadirnar/followers",
"following_url": "https://api.github.com/users/kadirnar/following{/other_user}",
"gists_url": "https://api.github.com/users/kadirnar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kadirnar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kadirnar/subscriptions",
"organizations_url": "https://api.github.com/users/kadirnar/orgs",
"repos_url": "https://api.github.com/users/kadirnar/repos",
"events_url": "https://api.github.com/users/kadirnar/events{/privacy}",
"received_events_url": "https://api.github.com/users/kadirnar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-27T19:32:38 | 2025-09-30T13:27:57 | 2025-07-31T00:24:07 | CONTRIBUTOR | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Linux-5.15.0-144-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: True
- fsdp_config: {'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch_policy': 'BACKWARD_PRE', 'fsdp_forward_prefetch': True, 'fsdp_offload_params': False, 'fsdp_sharding_strategy': 1, 'fsdp_state_dict_type': 'FULL_STATE_DICT', 'fsdp_transformer_layer_cls_to_wrap': ['LlamaDecoderLayer']}
- downcast_bf16: False
- tpu_use_cluster: False
- tpu_use_sudo: False
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@eustlb @ArthurZucker @zach-huggingface @SunMarc
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
code:
```python
import torch
from datasets import load_dataset
from transformers import Trainer, TrainingArguments, AutoTokenizer
import numpy as np
from torch.distributed.fsdp import (
    FullyShardedDataParallel as FSDP, FullStateDictConfig, StateDictType)
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.distributed import DistributedSampler
import yaml
import wandb
from huggingface_hub import HfApi
from accelerate import Accelerator
import math
from liger_kernel.transformers import AutoLigerKernelForCausalLM
config_file = "/mnt/kadirnar/...x/.../configs/llama3_config.yaml"
with open(config_file, "r") as file:
config = yaml.safe_load(file)
dsn1 = config["text_QA_dataset"]
dsn2 = config["TTS_dataset"]
model_name = config["model_name"]
tokenizer_name = config["tokenizer_name"]
run_name = config["run_name"]
project_name = config["project_name"]
base_repo_id = config["save_folder"]
epochs = config["epochs"]
batch_size = config["batch_size"]
save_steps = config["save_steps"]
pad_token = config["pad_token"]
number_processes = config["number_processes"]
learning_rate = config["learning_rate"]
# Parse ratio from config (e.g., "2:1" -> 2)
ratio_str = config["ratio"]
initial_ratio = int(ratio_str.split(":")[0])
final_ratio = 1 # Target ratio is 1:1
class GradualRatioDataset(Dataset):
def __init__(self, dataset1, dataset2, batch_total, initial_ratio=2, final_ratio=1, total_steps=None):
self.dataset1 = dataset1
self.dataset2 = dataset2
self.batch_total = batch_total
self.initial_ratio = initial_ratio
self.final_ratio = final_ratio
self.total_steps = total_steps
# Calculate length based on the maximum ratio to ensure we have enough data
max_ratio = max(initial_ratio, final_ratio)
num_cycles_ds1 = len(dataset1) // (batch_total * max_ratio)
num_cycles_ds2 = len(dataset2) // batch_total
self.num_cycles = min(num_cycles_ds1, num_cycles_ds2)
# Use initial ratio for length calculation
self.length = self.num_cycles * (initial_ratio + 1) * batch_total
# For tracking current step
self.current_step = 0
def set_current_step(self, step):
"""Called by trainer to update current step for ratio calculation"""
self.current_step = step
def get_current_ratio(self):
"""Calculate current ratio based on training progress"""
if self.total_steps is None or self.total_steps == 0:
return self.initial_ratio
# Linear interpolation from initial_ratio to final_ratio
progress = min(self.current_step / self.total_steps, 1.0)
current_ratio = self.initial_ratio - (self.initial_ratio - self.final_ratio) * progress
return max(int(round(current_ratio)), self.final_ratio)
def __len__(self):
return int(self.length)
def __getitem__(self, index):
current_ratio = self.get_current_ratio()
# Compute the cycle length in terms of samples with current ratio
cycle_length = (current_ratio + 1) * self.batch_total
cycle = index // cycle_length
pos_in_cycle = index % cycle_length
if pos_in_cycle < current_ratio * self.batch_total:
# Text dataset (dataset1)
batch_in_cycle = pos_in_cycle // self.batch_total
sample_in_batch = pos_in_cycle % self.batch_total
ds1_index = cycle * current_ratio * self.batch_total + batch_in_cycle * self.batch_total + sample_in_batch
# Handle index overflow by wrapping around
if ds1_index >= len(self.dataset1):
ds1_index = ds1_index % len(self.dataset1)
return self.dataset1[ds1_index]
else:
# TTS dataset (dataset2)
sample_in_batch = pos_in_cycle - current_ratio * self.batch_total
ds2_index = cycle * self.batch_total + sample_in_batch
# Handle index overflow by wrapping around
if ds2_index >= len(self.dataset2):
ds2_index = ds2_index % len(self.dataset2)
return self.dataset2[ds2_index]
class AlternatingDistributedSampler(DistributedSampler):
def __init__(self, dataset, num_replicas=None, rank=None, shuffle=False):
super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle)
self.shuffle = shuffle
def __iter__(self):
indices = list(range(len(self.dataset)))
indices = indices[self.rank:self.total_size:self.num_replicas]
return iter(indices)
class FSDPTrainer(Trainer):
def __init__(self, *args, initial_ratio=2, final_ratio=1, **kwargs):
super().__init__(*args, **kwargs)
self.repo_id = base_repo_id
self.api = HfApi()
self.initial_ratio = initial_ratio
self.final_ratio = final_ratio
self.text_step = 0
self.audio_step = 0
# Calculate total steps for gradual ratio adjustment
self.total_steps = self.calculate_total_steps()
def calculate_total_steps(self):
"""Calculate total training steps"""
num_update_steps_per_epoch = len(self.train_dataset) // (
self.args.per_device_train_batch_size * self.args.gradient_accumulation_steps * self.args.world_size
)
return int(num_update_steps_per_epoch * self.args.num_train_epochs)
def get_current_ratio(self):
"""Get current ratio based on training progress"""
if self.total_steps == 0:
return self.initial_ratio
progress = min(self.state.global_step / self.total_steps, 1.0)
current_ratio = self.initial_ratio - (self.initial_ratio - self.final_ratio) * progress
return max(int(round(current_ratio)), self.final_ratio)
def get_train_dataloader(self):
# Update dataset with total steps info
if hasattr(self.train_dataset, 'total_steps'):
self.train_dataset.total_steps = self.total_steps
sampler = AlternatingDistributedSampler(
self.train_dataset,
num_replicas=torch.distributed.get_world_size(),
rank=torch.distributed.get_rank(),
shuffle=False,
)
return DataLoader(
self.train_dataset,
batch_size=self.args.per_device_train_batch_size,
sampler=sampler,
collate_fn=self.data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=0,
pin_memory=self.args.dataloader_pin_memory,
)
def training_step(self, model, inputs, num_items_in_batch=None):
# Update dataset with current step
if hasattr(self.train_dataset, 'set_current_step'):
self.train_dataset.set_current_step(self.state.global_step)
return super().training_step(model, inputs, num_items_in_batch)
def log(self, logs, start_time=None):
super().log(logs, start_time)
if self.is_world_process_zero():
current_ratio = self.get_current_ratio()
global_step = self.state.global_step
# Log current ratio
if "loss" in logs:
wandb.log({"current_ratio": current_ratio, "global_step": global_step})
# Each cycle is (current_ratio + 1) steps
cycle_length = current_ratio + 1
step_in_cycle = global_step % cycle_length
# Only log to wandb if 'loss' is in the logs dictionary
if "loss" in logs:
if step_in_cycle < current_ratio:
# Text loss
wandb.log({"text_loss": logs["loss"], "text_step": self.text_step})
self.text_step += 1
else:
# Audio loss
wandb.log({"audio_loss": logs["loss"], "audio_step": self.audio_step})
self.audio_step += 1
def save_model(self, output_dir=None, _internal_call=False):
if output_dir is None:
output_dir = self.args.output_dir
self.save_and_push_model(output_dir)
def save_and_push_model(self, output_dir):
save_policy = FullStateDictConfig(offload_to_cpu=True, rank0_only=True)
with FSDP.state_dict_type(self.model, StateDictType.FULL_STATE_DICT, save_policy):
cpu_state_dict = self.model.state_dict()
self.model.save_pretrained(output_dir, state_dict=cpu_state_dict)
def data_collator(features):
input_ids = [f["input_ids"] for f in features]
if any("attention_mask" not in f for f in features):
attention_mask = [[1]*len(ids) for ids in input_ids]
else:
attention_mask = [f["attention_mask"] for f in features]
if any("labels" not in f for f in features):
labels = input_ids
else:
labels = [f["labels"] for f in features]
input_ids = torch.nn.utils.rnn.pad_sequence([torch.tensor(
i, dtype=torch.long) for i in input_ids], batch_first=True, padding_value=pad_token)
attention_mask = torch.nn.utils.rnn.pad_sequence([torch.tensor(
m, dtype=torch.long) for m in attention_mask], batch_first=True, padding_value=0)
labels = torch.nn.utils.rnn.pad_sequence([torch.tensor(
l, dtype=torch.long) for l in labels], batch_first=True, padding_value=-100)
return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}
wandb.init(project=project_name, name=run_name)
# Setup accelerate (this initializes distributed environment)
accelerator = Accelerator()
device = accelerator.device
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
# Initialize model with proper dtype for Flash Attention 2.0
model = AutoLigerKernelForCausalLM.from_pretrained(
model_name,
attn_implementation="kernels-community/flash-attn3",
torch_dtype=torch.bfloat16,
)
# Initialize model on first GPU to make Flash Attention happy
if accelerator.is_local_main_process:
print(f"Pre-initializing model on {device} before FSDP")
model = model.to(device)
number_add_tokens = 7 * 4096 + 10
new_tokens = [f"<custom_token_{i}>" for i in range(0, number_add_tokens + 1)]
tokenizer.add_tokens(new_tokens)
model.resize_token_embeddings(len(tokenizer))
ds1 = load_dataset(dsn1, split="train")
ds2 = load_dataset(dsn2, split="train")
batch_total = batch_size * number_processes
# Calculate total steps for the dataset
num_update_steps_per_epoch = len(ds1) // (batch_size * number_processes)
total_steps = int(num_update_steps_per_epoch * epochs)
train_dataset = GradualRatioDataset(
ds1, ds2, batch_total,
initial_ratio=initial_ratio,
final_ratio=final_ratio,
total_steps=total_steps
)
training_args = TrainingArguments(
overwrite_output_dir=True,
num_train_epochs=epochs,
per_device_train_batch_size=batch_size,
logging_steps=1,
bf16=True,
output_dir=f"./{base_repo_id}",
fsdp="auto_wrap",
report_to="wandb",
save_steps=save_steps,
remove_unused_columns=True,
learning_rate=learning_rate,
lr_scheduler_type="cosine",
)
trainer = FSDPTrainer(
model=model,
args=training_args,
train_dataset=train_dataset,
data_collator=data_collator,
initial_ratio=initial_ratio,
final_ratio=final_ratio
)
print(f"Starting training with ratio progression: {initial_ratio}:1 -> {final_ratio}:1")
print(f"Total steps: {total_steps}")
trainer.train()
```
accelerate config:
```bash
{
"compute_environment": "LOCAL_MACHINE",
"debug": false,
"distributed_type": "MULTI_GPU",
"downcast_bf16": false,
"enable_cpu_affinity": true,
"machine_rank": 0,
"main_process_ip": "137.135.43.39",
"main_process_port": 29500,
"main_training_function": "main",
"mixed_precision": "bf16",
"num_machines": 2,
"num_processes": 16,
"rdzv_backend": "static",
"same_network": true,
"tpu_use_cluster": false,
"tpu_use_sudo": false,
"use_cpu": false,
"fsdp_config": {
"fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP",
"fsdp_backward_prefetch_policy": "BACKWARD_PRE",
"fsdp_forward_prefetch": true,
"fsdp_offload_params": false,
"fsdp_sharding_strategy": 1,
"fsdp_state_dict_type": "FULL_STATE_DICT",
"fsdp_transformer_layer_cls_to_wrap": ["LlamaDecoderLayer"] # SmolLM3DecoderLayer, LlamaDecoderLayer
},
}
```
### Expected behavior
I'm training a TTS model based on LLaMA. Training works on a single node with 8xH100. However, when I try to train in parallel across two servers with 8xH100 each, I encounter the error below.
How can I resolve it?
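A minimal per-rank sanity check (a sketch of my own, not part of the failing script, assuming only that PyTorch is installed) may help narrow this down: `TrainingArguments(bf16=True)` raises this error when the process cannot see a CUDA device that supports bf16, so printing what each rank actually sees on both servers often exposes the misconfiguration:

```python
# Hypothetical diagnostic, not from the original script: launch this with the
# same `accelerate launch` command on both machines before starting training.
import torch


def bf16_gpu_ok() -> bool:
    """True only if this process sees a CUDA device that reports bf16 support."""
    # Short-circuit so is_bf16_supported() is never queried without a device.
    return torch.cuda.is_available() and torch.cuda.is_bf16_supported()


if __name__ == "__main__":
    print(f"CUDA visible: {torch.cuda.is_available()}, bf16 ok: {bf16_gpu_ok()}")
```

If any rank on the second server prints `CUDA visible: False`, the problem is environment/launch configuration (e.g. `CUDA_VISIBLE_DEVICES`, mismatched `num_machines`/`machine_rank`) rather than the training script itself.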
```bash
[rank0]: Traceback (most recent call last):
[rank0]: File "/mnt/kadirnar/.../../train.py", line 285, in <module>
[rank0]: training_args = TrainingArguments(
[rank0]: File "<string>", line 133, in __init__
[rank0]: File "/mnt/kadirnar/.../.venv/lib/python3.10/site-packages/transformers/training_args.py", line 1731, in __post_init__
[rank0]: raise ValueError(error_message)
[rank0]: ValueError: Your setup doesn't support bf16/gpu.
``` | {
"login": "kadirnar",
"id": 36204372,
"node_id": "MDQ6VXNlcjM2MjA0Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/36204372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kadirnar",
"html_url": "https://github.com/kadirnar",
"followers_url": "https://api.github.com/users/kadirnar/followers",
"following_url": "https://api.github.com/users/kadirnar/following{/other_user}",
"gists_url": "https://api.github.com/users/kadirnar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kadirnar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kadirnar/subscriptions",
"organizations_url": "https://api.github.com/users/kadirnar/orgs",
"repos_url": "https://api.github.com/users/kadirnar/repos",
"events_url": "https://api.github.com/users/kadirnar/events{/privacy}",
"received_events_url": "https://api.github.com/users/kadirnar/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39716/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39716/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39715/comments | https://api.github.com/repos/huggingface/transformers/issues/39715/events | https://github.com/huggingface/transformers/pull/39715 | 3,267,367,941 | PR_kwDOCUB6oc6g19zD | 39,715 | 🌐 [i18n-KO] Translated `main_classes/processors.md` to Korean | {
"login": "luckyvickyricky",
"id": 75977640,
"node_id": "MDQ6VXNlcjc1OTc3NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/75977640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckyvickyricky",
"html_url": "https://github.com/luckyvickyricky",
"followers_url": "https://api.github.com/users/luckyvickyricky/followers",
"following_url": "https://api.github.com/users/luckyvickyricky/following{/other_user}",
"gists_url": "https://api.github.com/users/luckyvickyricky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckyvickyricky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckyvickyricky/subscriptions",
"organizations_url": "https://api.github.com/users/luckyvickyricky/orgs",
"repos_url": "https://api.github.com/users/luckyvickyricky/repos",
"events_url": "https://api.github.com/users/luckyvickyricky/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckyvickyricky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-27T19:08:32 | 2025-07-27T23:29:40 | 2025-07-27T23:29:31 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39715",
"html_url": "https://github.com/huggingface/transformers/pull/39715",
"diff_url": "https://github.com/huggingface/transformers/pull/39715.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39715.patch",
"merged_at": null
} | # What does this PR do?
Translated the `main_classes/processors.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR?
@jungnerd
@harheem
@4N3MONE
@yijun-lee
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- @stevhliu May you please review this PR? --> | {
"login": "luckyvickyricky",
"id": 75977640,
"node_id": "MDQ6VXNlcjc1OTc3NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/75977640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckyvickyricky",
"html_url": "https://github.com/luckyvickyricky",
"followers_url": "https://api.github.com/users/luckyvickyricky/followers",
"following_url": "https://api.github.com/users/luckyvickyricky/following{/other_user}",
"gists_url": "https://api.github.com/users/luckyvickyricky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckyvickyricky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckyvickyricky/subscriptions",
"organizations_url": "https://api.github.com/users/luckyvickyricky/orgs",
"repos_url": "https://api.github.com/users/luckyvickyricky/repos",
"events_url": "https://api.github.com/users/luckyvickyricky/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckyvickyricky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39715/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39714/comments | https://api.github.com/repos/huggingface/transformers/issues/39714/events | https://github.com/huggingface/transformers/pull/39714 | 3,267,352,586 | PR_kwDOCUB6oc6g1617 | 39,714 | 🌐 [i18n-KO] Translated `main_classes/backbones.md` to Korean | {
"login": "luckyvickyricky",
"id": 75977640,
"node_id": "MDQ6VXNlcjc1OTc3NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/75977640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckyvickyricky",
"html_url": "https://github.com/luckyvickyricky",
"followers_url": "https://api.github.com/users/luckyvickyricky/followers",
"following_url": "https://api.github.com/users/luckyvickyricky/following{/other_user}",
"gists_url": "https://api.github.com/users/luckyvickyricky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckyvickyricky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckyvickyricky/subscriptions",
"organizations_url": "https://api.github.com/users/luckyvickyricky/orgs",
"repos_url": "https://api.github.com/users/luckyvickyricky/repos",
"events_url": "https://api.github.com/users/luckyvickyricky/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckyvickyricky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-27T18:46:35 | 2025-10-23T23:28:55 | 2025-08-31T03:07:00 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39714",
"html_url": "https://github.com/huggingface/transformers/pull/39714",
"diff_url": "https://github.com/huggingface/transformers/pull/39714.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39714.patch",
"merged_at": null
} | # What does this PR do?
Translated the `main_classes/backbones.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR?
@jungnerd, @chelsseeey, @skwh54, @amo33, @maximizemaxwell, @D15M4S
@harheem
@4N3MONE
@yijun-lee
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
<!-- @stevhliu May you please review this PR? --> | {
"login": "luckyvickyricky",
"id": 75977640,
"node_id": "MDQ6VXNlcjc1OTc3NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/75977640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckyvickyricky",
"html_url": "https://github.com/luckyvickyricky",
"followers_url": "https://api.github.com/users/luckyvickyricky/followers",
"following_url": "https://api.github.com/users/luckyvickyricky/following{/other_user}",
"gists_url": "https://api.github.com/users/luckyvickyricky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckyvickyricky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckyvickyricky/subscriptions",
"organizations_url": "https://api.github.com/users/luckyvickyricky/orgs",
"repos_url": "https://api.github.com/users/luckyvickyricky/repos",
"events_url": "https://api.github.com/users/luckyvickyricky/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckyvickyricky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39714/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39713/comments | https://api.github.com/repos/huggingface/transformers/issues/39713/events | https://github.com/huggingface/transformers/pull/39713 | 3,267,318,505 | PR_kwDOCUB6oc6g1zXA | 39,713 | 🌐 [i18n-KO] Translated `main_classes/optimizer_schedules.md` to Korean | {
"login": "luckyvickyricky",
"id": 75977640,
"node_id": "MDQ6VXNlcjc1OTc3NjQw",
"avatar_url": "https://avatars.githubusercontent.com/u/75977640?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luckyvickyricky",
"html_url": "https://github.com/luckyvickyricky",
"followers_url": "https://api.github.com/users/luckyvickyricky/followers",
"following_url": "https://api.github.com/users/luckyvickyricky/following{/other_user}",
"gists_url": "https://api.github.com/users/luckyvickyricky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luckyvickyricky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luckyvickyricky/subscriptions",
"organizations_url": "https://api.github.com/users/luckyvickyricky/orgs",
"repos_url": "https://api.github.com/users/luckyvickyricky/repos",
"events_url": "https://api.github.com/users/luckyvickyricky/events{/privacy}",
"received_events_url": "https://api.github.com/users/luckyvickyricky/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-27T18:22:21 | 2025-08-13T15:23:09 | 2025-08-13T15:23:09 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39713",
"html_url": "https://github.com/huggingface/transformers/pull/39713",
"diff_url": "https://github.com/huggingface/transformers/pull/39713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39713.patch",
"merged_at": "2025-08-13T15:23:09"
} | # What does this PR do?
Translated the `main_classes/optimizer_schedules.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR?
@jungnerd, @chelsseeey, @skwh54, @amo33, @maximizemaxwell, @D15M4S
@harheem
@4N3MONE
@yijun-lee
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
@stevhliu May you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39713/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39713/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39712/comments | https://api.github.com/repos/huggingface/transformers/issues/39712/events | https://github.com/huggingface/transformers/pull/39712 | 3,267,238,072 | PR_kwDOCUB6oc6g1jf7 | 39,712 | 🌐 [i18n-KO] Translated `attention_interface.md` to Korean | {
"login": "Jwaminju",
"id": 49024958,
"node_id": "MDQ6VXNlcjQ5MDI0OTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/49024958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jwaminju",
"html_url": "https://github.com/Jwaminju",
"followers_url": "https://api.github.com/users/Jwaminju/followers",
"following_url": "https://api.github.com/users/Jwaminju/following{/other_user}",
"gists_url": "https://api.github.com/users/Jwaminju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jwaminju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jwaminju/subscriptions",
"organizations_url": "https://api.github.com/users/Jwaminju/orgs",
"repos_url": "https://api.github.com/users/Jwaminju/repos",
"events_url": "https://api.github.com/users/Jwaminju/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jwaminju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-27T16:23:31 | 2025-08-18T12:16:11 | 2025-08-18T12:15:16 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39712",
"html_url": "https://github.com/huggingface/transformers/pull/39712",
"diff_url": "https://github.com/huggingface/transformers/pull/39712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39712.patch",
"merged_at": null
} | # What does this PR do?
Translated the `attention_interface.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [ ] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
May you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd , @harheem
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review? (Final)
stevhliu May you please review this PR? | {
"login": "Jwaminju",
"id": 49024958,
"node_id": "MDQ6VXNlcjQ5MDI0OTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/49024958?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jwaminju",
"html_url": "https://github.com/Jwaminju",
"followers_url": "https://api.github.com/users/Jwaminju/followers",
"following_url": "https://api.github.com/users/Jwaminju/following{/other_user}",
"gists_url": "https://api.github.com/users/Jwaminju/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jwaminju/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jwaminju/subscriptions",
"organizations_url": "https://api.github.com/users/Jwaminju/orgs",
"repos_url": "https://api.github.com/users/Jwaminju/repos",
"events_url": "https://api.github.com/users/Jwaminju/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jwaminju/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39712/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39711/comments | https://api.github.com/repos/huggingface/transformers/issues/39711/events | https://github.com/huggingface/transformers/issues/39711 | 3,267,221,370 | I_kwDOCUB6oc7Cvdd6 | 39,711 | Max cache length issue with Gemma 3 | {
"login": "mitchelldehaven",
"id": 47208251,
"node_id": "MDQ6VXNlcjQ3MjA4MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/47208251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mitchelldehaven",
"html_url": "https://github.com/mitchelldehaven",
"followers_url": "https://api.github.com/users/mitchelldehaven/followers",
"following_url": "https://api.github.com/users/mitchelldehaven/following{/other_user}",
"gists_url": "https://api.github.com/users/mitchelldehaven/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mitchelldehaven/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mitchelldehaven/subscriptions",
"organizations_url": "https://api.github.com/users/mitchelldehaven/orgs",
"repos_url": "https://api.github.com/users/mitchelldehaven/repos",
"events_url": "https://api.github.com/users/mitchelldehaven/events{/privacy}",
"received_events_url": "https://api.github.com/users/mitchelldehaven/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-27T15:59:21 | 2025-07-29T15:12:52 | 2025-07-29T15:12:52 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Linux-6.11.0-1011-nvidia-x86_64-with-glibc2.39
- Python version: 3.11.13
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA RTX PRO 6000 Blackwell Workstation Edition
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Trying to generate with `AutoModelForCausalLM` and `google/gemma3-1b-it` runs into the following error when using `eager` attention (as the logging messages suggest, instead of `sdpa`).
```
ValueError: Max cache length is not consistent across layers: [512, 512, 512, 512, 512, 741, 512, 512, 512, 512, 512, 741, 512, 512, 512, 512, 512, 741, 512, 512, 512, 512, 512, 741, 512, 512]
```
The offending code seems to be here in `transformers/cache_utils.py`
```python
@property
def max_cache_len(self) -> int:
"""Return the maximum cache length of the cache"""
values = [layer.max_cache_len for layer in self.layers]
if len(set(values)) > 1:
raise ValueError(f"Max cache length is not consistent across layers: {values}")
return values[0]
```
This check seems to be inconsistent with Gemma3's hybrid layer structure, where five layers use sliding-window attention with size `512` and every sixth layer uses full causal attention.
The git blame shows this was changed recently in this commit: https://github.com/huggingface/transformers/commit/c338fd43b0be2c7f5d73e693fa6fb1b5e7a0bdc2
This worked fine in the previous transformers version I was running, but I needed to update my version recently and this error started occurring. I'm happy to post a PR for a fix if this is determined to be a bug.
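One possible direction for a fix (a sketch only, under the assumption that differing per-layer lengths are legitimate for hybrid models — not necessarily the patch merged in the linked PR) is to report the maximum over layers instead of raising when they disagree:

```python
# Sketch of a possible fix: report the maximum length across layers instead of
# raising when a hybrid model mixes sliding-window and full-attention layers.
# `layer_lengths` stands in for `[layer.max_cache_len for layer in self.layers]`.
def max_cache_len(layer_lengths: list) -> int:
    """Return the overall maximum cache length for a (possibly hybrid) cache."""
    if not layer_lengths:
        raise ValueError("Cache has no layers")
    # Hybrid models (e.g. Gemma 3) interleave sliding-window layers (512) with
    # full-attention layers, so differing values are expected, not an error.
    return max(layer_lengths)

# Gemma3-style hybrid pattern: five sliding layers, then one full-attention layer.
print(max_cache_len([512, 512, 512, 512, 512, 741]))  # -> 741
```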
### Expected behavior
That the model generates text as expected. | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39711/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39710/comments | https://api.github.com/repos/huggingface/transformers/issues/39710/events | https://github.com/huggingface/transformers/issues/39710 | 3,267,213,641 | I_kwDOCUB6oc7CvblJ | 39,710 | OWLv2 with visual prompt - alternative query embedding selection method | {
"login": "vvmnnnkv",
"id": 12518480,
"node_id": "MDQ6VXNlcjEyNTE4NDgw",
"avatar_url": "https://avatars.githubusercontent.com/u/12518480?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvmnnnkv",
"html_url": "https://github.com/vvmnnnkv",
"followers_url": "https://api.github.com/users/vvmnnnkv/followers",
"following_url": "https://api.github.com/users/vvmnnnkv/following{/other_user}",
"gists_url": "https://api.github.com/users/vvmnnnkv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvmnnnkv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvmnnnkv/subscriptions",
"organizations_url": "https://api.github.com/users/vvmnnnkv/orgs",
"repos_url": "https://api.github.com/users/vvmnnnkv/repos",
"events_url": "https://api.github.com/users/vvmnnnkv/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvmnnnkv/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-27T15:48:01 | 2025-08-15T09:54:48 | null | NONE | null | null | null | null | ### Feature request
When using OWLv2 with visual prompt, like this:
```python
import torch
from PIL import Image
from transformers import AutoProcessor, Owlv2ForObjectDetection

processor = AutoProcessor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")
target_image = Image.open(...)
prompt_image = Image.open(...)
inputs = processor(images=target_image, query_images=prompt_image, return_tensors="pt")
# forward pass
with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)
target_sizes = torch.Tensor([target_image.size[::-1]])
results = processor.post_process_image_guided_detection(outputs=outputs, threshold=0.9, nms_threshold=0.3, target_sizes=target_sizes)
```
The results sometimes come out weird, with detection boxes looking absolutely random.
I was playing with the zero-shot detection in HF space and found that visual prompt works much worse than text prompt. Then I found that choosing query embedding from prompt image differently can yield much better results. You can see examples with 2 differently annotated outputs and prompt image also annotated with selected query embedding: https://huggingface.co/spaces/vvmnnnkv/owlv2-visual-prompt
The feature request is to improve default method used in `Owlv2ForObjectDetection.embed_image_query` or perhaps to have an option to choose from different query embedding selection methods.
### Motivation
Existing `image_guided_detection` and `post_process_image_guided_detection` are very convenient; however, the way the query embedding is selected by default is not always optimal. Overcoming this requires figuring out the underlying reason and fiddling with code beyond these convenient methods. Maybe it makes sense to do query embedding selection differently by default or have a flag in `image_guided_detection` to choose methods.
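To illustrate the kind of alternative selection strategy being proposed (a minimal, self-contained sketch — not the library's actual `embed_image_query` implementation; the candidate embeddings and the mean-similarity criterion here are assumptions for illustration only):

```python
import math

# Illustrative sketch: one alternative strategy is to pick the patch embedding
# closest to the mean of all candidate embeddings from the prompt image, rather
# than the default box-based heuristic. `candidates` stands in for per-patch
# embeddings; real embeddings would be high-dimensional tensors.
def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_query_embedding(candidates):
    """Return the candidate embedding most similar to the candidates' mean."""
    dim = len(candidates[0])
    mean = [sum(c[i] for c in candidates) / len(candidates) for i in range(dim)]
    return max(candidates, key=lambda c: cosine(c, mean))
```

An `image_guided_detection` flag could then dispatch between selection functions like this one and the current default.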
### Your contribution
Can do a PR if you think it's worth doing | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39710/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39709/comments | https://api.github.com/repos/huggingface/transformers/issues/39709/events | https://github.com/huggingface/transformers/pull/39709 | 3,267,033,117 | PR_kwDOCUB6oc6g05-T | 39,709 | Fix checkpoint saving after interrupted training | {
"login": "ved1beta",
"id": 146507396,
"node_id": "U_kgDOCLuGhA",
"avatar_url": "https://avatars.githubusercontent.com/u/146507396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ved1beta",
"html_url": "https://github.com/ved1beta",
"followers_url": "https://api.github.com/users/ved1beta/followers",
"following_url": "https://api.github.com/users/ved1beta/following{/other_user}",
"gists_url": "https://api.github.com/users/ved1beta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ved1beta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ved1beta/subscriptions",
"organizations_url": "https://api.github.com/users/ved1beta/orgs",
"repos_url": "https://api.github.com/users/ved1beta/repos",
"events_url": "https://api.github.com/users/ved1beta/events{/privacy}",
"received_events_url": "https://api.github.com/users/ved1beta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-27T12:01:16 | 2025-07-27T12:01:29 | 2025-07-27T12:01:22 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39709",
"html_url": "https://github.com/huggingface/transformers/pull/39709",
"diff_url": "https://github.com/huggingface/transformers/pull/39709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39709.patch",
"merged_at": null
} | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ved1beta",
"id": 146507396,
"node_id": "U_kgDOCLuGhA",
"avatar_url": "https://avatars.githubusercontent.com/u/146507396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ved1beta",
"html_url": "https://github.com/ved1beta",
"followers_url": "https://api.github.com/users/ved1beta/followers",
"following_url": "https://api.github.com/users/ved1beta/following{/other_user}",
"gists_url": "https://api.github.com/users/ved1beta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ved1beta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ved1beta/subscriptions",
"organizations_url": "https://api.github.com/users/ved1beta/orgs",
"repos_url": "https://api.github.com/users/ved1beta/repos",
"events_url": "https://api.github.com/users/ved1beta/events{/privacy}",
"received_events_url": "https://api.github.com/users/ved1beta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39709/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39708/comments | https://api.github.com/repos/huggingface/transformers/issues/39708/events | https://github.com/huggingface/transformers/pull/39708 | 3,267,033,011 | PR_kwDOCUB6oc6g0581 | 39,708 | 🌐[i18n-bn] Introduce Bengali version of Transformers documentation | {
"login": "ankitdutta428",
"id": 159722886,
"node_id": "U_kgDOCYUthg",
"avatar_url": "https://avatars.githubusercontent.com/u/159722886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankitdutta428",
"html_url": "https://github.com/ankitdutta428",
"followers_url": "https://api.github.com/users/ankitdutta428/followers",
"following_url": "https://api.github.com/users/ankitdutta428/following{/other_user}",
"gists_url": "https://api.github.com/users/ankitdutta428/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankitdutta428/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankitdutta428/subscriptions",
"organizations_url": "https://api.github.com/users/ankitdutta428/orgs",
"repos_url": "https://api.github.com/users/ankitdutta428/repos",
"events_url": "https://api.github.com/users/ankitdutta428/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankitdutta428/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-27T12:01:07 | 2025-07-30T20:48:15 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39708",
"html_url": "https://github.com/huggingface/transformers/pull/39708",
"diff_url": "https://github.com/huggingface/transformers/pull/39708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39708.patch",
"merged_at": null
} | # What does this PR do?
This PR adds a new Bengali localization section to the documentation of the Transformers library.
- Introduces a new folder: `docs/source/bn/`
- Adds a localized file: `index.md` providing an introduction to the Transformers library in Bengali
- Adds a new file: `i18n/README_bn.md` to guide Bengali-speaking users with general information about the library
This is part of an effort to make the Hugging Face documentation more accessible to Bengali-speaking developers, researchers, and learners.
<!-- Remove if not applicable -->
Fixes: [#39705](https://github.com/huggingface/transformers/issues/39705)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Tagging @stevhliu for documentation-related review.
Any contributor interested in localization or multilingual documentation is welcome to review as well.
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39708/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39707/comments | https://api.github.com/repos/huggingface/transformers/issues/39707/events | https://github.com/huggingface/transformers/pull/39707 | 3,266,895,107 | PR_kwDOCUB6oc6g0dkE | 39,707 | Fix Causality Handling in Flash Attention to Support Bidirectional Attention | {
"login": "lucaswychan",
"id": 109060491,
"node_id": "U_kgDOBoAhiw",
"avatar_url": "https://avatars.githubusercontent.com/u/109060491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucaswychan",
"html_url": "https://github.com/lucaswychan",
"followers_url": "https://api.github.com/users/lucaswychan/followers",
"following_url": "https://api.github.com/users/lucaswychan/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaswychan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucaswychan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaswychan/subscriptions",
"organizations_url": "https://api.github.com/users/lucaswychan/orgs",
"repos_url": "https://api.github.com/users/lucaswychan/repos",
"events_url": "https://api.github.com/users/lucaswychan/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucaswychan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-27T08:43:37 | 2025-08-18T12:48:58 | 2025-08-12T16:16:28 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39707",
"html_url": "https://github.com/huggingface/transformers/pull/39707",
"diff_url": "https://github.com/huggingface/transformers/pull/39707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39707.patch",
"merged_at": "2025-08-12T16:16:28"
} | # What does this PR do?
Fixes #39554
The original implementation of the `flash_attention_forward` function is restricted to performing **causal attention** and does not support **bidirectional attention**. This behavior stems from how the function handles causality:
- The function relies on the `Attention.is_causal` attribute, which belongs to the `Attention` class in the model.
- By default, `Attention.is_causal` is set to `True`, enforcing causal attention (where the model only attends to previous tokens in a sequence).
- This attribute is never modified in the code, meaning the setting is effectively fixed.
- Additionally, while the function removes the `is_causal` key from the keyword arguments (`kwargs`) passed to it, this value is not used. Instead, it always defers to the hardcoded `Attention.is_causal` value.
As a result, even if a user attempts to pass `is_causal=False` through `kwargs` to enable bidirectional attention (where the model can attend to both previous and future tokens), the input is ignored. Consequently, the current setup makes it impossible to perform bidirectional attention when using flash attention.
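The intended behavior can be sketched in miniature (assumptions: `module_is_causal` stands in for `Attention.is_causal` and `kwargs` for the keyword arguments forwarded to the attention function — this is not the library's exact code):

```python
# Sketch of the intended behavior (not the exact library code): a caller-supplied
# `is_causal` kwarg should override the module-level default instead of being
# silently discarded.
def resolve_is_causal(kwargs: dict, module_is_causal: bool = True) -> bool:
    """Prefer an explicit `is_causal` from kwargs; fall back to the module attribute."""
    # Before the fix: the key was popped but never consulted, so the module
    # default always won and bidirectional attention was impossible.
    return kwargs.pop("is_causal", module_is_causal)

print(resolve_is_causal({"is_causal": False}))  # -> False (bidirectional now possible)
print(resolve_is_causal({}))                    # -> True  (causal default preserved)
```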
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @Cyrilvallez @vasqu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39707/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39707/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39706/comments | https://api.github.com/repos/huggingface/transformers/issues/39706/events | https://github.com/huggingface/transformers/pull/39706 | 3,266,768,892 | PR_kwDOCUB6oc6g0EdY | 39,706 | Remove python3.7 reference from doc link | {
"login": "st81",
"id": 58893365,
"node_id": "MDQ6VXNlcjU4ODkzMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58893365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/st81",
"html_url": "https://github.com/st81",
"followers_url": "https://api.github.com/users/st81/followers",
"following_url": "https://api.github.com/users/st81/following{/other_user}",
"gists_url": "https://api.github.com/users/st81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/st81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/st81/subscriptions",
"organizations_url": "https://api.github.com/users/st81/orgs",
"repos_url": "https://api.github.com/users/st81/repos",
"events_url": "https://api.github.com/users/st81/events{/privacy}",
"received_events_url": "https://api.github.com/users/st81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-27T06:36:40 | 2025-07-29T16:17:14 | 2025-07-29T16:17:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39706",
"html_url": "https://github.com/huggingface/transformers/pull/39706",
"diff_url": "https://github.com/huggingface/transformers/pull/39706.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39706.patch",
"merged_at": "2025-07-29T16:17:13"
} | # What does this PR do?
Updates the documentation link in the docstring to reference the general Python 3 docs instead of Python 3.7 which is now end-of-life. Referencing the main Python 3 docs is more appropriate and future-proof.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Documentation: @stevhliu | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39706/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39705/comments | https://api.github.com/repos/huggingface/transformers/issues/39705/events | https://github.com/huggingface/transformers/issues/39705 | 3,266,744,311 | I_kwDOCUB6oc7Cto_3 | 39,705 | [i18n-<bn>] Translating docs to <Bengali> | {
"login": "ankitdutta428",
"id": 159722886,
"node_id": "U_kgDOCYUthg",
"avatar_url": "https://avatars.githubusercontent.com/u/159722886?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ankitdutta428",
"html_url": "https://github.com/ankitdutta428",
"followers_url": "https://api.github.com/users/ankitdutta428/followers",
"following_url": "https://api.github.com/users/ankitdutta428/following{/other_user}",
"gists_url": "https://api.github.com/users/ankitdutta428/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ankitdutta428/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ankitdutta428/subscriptions",
"organizations_url": "https://api.github.com/users/ankitdutta428/orgs",
"repos_url": "https://api.github.com/users/ankitdutta428/repos",
"events_url": "https://api.github.com/users/ankitdutta428/events{/privacy}",
"received_events_url": "https://api.github.com/users/ankitdutta428/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | null | [] | 2025-07-27T06:18:20 | 2025-07-27T11:58:32 | null | NONE | null | null | null | null | <!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to the entire Bengali-speaking community 🌐 (currently 0 out of 267 complete).
Would you like to help translate? Please follow the 🤗 [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers 🤗).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @stevhliu for review.
* 🙋 If you'd like others to help you with the translation, you can also post in the 🤗 [forums](https://discuss.huggingface.co/).
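For reference, a first `bn/_toctree.yml` entry could look like the sketch below. This is only a hypothetical fragment: the `local` names must mirror the English version's [`_toctree.yml`](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml), and the `title` values (left in English here as placeholders) would carry your Bengali translations:

```yaml
# Hypothetical bn/_toctree.yml fragment — `local` mirrors docs/source/en/_toctree.yml
- sections:
  - local: index
    title: 🤗 Transformers   # replace with the translated page title
  - local: quicktour
    title: Quick tour        # replace with the translated page title
  title: Get started         # replace with the translated section title
```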
## Get Started section
- [x] [index.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.md) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.md) (waiting for initial PR to go through)
- [ ] [installation.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.md).
## Tutorial section
- [ ] [pipeline_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.md)
- [ ] [autoclass_tutorial.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.md)
- [ ] [preprocessing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.md)
- [ ] [training.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.md)
- [ ] [accelerate.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.md)
- [ ] [model_sharing.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.md)
- [ ] [multilingual.md](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.md)
<!--
Keep on adding more as you go 🔥
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39705/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39705/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39704/comments | https://api.github.com/repos/huggingface/transformers/issues/39704/events | https://github.com/huggingface/transformers/issues/39704 | 3,266,600,543 | I_kwDOCUB6oc7CtF5f | 39,704 | Iwin Transformer: Hierarchical Vision Transformer using Interleaved Windows | {
"login": "Cominder",
"id": 19499571,
"node_id": "MDQ6VXNlcjE5NDk5NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19499571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cominder",
"html_url": "https://github.com/Cominder",
"followers_url": "https://api.github.com/users/Cominder/followers",
"following_url": "https://api.github.com/users/Cominder/following{/other_user}",
"gists_url": "https://api.github.com/users/Cominder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cominder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cominder/subscriptions",
"organizations_url": "https://api.github.com/users/Cominder/orgs",
"repos_url": "https://api.github.com/users/Cominder/repos",
"events_url": "https://api.github.com/users/Cominder/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cominder/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-27T05:01:23 | 2025-08-24T14:50:33 | 2025-08-24T14:44:39 | NONE | null | null | null | null | We introduce Iwin Transformer, a novel position-embedding-free hierarchical vision transformer, which can be
fine-tuned directly from low to high resolution through the collaboration of innovative interleaved window attention and depthwise separable convolution. This approach uses attention to connect distant tokens and convolution to link neighboring tokens, enabling global information exchange within a single module and overcoming the Swin Transformer's limitation of requiring two consecutive blocks to approximate global attention. Extensive experiments on visual benchmarks demonstrate that Iwin Transformer is strongly competitive in tasks such as image classification (87.4 top-1 accuracy on ImageNet-1K), semantic segmentation, and video action recognition. We also validate the effectiveness of the core component of Iwin as a standalone module that can seamlessly replace the self-attention module in class-conditional image generation. The concepts and methods introduced by the Iwin Transformer have the potential to inspire future research, such as Iwin 3D Attention
in video generation. | {
"login": "Cominder",
"id": 19499571,
"node_id": "MDQ6VXNlcjE5NDk5NTcx",
"avatar_url": "https://avatars.githubusercontent.com/u/19499571?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cominder",
"html_url": "https://github.com/Cominder",
"followers_url": "https://api.github.com/users/Cominder/followers",
"following_url": "https://api.github.com/users/Cominder/following{/other_user}",
"gists_url": "https://api.github.com/users/Cominder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cominder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cominder/subscriptions",
"organizations_url": "https://api.github.com/users/Cominder/orgs",
"repos_url": "https://api.github.com/users/Cominder/repos",
"events_url": "https://api.github.com/users/Cominder/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cominder/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39704/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39703/comments | https://api.github.com/repos/huggingface/transformers/issues/39703/events | https://github.com/huggingface/transformers/issues/39703 | 3,266,297,198 | I_kwDOCUB6oc7Cr71u | 39,703 | ValueError: Number of image placeholders in the prompt does not match the number of images. internVL3 | {
"login": "hexiao0275",
"id": 68224838,
"node_id": "MDQ6VXNlcjY4MjI0ODM4",
"avatar_url": "https://avatars.githubusercontent.com/u/68224838?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hexiao0275",
"html_url": "https://github.com/hexiao0275",
"followers_url": "https://api.github.com/users/hexiao0275/followers",
"following_url": "https://api.github.com/users/hexiao0275/following{/other_user}",
"gists_url": "https://api.github.com/users/hexiao0275/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hexiao0275/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hexiao0275/subscriptions",
"organizations_url": "https://api.github.com/users/hexiao0275/orgs",
"repos_url": "https://api.github.com/users/hexiao0275/repos",
"events_url": "https://api.github.com/users/hexiao0275/events{/privacy}",
"received_events_url": "https://api.github.com/users/hexiao0275/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-26T23:43:46 | 2025-09-07T08:02:56 | 2025-09-07T08:02:56 | NONE | null | null | null | null | ```
Traceback (most recent call last):
  File "/data1/users/hexiao/project_hx/dpo_contrastive_dataset_contrusct/5-0-extrae-1118.py", line 47, in <module>
    inputs = processor(images=images, text=prompt, return_tensors="pt", padding=True).to(model.device)
  File "/data1/users/hexiao/miniconda3/envs/dpoatt/lib/python3.10/site-packages/transformers/models/internvl/processing_internvl.py", line 248, in __call__
    raise ValueError("Number of image placeholders in the prompt does not match the number of images.")
ValueError: Number of image placeholders in the prompt does not match the number of images.
``` | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39703/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39702/comments | https://api.github.com/repos/huggingface/transformers/issues/39702/events | https://github.com/huggingface/transformers/pull/39702 | 3,266,259,165 | PR_kwDOCUB6oc6gyYFL | 39,702 | Update mT5 model card | {
"login": "dross20",
"id": 73395516,
"node_id": "MDQ6VXNlcjczMzk1NTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/73395516?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dross20",
"html_url": "https://github.com/dross20",
"followers_url": "https://api.github.com/users/dross20/followers",
"following_url": "https://api.github.com/users/dross20/following{/other_user}",
"gists_url": "https://api.github.com/users/dross20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dross20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dross20/subscriptions",
"organizations_url": "https://api.github.com/users/dross20/orgs",
"repos_url": "https://api.github.com/users/dross20/repos",
"events_url": "https://api.github.com/users/dross20/events{/privacy}",
"received_events_url": "https://api.github.com/users/dross20/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-26T23:11:48 | 2025-07-30T15:35:05 | 2025-07-30T15:35:05 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39702",
"html_url": "https://github.com/huggingface/transformers/pull/39702",
"diff_url": "https://github.com/huggingface/transformers/pull/39702.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39702.patch",
"merged_at": "2025-07-30T15:35:04"
} | # What does this PR do?
This PR replaces the mT5 model card with a new model card matching the format introduced in #36979.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@stevhliu
## Notes
- I left out the "Resources" section since I couldn't find any learning materials associated with this model.
| {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39702/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39701/comments | https://api.github.com/repos/huggingface/transformers/issues/39701/events | https://github.com/huggingface/transformers/pull/39701 | 3,266,171,133 | PR_kwDOCUB6oc6gyF9n | 39,701 | standardized BARThez model card | {
"login": "EthanV431",
"id": 113210015,
"node_id": "U_kgDOBr9ynw",
"avatar_url": "https://avatars.githubusercontent.com/u/113210015?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EthanV431",
"html_url": "https://github.com/EthanV431",
"followers_url": "https://api.github.com/users/EthanV431/followers",
"following_url": "https://api.github.com/users/EthanV431/following{/other_user}",
"gists_url": "https://api.github.com/users/EthanV431/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EthanV431/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EthanV431/subscriptions",
"organizations_url": "https://api.github.com/users/EthanV431/orgs",
"repos_url": "https://api.github.com/users/EthanV431/repos",
"events_url": "https://api.github.com/users/EthanV431/events{/privacy}",
"received_events_url": "https://api.github.com/users/EthanV431/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-26T21:57:22 | 2025-07-30T18:51:22 | 2025-07-30T15:33:13 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39701",
"html_url": "https://github.com/huggingface/transformers/pull/39701",
"diff_url": "https://github.com/huggingface/transformers/pull/39701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39701.patch",
"merged_at": "2025-07-30T15:33:13"
} | # What does this PR do?
This standardizes the BARThez model card according to the template outlined in #36979.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
@stevhliu | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39701/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39701/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39700/comments | https://api.github.com/repos/huggingface/transformers/issues/39700/events | https://github.com/huggingface/transformers/pull/39700 | 3,266,010,558 | PR_kwDOCUB6oc6gxmWw | 39,700 | properly save model across tensor parallel processes | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-26T19:07:02 | 2025-07-28T13:36:41 | 2025-07-28T13:36:41 | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39700",
"html_url": "https://github.com/huggingface/transformers/pull/39700",
"diff_url": "https://github.com/huggingface/transformers/pull/39700.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39700.patch",
"merged_at": null
} | # What does this PR do?
Tensor parallel saving currently doesn't work: the default path checks `args.should_save`, which saves only on rank 0, but with TP we need to coordinate across ranks to gather the DTensors.
This is still in draft as I need to solve for FSDP + TP too.
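The coordination issue can be illustrated with a toy sketch (the function and variable names below are invented for illustration — this is not the actual Trainer code): with tensor parallelism, every TP rank holds a shard of each DTensor, so every rank must enter the save path to participate in the gather, even though only one rank ultimately writes the checkpoint to disk.

```python
# Toy illustration of the save-coordination logic under tensor parallelism.
# Names are hypothetical; the real logic lives in Trainer._save_checkpoint.

def should_enter_save_path(rank: int, is_main_process: bool, tp_enabled: bool) -> bool:
    """Default behavior saves only on the main process; under TP, all ranks
    must enter the save path so the DTensor shards can be gathered."""
    if tp_enabled:
        return True          # every TP rank joins the collective gather
    return is_main_process   # otherwise, rank 0 alone saves

# Under TP, all 4 ranks enter the save path; without TP, only rank 0 does.
participants_tp = [r for r in range(4) if should_enter_save_path(r, r == 0, tp_enabled=True)]
participants_default = [r for r in range(4) if should_enter_save_path(r, r == 0, tp_enabled=False)]
print(participants_tp)       # [0, 1, 2, 3]
print(participants_default)  # [0]
```

The writing step itself can then still be restricted to rank 0, as long as every rank has first taken part in gathering the full tensors.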
@S1ro1
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39700/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39699/comments | https://api.github.com/repos/huggingface/transformers/issues/39699/events | https://github.com/huggingface/transformers/issues/39699 | 3,265,946,360 | I_kwDOCUB6oc7CqmL4 | 39,699 | No flag to support Conditional Parameter Loading for gemma-3n-E2B models in transformer | {
"login": "aakashgaur01",
"id": 88384530,
"node_id": "MDQ6VXNlcjg4Mzg0NTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/88384530?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aakashgaur01",
"html_url": "https://github.com/aakashgaur01",
"followers_url": "https://api.github.com/users/aakashgaur01/followers",
"following_url": "https://api.github.com/users/aakashgaur01/following{/other_user}",
"gists_url": "https://api.github.com/users/aakashgaur01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aakashgaur01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aakashgaur01/subscriptions",
"organizations_url": "https://api.github.com/users/aakashgaur01/orgs",
"repos_url": "https://api.github.com/users/aakashgaur01/repos",
"events_url": "https://api.github.com/users/aakashgaur01/events{/privacy}",
"received_events_url": "https://api.github.com/users/aakashgaur01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-26T18:08:00 | 2025-09-03T08:02:58 | 2025-09-03T08:02:58 | NONE | null | null | null | null | ### System Info
Hi,
While a lot has been said about conditional parameter loading and the reduced memory footprint of gemma-3n-E2B and gemma-3n-E4B, no configuration for it is currently visible in transformers.
Is it possible to get the related configuration/code/documentation so that we can actually load a lower-memory model?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

GEMMA_MODEL_ID = "google/gemma-3n-E2B-it"
print("Loading processor")
processor = AutoProcessor.from_pretrained(GEMMA_MODEL_ID)
print("Loading model")
model = AutoModelForImageTextToText.from_pretrained(
    GEMMA_MODEL_ID, torch_dtype="auto", device_map=None).to("cpu")
```
There is no flag for enabling conditional parameter loading or PLE.
### Expected behavior
A flag with which conditional parameter loading can be enabled to save memory. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39699/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39699/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39698/comments | https://api.github.com/repos/huggingface/transformers/issues/39698/events | https://github.com/huggingface/transformers/pull/39698 | 3,265,909,243 | PR_kwDOCUB6oc6gxRdQ | 39,698 | Fix exaone4 layer_types ZeroDivision/TypeError when sliding_window_pattern is None/"LLLG" | {
"login": "wheeze01",
"id": 54202163,
"node_id": "MDQ6VXNlcjU0MjAyMTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54202163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wheeze01",
"html_url": "https://github.com/wheeze01",
"followers_url": "https://api.github.com/users/wheeze01/followers",
"following_url": "https://api.github.com/users/wheeze01/following{/other_user}",
"gists_url": "https://api.github.com/users/wheeze01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wheeze01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wheeze01/subscriptions",
"organizations_url": "https://api.github.com/users/wheeze01/orgs",
"repos_url": "https://api.github.com/users/wheeze01/repos",
"events_url": "https://api.github.com/users/wheeze01/events{/privacy}",
"received_events_url": "https://api.github.com/users/wheeze01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-26T17:17:51 | 2025-07-30T17:32:20 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39698",
"html_url": "https://github.com/huggingface/transformers/pull/39698",
"diff_url": "https://github.com/huggingface/transformers/pull/39698.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39698.patch",
"merged_at": null
} | # What does this PR do?
Fixes a crash in `Exaone4Config.__init__` when `sliding_window_pattern` is `None` (EXAONE-4.0-1.2B) or a string like `"LLLG"` (EXAONE-4.0-32B). The original code unconditionally performed a modulo operation on `sliding_window_pattern`, causing either a `ZeroDivisionError` or a `TypeError`. This PR also removes an incorrect `"sliding_window"` key check that left `_attn_implementation` unset. Now:
* We branch safely on three cases for `sliding_window_pattern`:
1. `None` or `0` → all layers use `"full_attention"`.
2. `str` (e.g. `"LLLG"`) → map each character (`L` → `"sliding_attention"`, others → `"full_attention"`), repeat to cover all layers, and force the final layer to `"full_attention"`.
3. positive `int` (e.g. `4`) → every `n`‑th layer is `"full_attention"`, others `"sliding_attention"`, final layer forced `"full_attention"`.
* We remove the incorrect check for `"sliding_window"` in `layer_types` and no longer force `_attn_implementation="hybrid"`; we let Hugging Face’s internal `_check_and_adjust_attn_implementation` decide the proper backend (e.g., `"eager"`, `"sdpa"`, `"flash_attention_*"`).
This resolves both the division/modulo crash and the risk of `_attn_implementation` remaining `None` downstream.
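The three cases above can be sketched as a small standalone helper (hypothetical code for illustration only, not the actual `configuration_exaone4.py` implementation):

```python
def build_layer_types(num_layers, pattern):
    # Sketch of the layer_types construction described above.
    if not pattern:  # case 1: None or 0 -> global attention everywhere
        types = ["full_attention"] * num_layers
    elif isinstance(pattern, str):  # case 2: e.g. "LLLG"
        mapped = ["sliding_attention" if c == "L" else "full_attention"
                  for c in pattern]
        # repeat the pattern to cover all layers
        types = [mapped[i % len(mapped)] for i in range(num_layers)]
    else:  # case 3: positive int -> every n-th layer is global
        types = ["full_attention" if (i + 1) % pattern == 0
                 else "sliding_attention" for i in range(num_layers)]
    types[-1] = "full_attention"  # final layer forced to full attention
    return types
```

For example, `build_layer_types(4, 4)` yields three `"sliding_attention"` layers followed by one `"full_attention"` layer, and a `None` pattern yields `"full_attention"` for every layer.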
Fixes #39696
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- EXAONE-4.0: @lgai-exaone
Models:
- text models: @ArthurZucker | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39698/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39697/comments | https://api.github.com/repos/huggingface/transformers/issues/39697/events | https://github.com/huggingface/transformers/pull/39697 | 3,265,895,404 | PR_kwDOCUB6oc6gxOrI | 39,697 | use untyped storage for dtensors due to deprecation | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-26T16:57:55 | 2025-08-04T13:20:16 | null | CONTRIBUTOR | null | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39697",
"html_url": "https://github.com/huggingface/transformers/pull/39697",
"diff_url": "https://github.com/huggingface/transformers/pull/39697.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39697.patch",
"merged_at": null
} | # What does this PR do?
TypedStorage has been deprecated for a while now (https://github.com/pytorch/pytorch/commit/ee28b865ee9c87cce4db0011987baf8d125cc857), and I'm getting this warning when using TP:
```
/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/transformers/pytorch_utils.py:302: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
```
Similarly, the `nbytes` size should be based on the underlying storage, matching the accelerate implementation:
https://github.com/huggingface/accelerate/blob/2f075c724ccb4e38fade64db3b0627ca167b5fd2/src/accelerate/utils/modeling.py#L201-L202
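As a rough sketch of the size computation (plain Python for illustration; with torch this corresponds to `tensor.untyped_storage().nbytes()` instead of the deprecated `tensor.storage()`):

```python
def dense_storage_nbytes(nelement, element_size):
    # For a contiguous dense tensor, the untyped storage size in bytes
    # equals the element count times the per-element size.
    return nelement * element_size

# e.g. a (1024, 1024) bf16 weight at 2 bytes per element
print(dense_storage_nbytes(1024 * 1024, 2))  # 2097152
```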
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39697/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39696/comments | https://api.github.com/repos/huggingface/transformers/issues/39696/events | https://github.com/huggingface/transformers/issues/39696 | 3,265,894,538 | I_kwDOCUB6oc7CqZiK | 39,696 | [exaone4] ZeroDivisionError/TypeError when sliding_window_pattern is None/"LLLG" and _attn_implementation stays None (4.54.0 & main) | {
"login": "wheeze01",
"id": 54202163,
"node_id": "MDQ6VXNlcjU0MjAyMTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54202163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wheeze01",
"html_url": "https://github.com/wheeze01",
"followers_url": "https://api.github.com/users/wheeze01/followers",
"following_url": "https://api.github.com/users/wheeze01/following{/other_user}",
"gists_url": "https://api.github.com/users/wheeze01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wheeze01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wheeze01/subscriptions",
"organizations_url": "https://api.github.com/users/wheeze01/orgs",
"repos_url": "https://api.github.com/users/wheeze01/repos",
"events_url": "https://api.github.com/users/wheeze01/events{/privacy}",
"received_events_url": "https://api.github.com/users/wheeze01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-26T16:56:31 | 2025-07-29T02:46:39 | 2025-07-29T02:46:39 | NONE | null | null | null | null | ### System Info
```
- `transformers` version: 4.54.0
- Platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
- Python version: 3.11.11
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.3
- Accelerate version: 1.8.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: FSDP
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- fsdp_config: {'fsdp_activation_checkpointing': False, 'fsdp_auto_wrap_policy': 'TRANSFORMER_BASED_WRAP', 'fsdp_backward_prefetch': 'BACKWARD_PRE', 'fsdp_cpu_ram_efficient_loading': True, 'fsdp_forward_prefetch': False, 'fsdp_offload_params': False, 'fsdp_reshard_after_forward': 'FULL_SHARD', 'fsdp_state_dict_type': 'SHARDED_STATE_DICT', 'fsdp_sync_module_states': True, 'fsdp_use_orig_params': True, 'fsdp_version': 1}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.0+cu126 (CUDA)
- Tensorflow version (GPU?): 2.18.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA RTX A6000
```
### Who can help?
@lgai-exaone
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "LGAI-EXAONE/EXAONE-4.0-1.2B"
# model_name = "LGAI-EXAONE/EXAONE-4.0-32B"
model = AutoModelForCausalLM.from_pretrained(
model_name, torch_dtype="bfloat16", device_map=None
).to("cuda")
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "너가 얼마나 대단한지 설명해 봐"
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(
input_ids.to(model.device),
max_new_tokens=128,
do_sample=False,
)
print(tokenizer.decode(output[0]))
```
### Expected behavior
Some paths are personal, so I replaced them with [MASKED]. The [MASKED] part is my account name.
# EXAONE-4.0-1.2B
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/[MASKED]/braille_translator/trash/test_exaone4.py", line 6, in <module>
model = AutoModelForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 547, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1277, in from_pretrained
return config_class.from_dict(config_dict, **unused_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/configuration_utils.py", line 789, in from_dict
config = cls(**config_dict)
^^^^^^^^^^^^^^^^^^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/models/exaone4/configuration_exaone4.py", line 208, in __init__
self.layer_types = [
^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/models/exaone4/configuration_exaone4.py", line 210, in <listcomp>
if ((i + 1) % (sliding_window_pattern) != 0 and i < self.num_hidden_layers)
~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
ZeroDivisionError: integer modulo by zero
```
# EXAONE-4.0-32B
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/[MASKED]/braille_translator/trash/test_exaone4.py", line 6, in <module>
model = AutoModelForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 547, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1277, in from_pretrained
return config_class.from_dict(config_dict, **unused_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/configuration_utils.py", line 789, in from_dict
config = cls(**config_dict)
^^^^^^^^^^^^^^^^^^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/models/exaone4/configuration_exaone4.py", line 208, in __init__
self.layer_types = [
^
File "/home/[MASKED]/miniconda3/envs/py11/lib/python3.11/site-packages/transformers/models/exaone4/configuration_exaone4.py", line 210, in <listcomp>
if ((i + 1) % (sliding_window_pattern) != 0 and i < self.num_hidden_layers)
~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~
TypeError: unsupported operand type(s) for %: 'int' and 'str'
```
| {
"login": "wheeze01",
"id": 54202163,
"node_id": "MDQ6VXNlcjU0MjAyMTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54202163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wheeze01",
"html_url": "https://github.com/wheeze01",
"followers_url": "https://api.github.com/users/wheeze01/followers",
"following_url": "https://api.github.com/users/wheeze01/following{/other_user}",
"gists_url": "https://api.github.com/users/wheeze01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wheeze01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wheeze01/subscriptions",
"organizations_url": "https://api.github.com/users/wheeze01/orgs",
"repos_url": "https://api.github.com/users/wheeze01/repos",
"events_url": "https://api.github.com/users/wheeze01/events{/privacy}",
"received_events_url": "https://api.github.com/users/wheeze01/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39696/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39695/comments | https://api.github.com/repos/huggingface/transformers/issues/39695/events | https://github.com/huggingface/transformers/pull/39695 | 3,265,887,306 | PR_kwDOCUB6oc6gxNAX | 39,695 | Don't set `run_name` when none | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-26T16:46:50 | 2025-07-30T01:39:31 | 2025-07-30T01:39:30 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39695",
"html_url": "https://github.com/huggingface/transformers/pull/39695",
"diff_url": "https://github.com/huggingface/transformers/pull/39695.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39695.patch",
"merged_at": "2025-07-30T01:39:30"
} | # What does this PR do?
All the tracking libraries support `run_name=None` and usually generate a name in this case, which can be quite practical.
Consequently, we don't need to set this value to `output_dir` when it is `None`.
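A minimal before/after sketch of the change (hypothetical helper names; the real logic lives in the Trainer setup):

```python
def run_name_before(run_name, output_dir):
    # Old behavior: coerce a missing run name to output_dir.
    return run_name if run_name is not None else output_dir

def run_name_after(run_name, output_dir):
    # New behavior: keep None so the tracking library
    # can auto-generate a run name.
    return run_name
```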
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39695/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39694/comments | https://api.github.com/repos/huggingface/transformers/issues/39694/events | https://github.com/huggingface/transformers/issues/39694 | 3,265,801,360 | I_kwDOCUB6oc7CqCyQ | 39,694 | 4.54.0 bug: ImportError: cannot import name 'deterministic_g' from 'transformers.modeling_flash_attention_utils' | {
"login": "georgethrax",
"id": 10423276,
"node_id": "MDQ6VXNlcjEwNDIzMjc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10423276?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/georgethrax",
"html_url": "https://github.com/georgethrax",
"followers_url": "https://api.github.com/users/georgethrax/followers",
"following_url": "https://api.github.com/users/georgethrax/following{/other_user}",
"gists_url": "https://api.github.com/users/georgethrax/gists{/gist_id}",
"starred_url": "https://api.github.com/users/georgethrax/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/georgethrax/subscriptions",
"organizations_url": "https://api.github.com/users/georgethrax/orgs",
"repos_url": "https://api.github.com/users/georgethrax/repos",
"events_url": "https://api.github.com/users/georgethrax/events{/privacy}",
"received_events_url": "https://api.github.com/users/georgethrax/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-26T15:31:53 | 2025-09-13T08:02:40 | 2025-09-13T08:02:40 | NONE | null | null | null | null | ### System Info
4.54.0 bug: ImportError: cannot import name 'deterministic_g' from 'transformers.modeling_flash_attention_utils'
4.53.3: the import succeeds
@ArthurZucker
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. `pip install transformers==4.54.0`
2. `from transformers.modeling_flash_attention_utils import deterministic_g`
### Expected behavior
Raises the exception:
ImportError: cannot import name 'deterministic_g' from 'transformers.modeling_flash_attention_utils' | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39694/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39693/comments | https://api.github.com/repos/huggingface/transformers/issues/39693/events | https://github.com/huggingface/transformers/pull/39693 | 3,265,701,523 | PR_kwDOCUB6oc6gwneP | 39,693 | PATCH: add back n-dim device-mesh + fix tp trainer saving | {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-26T14:03:25 | 2025-07-28T12:30:12 | 2025-07-28T12:29:58 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39693",
"html_url": "https://github.com/huggingface/transformers/pull/39693",
"diff_url": "https://github.com/huggingface/transformers/pull/39693.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39693.patch",
"merged_at": "2025-07-28T12:29:58"
1. Fixes the ndim check on `device_mesh` - this was previously merged in #38949 but was reverted by mistake in #39501; we need it for the upcoming accelerate/axolotl release.
2. Makes sure we save properly on the distributed rank when TP is allowed in the trainer.
cc @ArthurZucker @SunMarc | {
"login": "S1ro1",
"id": 54212263,
"node_id": "MDQ6VXNlcjU0MjEyMjYz",
"avatar_url": "https://avatars.githubusercontent.com/u/54212263?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/S1ro1",
"html_url": "https://github.com/S1ro1",
"followers_url": "https://api.github.com/users/S1ro1/followers",
"following_url": "https://api.github.com/users/S1ro1/following{/other_user}",
"gists_url": "https://api.github.com/users/S1ro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/S1ro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/S1ro1/subscriptions",
"organizations_url": "https://api.github.com/users/S1ro1/orgs",
"repos_url": "https://api.github.com/users/S1ro1/repos",
"events_url": "https://api.github.com/users/S1ro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/S1ro1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39693/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39692/comments | https://api.github.com/repos/huggingface/transformers/issues/39692/events | https://github.com/huggingface/transformers/issues/39692 | 3,265,628,633 | I_kwDOCUB6oc7CpYnZ | 39,692 | SigLIP2 documentation example has multiple errors (model/processor mismatch + quantization failure) | {
"login": "david-littlefield",
"id": 30560737,
"node_id": "MDQ6VXNlcjMwNTYwNzM3",
"avatar_url": "https://avatars.githubusercontent.com/u/30560737?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-littlefield",
"html_url": "https://github.com/david-littlefield",
"followers_url": "https://api.github.com/users/david-littlefield/followers",
"following_url": "https://api.github.com/users/david-littlefield/following{/other_user}",
"gists_url": "https://api.github.com/users/david-littlefield/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-littlefield/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-littlefield/subscriptions",
"organizations_url": "https://api.github.com/users/david-littlefield/orgs",
"repos_url": "https://api.github.com/users/david-littlefield/repos",
"events_url": "https://api.github.com/users/david-littlefield/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-littlefield/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-26T13:25:19 | 2025-09-03T08:03:01 | 2025-09-03T08:03:01 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.54.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.6
- Huggingface_hub version: 0.34.1
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu128 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no
- Using GPU in script?: yes
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The SigLIP2 documentation example has two issues:
1. The example loads mismatched model and processor versions
2. The 4-bit quantization fails with a dtype error
Running the example from https://huggingface.co/docs/transformers/en/model_doc/siglip2:
```python
import torch
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModel, BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModel.from_pretrained("google/siglip2-large-patch16-512", quantization_config=bnb_config, device_map="auto", attn_implementation="sdpa")
processor = AutoProcessor.from_pretrained("google/siglip2-base-patch16-224")
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
candidate_labels = ["a Pallas cat", "a lion", "a Siberian tiger"]
# follows the pipeline prompt template to get same results
texts = [f'This is a photo of {label}.' for label in candidate_labels]
# IMPORTANT: we pass `padding=max_length` and `max_length=64` since the model was trained with this
inputs = processor(text=texts, images=image, padding="max_length", max_length=64, return_tensors="pt").to("cuda")
with torch.no_grad():
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = torch.sigmoid(logits_per_image)
print(f"{probs[0][0]:.1%} that image 0 is '{candidate_labels[0]}'")
```
## Issues
1. **Model/Processor Mismatch**: The example loads the `siglip2-large-patch16-512` model but the `siglip2-base-patch16-224` processor
2. **Quantization Error**: When run (even after fixing the mismatch), the code fails with:
```
RuntimeError: self and mat2 must have the same dtype, but got Half and Byte
```
### Expected behavior
1. The example should use matching model and processor
2. The quantization should work as shown, or the documentation should note that quantization is not supported for SigLIP2 models | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39692/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39691/comments | https://api.github.com/repos/huggingface/transformers/issues/39691/events | https://github.com/huggingface/transformers/pull/39691 | 3,265,465,073 | PR_kwDOCUB6oc6gv2Nr | 39,691 | fix misspelled issues | {
"login": "ganlerseian",
"id": 158489141,
"node_id": "U_kgDOCXJaNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/158489141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ganlerseian",
"html_url": "https://github.com/ganlerseian",
"followers_url": "https://api.github.com/users/ganlerseian/followers",
"following_url": "https://api.github.com/users/ganlerseian/following{/other_user}",
"gists_url": "https://api.github.com/users/ganlerseian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ganlerseian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ganlerseian/subscriptions",
"organizations_url": "https://api.github.com/users/ganlerseian/orgs",
"repos_url": "https://api.github.com/users/ganlerseian/repos",
"events_url": "https://api.github.com/users/ganlerseian/events{/privacy}",
"received_events_url": "https://api.github.com/users/ganlerseian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-26T11:30:56 | 2025-08-14T17:17:49 | 2025-08-14T17:17:49 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39691",
"html_url": "https://github.com/huggingface/transformers/pull/39691",
"diff_url": "https://github.com/huggingface/transformers/pull/39691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39691.patch",
"merged_at": null
} | # What does this PR do?
Fixes spelling issues. | {
"login": "ganlerseian",
"id": 158489141,
"node_id": "U_kgDOCXJaNQ",
"avatar_url": "https://avatars.githubusercontent.com/u/158489141?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ganlerseian",
"html_url": "https://github.com/ganlerseian",
"followers_url": "https://api.github.com/users/ganlerseian/followers",
"following_url": "https://api.github.com/users/ganlerseian/following{/other_user}",
"gists_url": "https://api.github.com/users/ganlerseian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ganlerseian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ganlerseian/subscriptions",
"organizations_url": "https://api.github.com/users/ganlerseian/orgs",
"repos_url": "https://api.github.com/users/ganlerseian/repos",
"events_url": "https://api.github.com/users/ganlerseian/events{/privacy}",
"received_events_url": "https://api.github.com/users/ganlerseian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39691/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39690/comments | https://api.github.com/repos/huggingface/transformers/issues/39690/events | https://github.com/huggingface/transformers/pull/39690 | 3,265,272,032 | PR_kwDOCUB6oc6gvRzx | 39,690 | Allow custom hf_quantizer in from_pretrained | {
"login": "tanuj-rai",
"id": 84439872,
"node_id": "MDQ6VXNlcjg0NDM5ODcy",
"avatar_url": "https://avatars.githubusercontent.com/u/84439872?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tanuj-rai",
"html_url": "https://github.com/tanuj-rai",
"followers_url": "https://api.github.com/users/tanuj-rai/followers",
"following_url": "https://api.github.com/users/tanuj-rai/following{/other_user}",
"gists_url": "https://api.github.com/users/tanuj-rai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tanuj-rai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tanuj-rai/subscriptions",
"organizations_url": "https://api.github.com/users/tanuj-rai/orgs",
"repos_url": "https://api.github.com/users/tanuj-rai/repos",
"events_url": "https://api.github.com/users/tanuj-rai/events{/privacy}",
"received_events_url": "https://api.github.com/users/tanuj-rai/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-26T07:15:11 | 2025-08-29T08:25:27 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39690",
"html_url": "https://github.com/huggingface/transformers/pull/39690",
"diff_url": "https://github.com/huggingface/transformers/pull/39690.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39690.patch",
"merged_at": null
} | # What does this PR do?
This PR adds support for passing a custom `hf_quantizer` instance to `from_pretrained` via kwargs.
Fixes #31738
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@matthewdouglas @SunMarc
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39690/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39689/comments | https://api.github.com/repos/huggingface/transformers/issues/39689/events | https://github.com/huggingface/transformers/pull/39689 | 3,264,917,172 | PR_kwDOCUB6oc6guGJz | 39,689 | Fix missing initialization of `FastSpeech2Conformer` | {
"login": "bvantuan",
"id": 37981884,
"node_id": "MDQ6VXNlcjM3OTgxODg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37981884?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bvantuan",
"html_url": "https://github.com/bvantuan",
"followers_url": "https://api.github.com/users/bvantuan/followers",
"following_url": "https://api.github.com/users/bvantuan/following{/other_user}",
"gists_url": "https://api.github.com/users/bvantuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bvantuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bvantuan/subscriptions",
"organizations_url": "https://api.github.com/users/bvantuan/orgs",
"repos_url": "https://api.github.com/users/bvantuan/repos",
"events_url": "https://api.github.com/users/bvantuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/bvantuan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-26T02:43:09 | 2025-07-28T08:47:40 | 2025-07-28T08:47:40 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39689",
"html_url": "https://github.com/huggingface/transformers/pull/39689",
"diff_url": "https://github.com/huggingface/transformers/pull/39689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39689.patch",
"merged_at": "2025-07-28T08:47:40"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes missing initialization of `FastSpeech2Conformer`. For some reason, the CI pipeline in #39239 did not run the test `test_can_init_all_missing_weights` for `FastSpeech2Conformer`.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Cyrilvallez @ydshieh
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39689/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39688/comments | https://api.github.com/repos/huggingface/transformers/issues/39688/events | https://github.com/huggingface/transformers/pull/39688 | 3,264,917,022 | PR_kwDOCUB6oc6guGH1 | 39,688 | fix missing model._tp_size from ep refactor | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-26T02:43:01 | 2025-07-26T10:26:37 | 2025-07-26T10:26:36 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39688",
"html_url": "https://github.com/huggingface/transformers/pull/39688",
"diff_url": "https://github.com/huggingface/transformers/pull/39688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39688.patch",
"merged_at": "2025-07-26T10:26:36"
} | # What does this PR do?
#39501 refactored the TP setup into `distribute_model`, and it seems like this part:
https://github.com/huggingface/transformers/pull/39501/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaL5130-L5132
<img width="565" height="82" alt="Screenshot 2025-07-25 at 10 42 02 PM" src="https://github.com/user-attachments/assets/b3adcaed-e658-4d50-8561-cf4873d4fe59" />
should have been refactored into that new function, but now when doing TP the model doesn't have `._tp_size` set, which is still needed, so all TP training seems to be broken.
I've added the logic back to the new `distribute_model` function to restore this functionality.
@ArthurZucker @S1ro1
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39688/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39687/comments | https://api.github.com/repos/huggingface/transformers/issues/39687/events | https://github.com/huggingface/transformers/issues/39687 | 3,264,808,518 | I_kwDOCUB6oc7CmQZG | 39,687 | [DeepSeek-V3] Different rotary embedding implementation between DeepSeek-AI and Transformers | {
"login": "wwwjn",
"id": 40016222,
"node_id": "MDQ6VXNlcjQwMDE2MjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/40016222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwwjn",
"html_url": "https://github.com/wwwjn",
"followers_url": "https://api.github.com/users/wwwjn/followers",
"following_url": "https://api.github.com/users/wwwjn/following{/other_user}",
"gists_url": "https://api.github.com/users/wwwjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwwjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwwjn/subscriptions",
"organizations_url": "https://api.github.com/users/wwwjn/orgs",
"repos_url": "https://api.github.com/users/wwwjn/repos",
"events_url": "https://api.github.com/users/wwwjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwwjn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 9076594403,
"node_id": "LA_kwDOCUB6oc8AAAACHQHW4w",
"url": "https://api.github.com/repos/huggingface/transformers/labels/ROPE",
"name": "ROPE",
"color": "731107",
"default": false,
"description": "Any issues or PR related to the trickyness of ROPE"
}
] | closed | false | null | [] | null | [] | 2025-07-26T01:15:11 | 2025-08-08T07:47:06 | 2025-08-08T00:14:19 | NONE | null | null | null | null | Counterpart issue : https://github.com/deepseek-ai/DeepSeek-V3/issues/938
**Describe the issue**
Hi team, I'm working on reproducing the great deepseek-v3 model on [torchtitan](https://github.com/pytorch/torchtitan/tree/refs/heads/main). While running numerical verification, I noticed that the rotary embedding implementations in HF and this repo are different.
HF: https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/modeling_deepseek.py#L339
- In the HF rotary embedding implementation, the q_pe / k_pe columns are explicitly permuted (the odd and even columns are interleaved).
Deepseek-AI: https://huggingface.co/deepseek-ai/DeepSeek-V3/blob/main/modeling_deepseek.py#L339
- In DeepSeek-AI's implementation, they didn't permute the q_pe / k_pe.
- In `convert.py`, the script loads the HF weights in their original order and does not permute them to accommodate the ordering difference in `apply_rotary_embedding()`.
This discrepancy results in different outputs after the attention module of the first dense layer.
I want to double-check with the team if I missed something here. Thank you for your help in advance! cc @tianyu-l
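To make the convention difference concrete, here is a minimal pure-Python sketch (not the actual HF or DeepSeek code; the toy inputs are made up) showing that the interleaved convention and the half-split `rotate_half` convention only agree after the dims are permuted so the even indices come first:

```python
import math

def rope_interleaved(x, freqs):
    # DeepSeek-style: adjacent pairs (x[2i], x[2i+1]) are rotated by freqs[i]
    out = []
    for i, f in enumerate(freqs):
        a, b = x[2 * i], x[2 * i + 1]
        out += [a * math.cos(f) - b * math.sin(f),
                a * math.sin(f) + b * math.cos(f)]
    return out

def rope_half_split(x, freqs):
    # HF-style rotate_half: dim i is paired with dim i + d/2, both using freqs[i]
    half = len(x) // 2
    rot = [-v for v in x[half:]] + list(x[:half])
    cs = [math.cos(f) for f in freqs] * 2
    sn = [math.sin(f) for f in freqs] * 2
    return [x[i] * cs[i] + rot[i] * sn[i] for i in range(len(x))]

x = [0.3, -1.2, 0.7, 2.0, -0.5, 0.1, 1.4, -0.8]  # one toy head, dim 8
freqs = [0.0, 0.5, 1.0, 1.5]                     # 4 rotation angles

# Permute the dims so even indices come first, then odd indices
perm = list(range(0, 8, 2)) + list(range(1, 8, 2))
a = rope_interleaved(x, freqs)
b = rope_half_split([x[i] for i in perm], freqs)

# After re-ordering, the two conventions produce identical outputs
assert all(abs(a[p] - v) < 1e-9 for p, v in zip(perm, b))
```

This is the same kind of permutation HF applies to the checkpoint weights for Llama-style models, which is why converting between the two implementations requires permuting either the q_pe / k_pe activations or the corresponding weights.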
**To Reproduce**
Environment: transformers ==4.54.0
I did the following 2 runs with the code at https://github.com/wwwjn/DeepSeek-V3, using the same randomized inputs for both:
1. A single forward pass using HuggingFace `transformers`, with weights from https://huggingface.co/deepseek-ai/DeepSeek-V3-0324/tree/main .
```python hf_implementation/hf_implementation.py --num_layers 5 > hf_outputs.txt 2>&1```
2. A single forward pass using this repo's model implementation.
- First, I used `convert.py` to convert the HF checkpoint weights:
```python convert.py --hf-ckpt-path /data/users/jianiw/dsv3-weights/ --save-path /data/users/jianiw/dsv3-weights-5-layer/ --n-experts 256 --model-parallel 8```
- Second, I ran a single forward pass using
```torchrun --nnodes 1 --nproc-per-node 8 inference/run_single_forward.py --config inference/configs/config_671B.json > dsv3-output.txt 2>&1```
Here's the detailed numerical comparison I've seen:
<img width="917" height="488" alt="Image" src="https://github.com/user-attachments/assets/a1b3f865-d0ec-4ed1-a5c0-f039fe6879f2" />
<img width="917" height="628" alt="Image" src="https://github.com/user-attachments/assets/73cc92a1-eca0-4cd0-8543-fb049107b66b" />
**Expected behavior**
Expected behavior: after the first dense layer's attention, the outputs should be almost the same (there might be slight differences because of fp8 vs. other precisions).
**Additional context**
We observed the same issue for the llama3 model before (#30872) and gained a better understanding with @ArthurZucker's help: the llama3 weights on HuggingFace are permuted compared to Meta's original weights, so we need to manually permute the weights back to accommodate the rotary embedding implementation difference. Reference: https://github.com/pytorch/torchtitan/issues/335, https://github.com/pytorch/torchtitan/issues/1291#issuecomment-2997077080 | {
"login": "wwwjn",
"id": 40016222,
"node_id": "MDQ6VXNlcjQwMDE2MjIy",
"avatar_url": "https://avatars.githubusercontent.com/u/40016222?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wwwjn",
"html_url": "https://github.com/wwwjn",
"followers_url": "https://api.github.com/users/wwwjn/followers",
"following_url": "https://api.github.com/users/wwwjn/following{/other_user}",
"gists_url": "https://api.github.com/users/wwwjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wwwjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wwwjn/subscriptions",
"organizations_url": "https://api.github.com/users/wwwjn/orgs",
"repos_url": "https://api.github.com/users/wwwjn/repos",
"events_url": "https://api.github.com/users/wwwjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/wwwjn/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39687/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 2
} | https://api.github.com/repos/huggingface/transformers/issues/39687/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39686/comments | https://api.github.com/repos/huggingface/transformers/issues/39686/events | https://github.com/huggingface/transformers/issues/39686 | 3,264,726,601 | I_kwDOCUB6oc7Cl8ZJ | 39,686 | CRITICAL ISSUE REPORT! GEMMA 3 1B CANNOT RUN! | {
"login": "yukiarimo",
"id": 67983369,
"node_id": "MDQ6VXNlcjY3OTgzMzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/67983369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yukiarimo",
"html_url": "https://github.com/yukiarimo",
"followers_url": "https://api.github.com/users/yukiarimo/followers",
"following_url": "https://api.github.com/users/yukiarimo/following{/other_user}",
"gists_url": "https://api.github.com/users/yukiarimo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yukiarimo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yukiarimo/subscriptions",
"organizations_url": "https://api.github.com/users/yukiarimo/orgs",
"repos_url": "https://api.github.com/users/yukiarimo/repos",
"events_url": "https://api.github.com/users/yukiarimo/events{/privacy}",
"received_events_url": "https://api.github.com/users/yukiarimo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-26T00:22:27 | 2025-07-28T12:07:50 | 2025-07-26T06:27:05 | NONE | null | null | null | null | How to reproduce:
Run this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the base model in FP16
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/gemma-3-1b-pt",
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16,
device_map="mps",
)
# Load and configure the tokenizer
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-1b-pt", trust_remote_code=True)
# Generate the text
prompt = "<bos>Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = base_model.generate(inputs.input_ids, max_length=50)
# Decode the generated text
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
Error:
```
(yuna) yuki@yuki AI % python gener.py
k_out_updated = k_out_shifted.index_copy(2, update_position, key_states)
Traceback (most recent call last):
File "/Users/yuki/Documents/AI/gener.py", line 19, in <module>
outputs = base_model.generate(inputs.input_ids, max_length=50)
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/transformers/generation/utils.py", line 2623, in generate
result = self._sample(
File "/opt/anaconda3/envs/yuna/lib/python3.10/site-packages/transformers/generation/utils.py", line 3649, in _sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
System: macOS Tahoe, MacBook Pro M1 with 16 GB of RAM | {
"login": "yukiarimo",
"id": 67983369,
"node_id": "MDQ6VXNlcjY3OTgzMzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/67983369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yukiarimo",
"html_url": "https://github.com/yukiarimo",
"followers_url": "https://api.github.com/users/yukiarimo/followers",
"following_url": "https://api.github.com/users/yukiarimo/following{/other_user}",
"gists_url": "https://api.github.com/users/yukiarimo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yukiarimo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yukiarimo/subscriptions",
"organizations_url": "https://api.github.com/users/yukiarimo/orgs",
"repos_url": "https://api.github.com/users/yukiarimo/repos",
"events_url": "https://api.github.com/users/yukiarimo/events{/privacy}",
"received_events_url": "https://api.github.com/users/yukiarimo/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39686/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39685/comments | https://api.github.com/repos/huggingface/transformers/issues/39685/events | https://github.com/huggingface/transformers/issues/39685 | 3,264,703,248 | I_kwDOCUB6oc7Cl2sQ | 39,685 | Qwen 2.5 VL - error without attention_mask | {
"login": "aidando73",
"id": 43259657,
"node_id": "MDQ6VXNlcjQzMjU5NjU3",
"avatar_url": "https://avatars.githubusercontent.com/u/43259657?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aidando73",
"html_url": "https://github.com/aidando73",
"followers_url": "https://api.github.com/users/aidando73/followers",
"following_url": "https://api.github.com/users/aidando73/following{/other_user}",
"gists_url": "https://api.github.com/users/aidando73/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aidando73/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aidando73/subscriptions",
"organizations_url": "https://api.github.com/users/aidando73/orgs",
"repos_url": "https://api.github.com/users/aidando73/repos",
"events_url": "https://api.github.com/users/aidando73/events{/privacy}",
"received_events_url": "https://api.github.com/users/aidando73/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-26T00:06:46 | 2025-09-02T08:02:46 | 2025-09-02T08:02:46 | NONE | null | null | null | null | ### System Info
transformers version v4.53.3
```
<frozen runpy>:128: RuntimeWarning: 'torch.utils.collect_env' found in sys.modules after import of package 'torch.utils', but prior to execution of 'torch.utils.collect_env'; this may result in unpredictable behaviour
Collecting environment information...
PyTorch version: 2.7.0a0+ecf3bae40a.nv25.02
Is debug build: False
CUDA used to build PyTorch: 12.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 24.04.2 LTS (x86_64)
GCC version: (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.39
Python version: 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] (64-bit runtime)
Python platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.8.61
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3
GPU 4: NVIDIA H100 80GB HBM3
GPU 5: NVIDIA H100 80GB HBM3
GPU 6: NVIDIA H100 80GB HBM3
GPU 7: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.7.1
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.7.1
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 208
On-line CPU(s) list: 0-207
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480+
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 52
Socket(s): 2
Stepping: 8
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 6.5 MiB (208 instances)
L1i cache: 6.5 MiB (208 instances)
L2 cache: 416 MiB (104 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-103
NUMA node1 CPU(s): 104-207
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Unknown: No mitigations
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cudnn-frontend==1.10.0
[pip3] nvtx==0.2.5
[pip3] onnx==1.17.0
[pip3] onnxruntime-gpu==1.22.0
[pip3] optree==0.14.0
[pip3] pynvjitlink==0.3.0
[pip3] pytorch-triton==3.2.0+git0d4682f0b.nvinternal
[pip3] torch==2.7.0a0+ecf3bae40a.nv25.2
[pip3] torch_geometric==2.5.3
[pip3] torch_tensorrt==2.6.0a0
[pip3] torchaudio==2.1.0+6ea1133
[pip3] torchdata==0.11.0
[pip3] torchprofile==0.0.4
[pip3] torchvision==0.22.0a0
[pip3] triton==3.3.1
[conda] Could not collect
```
### Who can help?
@zucchini-nlp
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```python
import torch
import requests
from PIL import Image
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor, AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import os
import transformers.models.qwen2_5_vl.modeling_qwen2_5_vl as qwen2_5_vl_modeling
"""
CUDA_VISIBLE_DEVICES=4,5 python do_not_commit/utils/qwen2_5_inference.py
"""
# PEFT adapter path
# peft_path = "do_not_commit/addon-dir/qwen-addon-lora/train_output-07-05_03-32"
model_path = "/shared/text-models/hf/qwen2p5-vl-32b-instruct"
# Load the base model
print("Loading Qwen2.5-VL model and processor...")
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
model_path,
torch_dtype=torch.bfloat16,
device_map="auto",
attn_implementation="flash_attention_2",
trust_remote_code=True,
)
# print("model", model)
# Enable training mode
model.train()
# Enable gradient checkpointing if supported
if hasattr(model, "gradient_checkpointing_enable"):
model.gradient_checkpointing_enable()
elif hasattr(model, "enable_input_require_grads"):
# For some HuggingFace models
model.enable_input_require_grads()
if hasattr(model, "gradient_checkpointing") and not model.gradient_checkpointing:
model.gradient_checkpointing_enable()
else:
# Fallback: try setting config.use_cache = False and enabling checkpointing attribute
if hasattr(model.config, "use_cache"):
model.config.use_cache = False
if hasattr(model, "gradient_checkpointing"):
model.gradient_checkpointing = True
# Load processor
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
# print("processor", processor)
# Load image
messages = [
{
"role": "user",
"content": "Why is the sky blue?",
}
]
# Apply chat template and process inputs
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
return_tensors = "pt"
# return_tensors = None
inputs = tokenizer(text=text, padding=True, return_tensors=return_tensors)
print("input_ids.shape", inputs["input_ids"].shape)
# print("input_ids.shape", len(inputs["input_ids"]))
# inputs["input_ids"] = torch.tensor(inputs["input_ids"])
# print("inputs", inputs)
del inputs["attention_mask"]
# del inputs["position_ids"]
# Generate response
print("Generating response...")
# generated_ids = model.generate(
# **inputs,
# max_new_tokens=512,
# do_sample=True,
# temperature=0.7,
# top_p=0.9,
# pad_token_id=tokenizer.eos_token_id,
# )
res = model(**inputs, return_dict=True)
print("res", res)
# Decode response
# generated_ids = [output_ids[len(input_ids) :] for input_ids, output_ids in zip(inputs.input_ids, generated_ids)]
# response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
# Print result
# print("\nResponse:", response[0])
"""
pixel_values torch.Size([35568, 1176])
image_grid_thw tensor([[ 1, 28, 36],
[ 1, 108, 160],
[ 1, 108, 160]])
"""
```
Throws error:
```
Traceback (most recent call last):
File "/home/aidan/home/fireworks/do_not_commit/utils/qwen2_5_inference.py", line 84, in <module>
res = model(**inputs, return_dict=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/accelerate/hooks.py", line 175, in new_forward
output = module._old_forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aidan/.local/lib/python3.12/site-packages/transformers/utils/generic.py", line 943, in wrapper
output = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aidan/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1514, in forward
outputs = self.model(
^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aidan/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 1335, in forward
outputs = self.language_model(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aidan/.local/lib/python3.12/site-packages/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py", line 905, in forward
"full_attention": create_causal_mask(**mask_kwargs),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aidan/.local/lib/python3.12/site-packages/transformers/masking_utils.py", line 758, in create_causal_mask
early_exit, attention_mask, packed_sequence_mask, kv_length, kv_offset = _preprocess_mask_arguments(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aidan/.local/lib/python3.12/site-packages/transformers/masking_utils.py", line 709, in _preprocess_mask_arguments
position_ids = position_ids.expand(batch_size, -1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: expand(torch.cuda.LongTensor{[3, 1, 25]}, size=[1, -1]): the number of sizes provided (2) must be greater or equal to the number of dimensions in the tensor (3)
```
### Expected behavior
The forward pass shouldn't error when `attention_mask` is omitted. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39685/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39684/comments | https://api.github.com/repos/huggingface/transformers/issues/39684/events | https://github.com/huggingface/transformers/issues/39684 | 3,264,461,557 | I_kwDOCUB6oc7Ck7r1 | 39,684 | Add multi-candidate & tree search for assisted decoding (speculative decoding) | {
"login": "transcend-0",
"id": 131736853,
"node_id": "U_kgDOB9olFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131736853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/transcend-0",
"html_url": "https://github.com/transcend-0",
"followers_url": "https://api.github.com/users/transcend-0/followers",
"following_url": "https://api.github.com/users/transcend-0/following{/other_user}",
"gists_url": "https://api.github.com/users/transcend-0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/transcend-0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/transcend-0/subscriptions",
"organizations_url": "https://api.github.com/users/transcend-0/orgs",
"repos_url": "https://api.github.com/users/transcend-0/repos",
"events_url": "https://api.github.com/users/transcend-0/events{/privacy}",
"received_events_url": "https://api.github.com/users/transcend-0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-25T21:45:11 | 2025-07-28T10:03:43 | null | NONE | null | null | null | null | ### Feature request
Extend `GenerationMixin`'s `_assisted_decoding` to support multiple candidates and tree search.
Specifically, sample multiple draft tokens per decoding step, thereby improving the acceptance rate and speedup.
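As an illustrative sketch of the verification side (greedy verification only — a lossless implementation would use the usual speculative-sampling rejection rule; all names here are hypothetical, not the actual `transformers` API):

```python
import torch

def verify_candidates(draft_tokens, target_logits):
    """Pick the candidate whose draft tokens match the target model longest.

    draft_tokens:  (num_candidates, num_draft) token ids proposed by the drafter.
    target_logits: (num_candidates, num_draft, vocab) target-model logits for each
                   candidate (e.g. from one batched verification forward pass).
    """
    target_tokens = target_logits.argmax(dim=-1)               # greedy target predictions
    matches = (draft_tokens == target_tokens).long()           # position-wise agreement
    accepted = matches.cumprod(dim=-1).sum(dim=-1)             # length of matching prefix
    best = int(accepted.argmax())
    return best, int(accepted[best])

# Toy example: candidate 1 matches the target on all 3 positions, candidate 0 on 2.
draft = torch.tensor([[5, 7, 2], [5, 9, 1]])
logits = torch.zeros(2, 3, 10)
for c, seq in enumerate([[5, 7, 3], [5, 9, 1]]):
    for t, tok in enumerate(seq):
        logits[c, t, tok] = 1.0
best, n_accepted = verify_candidates(draft, logits)  # → (1, 3)
```

With a tree-structured draft, the candidates share prefixes and the target forward pass can be batched over the tree instead of over independent sequences.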
### Motivation
Current assisted decoding in `transformers` employs single-candidate drafting, which limits the acceptance rate.
Multi-candidate methods with tree search have shown notable performance in many studies [1-5].
[1] Yang et al. Multi-candidate speculative decoding. *arXiv preprint arXiv:2401.06706*.
[2] Cai et al. Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads. In *ICML 2024*.
[3] Li et al. EAGLE-2: Faster Inference of Language Models with Dynamic Draft Trees. In *EMNLP 2024*.
[4] Hu et al. Towards Optimal Multi-draft Speculative Decoding. In *ICLR 2025*.
[5] Xia et al. Unlocking Efficiency in Large Language Model Inference: A Comprehensive Survey of Speculative Decoding. In *ACL 2024*.
### Your contribution
To add this feature, we could modify the existing `_assisted_decoding` in `generation/utils.py`, or introduce a new decoding strategy (e.g., `_multi_candidates_assisted_decoding`).
Would the maintainers be open to such an addition? I’m happy to contribute to the implementation and documentation.
@zucchini-nlp @gante | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39684/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39683/comments | https://api.github.com/repos/huggingface/transformers/issues/39683/events | https://github.com/huggingface/transformers/pull/39683 | 3,264,429,620 | PR_kwDOCUB6oc6gsheq | 39,683 | Fix issue #39191 respect accelerate config to disable torch.dynamo compilation | {
"login": "bonpiedlaroute",
"id": 17798880,
"node_id": "MDQ6VXNlcjE3Nzk4ODgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17798880?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bonpiedlaroute",
"html_url": "https://github.com/bonpiedlaroute",
"followers_url": "https://api.github.com/users/bonpiedlaroute/followers",
"following_url": "https://api.github.com/users/bonpiedlaroute/following{/other_user}",
"gists_url": "https://api.github.com/users/bonpiedlaroute/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bonpiedlaroute/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bonpiedlaroute/subscriptions",
"organizations_url": "https://api.github.com/users/bonpiedlaroute/orgs",
"repos_url": "https://api.github.com/users/bonpiedlaroute/repos",
"events_url": "https://api.github.com/users/bonpiedlaroute/events{/privacy}",
"received_events_url": "https://api.github.com/users/bonpiedlaroute/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-25T21:27:34 | 2025-08-06T13:17:56 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39683",
"html_url": "https://github.com/huggingface/transformers/pull/39683",
"diff_url": "https://github.com/huggingface/transformers/pull/39683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39683.patch",
"merged_at": null
} | # Description
Fixes #39191, where transformers ignored the accelerate configuration to disable torch.dynamo, leading to unexpected compilation and `FailOnRecompileLimitHit` errors.
## Problem
When users configure accelerate to disable torch.dynamo, transformers' automatic compilation in `_valid_auto_compile_criteria()` was not respecting this setting, causing:
- Unwanted torch.compile activation
- Excessive recompilations
- `FailOnRecompileLimitHit` crashes in distributed training scenarios
## Solution
- Added `_is_dynamo_compilation_disabled()` method that checks standard environment variables
- Modified `_valid_auto_compile_criteria()` to respect these environment variables before enabling compilation
## Environment Variables Supported
- `TORCHDYNAMO_DISABLE=1`
## Tests done
- Compilation disabled when env var set
- Normal behavior preserved when no env var
- Backward compatible - no breaking changes
## Usage
Users experiencing the issue can now use:
```bash
export TORCHDYNAMO_DISABLE=1
python training_script.py
```
## Who can review?
@SunMarc @gante @zach-huggingface and @qgallouedec
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39683/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39682/comments | https://api.github.com/repos/huggingface/transformers/issues/39682/events | https://github.com/huggingface/transformers/issues/39682 | 3,264,378,732 | I_kwDOCUB6oc7Cknds | 39,682 | Accelerate beam search decoding via tree attention | {
"login": "transcend-0",
"id": 131736853,
"node_id": "U_kgDOB9olFQ",
"avatar_url": "https://avatars.githubusercontent.com/u/131736853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/transcend-0",
"html_url": "https://github.com/transcend-0",
"followers_url": "https://api.github.com/users/transcend-0/followers",
"following_url": "https://api.github.com/users/transcend-0/following{/other_user}",
"gists_url": "https://api.github.com/users/transcend-0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/transcend-0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/transcend-0/subscriptions",
"organizations_url": "https://api.github.com/users/transcend-0/orgs",
"repos_url": "https://api.github.com/users/transcend-0/repos",
"events_url": "https://api.github.com/users/transcend-0/events{/privacy}",
"received_events_url": "https://api.github.com/users/transcend-0/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-25T21:03:10 | 2025-08-05T00:08:05 | null | NONE | null | null | null | null | ### Feature request
Provide an optional `tree_attention` flag in `model.generate()` that enables Tree-Attention decoding for beam search.
Tree-Attention reuses the common prefixes among beams by organizing them into a prefix tree, reducing redundant computations and improving inference speed.
### Motivation
When `num_beams > 1`, the current beam search recomputes every beam separately even when most tokens are shared.
Tree-Attention fuses these shared prefixes into a single tree-structured batch, cutting both FLOPs and KV-cache memory.
This method is theoretically lossless and yields a significant speedup [1].
<img width="624" height="316" alt="Image" src="https://github.com/user-attachments/assets/0fadb066-5f67-4b4d-9f88-c704d2c00889" />
[1] Yao et al. DeFT: Flash Tree-attention with IO-Awareness for Efficient Tree-search-based LLM Inference. In *ICLR 2024 Workshop*.
### Your contribution
I have implemented a simple version of this on `transformers 4.41–4.45` and released it at [BeamSD](https://github.com/transcend-0/BeamSD). I’m happy to port this feature to the latest `transformers` version.
To keep the modification minimal, we could add code only immediately before and after the line
`model_outputs = self(**model_inputs, return_dict=True)`
(`line 4079` of `generation/utils.py` in `transformers 4.53.3`).
The idea is:
1. Before it: Convert the batched beam `model_inputs` into a single-batch tree.
2. After it: Convert the returned `model_outputs` back to the original layout so that the rest of the generate loop remains unchanged.
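As a minimal, illustrative sketch of step 1 (names and interface are mine, not the actual `transformers` API), beams can be packed into a prefix tree with an ancestor-only attention mask:

```python
import torch

def beams_to_tree(beams):
    """Merge beams that share prefixes into one token sequence plus a tree
    attention mask. beams: list of equal-length token-id lists.
    Returns (tokens, parents, mask, positions); node i may attend only to
    itself and its ancestors, and positions maps each beam step back to a node.
    """
    nodes = {}                 # (parent_idx, token) -> node_idx
    token_list, parents = [], []
    positions = []             # per-beam node indices, to scatter outputs back
    for beam in beams:
        parent, beam_pos = -1, []
        for tok in beam:
            key = (parent, tok)
            if key not in nodes:           # new branch: allocate a tree node
                nodes[key] = len(token_list)
                token_list.append(tok)
                parents.append(parent)
            parent = nodes[key]
            beam_pos.append(parent)
        positions.append(beam_pos)
    n = len(token_list)
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):
        j = i
        while j != -1:                     # walk up to the root, marking ancestors
            mask[i, j] = True
            j = parents[j]
    return torch.tensor(token_list), parents, mask, positions

# Two beams sharing the prefix [1, 2] are packed into 4 tree nodes instead of 6:
tokens, parents, mask, positions = beams_to_tree([[1, 2, 3], [1, 2, 4]])
```

Step 2 would then use `positions` to gather each beam's hidden states / logits from the tree-shaped outputs, leaving the rest of the generate loop untouched.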
May I open a Pull Request for this idea? I’m happy to contribute the code and the doc.
@gante | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39682/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39681/comments | https://api.github.com/repos/huggingface/transformers/issues/39681/events | https://github.com/huggingface/transformers/pull/39681 | 3,264,013,182 | PR_kwDOCUB6oc6grH3f | 39,681 | More robust tied weight test | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T18:14:06 | 2025-07-25T20:03:23 | 2025-07-25T20:03:21 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39681",
"html_url": "https://github.com/huggingface/transformers/pull/39681",
"diff_url": "https://github.com/huggingface/transformers/pull/39681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39681.patch",
"merged_at": "2025-07-25T20:03:21"
} | # What does this PR do?
As per the title. Models without tied weights should not need to override the test just to skip it, so let's handle that inside the test directly.
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39681/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39680/comments | https://api.github.com/repos/huggingface/transformers/issues/39680/events | https://github.com/huggingface/transformers/pull/39680 | 3,263,997,122 | PR_kwDOCUB6oc6grEWv | 39,680 | Fix tied weight test | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T18:07:14 | 2025-07-25T18:09:35 | 2025-07-25T18:09:33 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39680",
"html_url": "https://github.com/huggingface/transformers/pull/39680",
"diff_url": "https://github.com/huggingface/transformers/pull/39680.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39680.patch",
"merged_at": "2025-07-25T18:09:33"
} | # What does this PR do?
No reason to force tying the embeddings, as the test would then fail for models that do NOT tie the weights by default.
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39680/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39679/comments | https://api.github.com/repos/huggingface/transformers/issues/39679/events | https://github.com/huggingface/transformers/pull/39679 | 3,263,888,532 | PR_kwDOCUB6oc6gqspw | 39,679 | Use auto_docstring for perception_lm fast image processor | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T17:20:19 | 2025-07-25T17:33:18 | 2025-07-25T17:32:49 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39679",
"html_url": "https://github.com/huggingface/transformers/pull/39679",
"diff_url": "https://github.com/huggingface/transformers/pull/39679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39679.patch",
"merged_at": "2025-07-25T17:32:49"
} | Nit, as the title says | {
"login": "yonigozlan",
"id": 74535834,
"node_id": "MDQ6VXNlcjc0NTM1ODM0",
"avatar_url": "https://avatars.githubusercontent.com/u/74535834?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yonigozlan",
"html_url": "https://github.com/yonigozlan",
"followers_url": "https://api.github.com/users/yonigozlan/followers",
"following_url": "https://api.github.com/users/yonigozlan/following{/other_user}",
"gists_url": "https://api.github.com/users/yonigozlan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yonigozlan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yonigozlan/subscriptions",
"organizations_url": "https://api.github.com/users/yonigozlan/orgs",
"repos_url": "https://api.github.com/users/yonigozlan/repos",
"events_url": "https://api.github.com/users/yonigozlan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yonigozlan/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39679/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39678/comments | https://api.github.com/repos/huggingface/transformers/issues/39678/events | https://github.com/huggingface/transformers/pull/39678 | 3,263,833,104 | PR_kwDOCUB6oc6gqgq9 | 39,678 | Add missing flag for CacheLayer | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T16:58:46 | 2025-07-25T17:17:11 | 2025-07-25T17:12:14 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39678",
"html_url": "https://github.com/huggingface/transformers/pull/39678",
"diff_url": "https://github.com/huggingface/transformers/pull/39678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39678.patch",
"merged_at": "2025-07-25T17:12:14"
} | # What does this PR do?
The flag is critical for correct behavior, as the mask creation functions rely on it.
This fixes all the `generate_beyond_sliding_window` tests
cc @manueldeprada @ydshieh for viz | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39678/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39678/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39677/comments | https://api.github.com/repos/huggingface/transformers/issues/39677/events | https://github.com/huggingface/transformers/pull/39677 | 3,263,770,593 | PR_kwDOCUB6oc6gqTWU | 39,677 | Add padding-free to Granite hybrid moe models | {
"login": "garrett361",
"id": 44747910,
"node_id": "MDQ6VXNlcjQ0NzQ3OTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/44747910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/garrett361",
"html_url": "https://github.com/garrett361",
"followers_url": "https://api.github.com/users/garrett361/followers",
"following_url": "https://api.github.com/users/garrett361/following{/other_user}",
"gists_url": "https://api.github.com/users/garrett361/gists{/gist_id}",
"starred_url": "https://api.github.com/users/garrett361/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/garrett361/subscriptions",
"organizations_url": "https://api.github.com/users/garrett361/orgs",
"repos_url": "https://api.github.com/users/garrett361/repos",
"events_url": "https://api.github.com/users/garrett361/events{/privacy}",
"received_events_url": "https://api.github.com/users/garrett361/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 6202871275,
"node_id": "LA_kwDOCUB6oc8AAAABcbhN6w",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Flash%20Attention",
"name": "Flash Attention",
"color": "201FF8",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 2025-07-25T16:30:31 | 2025-07-25T18:21:51 | 2025-07-25T18:10:50 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39677",
"html_url": "https://github.com/huggingface/transformers/pull/39677",
"diff_url": "https://github.com/huggingface/transformers/pull/39677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39677.patch",
"merged_at": "2025-07-25T18:10:50"
} | # What does this PR do?
Enables padding-free training for granite hybrid moe models, analogously to #35861
Previously, `**kwargs` were not being passed down correctly, and attempts at padding-free training were silently wrong.
The padding-free correctness tests were also updated to verify that the losses for the models agree.
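For context, padding-free training concatenates the sequences of a batch into a single row and tracks boundaries with cumulative sequence lengths (the `cu_seqlens` convention used by FlashAttention's varlen kernels). A minimal sketch of the packing, with an illustrative function name:

```python
def pack_padding_free(sequences):
    """Pack variable-length sequences into one row without pad tokens."""
    input_ids, position_ids, cu_seqlens = [], [], [0]
    for seq in sequences:
        input_ids.extend(seq)
        position_ids.extend(range(len(seq)))  # positions restart per sequence
        cu_seqlens.append(cu_seqlens[-1] + len(seq))  # boundary offsets
    return input_ids, position_ids, cu_seqlens

# Two sequences of length 3 and 2 pack into 5 tokens, boundaries [0, 3, 5].
ids, pos, cu = pack_padding_free([[5, 6, 7], [8, 9]])
```

If `position_ids` (or the derived `cu_seqlens`) are dropped while passing kwargs down the model, attention silently spans sequence boundaries, which is the failure mode this PR fixes.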
CC @vasqu @ArthurZucker @fabianlim @Swanand-Kadhe
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39677/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39677/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39676/comments | https://api.github.com/repos/huggingface/transformers/issues/39676/events | https://github.com/huggingface/transformers/pull/39676 | 3,263,619,038 | PR_kwDOCUB6oc6gpyLj | 39,676 | Fix cache-related tests | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-25T15:32:43 | 2025-07-28T15:30:12 | 2025-07-28T15:30:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39676",
"html_url": "https://github.com/huggingface/transformers/pull/39676",
"diff_url": "https://github.com/huggingface/transformers/pull/39676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39676.patch",
"merged_at": "2025-07-28T15:30:11"
} | # What does this PR do?
As per title, failing after latest [cache compatibility PR](https://github.com/huggingface/transformers/pull/38635) | {
"login": "zucchini-nlp",
"id": 100715397,
"node_id": "U_kgDOBgDLhQ",
"avatar_url": "https://avatars.githubusercontent.com/u/100715397?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zucchini-nlp",
"html_url": "https://github.com/zucchini-nlp",
"followers_url": "https://api.github.com/users/zucchini-nlp/followers",
"following_url": "https://api.github.com/users/zucchini-nlp/following{/other_user}",
"gists_url": "https://api.github.com/users/zucchini-nlp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zucchini-nlp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zucchini-nlp/subscriptions",
"organizations_url": "https://api.github.com/users/zucchini-nlp/orgs",
"repos_url": "https://api.github.com/users/zucchini-nlp/repos",
"events_url": "https://api.github.com/users/zucchini-nlp/events{/privacy}",
"received_events_url": "https://api.github.com/users/zucchini-nlp/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39676/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39675/comments | https://api.github.com/repos/huggingface/transformers/issues/39675/events | https://github.com/huggingface/transformers/pull/39675 | 3,263,580,163 | PR_kwDOCUB6oc6gppo2 | 39,675 | [BugFix]: Support dict and config file path for deepspeed | {
"login": "yeshsurya",
"id": 9417467,
"node_id": "MDQ6VXNlcjk0MTc0Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9417467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeshsurya",
"html_url": "https://github.com/yeshsurya",
"followers_url": "https://api.github.com/users/yeshsurya/followers",
"following_url": "https://api.github.com/users/yeshsurya/following{/other_user}",
"gists_url": "https://api.github.com/users/yeshsurya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeshsurya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeshsurya/subscriptions",
"organizations_url": "https://api.github.com/users/yeshsurya/orgs",
"repos_url": "https://api.github.com/users/yeshsurya/repos",
"events_url": "https://api.github.com/users/yeshsurya/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeshsurya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-25T15:18:57 | 2025-08-01T09:54:23 | null | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39675",
"html_url": "https://github.com/huggingface/transformers/pull/39675",
"diff_url": "https://github.com/huggingface/transformers/pull/39675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39675.patch",
"merged_at": null
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #39673
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39675/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39674/comments | https://api.github.com/repos/huggingface/transformers/issues/39674/events | https://github.com/huggingface/transformers/pull/39674 | 3,263,570,601 | PR_kwDOCUB6oc6gpnih | 39,674 | Fix loss scaling and token aggregation to use only data parallel group | {
"login": "Krish0909",
"id": 134591243,
"node_id": "U_kgDOCAWzCw",
"avatar_url": "https://avatars.githubusercontent.com/u/134591243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Krish0909",
"html_url": "https://github.com/Krish0909",
"followers_url": "https://api.github.com/users/Krish0909/followers",
"following_url": "https://api.github.com/users/Krish0909/following{/other_user}",
"gists_url": "https://api.github.com/users/Krish0909/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Krish0909/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Krish0909/subscriptions",
"organizations_url": "https://api.github.com/users/Krish0909/orgs",
"repos_url": "https://api.github.com/users/Krish0909/repos",
"events_url": "https://api.github.com/users/Krish0909/events{/privacy}",
"received_events_url": "https://api.github.com/users/Krish0909/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2025-07-25T15:15:44 | 2025-07-27T15:03:12 | null | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39674",
"html_url": "https://github.com/huggingface/transformers/pull/39674",
"diff_url": "https://github.com/huggingface/transformers/pull/39674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39674.patch",
"merged_at": null
} | What does this PR do?
This PR fixes a bug in the Trainer where loss and token counts were previously being scaled across all Accelerate processes—including tensor parallel (TP) and context parallel (CP) meshes—leading to inflated training losses when using composable parallelism. After this change, loss scaling and token aggregation will only consider the data parallel group, aligning TP/CP runs with pure DDP behavior.
Fixes #39648
Changes
Loss scaling: Replaced self.accelerator.num_processes with self.accelerator.state.num_data_parallel_processes when applying average_tokens_across_devices.
Token aggregation: Updated batching logic to use accelerator.reduce(..., group_type="data") for summing tokens only across the data parallel group.
Motivation and Context
When using Accelerate's composable parallelism (TP/CP), the original implementation erroneously multiplied the loss by the total number of processes (DP × TP × CP). This resulted in losses that were N× larger (where N = TP × CP), making training logs and LR schedulers behave incorrectly. By restricting scaling to the data parallel group, we restore consistency with pure DDP runs.
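As a toy illustration of the inflation factor (a sketch with made-up numbers; the variable names below are illustrative, not the actual Trainer internals):

```python
# Toy model of the scaling bug: the loss is multiplied by the total process
# count, but only averaged over the data parallel group of size dp.
dp, tp, cp = 4, 2, 2               # data / tensor / context parallel sizes
num_processes = dp * tp * cp       # what the old code scaled by
num_data_parallel = dp             # what the fix scales by

per_token_loss = 0.5

old_loss = per_token_loss * num_processes / dp     # inflated by tp * cp
new_loss = per_token_loss * num_data_parallel / dp  # matches pure DDP

assert old_loss == per_token_loss * tp * cp  # N x larger, N = TP x CP
assert new_loss == per_token_loss
```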
Testing
All existing Trainer integration tests pass (no regressions).
Manual verification:
Ran run_glue.py on MRPC with --tensor_parallel_size 2 --context_parallel_size 2. Logged losses every 10 steps.
Compared against a pure DDP run (no TP/CP flags). Loss trajectories matched within floating-point tolerance.
Before submitting
Who can review?
Trainer: @zach-huggingface, @SunMarc
Accelerate integration: @SunMarc, @zach-huggingface | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39674/timeline | null | null | null | null | true | false |
https://api.github.com/repos/huggingface/transformers/issues/39673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39673/comments | https://api.github.com/repos/huggingface/transformers/issues/39673/events | https://github.com/huggingface/transformers/issues/39673 | 3,263,569,087 | I_kwDOCUB6oc7Chhy_ | 39,673 | error: argument --deepspeed: invalid dict value: '<path>' | {
"login": "yeshsurya",
"id": 9417467,
"node_id": "MDQ6VXNlcjk0MTc0Njc=",
"avatar_url": "https://avatars.githubusercontent.com/u/9417467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yeshsurya",
"html_url": "https://github.com/yeshsurya",
"followers_url": "https://api.github.com/users/yeshsurya/followers",
"following_url": "https://api.github.com/users/yeshsurya/following{/other_user}",
"gists_url": "https://api.github.com/users/yeshsurya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yeshsurya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yeshsurya/subscriptions",
"organizations_url": "https://api.github.com/users/yeshsurya/orgs",
"repos_url": "https://api.github.com/users/yeshsurya/repos",
"events_url": "https://api.github.com/users/yeshsurya/events{/privacy}",
"received_events_url": "https://api.github.com/users/yeshsurya/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-25T15:15:06 | 2025-09-02T08:02:50 | 2025-09-02T08:02:50 | NONE | null | null | null | null | ### System Info
transformers version: 4.53.3
python: 3.10
`train.py: error: argument --deepspeed: invalid dict value: '/mnt/zero3.json'`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run a script which uses the TRL parser, since it inherits the transformers training args.
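For illustration, the failure mode can be reproduced with plain argparse (a sketch: the `dict_value` helper below is hypothetical and stands in for how the generated parser treats the dict-typed `--deepspeed` field; it is not the real HfArgumentParser code):

```python
import argparse
import json


def dict_value(raw):
    # Hypothetical stand-in: parse the raw CLI string as a JSON dict
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        raise argparse.ArgumentTypeError(f"invalid dict value: {raw!r}") from exc


parser = argparse.ArgumentParser(prog="train.py")
parser.add_argument("--deepspeed", type=dict_value)

# An inline JSON string parses fine...
ok = parser.parse_args(["--deepspeed", '{"zero_optimization": {"stage": 3}}'])

# ...but a config *path* is rejected, mirroring the reported error
rejected = False
try:
    parser.parse_args(["--deepspeed", "/mnt/zero3.json"])
except SystemExit:
    rejected = True
```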
### Expected behavior
Should not give an error. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39673/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39672/comments | https://api.github.com/repos/huggingface/transformers/issues/39672/events | https://github.com/huggingface/transformers/pull/39672 | 3,263,524,643 | PR_kwDOCUB6oc6gpdhc | 39,672 | Delete bad rebasing functions | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T15:00:23 | 2025-07-28T08:02:07 | 2025-07-25T16:28:09 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39672",
"html_url": "https://github.com/huggingface/transformers/pull/39672",
"diff_url": "https://github.com/huggingface/transformers/pull/39672.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39672.patch",
"merged_at": "2025-07-25T16:28:09"
} | # What does this PR do?
https://github.com/huggingface/transformers/pull/39339 added back some functions that were not supposed to be here anymore!
cc @molbap for viz! | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39672/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39672/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39671/comments | https://api.github.com/repos/huggingface/transformers/issues/39671/events | https://github.com/huggingface/transformers/pull/39671 | 3,263,502,222 | PR_kwDOCUB6oc6gpYl6 | 39,671 | Fix ModernBERT Decoder model | {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-25T14:54:32 | 2025-07-25T15:20:12 | 2025-07-25T15:20:12 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39671",
"html_url": "https://github.com/huggingface/transformers/pull/39671",
"diff_url": "https://github.com/huggingface/transformers/pull/39671.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39671.patch",
"merged_at": "2025-07-25T15:20:12"
} | # What does this PR do?
as in title
| {
"login": "qubvel",
"id": 31920396,
"node_id": "MDQ6VXNlcjMxOTIwMzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31920396?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qubvel",
"html_url": "https://github.com/qubvel",
"followers_url": "https://api.github.com/users/qubvel/followers",
"following_url": "https://api.github.com/users/qubvel/following{/other_user}",
"gists_url": "https://api.github.com/users/qubvel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qubvel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qubvel/subscriptions",
"organizations_url": "https://api.github.com/users/qubvel/orgs",
"repos_url": "https://api.github.com/users/qubvel/repos",
"events_url": "https://api.github.com/users/qubvel/events{/privacy}",
"received_events_url": "https://api.github.com/users/qubvel/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39671/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39670/comments | https://api.github.com/repos/huggingface/transformers/issues/39670/events | https://github.com/huggingface/transformers/pull/39670 | 3,263,496,037 | PR_kwDOCUB6oc6gpXOn | 39,670 | skip `Glm4MoeModelTest::test_torch_compile_for_training` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T14:52:49 | 2025-07-28T14:30:42 | 2025-07-28T14:30:40 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39670",
"html_url": "https://github.com/huggingface/transformers/pull/39670",
"diff_url": "https://github.com/huggingface/transformers/pull/39670.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39670.patch",
"merged_at": "2025-07-28T14:30:40"
} | # What does this PR do?
We have the line
> token_indices, weight_indices = torch.where(mask)
which won't work with this test.
It's also skipped in dots1 and deepseek.
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39670/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39669/comments | https://api.github.com/repos/huggingface/transformers/issues/39669/events | https://github.com/huggingface/transformers/pull/39669 | 3,263,464,604 | PR_kwDOCUB6oc6gpQZv | 39,669 | Fix AMD dockerfile for audio models | {
"login": "remi-or",
"id": 83456801,
"node_id": "MDQ6VXNlcjgzNDU2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83456801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/remi-or",
"html_url": "https://github.com/remi-or",
"followers_url": "https://api.github.com/users/remi-or/followers",
"following_url": "https://api.github.com/users/remi-or/following{/other_user}",
"gists_url": "https://api.github.com/users/remi-or/gists{/gist_id}",
"starred_url": "https://api.github.com/users/remi-or/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/remi-or/subscriptions",
"organizations_url": "https://api.github.com/users/remi-or/orgs",
"repos_url": "https://api.github.com/users/remi-or/repos",
"events_url": "https://api.github.com/users/remi-or/events{/privacy}",
"received_events_url": "https://api.github.com/users/remi-or/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T14:42:25 | 2025-07-28T17:05:41 | 2025-07-28T17:05:41 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39669",
"html_url": "https://github.com/huggingface/transformers/pull/39669",
"diff_url": "https://github.com/huggingface/transformers/pull/39669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39669.patch",
"merged_at": "2025-07-28T17:05:41"
} | This PR modifies the AMD dockerfile because its base container `rocm/pytorch:rocm6.4.1_ubuntu24.04_py3.12_pytorch_release_2.7.1` has been updated by AMD to include `torchaudio`. Since it also includes `torchvision`, I removed the line installing both of those packages again.
I also added the `audio` dependency to `transformers` and the `torchcodec` package, because it seems necessary on AMD; otherwise, tests fail when decoding audio. I have tested the Nvidia container on A100, and `torchcodec` does not seem to be needed there, which might be because AMD has this additional dependency or because its dependencies are older.
Also pinned `parameterized` to 0.9 or above, because otherwise tests parameterized this way:
```@parameterized.expand([256, 512, 768, 1024])```
(from `tests/models/gemma3n/test_processing_gemma3n.py` line 106) will fail, for instance with version 0.8.1, which was in the AMD container. | {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39669/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39668/comments | https://api.github.com/repos/huggingface/transformers/issues/39668/events | https://github.com/huggingface/transformers/issues/39668 | 3,263,391,235 | I_kwDOCUB6oc7Cg2YD | 39,668 | Issue when initializing a DynamicCache | {
"login": "xadupre",
"id": 22452781,
"node_id": "MDQ6VXNlcjIyNDUyNzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/22452781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xadupre",
"html_url": "https://github.com/xadupre",
"followers_url": "https://api.github.com/users/xadupre/followers",
"following_url": "https://api.github.com/users/xadupre/following{/other_user}",
"gists_url": "https://api.github.com/users/xadupre/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xadupre/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xadupre/subscriptions",
"organizations_url": "https://api.github.com/users/xadupre/orgs",
"repos_url": "https://api.github.com/users/xadupre/repos",
"events_url": "https://api.github.com/users/xadupre/events{/privacy}",
"received_events_url": "https://api.github.com/users/xadupre/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-25T14:17:58 | 2025-09-02T08:02:51 | 2025-09-02T08:02:51 | CONTRIBUTOR | null | null | null | null | ### System Info
In the `Cache` constructor (https://github.com/huggingface/transformers/blob/main/src/transformers/cache_utils.py#L1081), there are these two lines, which append empty layers to the cache:
```python
self.num_hidden_layers = getattr(config, "num_hidden_layers", 1)
self.append_new_layers(self.num_hidden_layers - 1)
```
In the `DynamicCache` constructor (https://github.com/huggingface/transformers/blob/main/src/transformers/cache_utils.py#L1304), these lines add layers as well:
```python
if ddp_cache_data is not None:
for key_states, value_states in ddp_cache_data:
self.layers.append(DynamicLayer.from_tensors(key_states, value_states))
```
As a result, the cache contains more layers than expected. I assume the constructor of `DynamicCache` should call ``update`` to avoid this, or the constructor of `Cache` could be simplified by removing the two lines adding n-1 layers.
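The interaction can be shown with a pure-Python sketch (illustrative stand-ins for the two constructors, not the real transformers classes, which store key/value tensors per layer):

```python
# Sketch of the double-append: the base class pre-fills placeholder layers,
# then the subclass appends the ddp_cache_data layers on top of them.
class Cache:
    def __init__(self, num_hidden_layers=1):
        self.layers = []
        # Base constructor pre-fills empty layers up to num_hidden_layers
        self.append_new_layers(num_hidden_layers - 1)

    def append_new_layers(self, layer_idx):
        while len(self.layers) <= layer_idx:
            self.layers.append(None)  # empty placeholder layer


class DynamicCache(Cache):
    def __init__(self, ddp_cache_data=None, num_hidden_layers=1):
        super().__init__(num_hidden_layers)
        if ddp_cache_data is not None:
            # Appends after the pre-filled placeholders instead of
            # updating them in place
            for key_states, value_states in ddp_cache_data:
                self.layers.append((key_states, value_states))


key_value_pairs = [("k0", "v0"), ("k1", "v1")]
cache = DynamicCache(key_value_pairs, num_hidden_layers=2)
print(len(cache.layers))  # 4 layers, not the expected 2
```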
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
cache = transformers.cache_utils.DynamicCache(key_value_pairs)
```
### Expected behavior
The cache should have N layers if `key_value_pairs` has N elements. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39668/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39668/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39667/comments | https://api.github.com/repos/huggingface/transformers/issues/39667/events | https://github.com/huggingface/transformers/pull/39667 | 3,263,376,267 | PR_kwDOCUB6oc6go9C8 | 39,667 | Reduce atol values in test_dynamic_cache_exportability | {
"login": "st81",
"id": 58893365,
"node_id": "MDQ6VXNlcjU4ODkzMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/58893365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/st81",
"html_url": "https://github.com/st81",
"followers_url": "https://api.github.com/users/st81/followers",
"following_url": "https://api.github.com/users/st81/following{/other_user}",
"gists_url": "https://api.github.com/users/st81/gists{/gist_id}",
"starred_url": "https://api.github.com/users/st81/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/st81/subscriptions",
"organizations_url": "https://api.github.com/users/st81/orgs",
"repos_url": "https://api.github.com/users/st81/repos",
"events_url": "https://api.github.com/users/st81/events{/privacy}",
"received_events_url": "https://api.github.com/users/st81/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T14:13:26 | 2025-08-01T08:14:49 | 2025-08-01T08:14:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39667",
"html_url": "https://github.com/huggingface/transformers/pull/39667",
"diff_url": "https://github.com/huggingface/transformers/pull/39667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39667.patch",
"merged_at": null
} | # What does this PR do?
The `atol` values were increased in #39412, but that change doesn't reflect an actual increase in the difference between the original and exported model key/value layers. The logits `atol` was also set too high at 1e-5, when 1e-7 is sufficient.
After this change, the test still passes.
```sh
$ python3 -m pytest tests/utils/test_cache_utils.py -k test_dynamic_cache_exportability
======================================================================================= test session starts ========================================================================================
platform linux -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0
rootdir: /home/shutotakahashi/projects/transformers-uv/transformers
configfile: pyproject.toml
collected 61 items / 59 deselected / 2 selected
tests/utils/test_cache_utils.py::CacheExportIntegrationTest::test_dynamic_cache_exportability PASSED [ 50%]
tests/utils/test_cache_utils.py::CacheExportIntegrationTest::test_dynamic_cache_exportability_multiple_run PASSED
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- @vasqu
- generate: @zucchini-nlp | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39667/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39666/comments | https://api.github.com/repos/huggingface/transformers/issues/39666/events | https://github.com/huggingface/transformers/pull/39666 | 3,263,261,342 | PR_kwDOCUB6oc6gokNl | 39,666 | Update `QAPipelineTests::test_large_model_course` after #39193 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T13:35:20 | 2025-07-28T14:26:51 | 2025-07-28T14:26:49 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39666",
"html_url": "https://github.com/huggingface/transformers/pull/39666",
"diff_url": "https://github.com/huggingface/transformers/pull/39666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39666.patch",
"merged_at": "2025-07-28T14:26:49"
} | # What does this PR do?
Update `QAPipelineTests::test_large_model_course` after #39193 | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39666/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39665/comments | https://api.github.com/repos/huggingface/transformers/issues/39665/events | https://github.com/huggingface/transformers/pull/39665 | 3,263,227,004 | PR_kwDOCUB6oc6goczH | 39,665 | Add xlstm model | {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T13:23:25 | 2025-07-25T17:39:19 | 2025-07-25T17:39:17 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39665",
"html_url": "https://github.com/huggingface/transformers/pull/39665",
"diff_url": "https://github.com/huggingface/transformers/pull/39665.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39665.patch",
"merged_at": "2025-07-25T17:39:17"
} | # What does this PR do?
This PR supersedes https://github.com/huggingface/transformers/pull/35377 for convenience, because I could not push to the other PR for some final cleanups (the org where the fork was created is protected or something)
| {
"login": "Cyrilvallez",
"id": 71554963,
"node_id": "MDQ6VXNlcjcxNTU0OTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/71554963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Cyrilvallez",
"html_url": "https://github.com/Cyrilvallez",
"followers_url": "https://api.github.com/users/Cyrilvallez/followers",
"following_url": "https://api.github.com/users/Cyrilvallez/following{/other_user}",
"gists_url": "https://api.github.com/users/Cyrilvallez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Cyrilvallez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Cyrilvallez/subscriptions",
"organizations_url": "https://api.github.com/users/Cyrilvallez/orgs",
"repos_url": "https://api.github.com/users/Cyrilvallez/repos",
"events_url": "https://api.github.com/users/Cyrilvallez/events{/privacy}",
"received_events_url": "https://api.github.com/users/Cyrilvallez/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39665/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39664/comments | https://api.github.com/repos/huggingface/transformers/issues/39664/events | https://github.com/huggingface/transformers/pull/39664 | 3,263,173,200 | PR_kwDOCUB6oc6goQ_7 | 39,664 | [`Ernie 4.5`] Post merge adaptations | {
"login": "vasqu",
"id": 73884904,
"node_id": "MDQ6VXNlcjczODg0OTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/73884904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasqu",
"html_url": "https://github.com/vasqu",
"followers_url": "https://api.github.com/users/vasqu/followers",
"following_url": "https://api.github.com/users/vasqu/following{/other_user}",
"gists_url": "https://api.github.com/users/vasqu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasqu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasqu/subscriptions",
"organizations_url": "https://api.github.com/users/vasqu/orgs",
"repos_url": "https://api.github.com/users/vasqu/repos",
"events_url": "https://api.github.com/users/vasqu/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasqu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T13:05:11 | 2025-09-22T16:56:48 | 2025-07-25T15:36:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39664",
"html_url": "https://github.com/huggingface/transformers/pull/39664",
"diff_url": "https://github.com/huggingface/transformers/pull/39664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39664.patch",
"merged_at": "2025-07-25T15:36:18"
} | Some post-merge adjustments based on the hub config and merged PRs there:
- Rename MoE to Moe to be consistent with other models + hub config architecture
- Remove revision in tests
- MTP is not supported currently so disabling warnings for those keys
This does not affect functionality; it is mostly maintenance. cc @ArthurZucker @Cyrilvallez | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39664/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39663/comments | https://api.github.com/repos/huggingface/transformers/issues/39663/events | https://github.com/huggingface/transformers/pull/39663 | 3,263,157,792 | PR_kwDOCUB6oc6goNpg | 39,663 | [modular] small fixes | {
"login": "MHRDYN7",
"id": 113298714,
"node_id": "U_kgDOBsDNGg",
"avatar_url": "https://avatars.githubusercontent.com/u/113298714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MHRDYN7",
"html_url": "https://github.com/MHRDYN7",
"followers_url": "https://api.github.com/users/MHRDYN7/followers",
"following_url": "https://api.github.com/users/MHRDYN7/following{/other_user}",
"gists_url": "https://api.github.com/users/MHRDYN7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MHRDYN7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MHRDYN7/subscriptions",
"organizations_url": "https://api.github.com/users/MHRDYN7/orgs",
"repos_url": "https://api.github.com/users/MHRDYN7/repos",
"events_url": "https://api.github.com/users/MHRDYN7/events{/privacy}",
"received_events_url": "https://api.github.com/users/MHRDYN7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T13:00:10 | 2025-08-06T07:24:35 | 2025-08-04T15:58:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39663",
"html_url": "https://github.com/huggingface/transformers/pull/39663",
"diff_url": "https://github.com/huggingface/transformers/pull/39663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39663.patch",
"merged_at": null
} | Some small fixes:
1. If there are no imports from the modeling files, an exception is now raised: https://github.com/huggingface/transformers/blob/ff76b6a07fb0d16ee9b31b5f1cd7ddb5f02b6774/utils/create_dependency_mapping.py#L47-L58 Without this, users currently see only the assert message, which is not helpful: https://github.com/huggingface/transformers/blob/f90de364c2484c7c325bbe05befdcf487bd75b63/utils/modular_model_converter.py#L1774
2. The CLI command for converting the modular files should take the name of the file, not the path, as per the code in modular_model_converter.py.
Pinging @Cyrilvallez, for being the last editor of these parts of code. | {
"login": "MHRDYN7",
"id": 113298714,
"node_id": "U_kgDOBsDNGg",
"avatar_url": "https://avatars.githubusercontent.com/u/113298714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MHRDYN7",
"html_url": "https://github.com/MHRDYN7",
"followers_url": "https://api.github.com/users/MHRDYN7/followers",
"following_url": "https://api.github.com/users/MHRDYN7/following{/other_user}",
"gists_url": "https://api.github.com/users/MHRDYN7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MHRDYN7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MHRDYN7/subscriptions",
"organizations_url": "https://api.github.com/users/MHRDYN7/orgs",
"repos_url": "https://api.github.com/users/MHRDYN7/repos",
"events_url": "https://api.github.com/users/MHRDYN7/events{/privacy}",
"received_events_url": "https://api.github.com/users/MHRDYN7/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39663/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39662/comments | https://api.github.com/repos/huggingface/transformers/issues/39662/events | https://github.com/huggingface/transformers/pull/39662 | 3,263,026,117 | PR_kwDOCUB6oc6gnwxe | 39,662 | [CI] revert device in `test_export_static_cache` | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T12:16:52 | 2025-07-25T15:36:12 | 2025-07-25T15:36:12 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39662",
"html_url": "https://github.com/huggingface/transformers/pull/39662",
"diff_url": "https://github.com/huggingface/transformers/pull/39662.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39662.patch",
"merged_at": "2025-07-25T15:36:12"
} | # What does this PR do?
#38976 makes our torch export integration compatible with models on GPU.
However, there is possibly something wrong with it: small models cause GPU OOM in our related tests. This PR reverts the device while we [I need help from the export folks] are exploring the issue, to prevent other regressions. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39662/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39661/comments | https://api.github.com/repos/huggingface/transformers/issues/39661/events | https://github.com/huggingface/transformers/pull/39661 | 3,262,876,752 | PR_kwDOCUB6oc6gnQWL | 39,661 | make fixup | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T11:14:55 | 2025-07-25T11:29:52 | 2025-07-25T11:27:45 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39661",
"html_url": "https://github.com/huggingface/transformers/pull/39661",
"diff_url": "https://github.com/huggingface/transformers/pull/39661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39661.patch",
"merged_at": "2025-07-25T11:27:45"
} | # What does this PR do?
(see title) | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39661/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39660/comments | https://api.github.com/repos/huggingface/transformers/issues/39660/events | https://github.com/huggingface/transformers/pull/39660 | 3,262,853,918 | PR_kwDOCUB6oc6gnLno | 39,660 | [docs] Ko doc fixes after toc update | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 8103865784,
"node_id": "LA_kwDOCUB6oc8AAAAB4wctuA",
"url": "https://api.github.com/repos/huggingface/transformers/labels/for%20patch",
"name": "for patch",
"color": "D93F0B",
"default": false,
"description": "Tag issues / labels that should be included in the next patch"
}
] | closed | false | null | [] | null | [] | 2025-07-25T11:06:06 | 2025-07-29T16:06:03 | 2025-07-29T16:05:26 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39660",
"html_url": "https://github.com/huggingface/transformers/pull/39660",
"diff_url": "https://github.com/huggingface/transformers/pull/39660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39660.patch",
"merged_at": "2025-07-29T16:05:26"
} | # What does this PR do?
#39516 updates the `ko` table of contents to match the updated `en` docs.
However, a few problems resulted from the update, which causes `doc-builder` [failures](https://github.com/huggingface/transformers/actions/runs/16518240279/job/46713844556):
1. `.md` files that are not used anywhere
2. references to files that don't exist
This PR:
1. Solves the issues described above. Regarding the unused `.md` files, most of them are not part of the updated `en` docs, so they were removed. The following command now completes: `doc-builder build transformers docs/source/ko/ --language ko --clean`
2. Undoes the temporary `doc-builder` change added in #39500 | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39660/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39659/comments | https://api.github.com/repos/huggingface/transformers/issues/39659/events | https://github.com/huggingface/transformers/pull/39659 | 3,262,543,668 | PR_kwDOCUB6oc6gmJ48 | 39,659 | fix(trainer): Correct loss scaling for incomplete gradient accumulation steps | {
"login": "hutaiHang",
"id": 77798564,
"node_id": "MDQ6VXNlcjc3Nzk4NTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/77798564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hutaiHang",
"html_url": "https://github.com/hutaiHang",
"followers_url": "https://api.github.com/users/hutaiHang/followers",
"following_url": "https://api.github.com/users/hutaiHang/following{/other_user}",
"gists_url": "https://api.github.com/users/hutaiHang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hutaiHang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hutaiHang/subscriptions",
"organizations_url": "https://api.github.com/users/hutaiHang/orgs",
"repos_url": "https://api.github.com/users/hutaiHang/repos",
"events_url": "https://api.github.com/users/hutaiHang/events{/privacy}",
"received_events_url": "https://api.github.com/users/hutaiHang/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T09:11:15 | 2025-07-31T09:14:40 | 2025-07-29T15:12:31 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39659",
"html_url": "https://github.com/huggingface/transformers/pull/39659",
"diff_url": "https://github.com/huggingface/transformers/pull/39659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39659.patch",
"merged_at": "2025-07-29T15:12:31"
} | # What does this PR do?
This PR addresses an issue where the loss scaling during gradient accumulation is incorrect for the final optimizer step of an epoch if the total number of batches is not perfectly divisible by `gradient_accumulation_steps`.
Currently, the loss for each micro-batch is always divided by the configured `args.gradient_accumulation_steps`. This leads to the accumulated loss for the final, incomplete cycle being scaled down too much, resulting in an improperly small gradient update for that step.
This fix resolves the issue by dynamically tracking the number of micro-batches processed in each accumulation cycle and using this actual count for loss scaling.
**The changes are as follows:**
1. In the `_inner_training_loop`, a new instance variable `self.cur_gradient_accumulation_steps` is introduced. It is updated at the start of each optimizer step with the actual number of batches being processed (i.e., `len(batch_samples)`).
2. In the `training_step` method, the loss scaling logic now uses this dynamic `self.cur_gradient_accumulation_steps` value instead of the fixed `self.args.gradient_accumulation_steps`.
This ensures that the loss is correctly averaged over the number of batches that actually contributed to the gradient accumulation, regardless of whether the cycle was complete or not. This change has no new dependencies.
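A minimal standalone sketch of the scaling problem and the fix described above (all names here are illustrative, not the actual Trainer internals): dividing every micro-batch loss by the configured accumulation steps under-weights the final, incomplete cycle, whereas dividing by the actual cycle length does not.

```python
def scaled_losses(batch_losses, grad_accum_steps):
    """Scale per-micro-batch losses by the *actual* number of micro-batches
    in each accumulation cycle, not the configured maximum."""
    scaled = []
    for start in range(0, len(batch_losses), grad_accum_steps):
        cycle = batch_losses[start:start + grad_accum_steps]
        # On the last cycle this may be smaller than grad_accum_steps,
        # mirroring the dynamic `self.cur_gradient_accumulation_steps`.
        cur_steps = len(cycle)
        scaled.extend(loss / cur_steps for loss in cycle)
    return scaled

# 5 batches with accumulation of 2 -> cycles of sizes [2, 2, 1].
# The fixed division by 2 would give the last batch a weight of 0.5
# instead of the correct 1.0 for its one-batch cycle.
print(scaled_losses([1.0, 1.0, 1.0, 1.0, 1.0], 2))
```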
Fixes #38837
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. (This PR is for issue #38837)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- trainer: @SunMarc @zach-huggingface @qgallouedec
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39659/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39658/comments | https://api.github.com/repos/huggingface/transformers/issues/39658/events | https://github.com/huggingface/transformers/pull/39658 | 3,262,483,780 | PR_kwDOCUB6oc6gl9PZ | 39,658 | fix break for ckpt without _tp_plan | {
"login": "MoyanZitto",
"id": 10725096,
"node_id": "MDQ6VXNlcjEwNzI1MDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/10725096?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoyanZitto",
"html_url": "https://github.com/MoyanZitto",
"followers_url": "https://api.github.com/users/MoyanZitto/followers",
"following_url": "https://api.github.com/users/MoyanZitto/following{/other_user}",
"gists_url": "https://api.github.com/users/MoyanZitto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoyanZitto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoyanZitto/subscriptions",
"organizations_url": "https://api.github.com/users/MoyanZitto/orgs",
"repos_url": "https://api.github.com/users/MoyanZitto/repos",
"events_url": "https://api.github.com/users/MoyanZitto/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoyanZitto/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T08:50:06 | 2025-07-25T18:03:48 | 2025-07-25T18:03:48 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39658",
"html_url": "https://github.com/huggingface/transformers/pull/39658",
"diff_url": "https://github.com/huggingface/transformers/pull/39658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39658.patch",
"merged_at": "2025-07-25T18:03:48"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Models without a `_tp_plan` attribute, or with `_tp_plan` set to `None`, cause a `TypeError` during `caching_allocator_warmup`, which breaks loading my checkpoint in transformers 4.52.x and later.
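A minimal sketch of the kind of guard this describes (names are illustrative; the actual fix lives in `caching_allocator_warmup` inside transformers):

```python
# Hypothetical sketch of the guard (names illustrative; the real fix is in
# transformers' caching_allocator_warmup): tolerate models whose _tp_plan
# attribute is missing or explicitly None.
def get_tp_plan(model):
    # getattr with a default avoids an AttributeError for older checkpoints;
    # "or {}" normalizes an explicit None to an empty plan.
    return getattr(model, "_tp_plan", None) or {}

class LegacyModel:        # checkpoint saved before _tp_plan existed
    pass

class UnplannedModel:     # _tp_plan present but None
    _tp_plan = None

class PlannedModel:
    _tp_plan = {"layers.0.attn": "colwise"}

print(get_tp_plan(LegacyModel()))     # {}
print(get_tp_plan(UnplannedModel()))  # {}
print(get_tp_plan(PlannedModel()))    # {'layers.0.attn': 'colwise'}
```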
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Cyrilvallez
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39658/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39657/comments | https://api.github.com/repos/huggingface/transformers/issues/39657/events | https://github.com/huggingface/transformers/pull/39657 | 3,262,390,814 | PR_kwDOCUB6oc6glpa5 | 39,657 | update ernie model card | {
"login": "jzhang533",
"id": 29231,
"node_id": "MDQ6VXNlcjI5MjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/29231?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jzhang533",
"html_url": "https://github.com/jzhang533",
"followers_url": "https://api.github.com/users/jzhang533/followers",
"following_url": "https://api.github.com/users/jzhang533/following{/other_user}",
"gists_url": "https://api.github.com/users/jzhang533/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jzhang533/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jzhang533/subscriptions",
"organizations_url": "https://api.github.com/users/jzhang533/orgs",
"repos_url": "https://api.github.com/users/jzhang533/repos",
"events_url": "https://api.github.com/users/jzhang533/events{/privacy}",
"received_events_url": "https://api.github.com/users/jzhang533/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T08:13:10 | 2025-07-31T03:19:59 | 2025-07-28T10:21:18 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39657",
"html_url": "https://github.com/huggingface/transformers/pull/39657",
"diff_url": "https://github.com/huggingface/transformers/pull/39657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39657.patch",
"merged_at": "2025-07-28T10:21:18"
} | # What does this PR do?
Replaced the previous model card for ERNIE with the standardized model card introduced in issue https://github.com/huggingface/transformers/issues/36979
Links to the recently introduced ERNIE 4.5 and ERNIE 4.5 MoE models are also added in this model card.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@stevhliu
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39657/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39656/comments | https://api.github.com/repos/huggingface/transformers/issues/39656/events | https://github.com/huggingface/transformers/issues/39656 | 3,262,269,281 | I_kwDOCUB6oc7Cckdh | 39,656 | T5Gemma training not working | {
"login": "NilsHellwig",
"id": 44339207,
"node_id": "MDQ6VXNlcjQ0MzM5MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/44339207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NilsHellwig",
"html_url": "https://github.com/NilsHellwig",
"followers_url": "https://api.github.com/users/NilsHellwig/followers",
"following_url": "https://api.github.com/users/NilsHellwig/following{/other_user}",
"gists_url": "https://api.github.com/users/NilsHellwig/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NilsHellwig/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NilsHellwig/subscriptions",
"organizations_url": "https://api.github.com/users/NilsHellwig/orgs",
"repos_url": "https://api.github.com/users/NilsHellwig/repos",
"events_url": "https://api.github.com/users/NilsHellwig/events{/privacy}",
"received_events_url": "https://api.github.com/users/NilsHellwig/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-25T07:24:58 | 2025-09-01T08:03:15 | 2025-09-01T08:03:15 | NONE | null | null | null | null | ### System Info
- `transformers` version: 4.53.3
- Platform: Linux-6.11.0-29-generic-x86_64-with-glibc2.39
- Python version: 3.11.13
- Huggingface_hub version: 0.33.5
- Safetensors version: 0.5.3
- Accelerate version: 1.9.0
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (accelerator?): 2.7.1+cu126 (CUDA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. run the code:
```
#!/usr/bin/env python3
"""
T5Gemma Fine-tuning Example for Classification Task
Uses IMDB sentiment classification dataset
"""
from datasets import load_dataset, concatenate_datasets
import numpy as np
import torch
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (
AutoTokenizer,
AutoModelForSeq2SeqLM,
Seq2SeqTrainer,
Seq2SeqTrainingArguments,
EvalPrediction
)
# Device detection
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
# Load model and tokenizer
model_name = "google/t5gemma-b-b-ul2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(
model_name,
attn_implementation="eager",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Load IMDB dataset for sentiment classification
print("Loading IMDB dataset...")
dataset = load_dataset("imdb")
# Use smaller subset for faster training
# Create balanced subsets with equal positive and negative samples
train_pos = dataset["train"].filter(lambda x: x["label"] == 1).select(range(500))
train_neg = dataset["train"].filter(lambda x: x["label"] == 0).select(range(500))
train_subset = concatenate_datasets([train_pos, train_neg])
test_pos = dataset["test"].filter(lambda x: x["label"] == 1).select(range(50))
test_neg = dataset["test"].filter(lambda x: x["label"] == 0).select(range(50))
test_subset = concatenate_datasets([test_pos, test_neg])
def preprocess(example):
"""Preprocessing for sentiment classification"""
# Format input text for T5-style classification
input_text = f"classify sentiment: {example['text'][:512]}" # Truncate long texts
target_text = "positive" if example["label"] == 1 else "negative"
target_text = target_text + tokenizer.eos_token
# Tokenize
model_inputs = tokenizer(input_text, max_length=512, truncation=True, padding="max_length")
labels = tokenizer(target_text, max_length=10, truncation=True, padding="max_length")
# Prepare labels for loss calculation
labels_ids = [label if label != tokenizer.pad_token_id else -100 for label in labels["input_ids"]]
model_inputs["labels"] = labels_ids
return model_inputs
def compute_metrics(eval_pred: EvalPrediction):
"""Compute accuracy and F1 score"""
predictions, labels = eval_pred
# Decode predictions and labels
pred_str = tokenizer.batch_decode(predictions, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
label_str = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Convert sentiment strings to binary labels
pred_labels = [1 if p.strip().lower() == "positive" else 0 for p in pred_str]
true_labels = [1 if l.strip().lower() == "positive" else 0 for l in label_str]
print(pred_labels)
# Calculate metrics
accuracy = accuracy_score(true_labels, pred_labels)
precision, recall, f1, _ = precision_recall_fscore_support(true_labels, pred_labels, average='binary')
return {
"accuracy": accuracy,
"f1": f1,
"precision": precision,
"recall": recall
}
# Tokenize datasets
print("Tokenizing datasets...")
train_tokenized = train_subset.map(preprocess, remove_columns=train_subset.column_names)
test_tokenized = test_subset.map(preprocess, remove_columns=test_subset.column_names)
# Training arguments
training_args = Seq2SeqTrainingArguments(
output_dir="./t5gemma-imdb-finetuned",
eval_strategy="steps",
eval_steps=500,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
num_train_epochs=1,
save_strategy="steps",
save_steps=500,
save_total_limit=2,
load_best_model_at_end=True,
metric_for_best_model="accuracy",
predict_with_generate=True,
bf16=True,
remove_unused_columns=False, # Important for T5Gemma!
logging_steps=100,
warmup_steps=100,
learning_rate=5e-5,
)
# Initialize trainer
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=train_tokenized,
eval_dataset=test_tokenized,
compute_metrics=compute_metrics,
)
if __name__ == "__main__":
print("Starting training...")
trainer.train()
print("Evaluating on test set...")
test_results = trainer.evaluate(test_tokenized)
print(f"Test Accuracy: {test_results['eval_accuracy']:.4f}")
print(f"Test F1: {test_results['eval_f1']:.4f}")
# Save model
trainer.save_model("./t5gemma-imdb-final")
print("Model saved!")
# Example inference
print("\nExample inference:")
input_text = "classify sentiment: This movie was absolutely fantastic! Great acting and storyline."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10)
prediction = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"Input: {input_text}")
print(f"Prediction: {prediction}")
```
### Expected behavior
The T5Gemma model should learn the task. Instead, predictions are empty regardless of hyperparameters; a T5-base model trained with the same script works fine. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39656/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39655/comments | https://api.github.com/repos/huggingface/transformers/issues/39655/events | https://github.com/huggingface/transformers/pull/39655 | 3,262,025,295 | PR_kwDOCUB6oc6gkavm | 39,655 | fix: typo in transcription-related method names across Voxtral model … | {
"login": "blakkd",
"id": 172533995,
"node_id": "U_kgDOCkio6w",
"avatar_url": "https://avatars.githubusercontent.com/u/172533995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blakkd",
"html_url": "https://github.com/blakkd",
"followers_url": "https://api.github.com/users/blakkd/followers",
"following_url": "https://api.github.com/users/blakkd/following{/other_user}",
"gists_url": "https://api.github.com/users/blakkd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blakkd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blakkd/subscriptions",
"organizations_url": "https://api.github.com/users/blakkd/orgs",
"repos_url": "https://api.github.com/users/blakkd/repos",
"events_url": "https://api.github.com/users/blakkd/events{/privacy}",
"received_events_url": "https://api.github.com/users/blakkd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T05:24:20 | 2025-07-25T05:30:14 | 2025-07-25T05:30:14 | NONE | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39655",
"html_url": "https://github.com/huggingface/transformers/pull/39655",
"diff_url": "https://github.com/huggingface/transformers/pull/39655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39655.patch",
"merged_at": null
} | Fixes the misspelled `apply_transcrition_request` to `apply_transcription_request` in:
- Documentation (`voxtral.md`)
- Processor implementation (`processing_voxtral.py`)
- Test files (`test_modeling_voxtral.py`) | {
"login": "blakkd",
"id": 172533995,
"node_id": "U_kgDOCkio6w",
"avatar_url": "https://avatars.githubusercontent.com/u/172533995?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blakkd",
"html_url": "https://github.com/blakkd",
"followers_url": "https://api.github.com/users/blakkd/followers",
"following_url": "https://api.github.com/users/blakkd/following{/other_user}",
"gists_url": "https://api.github.com/users/blakkd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blakkd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blakkd/subscriptions",
"organizations_url": "https://api.github.com/users/blakkd/orgs",
"repos_url": "https://api.github.com/users/blakkd/repos",
"events_url": "https://api.github.com/users/blakkd/events{/privacy}",
"received_events_url": "https://api.github.com/users/blakkd/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39655/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39654/comments | https://api.github.com/repos/huggingface/transformers/issues/39654/events | https://github.com/huggingface/transformers/pull/39654 | 3,261,848,798 | PR_kwDOCUB6oc6gj1yh | 39,654 | Enable xpu allocator on caching_allocator_warmup | {
"login": "jiqing-feng",
"id": 107918818,
"node_id": "U_kgDOBm614g",
"avatar_url": "https://avatars.githubusercontent.com/u/107918818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiqing-feng",
"html_url": "https://github.com/jiqing-feng",
"followers_url": "https://api.github.com/users/jiqing-feng/followers",
"following_url": "https://api.github.com/users/jiqing-feng/following{/other_user}",
"gists_url": "https://api.github.com/users/jiqing-feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiqing-feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiqing-feng/subscriptions",
"organizations_url": "https://api.github.com/users/jiqing-feng/orgs",
"repos_url": "https://api.github.com/users/jiqing-feng/repos",
"events_url": "https://api.github.com/users/jiqing-feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiqing-feng/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T03:13:03 | 2025-07-29T14:06:53 | 2025-07-29T14:06:52 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39654",
"html_url": "https://github.com/huggingface/transformers/pull/39654",
"diff_url": "https://github.com/huggingface/transformers/pull/39654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39654.patch",
"merged_at": "2025-07-29T14:06:52"
} | Fix #39627
| {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39654/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39653/comments | https://api.github.com/repos/huggingface/transformers/issues/39653/events | https://github.com/huggingface/transformers/pull/39653 | 3,261,802,758 | PR_kwDOCUB6oc6gjsKZ | 39,653 | revert change to cu_seqlen_k and max_k when preparing from position_ids | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T02:44:09 | 2025-07-25T08:28:23 | 2025-07-25T08:28:23 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39653",
"html_url": "https://github.com/huggingface/transformers/pull/39653",
"diff_url": "https://github.com/huggingface/transformers/pull/39653.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39653.patch",
"merged_at": "2025-07-25T08:28:23"
} | # What does this PR do?
Follow-up to PR [#39622](https://github.com/huggingface/transformers/pull/39622): the new lines from #39474 cause the following error during multi-GPU training:
```
[rank2]: File "/root/miniconda3/envs/py3.11/lib/python3.11/site-packages/flash_attn/flash_attn_interface.py", line 165, in _flash_attn_varlen_forward
[rank2]: out, softmax_lse, S_dmask, rng_state = flash_attn_gpu.varlen_fwd(
[rank2]: ^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank2]: RuntimeError: cu_seqlens_k must have shape (batch_size + 1)
```
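For context, flash-attention's varlen kernels expect cumulative sequence lengths with one entry per packed sequence plus a leading zero, i.e. shape `(batch_size + 1,)`. A hedged sketch (illustrative, not the library or transformers code) of deriving `cu_seqlens` from packed `position_ids`, where a position resetting to 0 marks a new sequence:

```python
# Illustrative only: build cu_seqlens from packed position_ids. Each
# position that resets to 0 starts a new packed sequence; the result is
# the sequence start offsets plus the total length.
def cu_seqlens_from_position_ids(position_ids):
    starts = [i for i, p in enumerate(position_ids) if p == 0]
    return starts + [len(position_ids)]

# Two packed sequences of lengths 3 and 2 -> len(cu_seqlens) == num_seqs + 1
cu = cu_seqlens_from_position_ids([0, 1, 2, 0, 1])
print(cu)  # [0, 3, 5]
assert len(cu) == 2 + 1  # the shape invariant flash_attn enforces
```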
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39653/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39653/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39652/comments | https://api.github.com/repos/huggingface/transformers/issues/39652/events | https://github.com/huggingface/transformers/pull/39652 | 3,261,608,343 | PR_kwDOCUB6oc6gjCzi | 39,652 | extend more trainer test cases to XPU, all pass | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-25T00:49:00 | 2025-07-29T16:30:26 | 2025-07-29T08:51:00 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39652",
"html_url": "https://github.com/huggingface/transformers/pull/39652",
"diff_url": "https://github.com/huggingface/transformers/pull/39652.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39652.patch",
"merged_at": "2025-07-29T08:51:00"
} | @ydshieh , pls help review, thx. | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39652/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39652/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39651/comments | https://api.github.com/repos/huggingface/transformers/issues/39651/events | https://github.com/huggingface/transformers/pull/39651 | 3,261,522,831 | PR_kwDOCUB6oc6gixZM | 39,651 | Add self-hosted runner scale set workflow for mi325 CI | {
"login": "jitesh-gupta",
"id": 202713221,
"node_id": "U_kgDODBUohQ",
"avatar_url": "https://avatars.githubusercontent.com/u/202713221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jitesh-gupta",
"html_url": "https://github.com/jitesh-gupta",
"followers_url": "https://api.github.com/users/jitesh-gupta/followers",
"following_url": "https://api.github.com/users/jitesh-gupta/following{/other_user}",
"gists_url": "https://api.github.com/users/jitesh-gupta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jitesh-gupta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jitesh-gupta/subscriptions",
"organizations_url": "https://api.github.com/users/jitesh-gupta/orgs",
"repos_url": "https://api.github.com/users/jitesh-gupta/repos",
"events_url": "https://api.github.com/users/jitesh-gupta/events{/privacy}",
"received_events_url": "https://api.github.com/users/jitesh-gupta/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T23:35:02 | 2025-07-28T11:35:49 | 2025-07-28T11:32:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39651",
"html_url": "https://github.com/huggingface/transformers/pull/39651",
"diff_url": "https://github.com/huggingface/transformers/pull/39651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39651.patch",
"merged_at": "2025-07-28T11:32:26"
} | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ivarflakstad",
"id": 69173633,
"node_id": "MDQ6VXNlcjY5MTczNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/69173633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivarflakstad",
"html_url": "https://github.com/ivarflakstad",
"followers_url": "https://api.github.com/users/ivarflakstad/followers",
"following_url": "https://api.github.com/users/ivarflakstad/following{/other_user}",
"gists_url": "https://api.github.com/users/ivarflakstad/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivarflakstad/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivarflakstad/subscriptions",
"organizations_url": "https://api.github.com/users/ivarflakstad/orgs",
"repos_url": "https://api.github.com/users/ivarflakstad/repos",
"events_url": "https://api.github.com/users/ivarflakstad/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivarflakstad/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39651/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39650/comments | https://api.github.com/repos/huggingface/transformers/issues/39650/events | https://github.com/huggingface/transformers/issues/39650 | 3,261,516,378 | I_kwDOCUB6oc7CZspa | 39,650 | Inference API Returning 404 | {
"login": "FoundationINCCorporateTeam",
"id": 130522379,
"node_id": "U_kgDOB8edCw",
"avatar_url": "https://avatars.githubusercontent.com/u/130522379?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FoundationINCCorporateTeam",
"html_url": "https://github.com/FoundationINCCorporateTeam",
"followers_url": "https://api.github.com/users/FoundationINCCorporateTeam/followers",
"following_url": "https://api.github.com/users/FoundationINCCorporateTeam/following{/other_user}",
"gists_url": "https://api.github.com/users/FoundationINCCorporateTeam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FoundationINCCorporateTeam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FoundationINCCorporateTeam/subscriptions",
"organizations_url": "https://api.github.com/users/FoundationINCCorporateTeam/orgs",
"repos_url": "https://api.github.com/users/FoundationINCCorporateTeam/repos",
"events_url": "https://api.github.com/users/FoundationINCCorporateTeam/events{/privacy}",
"received_events_url": "https://api.github.com/users/FoundationINCCorporateTeam/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-24T23:29:31 | 2025-07-25T17:03:15 | 2025-07-25T17:03:15 | NONE | null | null | null | null | ### System Info
So I am using the Hugging Face Inference API, and the model won't work on the Inference API but works in the Hugging Face model playground: `huggingface_hub.errors.HfHubHTTPError: 404 Client Error: Not Found for url: https://router.huggingface.co/hf-inference/models/HuggingFaceTB/SmolLM3-3B`. What should I do?
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce, use the Hugging Face Inference API with `HuggingFaceTB/SmolLM3-3B`.
### Expected behavior
The expected behavior is to get a response to the request. When a parameter in the request is wrong, the API returns a correct error message for that parameter, but when everything is correct it returns a 404. | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39650/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39649/comments | https://api.github.com/repos/huggingface/transformers/issues/39649/events | https://github.com/huggingface/transformers/pull/39649 | 3,261,490,860 | PR_kwDOCUB6oc6giqwW | 39,649 | 🌐 [i18n-KO] Translated `deepseek_v3.md` to Korean | {
"login": "ssum21",
"id": 116950962,
"node_id": "U_kgDOBviHsg",
"avatar_url": "https://avatars.githubusercontent.com/u/116950962?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ssum21",
"html_url": "https://github.com/ssum21",
"followers_url": "https://api.github.com/users/ssum21/followers",
"following_url": "https://api.github.com/users/ssum21/following{/other_user}",
"gists_url": "https://api.github.com/users/ssum21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ssum21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssum21/subscriptions",
"organizations_url": "https://api.github.com/users/ssum21/orgs",
"repos_url": "https://api.github.com/users/ssum21/repos",
"events_url": "https://api.github.com/users/ssum21/events{/privacy}",
"received_events_url": "https://api.github.com/users/ssum21/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T23:11:00 | 2025-09-02T20:35:57 | 2025-09-02T20:35:56 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39649",
"html_url": "https://github.com/huggingface/transformers/pull/39649",
"diff_url": "https://github.com/huggingface/transformers/pull/39649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39649.patch",
"merged_at": "2025-09-02T20:35:56"
} | # What does this PR do?
Translated the `deepseek_v3.md` file of the documentation to Korean.
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before reviewing
- [x] Check for missing / redundant translations (번역 누락/중복 검사)
- [x] Grammar Check (맞춤법 검사)
- [x] Review or Add new terms to glossary (용어 확인 및 추가)
- [x] Check Inline TOC (e.g. `[[lowercased-header]]`)
- [x] Check live-preview for gotchas (live-preview로 정상작동 확인)
## Who can review? (Initial)
<!-- 1. Only expose the comment below, which requests a review from the KREW team members, after all the checks above are complete! -->
May you please review this PR?
@4N3MONE, @yijun-lee, @jungnerd, @harheem
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review? (Final)
<!-- 2. Expose the comment below only after the KREW team members' reviews are complete! -->
@stevhliu May you please review this PR? | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39649/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39648/comments | https://api.github.com/repos/huggingface/transformers/issues/39648/events | https://github.com/huggingface/transformers/issues/39648 | 3,261,013,935 | I_kwDOCUB6oc7CXx-v | 39,648 | Use DP+FSDP device mesh dimensions for scaling loss with default value of average_tokens_across_devices: True | {
"login": "winglian",
"id": 381258,
"node_id": "MDQ6VXNlcjM4MTI1OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/381258?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/winglian",
"html_url": "https://github.com/winglian",
"followers_url": "https://api.github.com/users/winglian/followers",
"following_url": "https://api.github.com/users/winglian/following{/other_user}",
"gists_url": "https://api.github.com/users/winglian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/winglian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/winglian/subscriptions",
"organizations_url": "https://api.github.com/users/winglian/orgs",
"repos_url": "https://api.github.com/users/winglian/repos",
"events_url": "https://api.github.com/users/winglian/events{/privacy}",
"received_events_url": "https://api.github.com/users/winglian/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | null | [] | 2025-07-24T19:32:31 | 2025-09-28T08:03:04 | 2025-09-28T08:03:04 | CONTRIBUTOR | null | null | null | null | ### System Info
So with [this PR changing the default for how the train loss is scaled](https://github.com/huggingface/transformers/pull/39395) and with [accelerate now supporting composable parallelisms](https://github.com/huggingface/accelerate/pull/3682), we need to update this token scaling so that it only scales across the data-parallel dimension groups/meshes and ignores any TP or CP meshes.
https://github.com/huggingface/transformers/blob/ad6fd2da0e09233f757bdb0bab0b1ee2b931f33c/src/transformers/trainer.py#L3900-L3905
and
https://github.com/huggingface/transformers/blob/ad6fd2da0e09233f757bdb0bab0b1ee2b931f33c/src/transformers/trainer.py#L5362-L5371
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
In axolotl, remove the hardcoded `average_tokens_across_devices` setting so the default of `True` is used, set either `context_parallel_size` or `tensor_parallel_size`, and the training loss will be scaled up by the product of those values.
### Expected behavior
train loss should be similar to the values when using pure DDP. | {
"login": "github-actions[bot]",
"id": 41898282,
"node_id": "MDM6Qm90NDE4OTgyODI=",
"avatar_url": "https://avatars.githubusercontent.com/in/15368?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github-actions%5Bbot%5D",
"html_url": "https://github.com/apps/github-actions",
"followers_url": "https://api.github.com/users/github-actions%5Bbot%5D/followers",
"following_url": "https://api.github.com/users/github-actions%5Bbot%5D/following{/other_user}",
"gists_url": "https://api.github.com/users/github-actions%5Bbot%5D/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github-actions%5Bbot%5D/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github-actions%5Bbot%5D/subscriptions",
"organizations_url": "https://api.github.com/users/github-actions%5Bbot%5D/orgs",
"repos_url": "https://api.github.com/users/github-actions%5Bbot%5D/repos",
"events_url": "https://api.github.com/users/github-actions%5Bbot%5D/events{/privacy}",
"received_events_url": "https://api.github.com/users/github-actions%5Bbot%5D/received_events",
"type": "Bot",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39648/timeline | null | completed | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | true |
https://api.github.com/repos/huggingface/transformers/issues/39647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39647/comments | https://api.github.com/repos/huggingface/transformers/issues/39647/events | https://github.com/huggingface/transformers/issues/39647 | 3,260,591,024 | I_kwDOCUB6oc7CWKuw | 39,647 | Please develop DataCollatorForVisionLanguageModeling to support visual model training !!! | {
"login": "ShelterWFF",
"id": 115854494,
"node_id": "U_kgDOBufMng",
"avatar_url": "https://avatars.githubusercontent.com/u/115854494?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ShelterWFF",
"html_url": "https://github.com/ShelterWFF",
"followers_url": "https://api.github.com/users/ShelterWFF/followers",
"following_url": "https://api.github.com/users/ShelterWFF/following{/other_user}",
"gists_url": "https://api.github.com/users/ShelterWFF/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ShelterWFF/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ShelterWFF/subscriptions",
"organizations_url": "https://api.github.com/users/ShelterWFF/orgs",
"repos_url": "https://api.github.com/users/ShelterWFF/repos",
"events_url": "https://api.github.com/users/ShelterWFF/events{/privacy}",
"received_events_url": "https://api.github.com/users/ShelterWFF/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | null | [] | 2025-07-24T17:03:51 | 2025-10-04T20:00:40 | null | NONE | null | null | null | null | ### Feature request
Please develop DataCollatorForVisionLanguageModeling to support visual model training !!!
### Motivation
Please develop DataCollatorForVisionLanguageModeling to support visual model training !!!
### Your contribution
- A simple example:
```python
from dataclasses import dataclass
from typing import Any, Dict, List

import torch
# `process_vision_info` is the Qwen-VL preprocessing helper used below
from qwen_vl_utils import process_vision_info


@dataclass
class DataCollatorForVisionLanguageModeling:
    processor: Any
    tokenizer: Any
    train_on_responses_only: bool = True

    def __call__(self, examples: List[Dict[str, Any]]) -> Dict[str, torch.Tensor]:
        input_ids_list, attention_mask_list, labels_list = [], [], []
        pixel_values_list, image_grid_thw_list = [], []
        for example in examples:
            if "messages" in example:
                messages = example["messages"]
            elif "conversations" in example:
                messages = example["conversations"]
            else:
                messages = example
            text = self.processor.apply_chat_template(
                messages[:1], tokenize=False, add_generation_prompt=True
            )
            image_inputs, video_inputs = process_vision_info(messages[:1])
            inputs = self.processor(
                text=text,
                images=image_inputs,
                videos=video_inputs,
                padding=True,
                return_tensors="pt",
            )
            inputs = {key: value.tolist() for key, value in inputs.items()}
            prompt_ids = inputs["input_ids"][0]
            response_ids = self.tokenizer(messages[1]["content"], add_special_tokens=False)["input_ids"]
            input_ids = prompt_ids + response_ids + [self.tokenizer.eos_token_id]
            if self.train_on_responses_only:
                # mask the prompt so the loss is computed on the response only
                label_ids = [-100] * len(prompt_ids) + response_ids + [self.tokenizer.eos_token_id]
            else:
                label_ids = input_ids
            attention_mask = len(label_ids) * [1]
            assert len(input_ids) == len(label_ids)
            input_ids_list.append(input_ids)
            attention_mask_list.append(attention_mask)
            labels_list.append(label_ids)
            pixel_values_list.append(inputs["pixel_values"])
            image_grid_thw_list.append(torch.tensor(inputs["image_grid_thw"]).squeeze(0).tolist())
        # note: the per-example sequences must be padded to a common length
        # before torch.tensor() can stack them into batch tensors
        return {
            "input_ids": torch.tensor(input_ids_list),
            "attention_mask": torch.tensor(attention_mask_list),
            "labels": torch.tensor(labels_list),
            "pixel_values": torch.tensor(pixel_values_list),
            "image_grid_thw": torch.tensor(image_grid_thw_list),
        }
``` | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39647/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39647/timeline | null | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | {
"blocked_by": 0,
"total_blocked_by": 0,
"blocking": 0,
"total_blocking": 0
} | false | false |
https://api.github.com/repos/huggingface/transformers/issues/39646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39646/comments | https://api.github.com/repos/huggingface/transformers/issues/39646/events | https://github.com/huggingface/transformers/pull/39646 | 3,260,515,396 | PR_kwDOCUB6oc6gfTdz | 39,646 | fix chameleonvision UT failure | {
"login": "yao-matrix",
"id": 7245027,
"node_id": "MDQ6VXNlcjcyNDUwMjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7245027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yao-matrix",
"html_url": "https://github.com/yao-matrix",
"followers_url": "https://api.github.com/users/yao-matrix/followers",
"following_url": "https://api.github.com/users/yao-matrix/following{/other_user}",
"gists_url": "https://api.github.com/users/yao-matrix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yao-matrix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yao-matrix/subscriptions",
"organizations_url": "https://api.github.com/users/yao-matrix/orgs",
"repos_url": "https://api.github.com/users/yao-matrix/repos",
"events_url": "https://api.github.com/users/yao-matrix/events{/privacy}",
"received_events_url": "https://api.github.com/users/yao-matrix/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T16:37:50 | 2025-07-30T16:15:08 | 2025-07-30T12:09:26 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39646",
"html_url": "https://github.com/huggingface/transformers/pull/39646",
"diff_url": "https://github.com/huggingface/transformers/pull/39646.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39646.patch",
"merged_at": "2025-07-30T12:09:26"
As discussed in https://github.com/huggingface/diffusers/pull/11690, the UT `pytest -rA tests/models/chameleon/test_modeling_chameleon.py::ChameleonVision2SeqModelTest::test_model_parallel_beam_search` fails with 2 cards. The error log is "RuntimeError: Expected all tensors to be on the same device, but found at least two devices" in `src/transformers/models/chameleon/modeling_chameleon.py`. The reason is that even though `residual` starts on the same device as `hidden_states`, after they both pass through several operators (as input and output) they end up on different devices. When they reach `+`, which is not an `nn.Module` (so accelerate cannot attach a pre-hook to it), the error happens.
So, update `no_split_modules` in `ChameleonVQVAE` to avoid it. @SunMarc, please help review, thanks.
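A toy, pure-Python illustration (not accelerate internals) of why the failure surfaces at the bare `+`: accelerate can pre-hook a module's `forward` to move its inputs onto the module's device, but a plain `+` between tensors has no hook point, so a device mismatch is only detected there.

```python
# Toy illustration (plain Python, not accelerate or torch code): module calls
# can be pre-hooked to align devices, but a bare `+` between tensors cannot.
class FakeTensor:
    def __init__(self, device):
        self.device = device

    def __add__(self, other):
        # mimics torch's behavior for cross-device elementwise ops
        if self.device != other.device:
            raise RuntimeError("Expected all tensors to be on the same device")
        return FakeTensor(self.device)

    def to(self, device):
        return FakeTensor(device)


class HookedModule:
    """Stands in for an nn.Module wrapped by accelerate's alignment hook."""

    def __init__(self, device):
        self.device = device

    def __call__(self, x):
        # accelerate-style pre-forward hook: move the input to this device
        x = x.to(self.device)
        return x  # output lives on the module's device


block = HookedModule("cuda:1")
residual = FakeTensor("cuda:0")
hidden_states = block(residual)  # hook moves the input; output is on cuda:1

try:
    _ = residual + hidden_states  # bare `+`: no hook, devices now differ
    failed = False
except RuntimeError:
    failed = True
assert failed
```

Keeping the whole residual block in `no_split_modules` sidesteps this, because the device map then never splits the block across devices in the first place.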
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39646/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39645/comments | https://api.github.com/repos/huggingface/transformers/issues/39645/events | https://github.com/huggingface/transformers/pull/39645 | 3,260,501,194 | PR_kwDOCUB6oc6gfQSF | 39,645 | Default processor message | {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T16:32:53 | 2025-07-25T07:22:41 | 2025-07-25T07:11:44 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39645",
"html_url": "https://github.com/huggingface/transformers/pull/39645",
"diff_url": "https://github.com/huggingface/transformers/pull/39645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39645.patch",
"merged_at": null
} | # What does this PR do?
Small PR adding a `.default_message` method to all multimodal processors. Handles just image and text input for now, expandable to videos if there's interest.
Example usage:
<img width="1099" height="481" alt="image" src="https://github.com/user-attachments/assets/96b6197f-795f-4326-b6f3-49e767620215" />
| {
"login": "molbap",
"id": 39954772,
"node_id": "MDQ6VXNlcjM5OTU0Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/39954772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/molbap",
"html_url": "https://github.com/molbap",
"followers_url": "https://api.github.com/users/molbap/followers",
"following_url": "https://api.github.com/users/molbap/following{/other_user}",
"gists_url": "https://api.github.com/users/molbap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/molbap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/molbap/subscriptions",
"organizations_url": "https://api.github.com/users/molbap/orgs",
"repos_url": "https://api.github.com/users/molbap/repos",
"events_url": "https://api.github.com/users/molbap/events{/privacy}",
"received_events_url": "https://api.github.com/users/molbap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39645/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39645/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39644/comments | https://api.github.com/repos/huggingface/transformers/issues/39644/events | https://github.com/huggingface/transformers/pull/39644 | 3,260,500,170 | PR_kwDOCUB6oc6gfQEN | 39,644 | [docs] fix ko cache docs | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T16:32:25 | 2025-07-25T09:06:07 | 2025-07-25T09:06:03 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39644",
"html_url": "https://github.com/huggingface/transformers/pull/39644",
"diff_url": "https://github.com/huggingface/transformers/pull/39644.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39644.patch",
"merged_at": "2025-07-25T09:06:03"
} | # What does this PR do?
Outdated `Cache` references in the Korean docs are breaking our full doc builder run -> https://github.com/huggingface/transformers/actions/runs/16446789118/job/46480867839
This PR copy-pastes `Cache` references from the English docs. | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39644/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39644/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39643/comments | https://api.github.com/repos/huggingface/transformers/issues/39643/events | https://github.com/huggingface/transformers/pull/39643 | 3,260,363,092 | PR_kwDOCUB6oc6geyLm | 39,643 | mllama outputs refactor | {
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T15:43:28 | 2025-07-30T10:10:39 | 2025-07-28T13:59:20 | COLLABORATOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39643",
"html_url": "https://github.com/huggingface/transformers/pull/39643",
"diff_url": "https://github.com/huggingface/transformers/pull/39643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39643.patch",
"merged_at": "2025-07-28T13:59:20"
} | refactor using latest outputs merge | {
"login": "itazap",
"id": 31893021,
"node_id": "MDQ6VXNlcjMxODkzMDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31893021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itazap",
"html_url": "https://github.com/itazap",
"followers_url": "https://api.github.com/users/itazap/followers",
"following_url": "https://api.github.com/users/itazap/following{/other_user}",
"gists_url": "https://api.github.com/users/itazap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itazap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itazap/subscriptions",
"organizations_url": "https://api.github.com/users/itazap/orgs",
"repos_url": "https://api.github.com/users/itazap/repos",
"events_url": "https://api.github.com/users/itazap/events{/privacy}",
"received_events_url": "https://api.github.com/users/itazap/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39643/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39642/comments | https://api.github.com/repos/huggingface/transformers/issues/39642/events | https://github.com/huggingface/transformers/pull/39642 | 3,260,321,265 | PR_kwDOCUB6oc6gepGD | 39,642 | Push ci image | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T15:28:29 | 2025-07-24T16:02:19 | 2025-07-24T16:02:19 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39642",
"html_url": "https://github.com/huggingface/transformers/pull/39642",
"diff_url": "https://github.com/huggingface/transformers/pull/39642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39642.patch",
"merged_at": null
} | # What does this PR do?
DO NOT MERGE | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39642/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39641/comments | https://api.github.com/repos/huggingface/transformers/issues/39641/events | https://github.com/huggingface/transformers/pull/39641 | 3,260,310,924 | PR_kwDOCUB6oc6gemzt | 39,641 | Fix quant docker for fp-quant | {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T15:24:58 | 2025-08-04T11:57:10 | 2025-08-04T11:57:09 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39641",
"html_url": "https://github.com/huggingface/transformers/pull/39641",
"diff_url": "https://github.com/huggingface/transformers/pull/39641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39641.patch",
"merged_at": "2025-08-04T11:57:09"
} | # What does this PR do?
This PR fixes an issue with the quantization [docker](https://github.com/huggingface/transformers/actions/runs/16484958005/job/46652740114). fp-quant requires [py3.11](https://github.com/IST-DASLab/FP-Quant/blob/b7f7462b54c8d572a63bb7a8668da22a4e93079a/inference_lib/setup.py#L19), but our CI uses 3.9, so we don't install it for now. Thanks @gante for spotting this.
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39641/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39641/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39640/comments | https://api.github.com/repos/huggingface/transformers/issues/39640/events | https://github.com/huggingface/transformers/pull/39640 | 3,260,281,519 | PR_kwDOCUB6oc6gegfI | 39,640 | [timm] new timm pin | {
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T15:16:20 | 2025-07-24T16:02:35 | 2025-07-24T16:01:59 | MEMBER | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39640",
"html_url": "https://github.com/huggingface/transformers/pull/39640",
"diff_url": "https://github.com/huggingface/transformers/pull/39640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39640.patch",
"merged_at": "2025-07-24T16:01:59"
} | # What does this PR do?
Updates the timm pin to allow the latest version (1.0.19) while excluding 1.0.18, which is incompatible with older Python versions.
NOTE: `timm==1.0.18` is causing a red CI, e.g. [here](https://app.circleci.com/pipelines/github/huggingface/transformers/139243/workflows/33c8dbbb-e29b-429c-8cfb-61e28591f902/jobs/1845235/steps)
"login": "gante",
"id": 12240844,
"node_id": "MDQ6VXNlcjEyMjQwODQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/12240844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gante",
"html_url": "https://github.com/gante",
"followers_url": "https://api.github.com/users/gante/followers",
"following_url": "https://api.github.com/users/gante/following{/other_user}",
"gists_url": "https://api.github.com/users/gante/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gante/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gante/subscriptions",
"organizations_url": "https://api.github.com/users/gante/orgs",
"repos_url": "https://api.github.com/users/gante/repos",
"events_url": "https://api.github.com/users/gante/events{/privacy}",
"received_events_url": "https://api.github.com/users/gante/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39640/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39639/comments | https://api.github.com/repos/huggingface/transformers/issues/39639/events | https://github.com/huggingface/transformers/pull/39639 | 3,260,271,189 | PR_kwDOCUB6oc6geeSN | 39,639 | Fix: explicit not none check for tensors in flash attention | {
"login": "jeffrey-dot-li",
"id": 46302202,
"node_id": "MDQ6VXNlcjQ2MzAyMjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/46302202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeffrey-dot-li",
"html_url": "https://github.com/jeffrey-dot-li",
"followers_url": "https://api.github.com/users/jeffrey-dot-li/followers",
"following_url": "https://api.github.com/users/jeffrey-dot-li/following{/other_user}",
"gists_url": "https://api.github.com/users/jeffrey-dot-li/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jeffrey-dot-li/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jeffrey-dot-li/subscriptions",
"organizations_url": "https://api.github.com/users/jeffrey-dot-li/orgs",
"repos_url": "https://api.github.com/users/jeffrey-dot-li/repos",
"events_url": "https://api.github.com/users/jeffrey-dot-li/events{/privacy}",
"received_events_url": "https://api.github.com/users/jeffrey-dot-li/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T15:13:41 | 2025-07-25T08:09:15 | 2025-07-25T08:09:15 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39639",
"html_url": "https://github.com/huggingface/transformers/pull/39639",
"diff_url": "https://github.com/huggingface/transformers/pull/39639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39639.patch",
"merged_at": "2025-07-25T08:09:15"
} | # What does this PR do?
The current `use_mask` check in flash attention:
`use_mask = position_ids is not None or all([cu_seq_lens_q, cu_seq_lens_k, max_length_q, max_length_k])`
throws the error
```
use_mask = position_ids is not None or all([cu_seq_lens_q, cu_seq_lens_k, max_length_q, max_length_k])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
```
This PR replaces it with an explicit `is not None` check:
` use_mask = position_ids is not None or all(k is not None for k in [cu_seq_lens_q, cu_seq_lens_k, max_length_q, max_length_k])`
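The ambiguity comes from `all([...])` invoking Python truthiness (`bool()`) on each element. A minimal sketch of the failure mode, using a small stand-in class instead of an actual multi-element `torch.Tensor` (the real tensor raises the same `RuntimeError` from its `__bool__`):

```python
class MultiElementTensor:
    """Stand-in for a torch.Tensor with more than one element."""
    def __bool__(self):
        # torch raises exactly this when bool() is called on such a tensor
        raise RuntimeError(
            "Boolean value of Tensor with more than one value is ambiguous"
        )

cu_seq_lens_q = MultiElementTensor()

# Old check: all([...]) calls bool() on every element, so a tensor blows up.
try:
    all([cu_seq_lens_q, None, None, None])
except RuntimeError as e:
    print(f"old check fails: {e}")

# New check: only tests identity against None, never tensor truthiness.
use_mask = all(k is not None for k in [cu_seq_lens_q, None, None, None])
print(use_mask)  # False, since some of the values are None
```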
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ArthurZucker
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker
- vision models: @amyeroberts, @qubvel
- speech models: @eustlb
- graph models: @clefourrier
Library:
- flax: @gante and @Rocketknight1
- generate: @zucchini-nlp (visual-language models) or @gante (all others)
- pipelines: @Rocketknight1
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @zach-huggingface, @SunMarc and @qgallouedec
- chat templates: @Rocketknight1
Integrations:
- deepspeed: HF Trainer/Accelerate: @SunMarc @zach-huggingface
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc @MekkCyber
Documentation: @stevhliu
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @Rocketknight1
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39639/timeline | null | null | null | null | true | true |
https://api.github.com/repos/huggingface/transformers/issues/39638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/39638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/39638/comments | https://api.github.com/repos/huggingface/transformers/issues/39638/events | https://github.com/huggingface/transformers/pull/39638 | 3,260,245,601 | PR_kwDOCUB6oc6geYrv | 39,638 | Support loading Qwen3 MoE GGUF | {
"login": "ctcanbol",
"id": 103742287,
"node_id": "U_kgDOBi77Tw",
"avatar_url": "https://avatars.githubusercontent.com/u/103742287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ctcanbol",
"html_url": "https://github.com/ctcanbol",
"followers_url": "https://api.github.com/users/ctcanbol/followers",
"following_url": "https://api.github.com/users/ctcanbol/following{/other_user}",
"gists_url": "https://api.github.com/users/ctcanbol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ctcanbol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ctcanbol/subscriptions",
"organizations_url": "https://api.github.com/users/ctcanbol/orgs",
"repos_url": "https://api.github.com/users/ctcanbol/repos",
"events_url": "https://api.github.com/users/ctcanbol/events{/privacy}",
"received_events_url": "https://api.github.com/users/ctcanbol/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2025-07-24T15:06:32 | 2025-08-07T07:34:01 | 2025-07-29T13:44:45 | CONTRIBUTOR | null | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/39638",
"html_url": "https://github.com/huggingface/transformers/pull/39638",
"diff_url": "https://github.com/huggingface/transformers/pull/39638.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/39638.patch",
"merged_at": "2025-07-29T13:44:45"
} | # What does this PR do?
Currently, loading GGUF versions of Qwen3 MoE models raises a "GGUF model with architecture qwen3moe is not supported yet" error. This PR resolves that issue.
Fixes #39721
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#create-a-pull-request),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @SunMarc @MekkCyber
| {
"login": "SunMarc",
"id": 57196510,
"node_id": "MDQ6VXNlcjU3MTk2NTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/57196510?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SunMarc",
"html_url": "https://github.com/SunMarc",
"followers_url": "https://api.github.com/users/SunMarc/followers",
"following_url": "https://api.github.com/users/SunMarc/following{/other_user}",
"gists_url": "https://api.github.com/users/SunMarc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SunMarc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SunMarc/subscriptions",
"organizations_url": "https://api.github.com/users/SunMarc/orgs",
"repos_url": "https://api.github.com/users/SunMarc/repos",
"events_url": "https://api.github.com/users/SunMarc/events{/privacy}",
"received_events_url": "https://api.github.com/users/SunMarc/received_events",
"type": "User",
"user_view_type": "public",
"site_admin": false
} | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/39638/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/39638/timeline | null | null | null | null | true | true |