| column | type | stats |
|---|---|---|
| url | string | lengths 62-66 |
| repository_url | string | 1 class |
| labels_url | string | lengths 76-80 |
| comments_url | string | lengths 71-75 |
| events_url | string | lengths 69-73 |
| html_url | string | lengths 50-56 |
| id | int64 | 377M-2.15B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-29.2k |
| title | string | lengths 1-487 |
| user | dict | |
| labels | list | |
| state | string | 2 classes |
| locked | bool | 2 classes |
| assignee | dict | |
| assignees | list | |
| comments | list | |
| created_at | int64 | 1.54k-1.71k |
| updated_at | int64 | 1.54k-1.71k |
| closed_at | int64 | 1.54k-1.71k (nullable) |
| author_association | string | 4 classes |
| active_lock_reason | string | 2 classes |
| body | string | lengths 0-234k (nullable) |
| reactions | dict | |
| timeline_url | string | lengths 71-75 |
| state_reason | string | 3 classes |
| draft | bool | 2 classes |
| pull_request | dict | |
https://api.github.com/repos/huggingface/transformers/issues/21990
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21990/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21990/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21990/events
|
https://github.com/huggingface/transformers/pull/21990
| 1,613,066,771
|
PR_kwDOCUB6oc5LdIO4
| 21,990
|
Add Image Completion Transformer (ICT)
|
{
"login": "sheonhan",
"id": 4163701,
"node_id": "MDQ6VXNlcjQxNjM3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4163701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sheonhan",
"html_url": "https://github.com/sheonhan",
"followers_url": "https://api.github.com/users/sheonhan/followers",
"following_url": "https://api.github.com/users/sheonhan/following{/other_user}",
"gists_url": "https://api.github.com/users/sheonhan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sheonhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sheonhan/subscriptions",
"organizations_url": "https://api.github.com/users/sheonhan/orgs",
"repos_url": "https://api.github.com/users/sheonhan/repos",
"events_url": "https://api.github.com/users/sheonhan/events{/privacy}",
"received_events_url": "https://api.github.com/users/sheonhan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21990). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,689
| 1,689
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21990/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21990",
"html_url": "https://github.com/huggingface/transformers/pull/21990",
"diff_url": "https://github.com/huggingface/transformers/pull/21990.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21990.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21989
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21989/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21989/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21989/events
|
https://github.com/huggingface/transformers/issues/21989
| 1,612,715,039
|
I_kwDOCUB6oc5gIBQf
| 21,989
|
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
|
{
"login": "realSAH",
"id": 98207838,
"node_id": "U_kgDOBdqIXg",
"avatar_url": "https://avatars.githubusercontent.com/u/98207838?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/realSAH",
"html_url": "https://github.com/realSAH",
"followers_url": "https://api.github.com/users/realSAH/followers",
"following_url": "https://api.github.com/users/realSAH/following{/other_user}",
"gists_url": "https://api.github.com/users/realSAH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/realSAH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/realSAH/subscriptions",
"organizations_url": "https://api.github.com/users/realSAH/orgs",
"repos_url": "https://api.github.com/users/realSAH/repos",
"events_url": "https://api.github.com/users/realSAH/events{/privacy}",
"received_events_url": "https://api.github.com/users/realSAH/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"You need to execute a model loaded in half precision on a GPU; the operations are not implemented in half precision on the CPU.",
"@sgugger Then how come that this example works on cpu?\r\n\r\n```\r\nfrom transformers import GPTJForCausalLM\r\nimport torch\r\n\r\nmodel = GPTJForCausalLM.from_pretrained(\r\n \"EleutherAI/gpt-j-6B\", revision=\"float16\", torch_dtype=torch.float16, low_cpu_mem_usage=True\r\n)\r\n```\r\n",
"What code are you using exactly to get the error?\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained('bigscience/bloomz-7b1', torch_dtype=torch.float16)\r\n```\r\nworks perfectly fine.",
"@sgugger \r\n\r\nYes, it **loads up perfectly fine**, but if you proceed to build the pipeline and generate text, you get the `Half` implementation error. I just tried your code again:\r\n\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, pipeline\r\nmodel = AutoModelForCausalLM.from_pretrained('bigscience/bloomz-7b1', torch_dtype=torch.float16, low_cpu_mem_usage=True)\r\ng = pipeline(task='text-generation', model=model, tokenizer='bigscience/bloomz-7b1')\r\ng(\"Hi, \")\r\n```\r\n\r\nAfter a `UserWarning` that neither `max_length` nor `max_new_tokens` was set, I got this traceback (abridged to the relevant frames):\r\n\r\n```\r\ntransformers\\pipelines\\text_generation.py:210 in __call__\r\ntransformers\\pipelines\\base.py:1084 in __call__\r\ntransformers\\pipelines\\base.py:1091 in run_single\r\ntransformers\\pipelines\\base.py:992 in forward\r\ntransformers\\pipelines\\text_generation.py:252 in _forward\r\ntransformers\\generation\\utils.py:1391 in generate\r\ntransformers\\generation\\utils.py:2179 in greedy_search\r\ntransformers\\models\\bloom\\modeling_bloom.py:900 in forward\r\ntransformers\\models\\bloom\\modeling_bloom.py:729 in forward  ->  hidden_states = self.word_embeddings_layernorm(inputs_embeds)\r\ntorch\\nn\\modules\\normalization.py:190 in forward\r\ntorch\\nn\\functional.py:2515 in layer_norm\r\nRuntimeError: \"LayerNormKernelImpl\" not implemented for 'Half'\r\n```\r\n\r\nBy the way, it also complained that the `accelerate` library was not installed, saying it is crucial for `low_cpu_mem_usage` and half precision. After installing it, loading works fine, but text generation still fails.\r\n\r\nSo the question is: why does it still work with GPT-J, as per the official example in the Hugging Face docs?",
"I also hit this error when following the Transformers blog post at https://mp.weixin.qq.com/s/k8rE9GrF97E-0TKJhih9kw.",
"As I said before, you need to **run** your model on the GPU as the operations are not all implemented on the CPU in float16. On CPU you can only run models in float32.",
"Okay, thanks for explaining that. I think an update to the docs would be appropriate.\r\n\r\nhttps://huggingface.co/docs/transformers/model_doc/gptj\r\n\r\nOne could indicate that the low-precision example working on CPU is a coincidence: those particular operations happen to have CPU implementations. In general, this requires an accelerator device.\r\n\r\nI'm not sure whether PyTorch has CPU implementations on their agenda.",
"Thanks for pointing this example out! It indeed needs a GPU to work. cc @stevhliu or @MKhalusova if you want to fix it (it's the example just before GPTJConfig on the page linked above that loads the model in float16).",
"The Tesla P40 does not support Half..."
] | 1,678
| 1,682
| 1,678
|
NONE
| null |
### System Info
I'm passing the low-memory and half-precision flags to `AutoModelForCausalLM.from_pretrained('bigscience/bloomz-7b1')` and I'm receiving the error above.
I'm not sure if this is a bug. Are those flags only meant for specific models that implement half precision? If so, how can one tell gracefully?
Those low-memory flags work like a dream with other models such as `EleutherAI/gpt-j-6B`.
Thanks
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
as above.
### Expected behavior
model loaded in half precision.
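
As the comments explain, the failure happens at generation time because some float16 kernels (LayerNorm, in this traceback) are not implemented on CPU in the PyTorch version used. A minimal sketch of the two workable setups, run in fp16 on a GPU or upcast to fp32 on CPU; the `LayerNorm` module below is a stand-in for a model layer, not the actual BLOOM code:

```python
import torch

# Stand-in for one layer of a model loaded with torch_dtype=torch.float16.
layer = torch.nn.LayerNorm(8).half()
x = torch.randn(2, 8, dtype=torch.float16)

# On CPU, running this in half precision may raise:
#   RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
# (whether it does depends on the PyTorch version).

# Workaround without a GPU: upcast the module and inputs to float32.
layer_fp32 = layer.float()
out = layer_fp32(x.float())
assert out.dtype == torch.float32

# With a GPU, the fp16 path works as intended:
# layer = layer.to("cuda"); out = layer(x.to("cuda"))
```

This is why loading in fp16 succeeds while generation fails: `from_pretrained` only materializes the weights, and the unimplemented CPU kernels are hit only on the first forward pass.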
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21989/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21989/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21988
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21988/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21988/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21988/events
|
https://github.com/huggingface/transformers/pull/21988
| 1,612,684,946
|
PR_kwDOCUB6oc5Lb31I
| 21,988
|
Fix broken link
|
{
"login": "ngoquanghuy99",
"id": 36761076,
"node_id": "MDQ6VXNlcjM2NzYxMDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/36761076?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ngoquanghuy99",
"html_url": "https://github.com/ngoquanghuy99",
"followers_url": "https://api.github.com/users/ngoquanghuy99/followers",
"following_url": "https://api.github.com/users/ngoquanghuy99/following{/other_user}",
"gists_url": "https://api.github.com/users/ngoquanghuy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ngoquanghuy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ngoquanghuy99/subscriptions",
"organizations_url": "https://api.github.com/users/ngoquanghuy99/orgs",
"repos_url": "https://api.github.com/users/ngoquanghuy99/repos",
"events_url": "https://api.github.com/users/ngoquanghuy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/ngoquanghuy99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21988). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,680
| 1,680
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21988/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21988",
"html_url": "https://github.com/huggingface/transformers/pull/21988",
"diff_url": "https://github.com/huggingface/transformers/pull/21988.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21988.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21987
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21987/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21987/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21987/events
|
https://github.com/huggingface/transformers/issues/21987
| 1,612,604,545
|
I_kwDOCUB6oc5gHmSB
| 21,987
|
Long inputs to Flan-T5/UL2 text generation with load_in_8bit=True outputs <pad> tokens repeatedly
|
{
"login": "akkikiki",
"id": 1423362,
"node_id": "MDQ6VXNlcjE0MjMzNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1423362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akkikiki",
"html_url": "https://github.com/akkikiki",
"followers_url": "https://api.github.com/users/akkikiki/followers",
"following_url": "https://api.github.com/users/akkikiki/following{/other_user}",
"gists_url": "https://api.github.com/users/akkikiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akkikiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akkikiki/subscriptions",
"organizations_url": "https://api.github.com/users/akkikiki/orgs",
"repos_url": "https://api.github.com/users/akkikiki/repos",
"events_url": "https://api.github.com/users/akkikiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/akkikiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thanks a lot for the issue @akkikiki !\r\nWhat is the hardware you are using + bnb version?",
"Thanks a lot for the reply!\r\nThe hardware is 8 V100 (16GB) GPUs and the bnb version is 0.37.0.",
"I think sadly there is indeed an issue with V100 right now as stated by @TimDettmers here: https://github.com/huggingface/transformers/pull/21955#issuecomment-1455235281 \r\nIt should be fixed somehow soon, also as stated in this comment, more universal methods (that cover most of GPU hardware) should be published soon!",
"Thanks @younesbelkada!\r\nInteresting, so some smart workaround for GPUs without hardware-level support on int8.\r\n\r\nFYI, I actually played around with `BitsAndBytesConfig`, and seems like `quantization_config = BitsAndBytesConfig(llm_int8_threshold=5.0)` resolved the issue.\r\n\r\nOutput result with `quantization_config = BitsAndBytesConfig(llm_int8_threshold=5.0)`:\r\n```\r\n<pad> A Haiku is a Japanese poetry form that uses a 5-7-5 syllable structure. A typical tweet is limited to 140 characters. The answer is no.</s>\r\n```\r\n\r\nWill just close this thread for now. Thanks again for the heads up on V100 issue!",
"This is great! Thanks for the advice! Would you mind posting it in #21955 so that people can be aware of this hack?",
"> This is great! Thanks for the advice! Would you mind posting it in #21955 so that people can be aware of this hack?\r\n\r\nWill do!",
"Thanks a lot @akkikiki! Much appreciated!"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.4.228-141.415.amzn2int.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
When input texts are short, the generated texts look good.
But when input texts are long, e.g. the following, it repeatedly produces `<pad>` tokens.
Input
```
model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-ul2",  # same with "google/flan-t5-xxl"
    device_map=device_map,
    load_in_8bit=True,
)
input_text = """Q: Answer the following yes/no question by reasoning step-by-step. Could a dandelion suffer from hepatitis?
A: Hepatitis only affects organisms with livers. Dandelions don't have a liver. The answer is yes.
Q: Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?
A: """
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=100)
print(tokenizer.decode(outputs[0]))
```
Output:
```
<pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad><pad>
```
### Expected behavior
This is the result when loaded with `load_in_8bit=False`
```
<pad> A Haiku is a Japanese poetry form that uses a 5-7-5 syllable structure. A typical tweet is limited to 140 characters. The answer is no.</s>
```
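
A comment in this thread reports that lowering `llm_int8_threshold` worked around the bad int8 generations on V100s. A sketch of how that config is passed; treat this as a configuration fragment rather than a runnable script, since it assumes a CUDA GPU with `bitsandbytes` installed, and the exact kwargs accepted by `from_pretrained` vary across transformers versions:

```python
from transformers import AutoTokenizer, BitsAndBytesConfig, T5ForConditionalGeneration

# Lowering the outlier threshold (default 6.0) changes which features the
# LLM.int8() scheme keeps in fp16; per the comment above, 5.0 fixed the
# repeated <pad> output on V100s.
quantization_config = BitsAndBytesConfig(llm_int8_threshold=5.0)

model = T5ForConditionalGeneration.from_pretrained(
    "google/flan-t5-xxl",
    device_map="auto",
    load_in_8bit=True,
    quantization_config=quantization_config,
)

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xxl")
input_ids = tokenizer("Q: ...", return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids, max_length=100)
print(tokenizer.decode(outputs[0]))
```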
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21987/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21986
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21986/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21986/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21986/events
|
https://github.com/huggingface/transformers/pull/21986
| 1,612,419,400
|
PR_kwDOCUB6oc5La-4U
| 21,986
|
save_pretrained crashes when torch_dtype is passed
|
{
"login": "NikolaBorisov",
"id": 184322,
"node_id": "MDQ6VXNlcjE4NDMyMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/184322?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikolaBorisov",
"html_url": "https://github.com/NikolaBorisov",
"followers_url": "https://api.github.com/users/NikolaBorisov/followers",
"following_url": "https://api.github.com/users/NikolaBorisov/following{/other_user}",
"gists_url": "https://api.github.com/users/NikolaBorisov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikolaBorisov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikolaBorisov/subscriptions",
"organizations_url": "https://api.github.com/users/NikolaBorisov/orgs",
"repos_url": "https://api.github.com/users/NikolaBorisov/repos",
"events_url": "https://api.github.com/users/NikolaBorisov/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikolaBorisov/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21986). All of your documentation changes will be reflected on that endpoint.",
"My understanding is that the following code should work:\r\n\r\n```\r\nfrom transformers import pipeline\r\np = pipeline(\"some/model\", torch_dtype=torch.bfloat16)\r\n``` \r\n\r\nFrom what I can see, the tokenizer kwargs basically start with a copy of the model_kwargs?\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L869",
"Indeed, this means that the problem is bigger! Popping the argument is good if we want to keep the pipeline that way, I think we agreed to properly handle the extra tokenizer kwargs @Narsil if you want to take care of it, it will fix this! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"@ArthurZucker @Narsil What do you want to do here. I think this PR does the right thing at the moment because extra arguments to the pipeline are passed around to the tokenizer and then the tokenizer crashes on save_pretrained. \r\nI think merging this is strictly better than the current state. Sure the code could be improved internally. Conceptually there should be something that decides which kwargs to the pipeline should be passed to each of the components in the pipeline.",
"Hey! Thanks for reporting the bug, I merged something that seemed more aligned with what we want: torch dtype should just not be passed to the tokenizer, so poping it outside"
] | 1,678
| 1,680
| 1,680
|
NONE
| null |
If you have the following code
```
p = pipeline(... torch_dtype=torch.bfloat16)
p.save_pretrained()
```
you get a crash because `torch.bfloat16` is not JSON serializable in the `tokenizer.save_pretrained()` method.
This PR fixes this.
@Narsil
@ArthurZucker
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21986/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21986",
"html_url": "https://github.com/huggingface/transformers/pull/21986",
"diff_url": "https://github.com/huggingface/transformers/pull/21986.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21986.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21985
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21985/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21985/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21985/events
|
https://github.com/huggingface/transformers/issues/21985
| 1,612,279,178
|
I_kwDOCUB6oc5gGW2K
| 21,985
|
Ranking a pre-defined list of output candidates for LMs
|
{
"login": "yuchenlin",
"id": 10104354,
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenlin",
"html_url": "https://github.com/yuchenlin",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey @gante , I was wondering you may will also have great suggestions on this. Thanks a lot in advance! :D. ",
"Hey @yuchenlin ๐ We usually don't add those sorts of tools to `transformers`, but I'd be happy to guide you.\r\n\r\nFirst of all, I'd suggest to check [the documentation and the examples for this function](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/text_generation#transformers.GenerationMixin.compute_transition_scores). Then, by invoking it for several models and/or inputs, you will be able to re-rank candidate options according to your needs :)\r\n\r\nIf you find the end results satisfying, I'd suggest opening a Spaces so it can be shared with the world ๐ ",
"Hi @gante thank you so much for the help! I was reading this before but it seems that this method can only output the transition scores for tokens that are considered by the model.generate process. However, the more general case for this application is that we have some external output candidates that may not be generated by \"model.generate()\". \r\n\r\nAnd sure I will put a solution to a Spaces if I manage to do this and I believe this will help many others! :D ",
"I see. We have in our plans to build a function that returns the score for any candidate sequence, but have other competing priorities :) Feel free to have a go at it, if you're interested!",
"Hi @gante , thanks for letting me know. \r\n\r\nI managed to develop a solution here: https://github.com/yuchenlin/rank_outputs/blob/main/rank_outputs/main.py \r\n\r\nWill try to wrap it up as a more general tool. \r\n\r\nThanks! \r\n\r\n"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### Feature request
Can we support a feature where, given an input and a list of output candidates for an LM (say a fine-tuned T5 or GPT-2), we get a score for each output candidate and then return the top K of them?
For example, I fine-tuned a seq2seq model (e.g., BART or T5) for summarization, and now, given an input doc and a list of candidate summaries, I want to know which ones are the best. I do not want the fine-tuned model to decode and generate its own summaries as we normally do.
I believe we can do this by computing the token-by-token log-likelihood (with or without length normalization) and then returning the top K from the candidate list.
### Motivation
I was reading the T0 paper, and it mentioned that they used such a method to evaluate multiple-choice tasks.
The paper describes it like this on page 6 of https://arxiv.org/pdf/2110.08207.pdf:
> For tasks that involve choosing the correct completion from several options (e.g. multiple choice
question answering), we follow Brown et al. (2020) and use **rank classification** to evaluate our
model: we compute the log-likelihood of each of the target options under the fine-tuned model and
select the option with the highest log-likelihood as the prediction. For simplicity, we do not apply
length normalization to the log-likelihoods of the target options.
### Your contribution
I found their code, which may be helpful, but I feel it is a bit hard to use directly.
https://github.com/bigscience-workshop/t-zero/blob/25c0761427f3894a8ec5a062a075b96037fb1492/t0/model.py#L67
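The rank-classification idea above can be sketched as a small scoring helper. This is a hypothetical illustration, not part of `transformers`: it assumes the per-token log-probabilities for each candidate have already been obtained from the model (e.g., by a forward pass with the candidate as labels), and only shows the ranking step, with optional length normalization as discussed in the T0 paper.

```python
def rank_candidates(token_logprobs_per_candidate, length_normalize=False):
    """Rank output candidates by total (or mean) token log-likelihood.

    token_logprobs_per_candidate: one list of per-token log-probabilities
    per candidate, as scored by the LM. Returns candidate indices sorted
    best-first (highest log-likelihood first).
    """
    scores = []
    for logprobs in token_logprobs_per_candidate:
        total = sum(logprobs)
        if length_normalize:
            # Mean per-token log-likelihood, so longer candidates are
            # not penalized simply for having more tokens.
            total /= max(len(logprobs), 1)
        scores.append(total)
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

For example, `rank_candidates([[-0.1, -0.2], [-2.0, -0.5]])` ranks candidate 0 first; with `length_normalize=True`, a longer candidate with a better per-token average can outrank a shorter one with the same total.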
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21985/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21984
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21984/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21984/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21984/events
|
https://github.com/huggingface/transformers/pull/21984
| 1,612,123,859
|
PR_kwDOCUB6oc5LZ_EQ
| 21,984
|
Update Jukebox tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"test failure irrelevant to this PR."
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Update Jukebox tests for PyTorch 2.0.
The reason is the same as in #21975: tiny diff in scores, but sampling can give different results.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21984/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21984",
"html_url": "https://github.com/huggingface/transformers/pull/21984",
"diff_url": "https://github.com/huggingface/transformers/pull/21984.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21984.patch",
"merged_at": 1678159215000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21983
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21983/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21983/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21983/events
|
https://github.com/huggingface/transformers/pull/21983
| 1,612,014,592
|
PR_kwDOCUB6oc5LZnmy
| 21,983
|
Remove unneeded casts to bool
|
{
"login": "regisss",
"id": 15324346,
"node_id": "MDQ6VXNlcjE1MzI0MzQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/15324346?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regisss",
"html_url": "https://github.com/regisss",
"followers_url": "https://api.github.com/users/regisss/followers",
"following_url": "https://api.github.com/users/regisss/following{/other_user}",
"gists_url": "https://api.github.com/users/regisss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regisss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regisss/subscriptions",
"organizations_url": "https://api.github.com/users/regisss/orgs",
"repos_url": "https://api.github.com/users/regisss/repos",
"events_url": "https://api.github.com/users/regisss/events{/privacy}",
"received_events_url": "https://api.github.com/users/regisss/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR removes some conversions to `torch.bool` which are not needed anymore following #21384.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21983/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21983",
"html_url": "https://github.com/huggingface/transformers/pull/21983",
"diff_url": "https://github.com/huggingface/transformers/pull/21983.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21983.patch",
"merged_at": 1678192550000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21982
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21982/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21982/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21982/events
|
https://github.com/huggingface/transformers/pull/21982
| 1,611,948,101
|
PR_kwDOCUB6oc5LZZNB
| 21,982
|
docs: New terms and updates to glossary
|
{
"login": "MichaelRipa",
"id": 51883134,
"node_id": "MDQ6VXNlcjUxODgzMTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/51883134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MichaelRipa",
"html_url": "https://github.com/MichaelRipa",
"followers_url": "https://api.github.com/users/MichaelRipa/followers",
"following_url": "https://api.github.com/users/MichaelRipa/following{/other_user}",
"gists_url": "https://api.github.com/users/MichaelRipa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MichaelRipa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MichaelRipa/subscriptions",
"organizations_url": "https://api.github.com/users/MichaelRipa/orgs",
"repos_url": "https://api.github.com/users/MichaelRipa/repos",
"events_url": "https://api.github.com/users/MichaelRipa/events{/privacy}",
"received_events_url": "https://api.github.com/users/MichaelRipa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the detailed edits, they are helpful! ๐ I will refresh permissions and also add a link to the pipeline for inference doc tomorrow ",
"Added my update, and I followed the instructions to refresh permissions. I don't seem to have permissions to manually restart the CircleCI Pipeline, so not sure whether the steps I took were reflected in my last commit (I did my updates before refreshing permissions). ",
"You can push an empty commit with `git commit -m \"Trigger CI\" --allow-empty\" and then push to your branch.",
"Thanks for the review! I'll start going over the changes / implementing your suggestions over the weekend ",
"> Thanks for your PR. I don't think we should remove entries from the glossary. Linking to other entries is better. Could you also add an entry for \"self-supervised learning\" since most pretraining of Transformer models use that technique?\r\n\r\nI will add back in the old entries and add links between them, and yeah that's a great idea!",
"Awesome! I committed the suggestion, looks to have merged successfully๐ Thanks again for the help with the edits/reviews! (also feel free to ping me if you see any further edits/additions to be done)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21982). All of your documentation changes will be reflected on that endpoint."
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
Updates to the glossary as proposed in #21801.
All terms suggested in the issue were added, except for Encoder/Decoder. Acronyms were also added for terms where it made sense (e.g., NLP).
Note that this PR replaces ***autoencoding models*** with ***encoder models*** (and provides a more detailed definition) as well as merging ***autoregressive models*** and ***causal language modeling*** with ***decoder models***.
This is my first draft and would likely benefit from additional edits/revision. Any suggestions in terms of updates or areas to expand further on are welcome ๐
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21801
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stevhliu
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21982/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21982/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21982",
"html_url": "https://github.com/huggingface/transformers/pull/21982",
"diff_url": "https://github.com/huggingface/transformers/pull/21982.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21982.patch",
"merged_at": 1678748978000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21981
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21981/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21981/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21981/events
|
https://github.com/huggingface/transformers/issues/21981
| 1,611,917,272
|
I_kwDOCUB6oc5gE-fY
| 21,981
|
Function infer_channel_dimension_format has a bug
|
{
"login": "aleksmirosh",
"id": 74064180,
"node_id": "MDQ6VXNlcjc0MDY0MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/74064180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aleksmirosh",
"html_url": "https://github.com/aleksmirosh",
"followers_url": "https://api.github.com/users/aleksmirosh/followers",
"following_url": "https://api.github.com/users/aleksmirosh/following{/other_user}",
"gists_url": "https://api.github.com/users/aleksmirosh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aleksmirosh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aleksmirosh/subscriptions",
"organizations_url": "https://api.github.com/users/aleksmirosh/orgs",
"repos_url": "https://api.github.com/users/aleksmirosh/repos",
"events_url": "https://api.github.com/users/aleksmirosh/events{/privacy}",
"received_events_url": "https://api.github.com/users/aleksmirosh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hi @aleksmirosh - you've tagged the right person :) Thanks for raising this issue. \r\n\r\nThe functions in the image transforms library currently only support grayscale (1 channel) and RGB (3 channel) images. What format are the images you're trying to use? ",
"@amyeroberts \r\nThank you for your quick response and sorry for not complet describing of the issue.\r\nSo I use 6 channels. \r\nI used version 4.24.0 before and tried to update it to the latest.\r\nDo you plan to update to any channel number in the future? \r\n",
"There aren't any immediate plans to accept an arbitrary number of channels. I agree it would be useful and will add it to the potential extensions of the library. \r\n\r\nFor the 6 channel images - are these concatenations of 2, 3-channel images or a single image with 6 channels? If the former, the simplest way to get this working quickly would be to pass each of 3 RBG images to the image processor and then concatenate. However, this will likely be quite inefficient. ",
"Hello! Just wanted to add that I've come across this issue as well, but using the CLIPProcessor which uses https://github.com/huggingface/transformers/blob/2f320661f364557c821c285729dab3881e977363/src/transformers/image_transforms.py#L304\r\n\r\nBasically I pass in images that always have the dimension (H,W,C), but occasionally I'll get images that are (1,1,3) or (3,*,3), but in both cases the first dimension is inferred as the channel dimension, which is not what I intended. The (3,1,3) case will not error, but silently proceed, but the (1,1,3) case errors for me b/c the mean is of length 3 but the inferred num_channels of the image is 1",
"> For the 6 channel images - are these concatenations of 2, 3-channel images or a single image with 6 channels? If the former, the simplest way to get this working quickly would be to pass each of 3 RBG images to the image processor and then concatenate. However, this will likely be quite inefficient.\r\n\r\nthis is 6 channels single image, but I will try to use concatenation. thank you for the advice.\r\nAlso if using the parameter 'data_format=None' could it help?\r\n\r\n@terrykong maybe data_format=None could help with your case?",
"Thanks for the suggestion @aleksmirosh. Unfortunately, that kwarg only affects the output format: https://github.com/huggingface/transformers/blob/2f320661f364557c821c285729dab3881e977363/src/transformers/image_transforms.py#L322-L323\r\n\r\nThe input format is not configurable and is inferred: https://github.com/huggingface/transformers/blob/2f320661f364557c821c285729dab3881e977363/src/transformers/image_transforms.py#L340\r\n\r\nIt would be nice if the input format could be specified.",
"Hi @aleksmirosh , `data_format` specifies the desired output format for the images. I'm working on a PR to add the option to specify for the input format @terrykong ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,693
| 1,693
|
NONE
| null |
### System Info
Hi, I am working on my own DataLoader, so my input to `transformers.SegformerImageProcessor` has shape (6, 512, 512), i.e. channels first.
As I realized, the problem is that **image.shape[first_dim]** (6 in my case) is not in (1, 3):
the `if` construction assumes the channel dimension is always 1 or 3,
so an image with any other channel count cannot be inferred, even though first_dim, last_dim = 0, 2 is assigned correctly for 3-dim images.
@amyeroberts
not sure I tagged the right person, sorry
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
def infer_channel_dimension_format(image: np.ndarray) -> ChannelDimension:
if image.ndim == 3:
first_dim, last_dim = 0, 2
elif image.ndim == 4:
first_dim, last_dim = 1, 3
else:
raise ValueError(f"Unsupported number of image dimensions: {image.ndim}")
if image.shape[first_dim] in (1, 3):
return ChannelDimension.FIRST
elif image.shape[last_dim] in (1, 3):
return ChannelDimension.LAST
raise ValueError("Unable to infer channel dimension format")
```
Call `infer_channel_dimension_format(image)` on any image with a shape (ch, w, h).
### Expected behavior
I expected the function to return ChannelDimension.FIRST for an image of shape (CH, W, H).
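For reference, a minimal sketch of how the helper could accept other channel counts. This is not the `transformers` API, just an illustration with a hypothetical `num_channels` argument (and plain strings instead of the `ChannelDimension` enum, to keep it self-contained):

```python
import numpy as np

def infer_channel_dimension_format(image, num_channels=(1, 3)):
    """Like the transformers helper, but the accepted channel counts are
    configurable, so e.g. 6-channel images can be inferred.

    num_channels: tuple of channel counts to treat as valid.
    Note: an image like (3, H, 3) is still ambiguous; as in the
    original, the first dimension wins.
    """
    if image.ndim == 3:
        first_dim, last_dim = 0, 2
    elif image.ndim == 4:
        first_dim, last_dim = 1, 3
    else:
        raise ValueError(f"Unsupported number of image dimensions: {image.ndim}")
    if image.shape[first_dim] in num_channels:
        return "channels_first"
    if image.shape[last_dim] in num_channels:
        return "channels_last"
    raise ValueError("Unable to infer channel dimension format")

# A 6-channel, channels-first image is now inferable:
img = np.zeros((6, 512, 512))
assert infer_channel_dimension_format(img, num_channels=(6,)) == "channels_first"
```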
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21981/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21980
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21980/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21980/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21980/events
|
https://github.com/huggingface/transformers/pull/21980
| 1,611,887,155
|
PR_kwDOCUB6oc5LZMUe
| 21,980
|
Fix gradient checkpointing bug in ESM
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21980/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21980",
"html_url": "https://github.com/huggingface/transformers/pull/21980",
"diff_url": "https://github.com/huggingface/transformers/pull/21980.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21980.patch",
"merged_at": 1678124693000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21979
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21979/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21979/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21979/events
|
https://github.com/huggingface/transformers/pull/21979
| 1,611,881,466
|
PR_kwDOCUB6oc5LZLH-
| 21,979
|
Fix gradient checkpointing bug in Codegen
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21979/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21979",
"html_url": "https://github.com/huggingface/transformers/pull/21979",
"diff_url": "https://github.com/huggingface/transformers/pull/21979.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21979.patch",
"merged_at": 1678124672000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21978
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21978/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21978/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21978/events
|
https://github.com/huggingface/transformers/pull/21978
| 1,611,876,051
|
PR_kwDOCUB6oc5LZJ-W
| 21,978
|
Fix gradient checkpointing bug in BlipText
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21978/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21978",
"html_url": "https://github.com/huggingface/transformers/pull/21978",
"diff_url": "https://github.com/huggingface/transformers/pull/21978.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21978.patch",
"merged_at": 1678124633000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21977
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21977/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21977/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21977/events
|
https://github.com/huggingface/transformers/pull/21977
| 1,611,867,159
|
PR_kwDOCUB6oc5LZIBr
| 21,977
|
Fix gradient checkpointing bug in Blenderbot Small
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21977/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21977",
"html_url": "https://github.com/huggingface/transformers/pull/21977",
"diff_url": "https://github.com/huggingface/transformers/pull/21977.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21977.patch",
"merged_at": 1678124605000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21976
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21976/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21976/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21976/events
|
https://github.com/huggingface/transformers/pull/21976
| 1,611,858,033
|
PR_kwDOCUB6oc5LZF-0
| 21,976
|
Fix gradient checkpointing bug in BigBird Pegasus
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21976/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21976",
"html_url": "https://github.com/huggingface/transformers/pull/21976",
"diff_url": "https://github.com/huggingface/transformers/pull/21976.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21976.patch",
"merged_at": 1678124573000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21975
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21975/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21975/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21975/events
|
https://github.com/huggingface/transformers/pull/21975
| 1,611,831,322
|
PR_kwDOCUB6oc5LZAJQ
| 21,975
|
Update expected values for `test_xglm_sample`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Although the `probs` values differ by only `7.59e-07`, `torch.multinomial(probs, num_samples=1)` still yields different `next_tokens`:
- 3967 (`sun`) in `torch 1.13.1`
- 4565 (`water`) in `torch 2.0`
Currently I keep both values, but we can remove the one for `torch 1.13` soon.
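The flip described above can be sketched without torch: multinomial sampling is inverse-CDF sampling, so when the underlying uniform draw lands extremely close to a cumulative-probability boundary, even a sub-microscopic change in `probs` moves the boundary past the draw and a different index is selected. The probabilities and the draw below are hypothetical values chosen to sit on such a boundary, not the actual tensors from the test.

```python
import itertools

def sample_inverse_cdf(probs, u):
    """Pick the first index whose cumulative probability exceeds the uniform draw u."""
    for i, c in enumerate(itertools.accumulate(probs)):
        if u < c:
            return i
    return len(probs) - 1

# Two distributions differing by ~8e-07 in the first entry (illustrative values).
probs_a = [0.3000000, 0.7000000]
probs_b = [0.3000008, 0.6999992]

u = 0.3000004  # a draw that falls between the two cumulative boundaries
print(sample_inverse_cdf(probs_a, u))  # -> 1 (the boundary 0.3 is below u)
print(sample_inverse_cdf(probs_b, u))  # -> 0 (the boundary 0.3000008 is above u)
```

This is why a `7.59e-07` difference in `probs` between torch versions is enough to change the sampled token even with identical seeding.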
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21975/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21975",
"html_url": "https://github.com/huggingface/transformers/pull/21975",
"diff_url": "https://github.com/huggingface/transformers/pull/21975.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21975.patch",
"merged_at": 1678122452000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21974
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21974/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21974/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21974/events
|
https://github.com/huggingface/transformers/pull/21974
| 1,611,828,710
|
PR_kwDOCUB6oc5LY_k4
| 21,974
|
[DETR, YOLOS] Fix device bug
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a device bug in DETR's and YOLOS's `post_process_object_detection` methods, which currently cause a device mismatch between CPU and CUDA when running the model on CUDA.
The PR also makes sure the postprocess methods are tested in the integration tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21974/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21974",
"html_url": "https://github.com/huggingface/transformers/pull/21974",
"diff_url": "https://github.com/huggingface/transformers/pull/21974.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21974.patch",
"merged_at": 1678192445000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21973
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21973/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21973/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21973/events
|
https://github.com/huggingface/transformers/issues/21973
| 1,611,766,011
|
I_kwDOCUB6oc5gEZj7
| 21,973
|
GitHub Private Vulnerability reporting
|
{
"login": "Sim4n6",
"id": 13036531,
"node_id": "MDQ6VXNlcjEzMDM2NTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/13036531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sim4n6",
"html_url": "https://github.com/Sim4n6",
"followers_url": "https://api.github.com/users/Sim4n6/followers",
"following_url": "https://api.github.com/users/Sim4n6/following{/other_user}",
"gists_url": "https://api.github.com/users/Sim4n6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sim4n6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sim4n6/subscriptions",
"organizations_url": "https://api.github.com/users/Sim4n6/orgs",
"repos_url": "https://api.github.com/users/Sim4n6/repos",
"events_url": "https://api.github.com/users/Sim4n6/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sim4n6/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @Michellehbn ",
"Hi @Sim4n6, Thanks for reaching out to us! ๐ค We have a bug bounty program with HackerOne and would love for you to submit security vulnerability reports to https://hackerone.com/hugging_face. This is a private program so we will need to invite you. Do you happen to have an H1 username? Or you can send security@huggingface.co an email and we'll send you an invite! ",
"it is [Sim4n6](https://hackerone.com/sim4n6?type=user). No problem.",
"Invite sent! Thanks again! ",
"Done."
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### Feature request
Enable Private vulnerability reporting in the GitHub repository, please.
https://docs.github.com/en/code-security/security-advisories/repository-security-advisories/configuring-private-vulnerability-reporting-for-a-repository
### Motivation
I may have identified a low-impact vulnerability, and I would like to report it privately via the usual channel and, if confirmed, request a CVE.
### Your contribution
The report that I will submit, when the feature is enabled, please.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21973/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21972
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21972/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21972/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21972/events
|
https://github.com/huggingface/transformers/issues/21972
| 1,611,752,143
|
I_kwDOCUB6oc5gEWLP
| 21,972
|
Support for `Flax` Trainer
|
{
"login": "Shubhamai",
"id": 51819922,
"node_id": "MDQ6VXNlcjUxODE5OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/51819922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhamai",
"html_url": "https://github.com/Shubhamai",
"followers_url": "https://api.github.com/users/Shubhamai/followers",
"following_url": "https://api.github.com/users/Shubhamai/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhamai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhamai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhamai/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhamai/orgs",
"repos_url": "https://api.github.com/users/Shubhamai/repos",
"events_url": "https://api.github.com/users/Shubhamai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhamai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is also something we do not want to add or maintain for Flax, as researchers usually dislike Trainer classes very much.",
"Understood, thanks for such a quick reply :hugs: "
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### Feature request
A `Trainer` class similar to PyTorch/Tensorflow (deprecating in v5) .
### Motivation
The process of training HuggingFace models in `PyTorch` has become more accessible due to the availability of the `Trainer` class and extensive documentation and tutorials. Similarly, in `Tensorflow`, training a model requires only a single line of code (`model.fit`), so it makes sense to deprecate the current trainer to avoid redundancy.
However, training `Flax` models currently requires boilerplate code similar to `PyTorch`, which a `Trainer` class would help eliminate. Making a Flax `Trainer` available would allow very fast training on GPUs/TPUs, and with weight-conversion support one could instantly convert the model to `Tensorflow`/`PyTorch` for inference and deployment.
### Your contribution
I can submit the PR.
cc @sanchit-gandhi since it's related to flax.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21972/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21971
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21971/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21971/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21971/events
|
https://github.com/huggingface/transformers/issues/21971
| 1,611,636,612
|
I_kwDOCUB6oc5gD5-E
| 21,971
|
No clear documentation for enabling padding in FeatureExtractionPipeline
|
{
"login": "anruijian",
"id": 115125339,
"node_id": "U_kgDOBtysWw",
"avatar_url": "https://avatars.githubusercontent.com/u/115125339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anruijian",
"html_url": "https://github.com/anruijian",
"followers_url": "https://api.github.com/users/anruijian/followers",
"following_url": "https://api.github.com/users/anruijian/following{/other_user}",
"gists_url": "https://api.github.com/users/anruijian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anruijian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anruijian/subscriptions",
"organizations_url": "https://api.github.com/users/anruijian/orgs",
"repos_url": "https://api.github.com/users/anruijian/repos",
"events_url": "https://api.github.com/users/anruijian/events{/privacy}",
"received_events_url": "https://api.github.com/users/anruijian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"I simplified a bit for future readers:\r\n\r\n```python\r\nexample = \"After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank.\"\r\n\r\npipeline_without_padding_as_an_argument = pipeline(\r\n \"feature-extraction\",\r\n model=\"bert-base-uncased\",\r\n return_tensors=True,\r\n)\r\npipeline_with_padding_in_kwarg = pipeline(\r\n \"feature-extraction\",\r\n model=\"bert-base-uncased\",\r\n tokenize_kwargs={\"padding\": \"max_length\"},\r\n return_tensors=True,\r\n)\r\n```\r\n\r\nAlso please note that `padding: \"max_length\"` makes unnecessary long tensors in most cases, slowing down the overal inference of your model.\r\n\r\nI would refrain heavily from using it in a pipeline. `{\"padding\": True}` should be better.",
"That makes sense. Do we need to make it clear in the documentation about how to use `tokenize_kwargs`?",
"PRs to make it clearer are welcome for sure ! ",
"@Narsil Submitted PR #22031 that adds the `tokenize_kwargs` definition in `FeatureExtrationPipeline`. Thanks!",
"Thanks it looks great."
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.26.0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- Huggingface_hub version: 0.12.0
- PyTorch version (GPU?): 1.13.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The documentation for the [FeatureExtractionPipeline](https://huggingface.co/docs/transformers/v4.26.1/en/main_classes/pipelines#transformers.FeatureExtractionPipeline) does not currently provide instructions on how to use the truncation and padding arguments.
These arguments can be passed in the `tokenize_kwargs` parameter, which is parsed by the [self._sanitize_parameters](https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/pipelines/feature_extraction.py#L58) method. While the `truncation` argument can also be passed as a separate keyword argument, the `padding` argument is only recognized if it is included in `tokenize_kwargs`.
To improve clarity, the documentation should explicitly mention the `tokenize_kwargs` parameter for passing tokenizer arguments, and note that only the `truncation` argument can be used as a standalone keyword argument, while other tokenizer parameters must be included in `tokenize_kwargs`.
I can submit a PR to add the documentation if it sounds good to you!
### Expected behavior
Below is code showing how padding should be used in `FeatureExtractionPipeline`.
```python
example = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)
pipeline_without_padding_as_an_argument = pipeline(
"feature-extraction",
model="bert-base-uncased",
tokenizer=tokenizer,
return_tensors=True,
)
pipeline_with_padding_as_an_argument = pipeline(
"feature-extraction",
model="bert-base-uncased",
tokenizer=tokenizer,
padding="max_length",
return_tensors=True,
)
pipeline_with_padding_in_kwarg = pipeline(
"feature-extraction",
model="bert-base-uncased",
tokenizer=tokenizer,
tokenize_kwargs={"padding": "max_length"},
return_tensors=True,
)
print(
pipeline_without_padding_as_an_argument(example).shape
) # torch.Size([1, 22, 768])
print(
pipeline_with_padding_as_an_argument(example).shape
) # torch.Size([1, 22, 768]) padding = max_length not working
print(
pipeline_with_padding_in_kwarg(example).shape
) # torch.Size([1, 512, 768]) padding = max_length working
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21971/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21970
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21970/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21970/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21970/events
|
https://github.com/huggingface/transformers/issues/21970
| 1,611,579,178
|
I_kwDOCUB6oc5gDr8q
| 21,970
|
Unable to load a pretrained model
|
{
"login": "poojitharamachandra",
"id": 39840406,
"node_id": "MDQ6VXNlcjM5ODQwNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/39840406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poojitharamachandra",
"html_url": "https://github.com/poojitharamachandra",
"followers_url": "https://api.github.com/users/poojitharamachandra/followers",
"following_url": "https://api.github.com/users/poojitharamachandra/following{/other_user}",
"gists_url": "https://api.github.com/users/poojitharamachandra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poojitharamachandra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poojitharamachandra/subscriptions",
"organizations_url": "https://api.github.com/users/poojitharamachandra/orgs",
"repos_url": "https://api.github.com/users/poojitharamachandra/repos",
"events_url": "https://api.github.com/users/poojitharamachandra/events{/privacy}",
"received_events_url": "https://api.github.com/users/poojitharamachandra/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Could you paste the code you are using?\r\n```py\r\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"microsoft/unixcoder-base\")\r\nmodel = AutoModel.from_pretrained(\"microsoft/unixcoder-base\")\r\n```\r\nworks perfectly fine.",
"from transformers import (WEIGHTS_NAME, AdamW, get_linear_schedule_with_warmup,\r\n RobertaConfig, RobertaModel, RobertaTokenizer) \r\n\r\n tokenizer = RobertaTokenizer.from_pretrained(args.model_name_or_path)\r\n config = RobertaConfig.from_pretrained(args.model_name_or_path)\r\n model = RobertaModel.from_pretrained(args.model_name_or_path) \r\n.............................\r\n\r\n\r\n\r\n\r\n\r\n\r\n model = RobertaModel.from_pretrained('./changesets_model')\r\n model = Model(model)\r\n model.to(args.device)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.18.0
- Platform: Linux-4.18.0-425.10.1.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
OSError: Can't load config for '~/Unixcoder/model/changesets_model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '~/Unixcoder/model/changesets_model' is the correct path to a directory containing a config.json file
I am trying to load a second model during the course of training the first model.
I am able to load the model the first time, but not the second time (even though all the config files are present).
I am trying to load the model downloaded from https://huggingface.co/microsoft/unixcoder-base/tree/main
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. load the model from downloaded version of https://huggingface.co/microsoft/unixcoder-base/tree/main
2. train the model on custom dataset
3. load the similar model that is pretrained on different dataset
### Expected behavior
model is successfully loaded
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21970/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21969
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21969/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21969/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21969/events
|
https://github.com/huggingface/transformers/pull/21969
| 1,611,573,027
|
PR_kwDOCUB6oc5LYHbj
| 21,969
|
Add check before int casting for PIL conversion
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Adds safeguards when images are converted to PIL images:
* Additional condition if inferring `do_rescale`
* Raise an error if values cannot be cast to `uint8`
The PIL library is used for resizing images in image processors. If not explicitly set, whether to rescale pixel values is inferred from the input type: if float, values are multiplied by 255. If the input image holds integer values between 0 and 255 but is of floating type, those pixels are rescaled to values in [0, 65025]. This results in overflow errors when they are cast to `uint8` [here](https://github.com/huggingface/transformers/blob/bc33fbf956eef62d0ba8d3cd67ee955ad5defcdb/src/transformers/image_transforms.py#L162) before converting to a `PIL.Image.Image`.
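The overflow can be sketched in plain Python: an unsafe cast to `uint8` truncates the value and wraps it modulo 256, so a float image that already holds 0-255 values gets silently corrupted once `do_rescale` is inferred and multiplies by 255. The helper below only simulates that wrap-around behavior, it is not the actual `transformers` code path.

```python
def to_uint8(x):
    """Simulate an unsafe cast to uint8: truncate to int, then wrap modulo 256."""
    return int(x) % 256

pixel = 200.0              # float-typed image that already stores 0-255 values
rescaled = pixel * 255     # inferred do_rescale multiplies by 255 -> 51000.0
print(to_uint8(rescaled))  # -> 56, not 200: the value wrapped around
```

Raising an error before the cast, as this PR does, surfaces the problem instead of producing a garbled image.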
Fixes #21915
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21969/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21969",
"html_url": "https://github.com/huggingface/transformers/pull/21969",
"diff_url": "https://github.com/huggingface/transformers/pull/21969.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21969.patch",
"merged_at": 1678187650000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21968
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21968/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21968/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21968/events
|
https://github.com/huggingface/transformers/pull/21968
| 1,611,560,310
|
PR_kwDOCUB6oc5LYElM
| 21,968
|
[TF] Fix creating a PR while pushing in TF framework
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Yep, adding this",
"I had a question, the `create_pr` function parameter is not available in the corresponding `PyTorch` or `Flax` implementation, I was wondering if this is intended. ",
"Not sure I follow, torch already has this parameter. "
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #21967, where TF models can't push to the Hub and open a PR at the same time. A test should probably be added.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21968/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21968",
"html_url": "https://github.com/huggingface/transformers/pull/21968",
"diff_url": "https://github.com/huggingface/transformers/pull/21968.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21968.patch",
"merged_at": 1678206728000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21967
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21967/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21967/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21967/events
|
https://github.com/huggingface/transformers/issues/21967
| 1,611,559,489
|
I_kwDOCUB6oc5gDnJB
| 21,967
|
[TF] Can't open a pr when pushing
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante "
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
Creating a PR when uploading a model should work.
```python
>>> from transformers import FlaxT5ForConditionalGeneration, TFT5ForConditionalGeneration
>>> import jax.numpy as jnp
>>> model = FlaxT5ForConditionalGeneration.from_pretrained("./art/flan-ul2", dtype = jnp.bfloat16, from_pt = True)
>>> model.push_to_hub("google/flan-ul2", use_auth_token = "XXXXX",create_pr = True)
>>> del model
>>> model = TFT5ForConditionalGeneration.from_pretrained("./art/flan-ul2", from_pt = True)
>>> model.push_to_hub("google/flan-ul2", use_auth_token = "XXXXX",create_pr = True)
File "/home/arthur_huggingface_co/transformers/src/transformers/modeling_tf_utils.py", line 2986, in push_to_hub
self.create_model_card(**base_model_card_args)
TypeError: create_model_card() got an unexpected keyword argument 'create_pr'
```
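The shape of the fix is simply to strip hub-only kwargs before forwarding the rest to the model-card helper. A toy sketch (all names below are hypothetical stand-ins, not the actual `modeling_tf_utils` code):

```python
def create_model_card_sketch(model_name=None):
    # Stand-in for create_model_card(); it does not accept hub-only kwargs.
    return f"card for {model_name}"

def push_to_hub_sketch(**kwargs):
    # Filter out kwargs that only make sense for the Hub upload itself,
    # so they never reach the model-card helper and raise TypeError.
    hub_only = {"create_pr", "use_auth_token"}
    card_args = {k: v for k, v in kwargs.items() if k not in hub_only}
    return create_model_card_sketch(**card_args)

# With the filter in place, extra hub kwargs no longer raise TypeError.
print(push_to_hub_sketch(model_name="flan-ul2", create_pr=True))  # card for flan-ul2
```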
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21967/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21967/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21966
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21966/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21966/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21966/events
|
https://github.com/huggingface/transformers/pull/21966
| 1,611,555,392
|
PR_kwDOCUB6oc5LYDg6
| 21,966
|
Use larger atol in `torch.allclose` for some tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Running CI against torch 2.0 (and therefore CUDA 11.7), some tests for `BridgeTowerModel` failed:
- test_disk_offload
- test_cpu_offload
- test_model_parallelism
While with torch `1.13.1` (and CUDA 11.6) the difference between `base_output` and `new_output` in these 3 tests is 0.0, with torch `2.0` we get differences in the range `1e-7 ~ 3e-6`.
**Deep debugging reveals that the first non-zero difference occurs when `nn.MultiheadAttention` is called.**
This PR increases the atol to `1e-5` (the default is `1e-8`) in `ModelTesterMixin`.
If we don't feel comfortable doing this for all model tests, we can override these 3 tests in `BridgeTowerModelTest`.
(But as more models start using `nn.MultiheadAttention`, it's best to keep this larger value in the common testing file.)
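For reference, the tolerance rule being relaxed is `|a - b| <= atol + rtol * |b|` (sketched below with NumPy's `allclose`, which uses the same formula as `torch.allclose`; the drift value is illustrative):

```python
import numpy as np

base_output = np.array([1.0, 2.0, 3.0])
new_output = base_output + 3e-6  # drift of the size seen under torch 2.0 / CUDA 11.7

# With the default atol (1e-8) the tiny numerical drift fails the check...
assert not np.allclose(base_output, new_output, rtol=0.0, atol=1e-8)
# ...while the relaxed atol=1e-5 accepts it.
assert np.allclose(base_output, new_output, rtol=0.0, atol=1e-5)
```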
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21966/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21966",
"html_url": "https://github.com/huggingface/transformers/pull/21966",
"diff_url": "https://github.com/huggingface/transformers/pull/21966.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21966.patch",
"merged_at": 1678120861000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21965
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21965/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21965/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21965/events
|
https://github.com/huggingface/transformers/pull/21965
| 1,611,092,004
|
PR_kwDOCUB6oc5LWfE4
| 21,965
|
[🛠️] Fix-whisper-breaking-changes
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Test are failing because we do not check the `model.config` updating and will look good! ",
"The same should now be applied to both the `TF` and the `Flax` version as the overwriting of the `generate` function is also supported. Will open a follow up PR for these ",
"For TF and flax #21334"
] | 1,678
| 1,706
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
Should fix the backward compatibility issue with `model.config.forced_decoder_ids = ...` and should help users who want to generate with timestamps.
Fixes #21937 and #21878
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21965/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21965",
"html_url": "https://github.com/huggingface/transformers/pull/21965",
"diff_url": "https://github.com/huggingface/transformers/pull/21965.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21965.patch",
"merged_at": 1678782229000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21964
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21964/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21964/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21964/events
|
https://github.com/huggingface/transformers/pull/21964
| 1,610,812,653
|
PR_kwDOCUB6oc5LVjNb
| 21,964
|
Add BridgeTowerForContrastiveLearning
|
{
"login": "abhiwand",
"id": 12353176,
"node_id": "MDQ6VXNlcjEyMzUzMTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/12353176?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhiwand",
"html_url": "https://github.com/abhiwand",
"followers_url": "https://api.github.com/users/abhiwand/followers",
"following_url": "https://api.github.com/users/abhiwand/following{/other_user}",
"gists_url": "https://api.github.com/users/abhiwand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhiwand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhiwand/subscriptions",
"organizations_url": "https://api.github.com/users/abhiwand/orgs",
"repos_url": "https://api.github.com/users/abhiwand/repos",
"events_url": "https://api.github.com/users/abhiwand/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhiwand/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@sgugger We have addressed all of your review in the latest commit. We have also added tests for BridgeTowerForContrastiveLearning. Would you please review the latest commit? We are looking forward to having this PR merged into main.\r\nThanks",
"Thanks for adding this model @abhiwand @tileintel ! Great to see it merged into the library :) \r\n\r\nFor the model tests, some of the configuration values in `BridgeTowerModelTester` result in large models being created and used in the test suite e.g. `vocab_size = 50265` set [here](https://github.com/huggingface/transformers/blob/bcc8d30affba29c594320fc80e4a4422fb850175/tests/models/bridgetower/test_modeling_bridgetower.py#L97), which results in periodic OOM errors in the CI runs. \r\n\r\nCould you add a follow up PR for `BridgeTowerModelTester` and `BridgeTowerModelTest` to have smaller default values to create lighter tests? A good reference for this would be [CLIP](https://github.com/huggingface/transformers/blob/main/tests/models/clip/test_modeling_clip.py). Ideally we would also have a similar structure of test classes for the different modalities i.e. `BridgeTower[Text|Vision]ModelTest(er)`. "
] | 1,678
| 1,686
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21964/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21964",
"html_url": "https://github.com/huggingface/transformers/pull/21964",
"diff_url": "https://github.com/huggingface/transformers/pull/21964.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21964.patch",
"merged_at": 1678284055000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21963
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21963/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21963/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21963/events
|
https://github.com/huggingface/transformers/pull/21963
| 1,610,702,098
|
PR_kwDOCUB6oc5LVLNL
| 21,963
|
Fix bert issue
|
{
"login": "saswatmeher",
"id": 35535056,
"node_id": "MDQ6VXNlcjM1NTM1MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/35535056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saswatmeher",
"html_url": "https://github.com/saswatmeher",
"followers_url": "https://api.github.com/users/saswatmeher/followers",
"following_url": "https://api.github.com/users/saswatmeher/following{/other_user}",
"gists_url": "https://api.github.com/users/saswatmeher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saswatmeher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saswatmeher/subscriptions",
"organizations_url": "https://api.github.com/users/saswatmeher/orgs",
"repos_url": "https://api.github.com/users/saswatmeher/repos",
"events_url": "https://api.github.com/users/saswatmeher/events{/privacy}",
"received_events_url": "https://api.github.com/users/saswatmeher/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"@younesbelkada or @gante, Can anyone of you please help me with the error I am getting after running fix-copies. Running fix-copies made changes to multiple files(19) and now I am getting tests_torch, tests_torch_and_tf error on it.",
"Hi @saswatmeher \r\nThanks for the PR! \r\nI would probably try:\r\n1- `pip install --upgrade -e .[\"quality\"]` and then run `make fix-copies`\r\nLet us know if this works!",
"@younesbelkada It worked. Thanks! "
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR fixes a bug that users can encounter when using `generate` with models that use gradient checkpointing.
Fixes issue #21737 for BERT.
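The conflict behind the bug: gradient checkpointing recomputes activations during the backward pass, which is incompatible with caching past key/values during `generate`. A hedged sketch of the guard such fixes add (the helper name is invented for illustration):

```python
import warnings

def resolve_use_cache(gradient_checkpointing: bool, training: bool, use_cache: bool) -> bool:
    """Force use_cache off when it conflicts with gradient checkpointing."""
    if gradient_checkpointing and training and use_cache:
        warnings.warn(
            "`use_cache=True` is incompatible with gradient checkpointing. "
            "Setting `use_cache=False`..."
        )
        return False
    return use_cache

assert resolve_use_cache(True, True, True) is False
assert resolve_use_cache(False, True, True) is True
```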
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.(#21737)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@younesbelkada, @gante
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21963/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21963",
"html_url": "https://github.com/huggingface/transformers/pull/21963",
"diff_url": "https://github.com/huggingface/transformers/pull/21963.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21963.patch",
"merged_at": 1678114532000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21962
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21962/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21962/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21962/events
|
https://github.com/huggingface/transformers/issues/21962
| 1,610,693,995
|
I_kwDOCUB6oc5gAT1r
| 21,962
|
Using datasets streaming mode with Trainer in DDP mode causes a memory leak
|
{
"login": "gromzhu",
"id": 15223544,
"node_id": "MDQ6VXNlcjE1MjIzNTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/15223544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gromzhu",
"html_url": "https://github.com/gromzhu",
"followers_url": "https://api.github.com/users/gromzhu/followers",
"following_url": "https://api.github.com/users/gromzhu/following{/other_user}",
"gists_url": "https://api.github.com/users/gromzhu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gromzhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gromzhu/subscriptions",
"organizations_url": "https://api.github.com/users/gromzhu/orgs",
"repos_url": "https://api.github.com/users/gromzhu/repos",
"events_url": "https://api.github.com/users/gromzhu/events{/privacy}",
"received_events_url": "https://api.github.com/users/gromzhu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @lhoestq ",
"Hi ! The axis 0 in your plot is time. Do you know how many training steps it corresponds to ?\r\n\r\nFYI the `text` loading in `datasets` samples text files line by line (with a buffer of >10MB to avoid small IO calls).\r\nMoreover .shuffle() uses a shuffle buffer of 1,000 examples, and batched `map` uses batches of 1,000 examples as well.\r\n\r\nTherefore unless there is a major leak somewhere, `datasets` doesn't use much RAM in streaming mode.\r\n\r\nFeel free to try profiling memory usage and check where the biggest source of memory comes from in the code, that would be super helpful to diagnose the potential memory leak and fix it.",
"> Hi ! The axis 0 in your plot is time. Do you know how many training steps it corresponds to ?\r\n> \r\n> FYI the `text` loading in `datasets` samples text files line by line (with a buffer of >10MB to avoid small IO calls). Moreover .shuffle() uses a shuffle buffer of 1,000 examples, and batched `map` uses batches of 1,000 examples as well.\r\n> \r\n> Therefore unless there is a major leak somewhere, `datasets` doesn't use much RAM in streaming mode.\r\n> \r\n> Feel free to try profiling memory usage and check where the biggest source of memory comes from in the code, that would be super helpful to diagnose the potential memory leak and fix it.\r\n\r\nit is roughly about between 500000 steps - 650000 steps ",
"> Hi ! The axis 0 in your plot is time. Do you know how many training steps it corresponds to ?\r\n> \r\n> FYI the `text` loading in `datasets` samples text files line by line (with a buffer of >10MB to avoid small IO calls). Moreover .shuffle() uses a shuffle buffer of 1,000 examples, and batched `map` uses batches of 1,000 examples as well.\r\n> \r\n> Therefore unless there is a major leak somewhere, `datasets` doesn't use much RAM in streaming mode.\r\n> \r\n> Feel free to try profiling memory usage and check where the biggest source of memory comes from in the code, that would be super helpful to diagnose the potential memory leak and fix it.\r\n\r\ni try use memory_profiler to profiling memory ,but profiling memory can only report the master thread memory. do you know which tool can report the dataloader worker thread memory?",
"I haven't tried memory_profiler with multiprocessing, but you can already try iterating on the DataLoader without multiprocessing and check if you observe a memory leak.",
"> I haven't tried memory_profiler with multiprocessing, but you can already try iterating on the DataLoader without multiprocessing and check if you observe a memory leak.\r\n\r\nset dataloader_num=0 ,no memory leak",
"This could be an issue with the torch `DataLoader` then, or python multiprocessing.\r\n\r\nOn the `datasets` side this is the whole code that `yield` example if `num_worker > 0`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/c5ca1d86949ec3a5fdaec03b80500fb822bcfab4/src/datasets/iterable_dataset.py#L843\r\n\r\nwhich is almost identical to the code without multiprocessing:\r\n\r\nhttps://github.com/huggingface/datasets/blob/c5ca1d86949ec3a5fdaec03b80500fb822bcfab4/src/datasets/iterable_dataset.py#L937-L945\r\n\r\nCould you check on another environment that you also observe the memory leak ?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
NONE
| null |
### System Info
pytorch 1.11.0
py 3.8
cuda 11.3
transformers 4.26.1
datasets 2.9.0
### Who can help?
@sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
import os
import time
import datetime
import sys
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, SequentialSampler,DistributedSampler,BatchSampler
torch.manual_seed(42)
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, GPT2Model,DataCollatorForLanguageModeling,AutoModelForCausalLM
from transformers import AdamW, get_linear_schedule_with_warmup
hf_model_path ='./Wenzhong-GPT2-110M'
tokenizer = GPT2Tokenizer.from_pretrained(hf_model_path)
tokenizer.add_special_tokens({'pad_token': '<|pad|>'})
from datasets import load_dataset
gpus=8
max_len = 576
batch_size_node = 17
save_step = 5000
gradient_accumulation = 2
dataloader_num = 4
max_step = 351000*1000//batch_size_node//gradient_accumulation//gpus
#max_step = -1
print("total_step:%d"%(max_step))
import datasets
datasets.__version__
dataset = load_dataset("text", data_files="./gpt_data_v1/*",split='train',cache_dir='./dataset_cache',streaming=True)
print('load over')
shuffled_dataset = dataset.shuffle(seed=42)
print('shuffle over')
def dataset_tokener(example, max_lenth=max_len):
    example['text'] = list(map(lambda x: x.strip() + '<|endoftext|>', example['text']))
    return tokenizer(example['text'], truncation=True, max_length=max_lenth, padding="longest")
    # return tokenizer(example[0], truncation=True, max_length=max_lenth, padding="max_length")
new_new_dataset = shuffled_dataset.map(dataset_tokener, batched=True, remove_columns=["text"])
print('map over')
configuration = GPT2Config.from_pretrained(hf_model_path, output_hidden_states=False)
model = AutoModelForCausalLM.from_pretrained(hf_model_path)
model.resize_token_embeddings(len(tokenizer))
seed_val = 42
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
from transformers import Trainer,TrainingArguments
import os
print("start train")
training_args = TrainingArguments(
    output_dir="./test_trainer",
    num_train_epochs=1.0,
    report_to="none",
    do_train=True,
    dataloader_num_workers=dataloader_num,
    local_rank=int(os.environ.get('LOCAL_RANK', -1)),
    overwrite_output_dir=True,
    logging_strategy='steps',
    logging_first_step=True,
    logging_dir="./logs",
    log_on_each_node=False,
    per_device_train_batch_size=batch_size_node,
    warmup_ratio=0.03,
    save_steps=save_step,
    save_total_limit=5,
    gradient_accumulation_steps=gradient_accumulation,
    max_steps=max_step,
    disable_tqdm=False,
    data_seed=42,
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=new_new_dataset,
eval_dataset=None,
tokenizer=tokenizer,
# Data collator will default to DataCollatorWithPadding, so we change it.
data_collator=DataCollatorForLanguageModeling(tokenizer,mlm=False),
#compute_metrics=compute_metrics if training_args.do_eval and not is_torch_tpu_available() else None,
#preprocess_logits_for_metrics=preprocess_logits_for_metrics
#if training_args.do_eval and not is_torch_tpu_available()
#else None,
)
trainer.train(resume_from_checkpoint=True)
### Expected behavior
Use the training code above.
My dataset `./gpt_data_v1` has 1000 files; each file is 120 MB.
The start command is: `python -m torch.distributed.launch --nproc_per_node=8 my_train.py`
Here is the result:

Here is the memory usage monitored over 12 hours:

Every dataloader worker allocates over 24 GB of CPU memory.
According to the 12-hour memory monitor, small amounts of memory are sometimes released, but total memory usage keeps increasing.
I don't think `datasets` streaming mode should use this much memory, so there may be a memory leak somewhere.
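For reference, the reason streaming is expected to stay flat in memory is that a shuffled streaming dataset only keeps a fixed-size buffer of examples at any time. Below is a minimal pure-Python sketch of that buffer-shuffle idea (a simplified illustration, not the actual `datasets` implementation); `buffered_shuffle` is a hypothetical name:

```python
import random

def buffered_shuffle(iterable, buffer_size, seed=42):
    # Holds at most `buffer_size` items in memory: fill the buffer once,
    # then for each new item swap it with a random buffered item and
    # yield the item that was swapped out.
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            idx = rng.randrange(buffer_size)
            buffer[idx], item = item, buffer[idx]
            yield item
    # Drain the remaining buffered items in random order.
    rng.shuffle(buffer)
    yield from buffer

# Memory stays bounded by buffer_size regardless of stream length.
shuffled = list(buffered_shuffle(range(10_000), buffer_size=100))
print(len(shuffled))  # 10000
```

With this scheme, peak memory should scale with `buffer_size` (plus tokenization batches), not with the total dataset size, which is why the steadily growing usage above looks like a leak rather than expected behavior.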
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21962/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21961
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21961/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21961/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21961/events
|
https://github.com/huggingface/transformers/issues/21961
| 1,610,509,465
|
I_kwDOCUB6oc5f_myZ
| 21,961
|
Support customized vocabulary for decoding (in model.generate)
|
{
"login": "yuchenlin",
"id": 10104354,
"node_id": "MDQ6VXNlcjEwMTA0MzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuchenlin",
"html_url": "https://github.com/yuchenlin",
"followers_url": "https://api.github.com/users/yuchenlin/followers",
"following_url": "https://api.github.com/users/yuchenlin/following{/other_user}",
"gists_url": "https://api.github.com/users/yuchenlin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuchenlin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuchenlin/subscriptions",
"organizations_url": "https://api.github.com/users/yuchenlin/orgs",
"repos_url": "https://api.github.com/users/yuchenlin/repos",
"events_url": "https://api.github.com/users/yuchenlin/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuchenlin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
    "I have read this post: https://huggingface.co/blog/constrained-beam-search\r\n\r\nBut it seems that such Constraints can only ensure that some tokens are part of the sentences, and cannot prevent other tokens from being selected during decoding. ",
"Found this post to use `bad_word_list` as the whole vocab - customized vocab as the input: \r\n\r\nhttps://stackoverflow.com/questions/63920887/whitelist-tokens-for-text-generation-xlnet-gpt-2-in-huggingface-transformers\r\n\r\nWill have a try but sounds like a bit awkward to use. ",
"cc @gante ",
"Hey @yuchenlin ๐ \r\n\r\nMy first approach would be to use `bad_word_list`, passing to it all but the tokens you want to use. It's a no-code approach, but perhaps not the most efficient computationally.\r\n\r\nAlternatively, you can write your own processor class that sets to `-inf` the logits of all but the tokens you want to consider. To do it, you would have to:\r\n1. Write your own class that implements the logic. You can see plenty of examples in [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py)\r\n2. Use your class at generation time, e.g. \r\n```py\r\ntokens_to_keep = tokenizer(xxx) # xxx = list with your valid words\r\nmy_processor = MyLogitsProcessorClass(tokens_to_keep=tokens_to_keep)\r\nmodel.generate(inputs, ..., logits_processor=LogitsProcessorList([my_processor]))\r\n```\r\n\r\nI hope this short guide helps ๐ค ",
"Hi @gante ,\r\n\r\nThanks a lot! Yeah I have tried with the `bad_wordLlist` (see example below) and I found that the generated outputs are much worse than before although they are indeed constrained to the given vocabulary. I was using beam search and I'm not sure if it is because that the vocab is so small that the normalization or other process becomes unstable. \r\n\r\nI will try the logit processor idea as well. Thank you! :D \r\n\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModelForSeq2SeqLM\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"google/flan-t5-small\")\r\n\r\nmodel = AutoModelForSeq2SeqLM.from_pretrained(\"google/flan-t5-small\")\r\n\r\n \r\nwhitelist = [\"move\", \"it\", \"pick\", \"up\", \"focus\", \"on\"]\r\nwhitelist_ids = [tokenizer.encode(word)[0] for word in whitelist]\r\nbad_words_ids=[[id] for id in range(tokenizer.vocab_size) if id not in whitelist_ids]\r\n\r\n\r\nencoder_input_str = \"Explain this concept to me: machine learning\"\r\ninput_ids = tokenizer(encoder_input_str, return_tensors=\"pt\").input_ids\r\n\r\noutputs = model.generate(\r\n input_ids,\r\n num_beams=10,\r\n do_sample=False,\r\n num_return_sequences=1,\r\n no_repeat_ngram_size=1,\r\n remove_invalid_values=True,\r\n bad_words_ids = bad_words_ids,\r\n)\r\nprint(tokenizer.decode(outputs[0], skip_special_tokens=True))\r\n```",
"@yuchenlin haha yes, the quality of the output will likely decrease significantly, that is to be expected! \r\n\r\nInstead of whitelisting words, consider the \"soft-whitelisting\" alternative: increase the odds of picking a token from the whitelist. You can easily implement this by changing the repetition penalty logits processor to boost the odds of certain tokens :)",
"Thanks a lot for the advice! I currently used a simpler method --- adding some random tokens (say 30% of the whole vocab) to the whitelist and it seems to help. \r\n\r\nWill also try your idea soon! Thanks again! :D ",
"Just in case you are interested in more diversity of these constraints, I wrote a whole package and paper about this idea: https://github.com/Hellisotherpeople/Constrained-Text-Generation-Studio",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,678
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
### Feature request
Use case:
Given a small list of tokens that is a subset of the whole vocabulary of the T5 tokenizer, for example `["put", "move", "pick", "up", "on", "in", "apple", "bag", ...]`.
When we decode using `model.generate()`, we want the model to output only sentences that consist of words from the above list (i.e., a limited vocabulary for beam search or sampling).
Maybe this is already supported in some way?
### Motivation
For some applications, we only want to decode sentences with a limited vocabulary instead of allowing open-ended generation.
### Your contribution
I'm not sure what the best way to add this feature is; if it is easy to limit the vocabulary for the generate functions, I can help add a PR.
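One way this could be implemented today is with a custom logits processor that masks every token outside the allowed set. The sketch below uses the existing `LogitsProcessor` extension point rather than proposing a new API; the class name `WhitelistLogitsProcessor` is hypothetical:

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList

class WhitelistLogitsProcessor(LogitsProcessor):
    """Sets the logits of every token outside `allowed_token_ids` to -inf,
    so beam search / sampling can only pick whitelisted tokens."""

    def __init__(self, allowed_token_ids):
        self.allowed_token_ids = torch.tensor(sorted(set(allowed_token_ids)))

    def __call__(self, input_ids, scores):
        mask = torch.full_like(scores, float("-inf"))
        mask[:, self.allowed_token_ids] = 0.0
        return scores + mask
```

At generation time this would be passed as `model.generate(input_ids, logits_processor=LogitsProcessorList([WhitelistLogitsProcessor(ids)]))`, which should be cheaper than enumerating the whole vocabulary through `bad_words_ids`.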
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21961/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21960
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21960/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21960/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21960/events
|
https://github.com/huggingface/transformers/pull/21960
| 1,610,465,758
|
PR_kwDOCUB6oc5LUXzN
| 21,960
|
Add missing parameter definition in layoutlm config
|
{
"login": "Atomnp",
"id": 45496355,
"node_id": "MDQ6VXNlcjQ1NDk2MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/45496355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Atomnp",
"html_url": "https://github.com/Atomnp",
"followers_url": "https://api.github.com/users/Atomnp/followers",
"following_url": "https://api.github.com/users/Atomnp/following{/other_user}",
"gists_url": "https://api.github.com/users/Atomnp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Atomnp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Atomnp/subscriptions",
"organizations_url": "https://api.github.com/users/Atomnp/orgs",
"repos_url": "https://api.github.com/users/Atomnp/repos",
"events_url": "https://api.github.com/users/Atomnp/events{/privacy}",
"received_events_url": "https://api.github.com/users/Atomnp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @amyeroberts "
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
Four parameters in the `LayoutLM` config were missing definitions; their definitions were added (copied from `BertConfig`).
# What does this PR do?
Fix docs, add parameter definition copying them from BertConfig
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21960/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21960/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21960",
"html_url": "https://github.com/huggingface/transformers/pull/21960",
"diff_url": "https://github.com/huggingface/transformers/pull/21960.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21960.patch",
"merged_at": 1678116011000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21959
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21959/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21959/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21959/events
|
https://github.com/huggingface/transformers/pull/21959
| 1,610,282,038
|
PR_kwDOCUB6oc5LTyLR
| 21,959
|
Fix MinNewTokensLengthLogitsProcessor when used with a list of eos tokens
|
{
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,678
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
MinNewTokensLengthLogitsProcessor is missing support for a list of eos token ids.
This PR adds the missing support, in the same way it was added in MinLengthLogitsProcessor.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21959/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21959",
"html_url": "https://github.com/huggingface/transformers/pull/21959",
"diff_url": "https://github.com/huggingface/transformers/pull/21959.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21959.patch",
"merged_at": 1678190363000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21958
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21958/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21958/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21958/events
|
https://github.com/huggingface/transformers/issues/21958
| 1,610,101,047
|
I_kwDOCUB6oc5f-DE3
| 21,958
|
Cannot get the model weight of T5 INT8 model with Transformers 4.26.1
|
{
"login": "XuhuiRen",
"id": 44249229,
"node_id": "MDQ6VXNlcjQ0MjQ5MjI5",
"avatar_url": "https://avatars.githubusercontent.com/u/44249229?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XuhuiRen",
"html_url": "https://github.com/XuhuiRen",
"followers_url": "https://api.github.com/users/XuhuiRen/followers",
"following_url": "https://api.github.com/users/XuhuiRen/following{/other_user}",
"gists_url": "https://api.github.com/users/XuhuiRen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/XuhuiRen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/XuhuiRen/subscriptions",
"organizations_url": "https://api.github.com/users/XuhuiRen/orgs",
"repos_url": "https://api.github.com/users/XuhuiRen/repos",
"events_url": "https://api.github.com/users/XuhuiRen/events{/privacy}",
"received_events_url": "https://api.github.com/users/XuhuiRen/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
    "Cool that this seems to have been fixed for you! Could you tell us what the solution was? (for future reference and other users who might stumble upon the same problem)",
    "@ArthurZucker Hi Arthur, we still have not fixed this issue. Could you please take a look? We just modified the previous description to make the problem clearer. ",
    "Hey! Is there a reason why you are not using `load_in_8bit = True`? If you install the `bitsandbytes` library, getting the 8bit quantized version of the model is as easy as the following:\r\n```python \r\n import transformers\r\n from datasets import load_dataset\r\n model_name = 't5-small'\r\n model = transformers.AutoModelForSeq2SeqLM.from_pretrained(model_name,load_in_8bit = True, device_map=\"auto\")\r\n```",
"Hello @XuhuiRen ,\r\n\r\nYour script worked fine on the `main` branch of `transformers`\r\n\r\n```python\r\nimport torch\r\nimport transformers\r\nfrom datasets import load_dataset\r\nmodel_name = 't5-small'\r\nmodel_fp32 = transformers.AutoModelForSeq2SeqLM.from_pretrained(\r\n model_name,\r\n)\r\nmodel_int8 = torch.ao.quantization.quantize_dynamic(\r\n model_fp32,\r\n {torch.nn.Linear},\r\n dtype=torch.qint8\r\n)\r\n\r\noutput = model_int8.generate(torch.LongTensor([[0, 1, 2, 3]]))\r\nprint(output)\r\n```\r\n\r\nCan you try it with:\r\n```\r\npip install git+https://github.com/huggingface/transformers.git\r\n```\r\n\r\nFor more context, the PR: https://github.com/huggingface/transformers/pull/21843 solved your issue",
    "Hi, @ArthurZucker and @younesbelkada, thanks a lot for your reply. Your solution works for my issue. "
] | 1,678
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.0+cpu
- Tensorflow version (GPU?): not installed (No)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <No>
### Who can help?
@ArthurZucker @younesbelkada @sgu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The code is shown as follows:
```python
import torch
import transformers
from datasets import load_dataset
model_name = 't5-small'
model_fp32 = transformers.AutoModelForSeq2SeqLM.from_pretrained(
model_name,
)
model_int8 = torch.ao.quantization.quantize_dynamic(
model_fp32,
{torch.nn.Linear},
dtype=torch.qint8)
def get_example_inputs(model_name, dataset_name='sst2'):
    tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
    dataset = load_dataset(dataset_name, split='validation')
    text = dataset[0]['text'] if dataset_name == 'lambada' else dataset[0]['sentence']
    inputs = tokenizer(text, padding='max_length', max_length=195, return_tensors='pt')
    example_inputs = inputs['input_ids'][0].to('cpu').unsqueeze(0)
    return example_inputs
example_inputs = get_example_inputs(model_name, dataset_name='lambada')
output = model_int8.generate(example_inputs)
print(output)
```
The error message is shown as follows:
[Issue Report.txt](https://github.com/huggingface/transformers/files/10905558/Issue.Report.txt)
### Expected behavior
The expected behavior is to quantize the FP32 T5 model into INT8 format and use the INT8 model to generate output.
This code worked with the previous version, Transformers 4.26.0. After the recent updates, it no longer runs.
We found the error results from the fix for another issue: https://github.com/huggingface/transformers/issues/20287. That fix accesses the weight via `self.wo.weight`, but in the INT8 module the weight becomes a method and must be accessed as `self.wo.weight()`. Please reconsider the previous fix for that issue to make it compatible.
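To illustrate the API difference being described, here is a minimal standalone sketch (independent of T5): after `torch.ao.quantization.quantize_dynamic`, `weight` on a quantized `Linear` is a method rather than a `Parameter` attribute.

```python
import torch

# A float model with one Linear layer, and its dynamically quantized copy.
float_model = torch.nn.Sequential(torch.nn.Linear(4, 4))
int8_model = torch.ao.quantization.quantize_dynamic(
    float_model, {torch.nn.Linear}, dtype=torch.qint8
)

# FP32 module: `weight` is a tensor attribute.
assert isinstance(float_model[0].weight, torch.nn.Parameter)

# INT8 module: `weight` is a method and must be called to get the tensor.
assert callable(int8_model[0].weight)
assert isinstance(int8_model[0].weight(), torch.Tensor)
```

This is why code that reads `self.wo.weight` directly breaks once the module has been dynamically quantized.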
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21958/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21957
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21957/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21957/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21957/events
|
https://github.com/huggingface/transformers/pull/21957
| 1,610,062,223
|
PR_kwDOCUB6oc5LTG1G
| 21,957
|
Update expected values in `XLMProphetNetModelIntegrationTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
After #21870, we also need to update some expected values in `XLMProphetNetModelIntegrationTest` (as has been done for `ProphetNetModelIntegrationTest` in that PR)
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21957/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21957",
"html_url": "https://github.com/huggingface/transformers/pull/21957",
"diff_url": "https://github.com/huggingface/transformers/pull/21957.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21957.patch",
"merged_at": 1678090545000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21956
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21956/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21956/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21956/events
|
https://github.com/huggingface/transformers/pull/21956
| 1,610,029,650
|
PR_kwDOCUB6oc5LTAp0
| 21,956
|
[Generate] Fix gradient_checkpointing and use_cache bug for BLOOM
|
{
"login": "asrimanth",
"id": 30816357,
"node_id": "MDQ6VXNlcjMwODE2MzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/30816357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asrimanth",
"html_url": "https://github.com/asrimanth",
"followers_url": "https://api.github.com/users/asrimanth/followers",
"following_url": "https://api.github.com/users/asrimanth/following{/other_user}",
"gists_url": "https://api.github.com/users/asrimanth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asrimanth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asrimanth/subscriptions",
"organizations_url": "https://api.github.com/users/asrimanth/orgs",
"repos_url": "https://api.github.com/users/asrimanth/repos",
"events_url": "https://api.github.com/users/asrimanth/events{/privacy}",
"received_events_url": "https://api.github.com/users/asrimanth/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
Fixes #21737 for Bloom.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
cc @younesbelkada @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21956/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21956",
"html_url": "https://github.com/huggingface/transformers/pull/21956",
"diff_url": "https://github.com/huggingface/transformers/pull/21956.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21956.patch",
"merged_at": 1678114601000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21955
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21955/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21955/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21955/events
|
https://github.com/huggingface/transformers/pull/21955
| 1,609,993,956
|
PR_kwDOCUB6oc5LS6Cq
| 21,955
|
LLaMA Implementation
|
{
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"does this work with int8?",
"> does this work with int8?\r\n\r\nNo idea! I haven't messed with int8 too much myself. It ought to be compatible with whatever is already supported in the HF models.",
"nice work! thanks for the upload and I hope it gets pulled",
"_The documentation is not available anymore as the PR was closed or merged._",
"It looks like the tests which are currently failing are unrelated to the LLaMA code, so this should be good to review/use.\r\n\r\nIf folks can try it out (particularly with the larger, sharded models) and see if there are any issues, that will be helpful!",
    "> It looks like the tests which are currently failing are unrelated to the LLaMA code, so this should be good to review/use.\r\n> \r\n> If folks can try it out (particularly with the larger, sharded models) and see if there are any issues, that will be helpful!\r\n\r\nAt least the convert script seems to work fine. I was able to convert 7B to 30B. I do not have enough RAM to convert 65B.",
"Great work. thanks for putting this together",
"After replacing transformers from Kobold with this PR I am able to load the shards as expected. Just I cant generate anything because Kobold still needs some changes.\r\n\r\n",
"> > does this work with int8?\r\n> \r\n> No idea! I haven't messed with int8 too much myself. It ought to be compatible with whatever is already supported in the HF models.\r\n\r\nInt8 seems not working but float16 is fine, in my hasty put-together test at https://github.com/zsc/llama_infer . Please throw a comment in case you find something!",
"@zphang I'm not able to get something like `tokenizer = AutoTokenizer.from_pretrained(\"/data/llama/hf/7b/tokenizer/\")` to work. Is this intentional or just leaving AutoTokenizer for future work?",
"> @zphang I'm not able to get something like `tokenizer = AutoTokenizer.from_pretrained(\"/data/llama/hf/7b/tokenizer/\")` to work. Is this intentional or just leaving AutoTokenizer for future work?\r\n\r\nWhat issue are you having / what is the error?",
"I have tested the code and these are my findings:\r\n\r\n1. The conversion script works.\r\n2. Loading the model works.\r\n3. Loading the tokenizer with `transformers.LLaMATokenizer.from_pretrained` works.\r\n4. Loading the tokenizer with `AutoTokenizer.from_pretrained` does not work and generates this error:\r\n\r\n```\r\nOSError: /tmp/converted/tokenizer/ does not appear to have a file named config.json. Checkout\r\n'https://huggingface.co//tmp/converted/tokenizer//None' for available files.\r\n\r\n```\r\n\r\n5. The generated text seems to be incoherent. If I try these default values for the generation parameters:\r\n\r\n```\r\nmodel.generate(input_ids, eos_token_id=2, do_sample=True, temperature=1, top_p=1, typical_p=1, repetition_penalty=1, top_k=50, min_length=0, no_repeat_ngram_size=0, num_beams=1, penalty_alpha=0, length_penalty=1, early_stopping=False, max_new_tokens=200).cuda()\r\n```\r\n\r\nwith this prompt:\r\n\r\n```\r\nCommon sense questions and answers\r\n\r\nQuestion: What color is the sky?\r\nFactual answer:\r\n```\r\n\r\nI get\r\n\r\n```\r\nCommon sense questions and answers\r\n\r\nQuestion: What color is the sky?\r\nFactual answer: Tags: python, django, django-models\r\n\r\nQuestion: Using Django with multiple databases\r\n\r\nI am attempting to use django with multiple databases, and I have the following code:\r\n\r\n\\begin{code}\r\nDATABASES = {\r\n 'default': {\r\n 'ENGINE': 'django.db.backends.sqlite3',\r\n 'NAME': ':memory:',\r\n },\r\n 'db_one': {\r\n 'ENGINE': 'django.db.backends.sqlite3',\r\n 'NAME': 'db_one',\r\n },\r\n 'db_two': {\r\n 'ENGINE': 'django.db.backends.sqlite3',\r\n 'NAME': 'db_two',\r\n },\r\n}\r\n```\r\n\r\nIt seems to me that prompts are being completely ignored.\r\n\r\n6. Loading in 8-bit mode with `load_in_8bit=True` works.",
"This is OK: `tokenizer = transformers.LLaMATokenizer.from_pretrained(\"/data/llama/hf/7b/tokenizer/\")`\r\n\r\nIf using `tokenizer = AutoTokenizer.from_pretrained(\"/data/llama/hf/7b/tokenizer/\"` then it will complain no \"config.json\". \r\n```\r\nOSError: /data/llama/hf/7b/tokenizer/ does not appear to have a file named config.json. Checkout \r\n'https://huggingface.co//data/llama/hf/7b/tokenizer//None' for available files.\r\n```\r\n\r\nI then hacked by softlinking `/data/llama/hf/7b/tokenizer/special_tokens_map.json` to `/data/llama/hf/7b/tokenizer/config.json` and it works. So maybe just rename?\r\n\r\nAnyway, can now happily play with LLaMA in Hugging Face world and thanks for the great work!",
"Thanks for the comments. Looks like the saved tokenizer doesn't work for `AutoTokenizer` but works if you directly instantiate from `LLaMATokenizer`. Maybe one of the HF folks can chime in on the best way to address that.\r\n\r\n> The generated text seems to be incoherent. If I try these default values for the generation parameters:\r\n\r\nCan you check the input_ids you're using to generate? The tokenizer currently adds both BOS and EOS tokens by default, and an EOS might cause the model to ignore your prompt.\r\n\r\nPerhaps I can set EOS to not be added by default so it operates closer to expected behavior.",
"For this prompt:\r\n\r\n```\r\n'Common sense questions and answers\\n\\nQuestion: What color is the sky?\\nFactual answer:'\r\n```\r\n\r\nthese are the input_ids:\r\n\r\n```\r\ntensor([[ 1, 13103, 4060, 5155, 322, 6089, 13, 13, 16492, 29901,\r\n 1724, 2927, 338, 278, 14744, 29973, 13, 29943, 19304, 1234,\r\n 29901, 2]], device='cuda:0')\r\n```\r\n\r\nI do not know how to interpret these numbers, but if there is an EOS token in that tensor and that token is causing the text generation to derail, changing that default would be valuable.",
"1 is BOS and 2 is EOS. Can you try without the last input id?\r\n\r\nI also added an example in my PR message.",
"I confirm that doing this \r\n```\r\n input_ids = input_ids[:, :-1]\r\n\r\n```\r\n\r\nto remove the last input id before calling `model.generate(...)` causes the text generation to become coherent:\r\n\r\n```\r\nCommon sense questions and answers\r\n\r\nQuestion: What color is the sky?\r\nFactual answer: The sky is blue. The sky is blue, and it is a fact that it is blue. The sky is indisputably blue.\r\n\r\n```",
"Added a commit that should fix the tokenizer issues, and not add BOS and EOS by default.",
"Awesome, I confirm that the text generation is coherent by default now.\r\n\r\nI still cannot load the tokenizer with `AutoTokenizer.from_pretrained`. The error has now changed to this:\r\n\r\n```\r\n File \"/tmp/transformers/src/transformers/models/auto/tokenization_auto.py\", line 694, in from_pretrained\r\n tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]\r\n File \"/tmp/transformers/src/transformers/models/auto/auto_factory.py\", line 610, in __getitem__\r\n raise KeyError(key)\r\nKeyError: <class 'transformers.models.llama.configuration_llama.LLaMAConfig'>\r\n\r\n```",
"> > does this work with int8?\r\n> \r\n> No idea! I haven't messed with int8 too much myself. It ought to be compatible with whatever is already supported in the HF models.\r\n\r\nAfter the fix with EOS, int8 (bitsandbytes) looks decent. Example in https://github.com/zsc/llama_infer/blob/main/README.md",
"After https://github.com/huggingface/transformers/pull/21955/commits/459e2ac9f551650ced58deb1c65f06c3d483d606, `AutoTokenizer.from_pretrained` now works as expected.\r\n",
"KoboldAI now works",
"I'd like to see a more memory-efficient conversion script, the current version loads everything into system memory which makes converting the 30B and 65B variants challenging on some systems",
"Yes, this is a quick and dirty version that loads everything into memory.\r\nOne issue is that the way the weights are sharded (for tensor parallelism) is orthogonal to the way that HF shards the weights (by layer). So either we have to load everything in at once, or we have to load/write multiple times. The latter would be slower but useful for folks with less memory.",
"Has anyone tested loading 65B with `accelerate` to load on multiple GPUs?",
"I can't load the 7B model to cuda with one A4000\r\nshould I just change the gpu?\r\n",
"I'm observing some strange behavior with the tokenizer when encoding sequences beginning with a newline:\r\n```\r\n>>> t = AutoTokenizer.from_pretrained(\"llama_hf/tokenizer\")\r\n>>> res = t.encode(\"\\nYou:\")\r\n>>> res\r\n[29871, 13, 3492, 29901]\r\n>>> t.decode(res)\r\n'You:'\r\n```\r\n\r\nThe newline seems to get lost somewhere along the way.\r\n\r\nEDIT: Looking into this, it seems it might be the expected behavior of `sentencepiece`.",
"> Has anyone tested loading 65B with `accelerate` to load on multiple GPUs?\r\n\r\n||fp16|int8(bitsandbytes)|\r\n|--|--|--|\r\n|V100|OK, 5xV100|Bad results, short generated sequences|\r\n|A100|OK, 6xA100 when using \"auto\"|OK, 3xA100|\r\n\r\nYes, I currently have a 65B fp16 model running on 6xV100 now (5X should be enough). My working code is at https://github.com/zsc/llama_infer/ . If there are CUDA OOM due to bad distribution of weights among cards, one thing worth trying is tweaking the device_map (`accelerate` seems to only counts weights when enforcing the memory cap in device_map, so there is an art for setting custom cap a little lower for every card, especially card 0).\r\n\r\nStrangely, int8 (LLM.int8 to be specific) for 65B model works like a charm on A100, but leads to bad results on V100 with abnormally short generated sequences.",
"> Strangely, int8 (LLM.int8 to be specific) for 65B model works like a charm on A100, but leads to bad results on V100 with abnormally short generated sequences.\r\n\r\nI will have a look at this later next week. The V100 takes a different code path than the A100 because the V100 does not support Int8 tensor cores. I think that is the issue here. We will soon publish FP4 inference which should be more universal and easier to use.",
"Jumping on @thomasw21 comment, we sadly cannot accept any code licensed GPLv3 as it would taint the whole Transformers library under that license. This means that the modeling code should be copied from GPT-NeoX whenever possible (with Copied from statements) since I believe that this model is very close to it and that you should be super familiar with it @zphang ;-) , and that no parts of the modeling code should be copy-pasted from the original Llama code.\r\n\r\nWe also cannot attribute Copyright to Meta-AI /Meta in all those files, as attributing that copyright would admit the code in the PR is based on theirs and thus get us back to the license problem."
] | 1,677
| 1,691
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Implementation of LLaMA models (https://arxiv.org/abs/2302.13971). Model weights can be requested [here](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform). Weight conversion script is included.
Weight conversion can be run via:
```bash
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
--input_dir /path/to/downloaded/llama/weights \
--model_size 7B \
--output_dir /output/path
```
Models can then be loaded via:
```python
tokenizer = transformers.LLaMATokenizer.from_pretrained("/output/path/tokenizer/")
model = transformers.LLaMAForCausalLM.from_pretrained("/output/path/llama-7b/")
```
Example:
```python
batch = tokenizer(
    "The primary use of LLaMA is research on large language models, including",
    return_tensors="pt",
    add_special_tokens=False,
)
batch = {k: v.cuda() for k, v in batch.items()}
generated = model.generate(batch["input_ids"], max_length=100)
print(tokenizer.decode(generated[0]))
```
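A note from the review thread: a tokenizer that appends EOS to the prompt can make the model ignore the prompt entirely. A minimal sketch (plain Python with the hypothetical ids from the thread; `1` is BOS and `2` is EOS) of trimming a trailing EOS before calling `generate`:

```python
def strip_trailing_eos(input_ids, eos_token_id=2):
    """Drop a trailing EOS id so the model treats the prompt as unfinished text."""
    if input_ids and input_ids[-1] == eos_token_id:
        return input_ids[:-1]
    return input_ids

# Ids modeled on the thread's example prompt (1 = BOS, 2 = EOS):
ids = [1, 13103, 4060, 5155, 29901, 2]
print(strip_trailing_eos(ids))  # [1, 13103, 4060, 5155, 29901]
```

With `add_special_tokens=False` (as in the example above) this trimming is unnecessary, since no EOS is added in the first place.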
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/21796
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21955/reactions",
"total_count": 200,
"+1": 14,
"-1": 0,
"laugh": 4,
"hooray": 4,
"confused": 0,
"heart": 88,
"rocket": 85,
"eyes": 5
}
|
https://api.github.com/repos/huggingface/transformers/issues/21955/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21955",
"html_url": "https://github.com/huggingface/transformers/pull/21955",
"diff_url": "https://github.com/huggingface/transformers/pull/21955.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21955.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21954
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21954/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21954/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21954/events
|
https://github.com/huggingface/transformers/issues/21954
| 1,609,969,774
|
I_kwDOCUB6oc5f9jBu
| 21,954
|
`from_pretrained` broken 4.26.1
|
{
"login": "NicholasKao1029",
"id": 45542006,
"node_id": "MDQ6VXNlcjQ1NTQyMDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/45542006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NicholasKao1029",
"html_url": "https://github.com/NicholasKao1029",
"followers_url": "https://api.github.com/users/NicholasKao1029/followers",
"following_url": "https://api.github.com/users/NicholasKao1029/following{/other_user}",
"gists_url": "https://api.github.com/users/NicholasKao1029/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NicholasKao1029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NicholasKao1029/subscriptions",
"organizations_url": "https://api.github.com/users/NicholasKao1029/orgs",
"repos_url": "https://api.github.com/users/NicholasKao1029/repos",
"events_url": "https://api.github.com/users/NicholasKao1029/events{/privacy}",
"received_events_url": "https://api.github.com/users/NicholasKao1029/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Hey! If you go to `hf.co/runwayml/stable-diffusion-v1-5` you will se that there are no `tokenizer.json` or any files related to tokenization. However there is a `tokenizer` folder. The new version of transformers supports `subfolder`. \r\nThe model on the hub was modified, thus the command that you are looking for is probably: \r\n```python \r\n>>> from transformers import AutoTokenizer\r\n>>> AutoTokenizer.from_pretrained(\"runwayml/stable-diffusion-v1-5\",subfolder= \"tokenizer\")\r\n```\r\nwhich worked for me ๐ ",
"oh yes this is great thank you"
] | 1,677
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-53-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.0a0+d0d6b1f (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@amyeroberts
@ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
code block
```python
from transformers import CLIPTokenizer, CLIPTextModel
import torch
tokenizer=CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5"),
text_encoder=CLIPTextModel.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
```
output
```
Traceback (most recent call last):
File "diffusers-oneflow/examples/poc.py", line 17, in <module>
tokenizer=CLIPTokenizer.from_pretrained("runwayml/stable-diffusion-v1-5"),
File "/opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'runwayml/stable-diffusion-v1-5'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'runwayml/stable-diffusion-v1-5' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
```
### Expected behavior
Expect to be able to consume and not throw
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21954/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21953
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21953/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21953/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21953/events
|
https://github.com/huggingface/transformers/pull/21953
| 1,609,908,395
|
PR_kwDOCUB6oc5LSqJD
| 21,953
|
Disable DDP for neuron
|
{
"login": "sangeethabal",
"id": 83724701,
"node_id": "MDQ6VXNlcjgzNzI0NzAx",
"avatar_url": "https://avatars.githubusercontent.com/u/83724701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sangeethabal",
"html_url": "https://github.com/sangeethabal",
"followers_url": "https://api.github.com/users/sangeethabal/followers",
"following_url": "https://api.github.com/users/sangeethabal/following{/other_user}",
"gists_url": "https://api.github.com/users/sangeethabal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sangeethabal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sangeethabal/subscriptions",
"organizations_url": "https://api.github.com/users/sangeethabal/orgs",
"repos_url": "https://api.github.com/users/sangeethabal/repos",
"events_url": "https://api.github.com/users/sangeethabal/events{/privacy}",
"received_events_url": "https://api.github.com/users/sangeethabal/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
This PR disables DDP when running on Neuron. Currently we override the `_wrap_model` function in `trainer.py` to disable DDP; we want to avoid that override by skipping the DDP wrapper directly whenever Neuron is in use.
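As a sketch of the idea (hypothetical helper names, not the actual `trainer.py` code): the decision to wrap the model can be expressed as a predicate that returns `False` on Neuron, where the XLA backend handles gradient synchronization itself:

```python
def should_wrap_in_ddp(local_rank: int, on_neuron: bool) -> bool:
    """Wrap with DistributedDataParallel only for ordinary multi-process
    training; skip it on Neuron, where the XLA backend syncs gradients."""
    return local_rank != -1 and not on_neuron

# Ordinary distributed run -> wrap; Neuron run -> don't.
print(should_wrap_in_ddp(0, on_neuron=False))  # True
print(should_wrap_in_ddp(0, on_neuron=True))   # False
```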
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21953/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21953",
"html_url": "https://github.com/huggingface/transformers/pull/21953",
"diff_url": "https://github.com/huggingface/transformers/pull/21953.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21953.patch",
"merged_at": 1678113225000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21952
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21952/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21952/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21952/events
|
https://github.com/huggingface/transformers/pull/21952
| 1,609,867,086
|
PR_kwDOCUB6oc5LSjHd
| 21,952
|
docs: improve clarity for language modeling
|
{
"login": "pdhall99",
"id": 20580126,
"node_id": "MDQ6VXNlcjIwNTgwMTI2",
"avatar_url": "https://avatars.githubusercontent.com/u/20580126?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdhall99",
"html_url": "https://github.com/pdhall99",
"followers_url": "https://api.github.com/users/pdhall99/followers",
"following_url": "https://api.github.com/users/pdhall99/following{/other_user}",
"gists_url": "https://api.github.com/users/pdhall99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pdhall99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pdhall99/subscriptions",
"organizations_url": "https://api.github.com/users/pdhall99/orgs",
"repos_url": "https://api.github.com/users/pdhall99/repos",
"events_url": "https://api.github.com/users/pdhall99/events{/privacy}",
"received_events_url": "https://api.github.com/users/pdhall99/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
- Improve clarity of tasks/language_modeling and tasks/masked_language_modeling docs.
- In the example preprocessing, remove the `truncation=True` parameter from the `tokenizer` call so texts aren't truncated before being concatenated and chunked.
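For context on the second point, a minimal pure-Python sketch (hypothetical block size) of the concatenate-then-chunk step the docs describe — truncating first would throw away text before it could ever be grouped:

```python
def group_texts(tokenized_examples, block_size=4):
    """Concatenate all token sequences, then split into fixed-size blocks,
    dropping the ragged remainder (as the docs' group_texts function does)."""
    concatenated = [tok for seq in tokenized_examples for tok in seq]
    total_length = (len(concatenated) // block_size) * block_size
    return [concatenated[i:i + block_size] for i in range(0, total_length, block_size)]

print(group_texts([[1, 2, 3], [4, 5], [6, 7, 8, 9]]))
# [[1, 2, 3, 4], [5, 6, 7, 8]]
```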
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, @stevhliu
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21952/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21952",
"html_url": "https://github.com/huggingface/transformers/pull/21952",
"diff_url": "https://github.com/huggingface/transformers/pull/21952.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21952.patch",
"merged_at": 1678126423000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21951
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21951/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21951/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21951/events
|
https://github.com/huggingface/transformers/issues/21951
| 1,609,682,819
|
I_kwDOCUB6oc5f8c-D
| 21,951
|
TimeSeriesTransformerModel - 'features' Is 'NoneType'
|
{
"login": "LtlSh",
"id": 109275417,
"node_id": "U_kgDOBoNpGQ",
"avatar_url": "https://avatars.githubusercontent.com/u/109275417?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LtlSh",
"html_url": "https://github.com/LtlSh",
"followers_url": "https://api.github.com/users/LtlSh/followers",
"following_url": "https://api.github.com/users/LtlSh/following{/other_user}",
"gists_url": "https://api.github.com/users/LtlSh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LtlSh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LtlSh/subscriptions",
"organizations_url": "https://api.github.com/users/LtlSh/orgs",
"repos_url": "https://api.github.com/users/LtlSh/repos",
"events_url": "https://api.github.com/users/LtlSh/events{/privacy}",
"received_events_url": "https://api.github.com/users/LtlSh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@LtlSh can you kindly try with the main branch of transformers?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
NONE
| null |
### System Info
```
Sat Mar 4 10:44:51 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 527.41 Driver Version: 527.41 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... WDDM | 00000000:01:00.0 On | N/A |
| 30% 37C P8 18W / 350W | 550MiB / 24576MiB | 27% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1580 C+G ...y\ShellExperienceHost.exe N/A |
| 0 N/A N/A 5332 C+G ...e\PhoneExperienceHost.exe N/A |
| 0 N/A N/A 7560 C+G ...logioptionsplus_agent.exe N/A |
| 0 N/A N/A 7672 C+G ...5n1h2txyewy\SearchApp.exe N/A |
| 0 N/A N/A 8268 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 11324 C+G ...cw5n1h2txyewy\LockApp.exe N/A |
| 0 N/A N/A 12316 C+G ...(x86)\AnyDesk\AnyDesk.exe N/A |
| 0 N/A N/A 12908 C+G ...ge\Application\msedge.exe N/A |
| 0 N/A N/A 13064 C+G ...2txyewy\TextInputHost.exe N/A |
| 0 N/A N/A 13764 C+G ...3d8bbwe\CalculatorApp.exe N/A |
| 0 N/A N/A 14880 C+G ...lPanel\SystemSettings.exe N/A |
+-----------------------------------------------------------------------------+
```
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi,
We're trying to train a TimeSeriesTransformerModel, but it seems that a variable called 'features' is never assigned, and we can't figure out why this is happening.
Our code:
```python
# Initializing a default Time Series Transformer configuration
configuration = TimeSeriesTransformerConfig(prediction_length=327, lags_sequence=[0, 0, 0])

# Randomly initializing a model (with random weights) from the configuration
model = TimeSeriesTransformerForPrediction(configuration)

# Accessing the model configuration
configuration = model.config

# We don't know if passing the data as a dataframe instead of a tensor would work.
# Currently model.train() is throwing an error; maybe we need to use a GPU? TODO

# Setting the model to training mode
model.train()

# Defining the loss function and optimizer
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training loop
for epoch in range(100):
    for batch in dataloader:
        # Forward pass
        outputs = model(
            past_values=batch["past_values"],
            past_time_features=batch["past_time_features"],
            past_observed_mask=None,
            static_categorical_features=None,
            static_real_features=None,
            future_values=batch["future_values"],
            future_time_features=batch["future_time_features"],
        )
        loss = loss_fn(outputs, batch)

        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Printing the training loss
    if (epoch + 1) % 10 == 0:
        print(f"Epoch [{epoch + 1}/100], Loss: {loss.item()}")
```
We're getting this error:
```
Traceback (most recent call last):
  File "D:\Final Project\fMRI_Ariel_Lital\train.py", line 58, in <module>
    outputs = model(
  File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1813, in forward
    outputs = self.model(
  File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1626, in forward
    transformer_inputs, scale, static_feat = self.create_network_inputs(
  File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 1532, in create_network_inputs
    embedded_cat = self.embedder(static_categorical_features)
  File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 250, in forward
    [
  File "C:\Users\Cognition\anaconda3\envs\ArielLital\lib\site-packages\transformers\models\time_series_transformer\modeling_time_series_transformer.py", line 251, in <listcomp>
    embed(cat_feature_slice.squeeze(-1))
AttributeError: 'NoneType' object has no attribute 'squeeze'
```
Here is the source code where the error occurs - we noticed 'features' is NoneType but don't know why:
`class FeatureEmbedder(nn.Module):
def __init__(self, cardinalities: List[int], embedding_dims: List[int]) -> None:
super().__init__()
self.num_features = len(cardinalities)
self.embedders = nn.ModuleList([nn.Embedding(c, d) for c, d in zip(cardinalities, embedding_dims)])
def forward(self, features: torch.Tensor) -> torch.Tensor:
if self.num_features > 1:
# we slice the last dimension, giving an array of length
# self.num_features with shape (N,T) or (N)
cat_feature_slices = torch.chunk(features, self.num_features, dim=-1)
else:
            cat_feature_slices = [features]
return torch.cat(
[
embed(cat_feature_slice.squeeze(-1))
for embed, cat_feature_slice in zip(self.embedders, cat_feature_slices)
],
dim=-1,
)
`
Would appreciate your help.
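For reference, a minimal, self-contained reproduction (the class body mirrors the snippet above; the cardinalities and embedding dims are made-up illustrative values) suggests the embedder itself is fine, and the failure occurs precisely when `features` arrives as `None` — one likely cause being that `static_categorical_features` was never passed to the model's `forward` even though the config declares categorical features:

```python
import torch
from torch import nn


# Self-contained copy of the FeatureEmbedder logic from the snippet above.
class FeatureEmbedder(nn.Module):
    def __init__(self, cardinalities, embedding_dims):
        super().__init__()
        self.num_features = len(cardinalities)
        self.embedders = nn.ModuleList(
            [nn.Embedding(c, d) for c, d in zip(cardinalities, embedding_dims)]
        )

    def forward(self, features):
        if self.num_features > 1:
            cat_feature_slices = torch.chunk(features, self.num_features, dim=-1)
        else:
            cat_feature_slices = [features]
        return torch.cat(
            [
                embed(feature_slice.squeeze(-1))
                for embed, feature_slice in zip(self.embedders, cat_feature_slices)
            ],
            dim=-1,
        )


embedder = FeatureEmbedder(cardinalities=[5], embedding_dims=[3])
ok = embedder(torch.tensor([[1], [4]]))  # valid (batch, num_features) tensor: works

err = None
try:
    embedder(None)  # what happens when static_categorical_features is None
except AttributeError as exc:
    err = exc  # AttributeError: 'NoneType' object has no attribute 'squeeze'
```

So the thing to check is whether `static_categorical_features` is actually being passed to `model(...)` whenever the config sets a non-zero number of static categorical features.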
@ArthurZucker and @younesbelkada
[train.txt](https://github.com/huggingface/transformers/files/10887890/train.txt)
### Expected behavior
We would like to train a TimeSeriesTransformerModel for forecasting on tabular data.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21951/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21950
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21950/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21950/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21950/events
|
https://github.com/huggingface/transformers/issues/21950
| 1,609,682,793
|
I_kwDOCUB6oc5f8c9p
| 21,950
|
auto_find_batch_size should say what batch size it is using
|
{
"login": "p-christ",
"id": 26346243,
"node_id": "MDQ6VXNlcjI2MzQ2MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/26346243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/p-christ",
"html_url": "https://github.com/p-christ",
"followers_url": "https://api.github.com/users/p-christ/followers",
"following_url": "https://api.github.com/users/p-christ/following{/other_user}",
"gists_url": "https://api.github.com/users/p-christ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/p-christ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/p-christ/subscriptions",
"organizations_url": "https://api.github.com/users/p-christ/orgs",
"repos_url": "https://api.github.com/users/p-christ/repos",
"events_url": "https://api.github.com/users/p-christ/events{/privacy}",
"received_events_url": "https://api.github.com/users/p-christ/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] |
[
"cc @muellerzr ",
"Also, it would be good if auto find batch size also worked for the eval batch size? Otherwise for the eval batch size I still have to guess a few times and hope I don't run out of memory?",
"Thanks! Solved with https://github.com/huggingface/transformers/pull/23800",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,687
| 1,687
|
NONE
| null |
### Feature request
When using `auto_find_batch_size=True` in the trainer, I believe it identifies the right batch size, but it doesn't log it to the console anywhere.
It would be good if it logged what batch size it is using.
### Motivation
I'd like to know what batch size it is using because then I will know roughly how big a batch can fit in memory - this info would be useful elsewhere.
### Your contribution
N/A
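For context, a simplified sketch of the halve-on-OOM search behind `auto_find_batch_size` (the real logic lives in accelerate's `find_executable_batch_size`; the function and names below are illustrative, not the actual API) — the `print` line is the kind of logging this feature request asks for:

```python
# Hypothetical sketch: halve the batch size on OOM and report the value
# that finally worked, which is what this issue requests.
def find_usable_batch_size(train_fn, starting_batch_size=128):
    batch_size = starting_batch_size
    while batch_size > 0:
        try:
            result = train_fn(batch_size)
            print(f"Training succeeded with batch_size={batch_size}")
            return batch_size, result
        except RuntimeError:  # stand-in for a CUDA out-of-memory error
            batch_size //= 2
    raise RuntimeError("No executable batch size found")


def fake_train(batch_size):
    if batch_size > 32:  # pretend anything above 32 runs out of memory
        raise RuntimeError("CUDA out of memory")
    return "trained"


used, _ = find_usable_batch_size(fake_train)
```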
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21950/reactions",
"total_count": 11,
"+1": 11,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21950/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21949
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21949/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21949/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21949/events
|
https://github.com/huggingface/transformers/pull/21949
| 1,609,614,458
|
PR_kwDOCUB6oc5LRxF4
| 21,949
|
just testing
|
{
"login": "Raman-Kumar",
"id": 32980600,
"node_id": "MDQ6VXNlcjMyOTgwNjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/32980600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raman-Kumar",
"html_url": "https://github.com/Raman-Kumar",
"followers_url": "https://api.github.com/users/Raman-Kumar/followers",
"following_url": "https://api.github.com/users/Raman-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Raman-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raman-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raman-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Raman-Kumar/orgs",
"repos_url": "https://api.github.com/users/Raman-Kumar/repos",
"events_url": "https://api.github.com/users/Raman-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raman-Kumar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21949). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21949/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21949",
"html_url": "https://github.com/huggingface/transformers/pull/21949",
"diff_url": "https://github.com/huggingface/transformers/pull/21949.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21949.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21948
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21948/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21948/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21948/events
|
https://github.com/huggingface/transformers/pull/21948
| 1,609,610,111
|
PR_kwDOCUB6oc5LRwKE
| 21,948
|
test pull request for tokengt
|
{
"login": "Raman-Kumar",
"id": 32980600,
"node_id": "MDQ6VXNlcjMyOTgwNjAw",
"avatar_url": "https://avatars.githubusercontent.com/u/32980600?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raman-Kumar",
"html_url": "https://github.com/Raman-Kumar",
"followers_url": "https://api.github.com/users/Raman-Kumar/followers",
"following_url": "https://api.github.com/users/Raman-Kumar/following{/other_user}",
"gists_url": "https://api.github.com/users/Raman-Kumar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raman-Kumar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raman-Kumar/subscriptions",
"organizations_url": "https://api.github.com/users/Raman-Kumar/orgs",
"repos_url": "https://api.github.com/users/Raman-Kumar/repos",
"events_url": "https://api.github.com/users/Raman-Kumar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raman-Kumar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21948). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21948/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21948",
"html_url": "https://github.com/huggingface/transformers/pull/21948",
"diff_url": "https://github.com/huggingface/transformers/pull/21948.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21948.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21947
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21947/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21947/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21947/events
|
https://github.com/huggingface/transformers/issues/21947
| 1,609,541,724
|
I_kwDOCUB6oc5f76hc
| 21,947
|
Pre-training language model with Translation Language Modelling (TLM) objective
|
{
"login": "aloka-fernando",
"id": 36888985,
"node_id": "MDQ6VXNlcjM2ODg4OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/36888985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aloka-fernando",
"html_url": "https://github.com/aloka-fernando",
"followers_url": "https://api.github.com/users/aloka-fernando/followers",
"following_url": "https://api.github.com/users/aloka-fernando/following{/other_user}",
"gists_url": "https://api.github.com/users/aloka-fernando/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aloka-fernando/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aloka-fernando/subscriptions",
"organizations_url": "https://api.github.com/users/aloka-fernando/orgs",
"repos_url": "https://api.github.com/users/aloka-fernando/repos",
"events_url": "https://api.github.com/users/aloka-fernando/events{/privacy}",
"received_events_url": "https://api.github.com/users/aloka-fernando/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"If this is supported in the codebase please provide the steps or refer to an available resource.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,683
| 1,683
|
NONE
| null |
### Feature request
For large-scale pre-training, the MLM objective is heavily used, e.g. BERT (Devlin et al., 2018) and XLM-R (Conneau et al., 2019). The resources available from Hugging Face cover training a language model (LM) using MLM or CLM objectives (i.e. https://github.com/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
To recreate XLM (Lample and Conneau, 2019), I wish to pre-train my own language model using both MLM and TLM objectives. Please advise on how to do this using the huggingface transformers library.
Thank you.
### Motivation
MLM + TLM is a common pre-training objective for language models. Therefore, to further improve these models, we need to first train using MLM + TLM objectives. In addition, if I need to customize the masking to apply to noun terms only, or verb terms only, please let me know whether this is supported.
### Your contribution
I would appreciate it if pre-training with the TLM objective were also provided.
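For what it's worth, a hedged sketch of the TLM input format from Lample & Conneau (2019): a parallel sentence pair is concatenated into one sequence and MLM-style masking is applied across both halves, so the model can use the translation to recover a masked token. Token strings and special symbols below are assumptions; real pipelines work on token ids with a `-100` label sentinel.

```python
import random


def make_tlm_example(src_tokens, tgt_tokens, mask_token="[MASK]",
                     mask_prob=0.15, seed=0):
    rng = random.Random(seed)
    # Concatenate the parallel pair into one sequence (illustrative specials).
    tokens = ["[CLS]"] + src_tokens + ["[SEP]"] + tgt_tokens + ["[SEP]"]
    labels = [None] * len(tokens)  # None = not masked, ignored by the loss
    for i, tok in enumerate(tokens):
        if tok in ("[CLS]", "[SEP]"):
            continue  # never mask special tokens
        if rng.random() < mask_prob:
            labels[i] = tok          # target the model must recover
            tokens[i] = mask_token   # may fall in either language half
    return tokens, labels


tokens, labels = make_tlm_example(["the", "cat"], ["le", "chat"])
```

Restricting masking to nouns or verbs, as asked above, would amount to replacing the `rng.random()` check with a part-of-speech test on `tok`.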
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21947/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21946
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21946/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21946/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21946/events
|
https://github.com/huggingface/transformers/pull/21946
| 1,609,196,426
|
PR_kwDOCUB6oc5LQV3s
| 21,946
|
Fix gradient checkpointing bug in Roformer
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21946/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21946",
"html_url": "https://github.com/huggingface/transformers/pull/21946",
"diff_url": "https://github.com/huggingface/transformers/pull/21946.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21946.patch",
"merged_at": 1677944674000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21945
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21945/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21945/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21945/events
|
https://github.com/huggingface/transformers/pull/21945
| 1,609,193,713
|
PR_kwDOCUB6oc5LQVSv
| 21,945
|
Fix gradient checkpointing bug in Rembert
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21945/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21945",
"html_url": "https://github.com/huggingface/transformers/pull/21945",
"diff_url": "https://github.com/huggingface/transformers/pull/21945.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21945.patch",
"merged_at": 1677944647000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21944
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21944/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21944/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21944/events
|
https://github.com/huggingface/transformers/pull/21944
| 1,609,191,459
|
PR_kwDOCUB6oc5LQUz1
| 21,944
|
Fix gradient checkpointing bug in Pegasus
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21944/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21944",
"html_url": "https://github.com/huggingface/transformers/pull/21944",
"diff_url": "https://github.com/huggingface/transformers/pull/21944.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21944.patch",
"merged_at": 1677944613000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21943
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21943/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21943/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21943/events
|
https://github.com/huggingface/transformers/pull/21943
| 1,609,188,166
|
PR_kwDOCUB6oc5LQUFV
| 21,943
|
Fix gradient checkpointing bug in OPT
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter while using generate and models that use gradient_checkpointing.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21943/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21943",
"html_url": "https://github.com/huggingface/transformers/pull/21943",
"diff_url": "https://github.com/huggingface/transformers/pull/21943.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21943.patch",
"merged_at": 1677944578000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21942
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21942/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21942/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21942/events
|
https://github.com/huggingface/transformers/pull/21942
| 1,609,187,121
|
PR_kwDOCUB6oc5LQT3K
| 21,942
|
[examples/speech-recognition] Add SpecAugment to run_speech_recognition_seq2seq.py
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @sgugger,\r\n\r\nThanks for the review ! I've removed the SpecAugment arguments from `ModelArguments` and left some default values as asked. Also secured the attribute access in the line 447 to define `return_attention_mask` :)",
"I think it's okay to have the whisper-specific changes as they are in this PR. We have similar things in `run_translation` for some specific models as well.",
"Thanks for the clarification @sgugger and @bofenghuang for the contribution! Merging in that case ๐ค"
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Hello ๐,
In this PR, I add SpecAugment to the run_speech_recognition_seq2seq.py training example, as requested in https://github.com/huggingface/transformers/pull/21298.
I tried not to impact the training of other seq2seq models that use this script, but I might still have missed something.
PS: Also removed the unused argument `text_column` :)
Thanks in advance!
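For context, SpecAugment regularizes speech models by zeroing out random time and frequency bands of the log-mel features. A minimal NumPy sketch of the idea — illustrative only; the function name and parameters here are made up and are not the exact logic this PR adds to the script:

```python
import numpy as np

def spec_augment(features, num_time_masks=2, time_mask_len=10,
                 num_freq_masks=2, freq_mask_len=8, rng=None):
    """Zero random time and frequency bands of a (time, freq) array.

    Illustrative sketch of SpecAugment-style masking, not the PR's code.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    out = features.copy()
    t, f = out.shape
    for _ in range(num_time_masks):
        # pick a random start and zero a band of time steps
        start = int(rng.integers(0, max(1, t - time_mask_len)))
        out[start:start + time_mask_len, :] = 0.0
    for _ in range(num_freq_masks):
        # same idea along the mel-frequency axis
        start = int(rng.integers(0, max(1, f - freq_mask_len)))
        out[:, start:start + freq_mask_len] = 0.0
    return out

features = np.ones((100, 80))  # dummy log-mel features: 100 frames, 80 bins
masked = spec_augment(features)
```

In the real script, masking is only applied during training, never at evaluation time.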
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
cc @ArthurZucker @sanchit-gandhi @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21942/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21942",
"html_url": "https://github.com/huggingface/transformers/pull/21942",
"diff_url": "https://github.com/huggingface/transformers/pull/21942.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21942.patch",
"merged_at": 1678294772000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21941
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21941/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21941/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21941/events
|
https://github.com/huggingface/transformers/pull/21941
| 1,609,137,345
|
PR_kwDOCUB6oc5LQJVe
| 21,941
|
Adding Type Hints to TF_Pegasus model
|
{
"login": "mollerup23",
"id": 69806327,
"node_id": "MDQ6VXNlcjY5ODA2MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/69806327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mollerup23",
"html_url": "https://github.com/mollerup23",
"followers_url": "https://api.github.com/users/mollerup23/followers",
"following_url": "https://api.github.com/users/mollerup23/following{/other_user}",
"gists_url": "https://api.github.com/users/mollerup23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mollerup23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mollerup23/subscriptions",
"organizations_url": "https://api.github.com/users/mollerup23/orgs",
"repos_url": "https://api.github.com/users/mollerup23/repos",
"events_url": "https://api.github.com/users/mollerup23/events{/privacy}",
"received_events_url": "https://api.github.com/users/mollerup23/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @mollerup23, mostly looks good! One thing to watch out for is that in some cases the default value of the argument has been changed. It's easy to see if you look in the GitHub \"Files changed\" tab (see the image - `return_dict` had its default argument changed)\r\n\r\n\r\nIf you fix the instances where that happened and double-check that it's all okay in the Files Changed tab, we should be good to go!",
"Hi @Rocketknight1, I updated and committed again. Hopefully these fixes help, let me know if there is anything else I should do!"
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
# What does this PR do?
Added type hints to the remaining `call()` functions of the Pegasus model (changes made only in the models/pegasus/modeling_tf_pegasus.py file).
Fixes [#16059](https://github.com/huggingface/transformers/issues/16059)
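For illustration, the annotations follow the usual `Optional[...]` style used across the TF models. This standalone sketch mimics the shape of a type-hinted `call()` — the argument names are representative, not the exact `modeling_tf_pegasus.py` signatures:

```python
from typing import Any, Dict, Optional, Tuple, Union

# Representative sketch only: mirrors the annotation style, not the real
# TF Pegasus signatures. Defaults are left untouched on purpose: changing
# a default such as `return_dict` would be a behavioral change, not a
# type-hint change.
def call(
    input_ids: Optional[Any] = None,
    attention_mask: Optional[Any] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    training: bool = False,
) -> Union[Tuple, Dict[str, Any]]:
    return {"return_dict": return_dict, "training": training}

result = call(return_dict=True)
```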
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
[approval](https://github.com/huggingface/transformers/issues/16059#issuecomment-1441895896)
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Rocketknight1
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21941/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21941",
"html_url": "https://github.com/huggingface/transformers/pull/21941",
"diff_url": "https://github.com/huggingface/transformers/pull/21941.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21941.patch",
"merged_at": 1678723110000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21940
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21940/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21940/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21940/events
|
https://github.com/huggingface/transformers/pull/21940
| 1,609,100,685
|
PR_kwDOCUB6oc5LQBk2
| 21,940
|
[CI] Fix ci
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"> 2 other slow tests are failing, but they were also failing prior to #20211 .\r\n\r\nCould you share the names of these 2 tests.\r\nI can check on CI runners. I did it once on Friday for one test you mentioned, but let's make sure.\r\n",
"There is :\r\n- `tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head_with_box_refine_two_stage`\r\n- `tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head_equivalence_cpu_gpu`\r\n- `tests/models/deformable_detr/test_modeling_deformable_detr.py::DeformableDetrModelIntegrationTests::test_inference_object_detection_head`\r\nRan them on the CI runner an they are all green so LGTM. "
] | 1,677
| 1,678
| 1,678
|
COLLABORATOR
| null |
# What does this PR do?
A typo made during the full cleanup of the code was making 2 tests fail.
2 other slow tests are failing as well, but they were already failing prior to #20211.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21940/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21940",
"html_url": "https://github.com/huggingface/transformers/pull/21940",
"diff_url": "https://github.com/huggingface/transformers/pull/21940.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21940.patch",
"merged_at": 1678112548000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21939
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21939/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21939/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21939/events
|
https://github.com/huggingface/transformers/pull/21939
| 1,609,080,631
|
PR_kwDOCUB6oc5LP9Xe
| 21,939
|
Add TF contrastive image text finetuning example
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Update: Woah! The argument names to `PushToHubCallback` across our examples became outdated and tests weren't picking that up. I'm fixing it in this PR too."
] | 1,677
| 1,678
| 1,678
|
MEMBER
| null |
This PR adds a TF port of the PyTorch example for finetuning the `TFVisionTextDualEncoderModel` class. Functionality is largely the same, but I used a `tf.data` pipeline to efficiently stream images instead of `torchvision`. I also added the ability to specify separate image/text models with arguments, whereas in the PyTorch example you have to create the dual encoder with a separate script.
I also caught a small bug in the original model code while writing this - loss is a scalar rather than having shape `(1,)`. That's fixed in here too!
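For readers unfamiliar with the objective, the dual encoder is trained with a symmetric (CLIP-style) contrastive loss over image–text similarity logits. A NumPy sketch of that loss — illustrative only, not the model code — which also makes the scalar-loss point concrete:

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Returns a plain scalar (matching the fix noted above: loss should
    be a scalar, not shape (1,)).
    """
    # normalize embeddings so logits are scaled cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature
    labels = np.arange(len(logits))  # i-th image pairs with i-th text

    def xent(l):
        # numerically stable cross-entropy with matched-pair labels
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the image->text and text->image directions
    return (xent(logits) + xent(logits.T)) / 2.0

rng = np.random.default_rng(0)
loss = clip_contrastive_loss(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
```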
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21939/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21939",
"html_url": "https://github.com/huggingface/transformers/pull/21939",
"diff_url": "https://github.com/huggingface/transformers/pull/21939.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21939.patch",
"merged_at": 1678121861000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21938
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21938/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21938/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21938/events
|
https://github.com/huggingface/transformers/pull/21938
| 1,609,039,943
|
PR_kwDOCUB6oc5LP0ra
| 21,938
|
[Whisper] Fix feature normalization in `WhisperFeatureExtractor`
|
{
"login": "bofenghuang",
"id": 38185248,
"node_id": "MDQ6VXNlcjM4MTg1MjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/38185248?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bofenghuang",
"html_url": "https://github.com/bofenghuang",
"followers_url": "https://api.github.com/users/bofenghuang/followers",
"following_url": "https://api.github.com/users/bofenghuang/following{/other_user}",
"gists_url": "https://api.github.com/users/bofenghuang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bofenghuang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bofenghuang/subscriptions",
"organizations_url": "https://api.github.com/users/bofenghuang/orgs",
"repos_url": "https://api.github.com/users/bofenghuang/repos",
"events_url": "https://api.github.com/users/bofenghuang/events{/privacy}",
"received_events_url": "https://api.github.com/users/bofenghuang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Hello @ArthurZucker @sanchit-gandhi ,
In this PR, I try to fix the feature normalization in `WhisperFeatureExtractor`, which is currently never applied: line 354 takes the `input_features` from line 340, bypassing the normalized values.
In addition, `zero_mean_unit_var_norm` expects `padded_inputs["attention_mask"]` at the sample level (48000 samples), just like `padded_inputs["input_features"]`. IMO it should not be rescaled in line 344 (the rescaling is also repeated in line 363).
Please let me know what you think :)
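To make the sample-level point concrete, here is a standalone NumPy sketch of zero-mean/unit-variance normalization driven by a sample-level attention mask. The signature is simplified relative to the actual feature extractor; it only illustrates why the mask must have the same 48000-sample resolution as the waveform:

```python
import numpy as np

def zero_mean_unit_var_norm(inputs, attention_mask, padding_value=0.0):
    """Normalize each raw waveform to zero mean / unit variance over its
    un-padded region. `attention_mask` must be at the sample level (same
    length as the waveform) for the un-padded length to be correct.

    Simplified sketch, not the actual WhisperFeatureExtractor method.
    """
    normed = []
    for vec, mask in zip(inputs, attention_mask):
        length = int(mask.sum())          # number of real (un-padded) samples
        mean = vec[:length].mean()
        var = vec[:length].var()
        out = (vec - mean) / np.sqrt(var + 1e-7)
        out[length:] = padding_value      # keep padding at the padding value
        normed.append(out)
    return np.stack(normed)

rng = np.random.default_rng(0)
waveform = np.zeros((1, 48000))
waveform[0, :30000] = rng.normal(size=30000)  # 30000 real samples, rest padding
mask = np.zeros((1, 48000))
mask[0, :30000] = 1
normed = zero_mean_unit_var_norm(waveform, mask)
```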
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21938/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21938",
"html_url": "https://github.com/huggingface/transformers/pull/21938",
"diff_url": "https://github.com/huggingface/transformers/pull/21938.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21938.patch",
"merged_at": 1677871274000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21937
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21937/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21937/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21937/events
|
https://github.com/huggingface/transformers/issues/21937
| 1,608,970,591
|
I_kwDOCUB6oc5f5vFf
| 21,937
|
Whisper does not respect `config.forced_decoder_ids`
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
] |
[
"This is based on the behavior that we enforce through the generate function. If there is a `generation_config`, it will be used and has priority over the `config`. \r\nThere should be a warning deprecating the control of generation through the `config`. See [here](https://github.com/ArthurZucker/transformers/blob/main/src/transformers/generation/utils.py#LL538C17-L538C17).\r\nThis is a minor breaking change but should also be adresses with deprecation cycle "
] | 1,677
| 1,678
| 1,678
|
CONTRIBUTOR
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: macOS-13.1-arm64-arm-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.3 (cpu)
- Jax version: 0.4.1
- JaxLib version: 0.4.1
### Who can help?
@sanchit-gandhi @ArthurZucker
### Reproduction
Previously, the recommended method for setting the language and task for Whisper inference was setting `config.forced_decoder_ids`. However, this method no longer works on the main branch; for example, if we force the model to generate in French:
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset
# load model and processor
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]
input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
# set the forced ids
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(language="french", task="transcribe")
# generate token ids
predicted_ids = model.generate(input_features)
# decode token ids to text
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
print(transcription)
```
**Print Output**:
```
['<|startoftranscript|><|en|><|transcribe|><|notimestamps|> Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.<|endoftext|>']
```
### Expected behavior
The language token should correspond to the forced decoder token id (e.g. `"<|fr|>"` in this example). This was the case in previous transformers versions, so there has been a breaking change that we need to remedy.
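As the comments on this issue explain, `generate()` now gives priority to the model's `generation_config` over `model.config`, so ids set on the config are silently ignored. A toy sketch of that precedence — the token ids below are hypothetical placeholders, not real Whisper vocabulary ids:

```python
def resolve_forced_decoder_ids(model_config, generation_config):
    """Toy model of the precedence: generation_config, when present,
    wins over model.config. Not the actual generate() code."""
    if generation_config and generation_config.get("forced_decoder_ids") is not None:
        return generation_config["forced_decoder_ids"]
    return model_config.get("forced_decoder_ids")

model_cfg = {"forced_decoder_ids": [(1, 11111)]}  # hypothetical "<|fr|>" id set by the user
gen_cfg = {"forced_decoder_ids": [(1, 22222)]}    # hypothetical "<|en|>" id shipped with the checkpoint
chosen = resolve_forced_decoder_ids(model_cfg, gen_cfg)
```

Under this precedence, the user's config-level French setting is overridden by the checkpoint's English default, which matches the observed output above.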
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21937/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21936
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21936/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21936/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21936/events
|
https://github.com/huggingface/transformers/issues/21936
| 1,608,938,023
|
I_kwDOCUB6oc5f5nIn
| 21,936
|
Whisper breaks on poor quality speech audio
|
{
"login": "frankiedrake",
"id": 12988773,
"node_id": "MDQ6VXNlcjEyOTg4Nzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/12988773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankiedrake",
"html_url": "https://github.com/frankiedrake",
"followers_url": "https://api.github.com/users/frankiedrake/followers",
"following_url": "https://api.github.com/users/frankiedrake/following{/other_user}",
"gists_url": "https://api.github.com/users/frankiedrake/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankiedrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankiedrake/subscriptions",
"organizations_url": "https://api.github.com/users/frankiedrake/orgs",
"repos_url": "https://api.github.com/users/frankiedrake/repos",
"events_url": "https://api.github.com/users/frankiedrake/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankiedrake/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @ArthurZucker and @sanchit-gandhi ",
"Hey! Did you get similar results using the original `Whisper` model? Most probably this is just the model not being very good. ",
"> Hey! Did you get similar results using the original `Whisper` model? Most probably this is just the model not being very good.\r\n\r\nNo, I don't face such problem with original `Whisper`",
"OK! There can be a few things to try, but if you can share an audio file with me it would help a lot! ",
"@frankiedrake Have you tried using `return_timestamps` ? When experimenting I found that the default for openai is using timestamps (even if not shown) and that the model seems to perform better than without.",
"> \r\n\r\nSorry for a noob question, where should I put this parameter? I tried to add `no_timestamps=False` to `processor.get_decoder_prompt_ids` call and also to the three other methods call, but result didn't change.\r\n\r\nBtw it works well with `medium` model, but it's too heavy for my setup",
"> OK! There can be a few things to try, but if you can share an audio file with me it would help a lot!\r\n\r\nUnfortunately, I cannot share it because of NDA, but I'll try to record one myself",
"`generate(..., return_timestamps=True)` to get the proper generation mode.\r\nThat's correct right @ArthurZucker ",
"> return_timestamps\r\n\r\nI tried this, but got an error, that model's `forward` method isn't aware of this parameter\r\n\r\n```ValueError: The following `model_kwargs` are not used by the model: ['return_timestamps'] (note: typos in the generate arguments will also show up in this list)```",
"`pipe = pipeline(..., return_timestamps=True)` should work though.",
"> @frankiedrake Have you tried using `return_timestamps` ? When experimenting I found that the default for openai is using timestamps (even if not shown) and that the model seems to perform better than without.\r\n\r\nI tested all variants, result is the same",
"Without being able to reproduce it's really hard. Could you dive to the level of logits and figure out any potential differences ?\r\n\r\nI'm pretty sure it should come down to a configuration difference in the end, but if *we* can't reproduce, it'd be hard to understand. ",
"> Without being able to reproduce it's really hard. Could you dive to the level of logits and figure out any potential differences ?\r\n> \r\n> I'm pretty sure it should come down to a configuration difference in the end, but if _we_ can't reproduce, it'd be hard to understand.\r\n\r\nOkay, thank you for suggestions, I'll try to look at it, and also will try to find an audio I could share with you",
"You are probably not using `main`! The timestamps were not part of the initial release ๐ ",
"@ArthurZucker Correct! I installed the library from main branch and the transcription is now better with one file, but remains the same with another, probably I can tune the parameters further to make it work even better",
"Glad that I could help ๐ ",
"> OK! There can be a few things to try, but if you can share an audio file with me it would help a lot!\r\n\r\nGuys, I was able to record an [audio](https://github.com/frankiedrake/demo/blob/master/whisper_test.wav) that reproduces a problem, but it wasn't so easy, so the text I recorded doesn't actually exists ๐, anyway I'd expect the model gives some text instead of repeating a single word multiple times.\r\nAlso, I noticed that if I don't specify language manually the model seems not to break, in my case it detected Chinese language and emitted some readable text.\r\nOutput of Ukrainian:\r\n`ะะตัะตัะธ, ัะดะธะฝะต ะผ'ัะบะพััั, ัะดะธะฝะต ัััะตะผะพ, ั ะดะฒะฐ, ั ะฑะตัะต, ั ะฑะตัะต, ั ะฑะตัะต, ั ะฑะตัะต, ั ะฑะตัะต, ั ะฑะตัะต, ั ะฑะตัะต, ั ะฑะตัะต, ั ะฑะตัะต, ั ะฑะตัะต.`\r\n\r\nThe original whisper produces:\r\n`ะะตััะพััั, ั ััะฒะฝั, ั ะผัะปัะนะพััะฒะฝั, ั ััะพะณะพะดะฝั, ั ะดะฒะฐ, ั ะฑะตัะต, ั ะฑะตัะต, ััะถะตะฝะพ, ััะถะตะฝะพ, ะฝะตัะฒะฝะพ, ะฝะตัะฒะฝะพ.`\r\n ",
"> f I don't specify language manually the model seems not to break\r\n\r\nCan you share how you specify language in both `openai` and `transformers` ? The difference is likely coming from there.",
"\r\n\r\n\r\n> > f I don't specify language manually the model seems not to break\r\n> \r\n> Can you share how you specify language in both `openai` and `transformers` ? The difference is likely coming from there.\r\n\r\nI really doubt that it's related because it reproduces even if I don't specify the language in other audios",
"Seems like it's a known [issue](https://github.com/microsoft/DialoGPT/issues/45#issuecomment-680087019) with transformers models, and tuning `temperature` and `repetition_penalty` params of `generate` method helps, I was able to stop the model from repeating a single word, text now looks much much better, but seems I noticed a small cut at the end of the audio, will investigate a bit deeper. I read about `repetition_penalty` before but I didn't succeed, maybe the conjunction with the `temperature` is significant. \r\nMaybe you can shed some light on why these parameters are so important?",
"> Maybe you can shed some light on why these parameters are so important?\r\n\r\nLMs are know to hallucinate by repeating tokens, or several tokens. I don't think there's a good consensus on why it's the case, but it's a very well know issue on LMs. Adding penalties does help, but it's a clutch in my very personal opinion.\r\n\r\nMaybe the openai defaults are still different the ones we have\r\n",
"Do we agree the openai test you're doing is simply \r\n\r\n```\r\nwhisper whisper_test.wav --model small\r\n```\r\n\r\nAnd get this\r\n\r\n```\r\nDetecting language using up to the first 30 seconds. Use `--language` to specify the language\r\nDetected language: Chinese\r\n[00:00.420 --> 00:02.000] ๆๅไปๅ\r\n[00:02.000 --> 00:03.320] ไปๅๅจ้ฃ้\r\n[00:03.320 --> 00:05.280] ้ ไพฟไปๅ็ญ่้ๅป\r\n[00:05.280 --> 00:06.980] ้่ฆๅป\r\n[00:06.980 --> 00:08.260] ๅๅ่ชชไปๅ่ๅพ\r\n```\r\n \r\nright ?\r\n",
"I have the following result\r\n```\r\nDetecting language using up to the first 30 seconds. Use `--language` to specify the language\r\nDetected language: Chinese\r\n[00:00.000 --> 00:01.700] ่ฆ็ไธๅนดๅนด่ไบ\r\n[00:01.700 --> 00:03.280] ไธ็ไธไธๅนด่\r\n[00:03.480 --> 00:05.960] ็ทดไธ้ฃๅญ\r\n[00:05.960 --> 00:06.700] ็ฑๆ
ไธไบ\r\n[00:06.700 --> 00:07.620] ่ฌ่ฌๅคงๅฎถ\r\n```\r\n",
"Can you share anything reproducible ? Right now it's a back and forth and we can't reproduce anything on our end.\r\n\r\nPlease share a clear (small) script, that I can copy past that should reproduce the issue on `transformers@main` and `whisper@main` otherwise, it's going to be too tedious for us to investigate.",
"> Can you share anything reproducible ? Right now it's a back and forth and we can't reproduce anything on our end.\r\n> \r\n> Please share a clear (small) script, that I can copy past that should reproduce the issue on `transformers@main` and `whisper@main` otherwise, it's going to be too tedious for us to investigate.\r\n\r\nBut I shared a file with you, what exactly you can't reproduce?",
"I don't have the same output for neither `openai/whisper` nor `transformers` on your file.\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.19.0-32-generic-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): 1.13.1+cu117 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I don't know if it's a bug, but it's definitely not expected behaviour for me.
Also, I saw a thread with similar behaviour where @Narsil said that "the model is repeating itself", but I can't find it right now; I'll update the issue when I do.
To recognize an audio file I'm using a script that I found in one of the threads here on GitHub [link](https://colab.research.google.com/drive/1Qz9hUL3Z3SxHLUt7f4vuzZG7KEgV0ofk?usp=share_link)
```
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import audio2numpy

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(task="transcribe")

input_speech, sr = audio2numpy.audio_from_file(file)
input_features = processor(input_speech, return_tensors="pt", sampling_rate=16000).input_features
predicted_ids = model.generate(input_features, max_length=model.config.max_length, repetition_penalty=1)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
```
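For context on the `repetition_penalty` argument in the script above (and the tuning discussed in the replies): Hugging Face `generate` applies a CTRL-style rescaling to tokens that were already emitted. A minimal sketch of that rule, with made-up logit values:

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    # CTRL-style rule used by generate()'s repetition penalty: for each
    # token already generated, divide a positive score by the penalty and
    # multiply a negative score by it, making re-emission less likely.
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

# Tokens 0 and 1 were already generated; a penalty > 1 pushes them down.
print(apply_repetition_penalty([2.6, -1.0, 0.5], [0, 1], 1.3))  # [2.0, -1.3, 0.5]
```

Note that `repetition_penalty=1` (the value in the script) leaves scores unchanged, which is consistent with the repeated-word output reported here.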
And it works okay if the speech is clear and the utterance is detected well, but when the speaker talks fast or not clearly enough, or if there is little silence in the audio, the transcription becomes ugly.
It looks like:
"Привіт, хороша погода але але але але але але але але але але але але але але але але"
Currently I'm using only Ukrainian files, so I'm not sure whether it happens in other languages.
### Expected behavior
The text is recognized throughout the whole audio file without breaking down.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21936/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21935
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21935/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21935/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21935/events
|
https://github.com/huggingface/transformers/pull/21935
| 1,608,927,712
|
PR_kwDOCUB6oc5LPc07
| 21,935
|
Update feature selection in to_tf_dataset
|
{
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @MKhalusova Reference on updated API for selecting features from TF datasets",
"Thanks for updating the docs too! Looks neat :)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,682
| 1,682
|
COLLABORATOR
| null |
# What does this PR do?
Updates feature selection to ensure the returned dataset structure is consistent after the merge of datasets PR https://github.com/huggingface/datasets/pull/5602, which makes it possible to return a TF dataset with a dict structure even if only a single feature is selected.
Compatibility with this version of datasets was run with [this commit](https://github.com/huggingface/transformers/pull/21935/commits/b64204bc1093edd7e3666ad76354fa09405cf4ec) and had a [successful run](https://github.com/huggingface/transformers/actions/runs/4555221658/jobs/8034018899).
Note: In all the cases here, the examples were tested with and without these updates. The models would successfully train with both the new and old dataset structures.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21935/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21935",
"html_url": "https://github.com/huggingface/transformers/pull/21935",
"diff_url": "https://github.com/huggingface/transformers/pull/21935.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21935.patch",
"merged_at": 1682354071000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21934
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21934/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21934/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21934/events
|
https://github.com/huggingface/transformers/issues/21934
| 1,608,907,562
|
I_kwDOCUB6oc5f5fsq
| 21,934
|
Faster `Skipping the first batches` in Trainer
|
{
"login": "SamuelLarkin",
"id": 7314973,
"node_id": "MDQ6VXNlcjczMTQ5NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7314973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamuelLarkin",
"html_url": "https://github.com/SamuelLarkin",
"followers_url": "https://api.github.com/users/SamuelLarkin/followers",
"following_url": "https://api.github.com/users/SamuelLarkin/following{/other_user}",
"gists_url": "https://api.github.com/users/SamuelLarkin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamuelLarkin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamuelLarkin/subscriptions",
"organizations_url": "https://api.github.com/users/SamuelLarkin/orgs",
"repos_url": "https://api.github.com/users/SamuelLarkin/repos",
"events_url": "https://api.github.com/users/SamuelLarkin/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamuelLarkin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This is already done in the main branch. You just need to have `Accelerate` installed as an extra dependency.",
"Thanks for the info but I do have `accelerate==0.16.0` installed as reported in my logs but I still get extremely slow `Skipping the first batches`\r\n\r\n```yaml\r\nCONDA: transformers-4.20.1\r\nname: transformers-4.20.1\r\nchannels:\r\n - pytorch\r\n - huggingface\r\n - anaconda\r\n - conda-forge\r\n - defaults\r\ndependencies:\r\n - _libgcc_mutex=0.1=conda_forge\r\n - _openmp_mutex=4.5=2_kmp_llvm\r\n - abseil-cpp=20211102.0=hd4dd3e8_0\r\n - aiohttp=3.8.1=py39h7f8727e_1\r\n - aiosignal=1.2.0=pyhd3eb1b0_0\r\n - arrow-cpp=8.0.0=py39h60b952e_0\r\n - async-timeout=4.0.1=pyhd3eb1b0_0\r\n - attrs=21.4.0=pyhd3eb1b0_0\r\n - aws-c-common=0.4.57=he6710b0_1\r\n - aws-c-event-stream=0.1.6=h2531618_5\r\n - aws-checksums=0.1.9=he6710b0_0\r\n - aws-sdk-cpp=1.8.185=hce553d0_0\r\n - blas=1.0=mkl\r\n - boost-cpp=1.73.0=h7f8727e_12\r\n - bottleneck=1.3.4=py39hce1f21e_0\r\n - brotli=1.0.9=he6710b0_2\r\n - brotlipy=0.7.0=py39h27cfd23_1003\r\n - bzip2=1.0.8=h7b6447c_0\r\n - c-ares=1.18.1=h7f8727e_0\r\n - ca-certificates=2023.01.10=h06a4308_0\r\n - certifi=2022.12.7=py39h06a4308_0\r\n - cffi=1.15.0=py39hd667e15_1\r\n - charset-normalizer=2.0.4=pyhd3eb1b0_0\r\n - click=8.0.4=py39h06a4308_0\r\n - cryptography=37.0.1=py39h9ce1e76_0\r\n - cudatoolkit=11.6.0=hecad31d_10\r\n - cycler=0.11.0=pyhd3eb1b0_0\r\n - dataclasses=0.8=pyh6d0b6a4_7\r\n - datasets=2.3.2=py_0\r\n - dbus=1.13.18=hb2f20db_0\r\n - dill=0.3.4=pyhd3eb1b0_0\r\n - expat=2.4.4=h295c915_0\r\n - filelock=3.6.0=pyhd3eb1b0_0\r\n - fontconfig=2.13.1=h6c09931_0\r\n - fonttools=4.25.0=pyhd3eb1b0_0\r\n - freetype=2.11.0=h70c0345_0\r\n - frozenlist=1.2.0=py39h7f8727e_0\r\n - fsspec=2022.3.0=py39h06a4308_0\r\n - gflags=2.2.2=he6710b0_0\r\n - giflib=5.2.1=h7b6447c_0\r\n - glib=2.69.1=h4ff587b_1\r\n - glog=0.5.0=h2531618_0\r\n - grpc-cpp=1.46.1=h33aed49_0\r\n - gst-plugins-base=1.14.0=h8213a91_2\r\n - gstreamer=1.14.0=h28cd5cc_2\r\n - h5py=3.7.0=py39h737f45e_0\r\n - hdf5=1.10.6=hb1b8bf9_0\r\n - huggingface_hub=0.8.1=py_0\r\n - 
icu=58.2=he6710b0_3\r\n - idna=3.3=pyhd3eb1b0_0\r\n - importlib-metadata=4.11.3=py39h06a4308_0\r\n - importlib_metadata=4.11.3=hd3eb1b0_0\r\n - intel-openmp=2021.4.0=h06a4308_3561\r\n - joblib=1.1.0=pyhd3eb1b0_0\r\n - jpeg=9e=h7f8727e_0\r\n - kiwisolver=1.4.2=py39h295c915_0\r\n - krb5=1.19.2=hac12032_0\r\n - lcms2=2.12=h3be6417_0\r\n - ld_impl_linux-64=2.38=h1181459_1\r\n - lerc=3.0=h295c915_0\r\n - libboost=1.73.0=h28710b8_12\r\n - libbrotlicommon=1.0.9=h166bdaf_7\r\n - libbrotlidec=1.0.9=h166bdaf_7\r\n - libbrotlienc=1.0.9=h166bdaf_7\r\n - libclang=10.0.1=default_hb85057a_2\r\n - libcurl=7.82.0=h0b77cf5_0\r\n - libdeflate=1.8=h7f8727e_5\r\n - libedit=3.1.20210910=h7f8727e_0\r\n - libev=4.33=h7f8727e_1\r\n - libevent=2.1.12=h8f2d780_0\r\n - libffi=3.3=he6710b0_2\r\n - libgcc=7.2.0=h69d50b8_2\r\n - libgcc-ng=12.1.0=h8d9b700_16\r\n - libgfortran-ng=7.5.0=ha8ba4b0_17\r\n - libgfortran4=7.5.0=ha8ba4b0_17\r\n - libllvm10=10.0.1=hbcb73fb_5\r\n - libnghttp2=1.46.0=hce63b2e_0\r\n - libpng=1.6.37=hbc83047_0\r\n - libpq=12.9=h16c4e8d_3\r\n - libprotobuf=3.20.1=h4ff587b_0\r\n - libssh2=1.10.0=h8f2d780_0\r\n - libstdcxx-ng=11.2.0=h1234567_1\r\n - libthrift=0.15.0=hcc01f38_0\r\n - libtiff=4.4.0=hecacb30_0\r\n - libutf8proc=2.6.1=h27cfd23_0\r\n - libuuid=1.0.3=h7f8727e_2\r\n - libwebp=1.2.2=h55f646e_0\r\n - libwebp-base=1.2.2=h7f8727e_0\r\n - libxcb=1.15=h7f8727e_0\r\n - libxkbcommon=1.0.1=hfa300c1_0\r\n - libxml2=2.9.14=h74e7548_0\r\n - libxslt=1.1.35=h4e12654_0\r\n - libzlib=1.2.12=h166bdaf_1\r\n - llvm-openmp=14.0.4=he0ac6c6_0\r\n - lz4-c=1.9.3=h295c915_1\r\n - matplotlib=3.5.1=py39h06a4308_1\r\n - matplotlib-base=3.5.1=py39ha18d171_1\r\n - mkl=2021.4.0=h06a4308_640\r\n - mkl-service=2.4.0=py39h7f8727e_0\r\n - mkl_fft=1.3.1=py39hd3c417c_0\r\n - mkl_random=1.2.2=py39h51133e4_0\r\n - multidict=5.2.0=py39h7f8727e_2\r\n - multiprocess=0.70.12.2=py39h7f8727e_0\r\n - munkres=1.1.4=py_0\r\n - ncurses=6.3=h7f8727e_2\r\n - nodejs=6.11.2=h3db8ef7_0\r\n - nspr=4.33=h295c915_0\r\n - 
nss=3.74=h0370c37_0\r\n - numexpr=2.8.1=py39h807cd23_2\r\n - numpy=1.22.3=py39he7a7128_0\r\n - numpy-base=1.22.3=py39hf524024_0\r\n - openssl=1.1.1t=h7f8727e_0\r\n - orc=1.7.4=h07ed6aa_0\r\n - packaging=21.3=pyhd3eb1b0_0\r\n - pandas=1.4.2=py39h295c915_0\r\n - parquet-cpp=1.5.1=h34088ae_4\r\n - pcre=8.45=h295c915_0\r\n - pillow=9.2.0=py39hace64e9_1\r\n - pip=21.2.4=py39h06a4308_0\r\n - ply=3.11=py39h06a4308_0\r\n - protobuf=3.20.1=py39h295c915_0\r\n - pyarrow=8.0.0=py39h992f0b0_0\r\n - pycparser=2.21=pyhd3eb1b0_0\r\n - pyopenssl=22.0.0=pyhd3eb1b0_0\r\n - pyparsing=3.0.4=pyhd3eb1b0_0\r\n - pyqt=5.15.7=py39h6a678d5_1\r\n - pyqt5-sip=12.11.0=py39h6a678d5_1\r\n - pysocks=1.7.1=py39h06a4308_0\r\n - python=3.9.12=h12debd9_1\r\n - python-dateutil=2.8.2=pyhd3eb1b0_0\r\n - python-xxhash=2.0.2=py39h7f8727e_0\r\n - python_abi=3.9=1_cp39\r\n - pytorch=1.12.0=py3.9_cuda11.6_cudnn8.3.2_0\r\n - pytorch-mutex=1.0=cuda\r\n - pytz=2022.1=py39h06a4308_0\r\n - pyyaml=6.0=py39h7f8727e_1\r\n - qt-main=5.15.2=h327a75a_6\r\n - qt-webengine=5.15.9=hd2b0992_4\r\n - qtwebkit=5.212=h4eab89a_4\r\n - re2=2022.04.01=h295c915_0\r\n - readline=8.1.2=h7f8727e_1\r\n - regex=2022.3.15=py39h7f8727e_0\r\n - requests=2.27.1=pyhd3eb1b0_0\r\n - s2n=1.3.0=h9b69904_0\r\n - sacremoses=master=py_0\r\n - scikit-learn=1.0.2=py39h51133e4_1\r\n - scipy=1.7.3=py39hc147768_0\r\n - setuptools=61.2.0=py39h06a4308_0\r\n - sip=6.6.2=py39h6a678d5_0\r\n - six=1.16.0=pyhd3eb1b0_1\r\n - snappy=1.1.9=h295c915_0\r\n - sqlite=3.38.5=hc218d9a_0\r\n - tensorboardx=2.2=pyhd3eb1b0_0\r\n - threadpoolctl=2.2.0=pyh0d69192_0\r\n - tk=8.6.12=h1ccaba5_0\r\n - tokenizers=0.12.1=py39_0\r\n - toml=0.10.2=pyhd3eb1b0_0\r\n - tornado=6.1=py39h27cfd23_0\r\n - tqdm=4.64.0=py39h06a4308_0\r\n - transformers=4.20.1=pyhd8ed1ab_0\r\n - typing-extensions=4.1.1=hd3eb1b0_0\r\n - typing_extensions=4.1.1=pyh06a4308_0\r\n - tzdata=2022a=hda174b7_0\r\n - urllib3=1.26.9=py39h06a4308_0\r\n - utf8proc=2.6.1=h27cfd23_0\r\n - wheel=0.37.1=pyhd3eb1b0_0\r\n - 
xxhash=0.8.0=h7f8727e_3\r\n - xz=5.2.5=h7f8727e_1\r\n - yaml=0.2.5=h7b6447c_0\r\n - yarl=1.6.3=py39h27cfd23_0\r\n - zipp=3.8.0=py39h06a4308_0\r\n - zlib=1.2.12=h166bdaf_1\r\n - zstd=1.5.2=ha4553b6_0\r\n - pip:\r\n - absl-py==1.3.0\r\n - accelerate==0.16.0\r\n - asttokens==2.2.1\r\n - backcall==0.2.0\r\n - cachetools==5.2.1\r\n - codetiming==1.4.0\r\n - decorator==5.1.1\r\n - executing==1.2.0\r\n - google-auth==2.15.0\r\n - google-auth-oauthlib==0.4.6\r\n - grpcio==1.51.1\r\n - ipython==8.7.0\r\n - jedi==0.18.2\r\n - levenshtein==0.20.9\r\n - markdown==3.4.1\r\n - markupsafe==2.1.1\r\n - matplotlib-inline==0.1.6\r\n - more-itertools==9.0.0\r\n - oauthlib==3.2.2\r\n - parso==0.8.3\r\n - pexpect==4.8.0\r\n - pickleshare==0.7.5\r\n - prompt-toolkit==3.0.36\r\n - psutil==5.9.4\r\n - ptyprocess==0.7.0\r\n - pudb==2022.1.3\r\n - pure-eval==0.2.2\r\n - py-spy==0.3.3+computecanada\r\n - pyasn1==0.4.8\r\n - pyasn1-modules==0.2.8\r\n - pygments==2.13.0\r\n - rapidfuzz==2.13.7\r\n - requests-oauthlib==1.3.1\r\n - rsa==4.9\r\n - stack-data==0.6.2\r\n - tensorboard==2.11.0\r\n - tensorboard-data-server==0.6.1\r\n - tensorboard-plugin-wit==1.8.1\r\n - traitlets==5.7.1\r\n - urwid==2.1.2\r\n - urwid-readline==0.13\r\n - wcwidth==0.2.5\r\n - werkzeug==2.2.2\r\nprefix: /gpfs/projects/DT/mtp/WMT20/opt/miniconda3/envs/transformers-4.20.1\r\n```",
"I should point out that I'm running `examples/pytorch/language-modeling/run_mlm.py`.",
"I said on the the main branch. You have Transformers 4.20 installed, you need a source install.",
"Thank you for the much needed speed improvement.\r\nI'm now using `accelerate==0.16.0` with `transformers==4.27.0.dev0` and it no longer takes 11h to skip the first batches."
] | 1,677
| 1,679
| 1,679
|
CONTRIBUTOR
| null |
### Feature request
Improve speed when skipping the first batches in `trainer.py`.
### Motivation
Skipping batches should be fast.
```
Skipping the first batches: 76%|โโโโโโโโ | 93017/122500 [6:35:47<2:26:12, 3.36it/s]
```
In this example, the `Trainer` has already spent ~6h30 simply skipping batches and estimates another ~2h30 to finish. This should not take so long, since those batches are not used and are simply discarded.
Early investigation shows that the `dataloader` is invoked, which implies that it samples, fetches and collates the data, and collating can be expensive and useless here.
Wouldn't simply looping over the `dataloader.sampler` X times be sufficient to resume the training state? This would be a light process that could be done prior to the training loop.
Perhaps, as an alternate solution, we could temporarily attach a no-op collator to `train_dataloader` while skipping the first batches.
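The sampler-only idea can be sketched in plain Python — the names here are illustrative, not the actual `Trainer` internals:

```python
import itertools

def fast_skip(sampler, num_batches, batch_size):
    # Advance only the index sampler: consume the indices the skipped
    # batches would have drawn, without fetching or collating any data.
    it = iter(sampler)
    skipped = list(itertools.islice(it, num_batches * batch_size))
    return skipped, it  # `it` resumes exactly where training left off

# Toy check: a range stands in for a sampler over dataset indices.
skipped, rest = fast_skip(range(10), num_batches=2, batch_size=2)
print(skipped, list(rest))  # [0, 1, 2, 3] [4, 5, 6, 7, 8, 9]
```

As the replies note, a source install of `transformers` with `accelerate` installed already makes the skip fast.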
### Your contribution
I would be glad to provide a PR. I can investigate the issue further, but I would need advice on the matter as I don't have a setup to test all possible combinations of `dataloader`, distributed, `deepspeed` and so forth.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21934/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21933
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21933/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21933/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21933/events
|
https://github.com/huggingface/transformers/pull/21933
| 1,608,855,040
|
PR_kwDOCUB6oc5LPNP8
| 21,933
|
Update README logo
|
{
"login": "gary149",
"id": 3841370,
"node_id": "MDQ6VXNlcjM4NDEzNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3841370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gary149",
"html_url": "https://github.com/gary149",
"followers_url": "https://api.github.com/users/gary149/followers",
"following_url": "https://api.github.com/users/gary149/following{/other_user}",
"gists_url": "https://api.github.com/users/gary149/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gary149/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gary149/subscriptions",
"organizations_url": "https://api.github.com/users/gary149/orgs",
"repos_url": "https://api.github.com/users/gary149/repos",
"events_url": "https://api.github.com/users/gary149/events{/privacy}",
"received_events_url": "https://api.github.com/users/gary149/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"nice!"
] | 1,677
| 1,678
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Update the README logo, mainly to have it visible in dark-mode (instead of black on black).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21933/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21933",
"html_url": "https://github.com/huggingface/transformers/pull/21933",
"diff_url": "https://github.com/huggingface/transformers/pull/21933.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21933.patch",
"merged_at": 1677862660000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21932
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21932/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21932/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21932/events
|
https://github.com/huggingface/transformers/issues/21932
| 1,608,828,095
|
I_kwDOCUB6oc5f5MS_
| 21,932
|
Some text in the international README files are in the wrong language
|
{
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
### System Info
Some of the text in the international README files, such as the model descriptions, is in the wrong language.
### Who can help?
_No response_
### Reproduction
N/A
### Expected behavior
The text in each README file should be in the correct language.
A good starting point is looking at the model list for each language.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21932/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21931
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21931/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21931/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21931/events
|
https://github.com/huggingface/transformers/pull/21931
| 1,608,770,641
|
PR_kwDOCUB6oc5LO7GF
| 21,931
|
[CLAP] Support batched inputs for CLAP. Fixes pipeline issues
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Just need to update the expected values for the doctest and will merge",
"Pipeline, CI and doctests are all green ๐ "
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Support batching the `is_longer` part of the input for `zero_shot_audio_classification`. The model previously could not run on batched inputs. Added a test to cover this.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21931/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21931",
"html_url": "https://github.com/huggingface/transformers/pull/21931",
"diff_url": "https://github.com/huggingface/transformers/pull/21931.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21931.patch",
"merged_at": 1677865339000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21930
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21930/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21930/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21930/events
|
https://github.com/huggingface/transformers/pull/21930
| 1,608,700,096
|
PR_kwDOCUB6oc5LOsFP
| 21,930
|
Avoid failure in `check_repo.py` due to missing backends
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
My 2 PRs #21903 and #21909 forgot to check for missing backends, as `check_all_models_are_auto_configured` does, so the checks could fail in user environments where some backends are missing.
This PR fixes that by checking for missing backends. Sorry about that.
**Remark**
```
from transformers.models.auto.modeling_flax_auto import FLAX_MODEL_MAPPING_NAMES
```
this import works even without the jax/flax backend installed. That's why I missed it.
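A sketch of the kind of guard this PR adds — illustrative only, the real logic lives in `utils/check_repo.py`:

```python
import importlib.util

def backend_available(name: str) -> bool:
    # True when the optional backend package can actually be imported.
    return importlib.util.find_spec(name) is not None

# Only run the flax-specific repo checks when the backend is installed;
# otherwise skip them instead of crashing on users' environments.
missing = [b for b in ("jax", "flax") if not backend_available(b)]
if missing:
    print(f"skipping flax checks, missing backends: {missing}")
```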
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21930/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21930",
"html_url": "https://github.com/huggingface/transformers/pull/21930",
"diff_url": "https://github.com/huggingface/transformers/pull/21930.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21930.patch",
"merged_at": 1677854061000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21929
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21929/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21929/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21929/events
|
https://github.com/huggingface/transformers/pull/21929
| 1,608,604,265
|
PR_kwDOCUB6oc5LOWqy
| 21,929
|
[Flan-UL2] Add-flan-ul2
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Also, why are the international README entries in the wrong language?",
"This should not be the case! Will fix this. ",
"@sgugger it seems that the README translations are not correct for several languages (korean, jp, etc) due to the fact that they do not follow the correct structure (\"released with the repository\", \"released with the blogpost\", ..), As this requires a bit of work I think that we can address a solution in a follow-up PR! Wdyt?",
"Btw, I opened a new issue for the README translations to keep things nice and organized (#21932).",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot! "
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Adds the documentation for the FLan-UL2 model
cc @younesbelkada
Fixes #21917
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21929/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21929",
"html_url": "https://github.com/huggingface/transformers/pull/21929",
"diff_url": "https://github.com/huggingface/transformers/pull/21929.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21929.patch",
"merged_at": 1677862645000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21928
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21928/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21928/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21928/events
|
https://github.com/huggingface/transformers/pull/21928
| 1,608,584,437
|
PR_kwDOCUB6oc5LOSVw
| 21,928
|
Use large VM for `repo_utils_job`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Can you just try to do a fake code modification in one of the repo utils so that the tests are run and we can check it works?",
"Should be fine with 2 runs\r\n\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/58978/workflows/d2090147-1c3f-4b21-8840-36409ab3424b\r\n\r\nhttps://app.circleci.com/pipelines/github/huggingface/transformers/58978/workflows/09bc67f7-a079-4865-9ec2-9094820afa0e/jobs/719891\r\n\r\nand it shows docker/large",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Use a `large` VM for `repo_utils_job`, as #21856 adds `torch` for that job, which requires more memory.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21928/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21928",
"html_url": "https://github.com/huggingface/transformers/pull/21928",
"diff_url": "https://github.com/huggingface/transformers/pull/21928.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21928.patch",
"merged_at": 1677850983000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21927
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21927/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21927/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21927/events
|
https://github.com/huggingface/transformers/pull/21927
| 1,608,580,260
|
PR_kwDOCUB6oc5LORiv
| 21,927
|
Fix `ZeroShotAudioClassificationPipeline` doctest
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,679
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Fix `ZeroShotAudioClassificationPipeline` doctest
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21927/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21927",
"html_url": "https://github.com/huggingface/transformers/pull/21927",
"diff_url": "https://github.com/huggingface/transformers/pull/21927.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21927.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21926
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21926/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21926/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21926/events
|
https://github.com/huggingface/transformers/pull/21926
| 1,608,421,217
|
PR_kwDOCUB6oc5LNvRq
| 21,926
|
Sync preprocesses before loading the processor at run_speech_recognition_ctc.py
|
{
"login": "mpenagar",
"id": 1698682,
"node_id": "MDQ6VXNlcjE2OTg2ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1698682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mpenagar",
"html_url": "https://github.com/mpenagar",
"followers_url": "https://api.github.com/users/mpenagar/followers",
"following_url": "https://api.github.com/users/mpenagar/following{/other_user}",
"gists_url": "https://api.github.com/users/mpenagar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mpenagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpenagar/subscriptions",
"organizations_url": "https://api.github.com/users/mpenagar/orgs",
"repos_url": "https://api.github.com/users/mpenagar/repos",
"events_url": "https://api.github.com/users/mpenagar/events{/privacy}",
"received_events_url": "https://api.github.com/users/mpenagar/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"cc @sanchit-gandhi ",
"Updated seq2seq ASR fine-tuning script. I'm not very good with github, I guess there is no need to do a new PR.",
"Wait... there is something I don't get correctly.\r\n\r\nAs far as I understand from the [(documentation )](https://huggingface.co/transformers/v4.11.3/main_classes/trainer.html#transformers.TrainingArguments.main_process_first) , any code inside a block `with training_args.main_process_first():` should be executed only by the main process:\r\n\r\n```\r\nA context manager for torch distributed environment where\r\non needs to do something on the main process, while blocking\r\nreplicas, and when itโs finished releasing the replicas.\r\n\r\nOne such use is for datasetsโs map feature which to be efficient\r\nshould be run once on the main process, which upon completion\r\nsaves a cached version of results and which then automatically\r\ngets loaded by the replicas.\r\n```\r\n\r\nBut in my experience, the code is executed by all the processes, not just the main one. Take this minimal `example.py':\r\n\r\n```python\r\nfrom transformers import TrainingArguments,HfArgumentParser\r\nfrom transformers.trainer_utils import is_main_process\r\n\r\ndef main():\r\n parser = HfArgumentParser((TrainingArguments,))\r\n training_args, = parser.parse_args_into_dataclasses()\r\n rank = training_args.local_rank\r\n main_process = is_main_process(rank)\r\n print(f'\\nBEFORE WITH - local_rank={rank} is_main_process={main_process}')\r\n with training_args.main_process_first():\r\n print(f'\\nINSIDE WITH - local_rank={rank}')\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nIf I execute it in a 4 GPU node:\r\n\r\n`OMP_NUM_THREADS=1 python3 -m torch.distributed.launch --nproc_per_node 4 example.py --output_dir none`\r\n\r\nThe synching is working, but all processes execute the \"INSIDE\" `print`\r\n\r\nExecuting with newer `torchrun` does the same:\r\n\r\n`OMP_NUM_THREADS=1 torchrun --standalone --nnodes=1 --nproc_per_node=4 example.py --output_dir none`\r\n\r\nWhat I am geting wrong?",
"No, as the name indicates, it executes the code in the context manager on the main process, and then on all the others. The code is indeed executed in all processes, just in a certain order.\r\n\r\nSince with Datasets, everything is cached, executing the preprocessing inside that contextmanager means that process 0 will do the preprocessing, and then all the other will load the result from the cache without needing to do the preprocessing.",
"Ok, then the PR is not correct, since all the processes will try to write the json files. I removed the original:\r\n\r\n`if is_main_process(training_args.local_rank):`\r\n\r\nthat should be there inside the `with` block...",
"Indeed, your changes are perfect. Is this ready to be merged now?",
"It is working in my end without any problem",
"Is this good for merge @mpenagar? Changes LGTM!",
"Yes, it is ready. Anyway, I don't know how github works. Should I close the PR (there is a \"Close\" button there)?",
"Awesome, thanks for confirming @mpenagar and for your contribution ๐ค"
] | 1,677
| 1,680
| 1,680
|
CONTRIBUTOR
| null |
# What does this PR do?
Make sure all processes wait until data is saved before loading the processor from the `output_dir` in the `pytorch/speech-recognition/run_speech_recognition_ctc.py` example.
Issue:
* Non-main processes might try to load the processor from the `output_dir` before it is saved.
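The ordering this fix relies on can be sketched with a toy version of a `main_process_first`-style context manager (transformers provides the real one on `TrainingArguments`; the `is_main` and `barrier` arguments below are assumptions for the sake of a runnable example):

```python
# Toy sketch of main-process-first ordering: the main process does the
# work (e.g. saving the processor) before replicas proceed to load it.
from contextlib import contextmanager


@contextmanager
def main_process_first(is_main, barrier):
    if not is_main:
        barrier()  # replicas block until the main process is done
    yield
    if is_main:
        barrier()  # main process finishes, then releases the replicas
```

With this ordering, a replica only reaches the "load processor" step after the main process has already saved it.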
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21926/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21926",
"html_url": "https://github.com/huggingface/transformers/pull/21926",
"diff_url": "https://github.com/huggingface/transformers/pull/21926.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21926.patch",
"merged_at": 1680701765000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21925
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21925/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21925/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21925/events
|
https://github.com/huggingface/transformers/issues/21925
| 1,608,400,837
|
I_kwDOCUB6oc5f3j_F
| 21,925
|
[Whisper] Index error with model weights like base, large-v2
|
{
"login": "kurianbenoy-sentient",
"id": 101088788,
"node_id": "U_kgDOBgZ-FA",
"avatar_url": "https://avatars.githubusercontent.com/u/101088788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kurianbenoy-sentient",
"html_url": "https://github.com/kurianbenoy-sentient",
"followers_url": "https://api.github.com/users/kurianbenoy-sentient/followers",
"following_url": "https://api.github.com/users/kurianbenoy-sentient/following{/other_user}",
"gists_url": "https://api.github.com/users/kurianbenoy-sentient/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kurianbenoy-sentient/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kurianbenoy-sentient/subscriptions",
"organizations_url": "https://api.github.com/users/kurianbenoy-sentient/orgs",
"repos_url": "https://api.github.com/users/kurianbenoy-sentient/repos",
"events_url": "https://api.github.com/users/kurianbenoy-sentient/events{/privacy}",
"received_events_url": "https://api.github.com/users/kurianbenoy-sentient/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Do you have the sample audio that triggers the bug ?\r\nI couldn't reproduce locally.\r\n\r\nAslo, is there any particular reason for having such a complex script ? I can see several things that could slow down this code more than it should.\r\n\r\n`pipe = pipeline(task=\"automatic-speech-recognition\", model=model_dir, device=device)` should work out of the box.\r\n\r\nHere for instance the device is set during prediction, which is fcalled repetively could slow down things a bit (once the device is set it shouldn't move afterwards. Even the pipeline creation, is not big, but still could be done ahead of time.\r\n\r\n\r\nCheers ! ",
"@Narsil my apologies. I got this error with specific weights in transformers library v4.26.1. \r\n\r\nYou can reproduce this error with the file in the below link.\r\n\r\nhttps://drive.google.com/file/d/1ubw9PLlo5NwB7xwTwfVnuRWKH8L5XPIx/view?usp=sharing\r\n\r\nI am using whisper models for offline inference. So simply passing the pipeline directly won't work:\r\n\r\n```\r\npipe = pipeline(task=\"automatic-speech-recognition\", model=model_dir, device=device)\r\n```\r\n\r\nThat's why I have to manually pass the pipeline:\r\n\r\n```\r\npipe = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model=self.model,\r\n tokenizer=self.tokenizer,\r\n feature_extractor=self.feature_extractor,\r\n framework=\"pt\",\r\n chunk_length_s=30,\r\n generate_kwargs={\"max_new_tokens\": 1024},\r\n device=dno,\r\n return_timestamps=timestamp\r\n )\r\n```\r\n\r\nThanks for pointing the issue in device_no, I will set it in such a way it won't always be called each time when method is called. ",
"While in the latest version in master I am getting the error: v4.27.0-dev for all model weights, when I use the above code:\r\n\r\n```\r\nTraceback (most recent call last): File \"/root/whisper_project/microservice-ai-whisper/deploy/Whisper.py\", line 99, in <module> print(model.predict_raw(payload)) \r\nile \"/root/whisper_project/microservice-ai-whisper/deploy/Whisper.py\", line 72, in predict_raw\r\n results = pipe(a_file.name) File \"/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 272, in __call__ return super().__call__(inputs, **kwargs) File \"/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1101, in __call__ return next( File \"/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py\", line 124, in __next__ item = next(self.iterator) File \"/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py\", line 266, in __next__ processed = self.infer(next(self.iterator), **self.params) File \"/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/base.py\", line 1015, in forward model_outputs = self._forward(model_inputs, **forward_params) File \"/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py\", line 445, in _forward tokens = self.model.generate( File \"/root/mambaforge/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py\", line 1534, in generate logits_processor = [WhisperTimeStampLogitsProcessor(generation_config)] File \"/root/mambaforge/lib/python3.10/site-packages/transformers/generation/logits_process.py\", line 935, in __init__ self.no_timestamps_token_id = generate_config.no_timestamps_token_id AttributeError: 'GenerationConfig' object has no attribute 'no_timestamps_token_id' \r\n```\r\n\r\n\r\n",
"Ok this works:\r\n\r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model=\"openai/whisper-large-v2\",\r\n chunk_length_s=30,\r\n device=0,\r\n return_timestamps=True,\r\n)\r\n\r\nout = pipe(\"sample.wav\")\r\nprint(out)\r\n```\r\n\r\nThen you can definitely use the same simplicity with local values. Just save the pipeline\r\n\r\n`pipe.save_pretrained(\"whisper-local\")`\r\n\r\nAnd then \r\n```python\r\nfrom transformers import pipeline\r\n\r\npipe = pipeline(\r\n task=\"automatic-speech-recognition\",\r\n model=\"./whisper_local\",\r\n chunk_length_s=30,\r\n device=0,\r\n return_timestamps=True,\r\n)\r\n```\r\n\r\nShould work.\r\n\r\nNo for the latest error, I'm guessing this has to do with recent changes in the configuration of whisper.\r\nIs that possible @ArthurZucker ?",
"Thank you @Narsil for techniques to reduce complexity. \r\n\r\nI didn't we can save pipeline also locally.",
"Yes, if you are using main, and want to use timestamps, you should make sure that the model has the `no_timestamp_id`. \r\nThis is because this is a new feature, and for proper generation, we recommend setting a `generation_config`. \r\nAlso, if you check both models, they have a different `configuration.forced_decoder_ids`. This is probably what was causing the difference in behaviour between the `large` and `base` (for example).\r\n\r\n\r\n",
"@ArthurZucker how can I make sure that model has the `no_timestamp_id`?",
"You should save/push the update `generation_config`. \r\nThe simplest you can do is the following: \r\n```python \r\nfrom transformers import GenerationConfig, WhisperForConditionalGeneration\r\nmodel = WhisperForConditionalGeneration.from_pretrained(\"your_pretrained_checkpoint\")\r\ngeneration_config = GenerationConfig.from_pretrained(\"openai/whisper-base\") # if you are using a multilingual model\r\nmodel.generation_config = generation_config\r\nmodel.push_to_hub(\"your_pretrained_checkpoint\", use_auth_token = \"your_token_if_not_logged_in\", create_pr = True)\r\n```",
"Thanks for your answer. Hope you will resolve this issue before taking it in next stable version. I am closing this issue for now.\r\n"
] | 1,677
| 1,678
| 1,678
|
NONE
| null |
### System Info
- `transformers` version: 4.27.0.dev0
- Platform: Linux-5.4.0-66-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@sanchit-gandhi @Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. I have been using Whisper from transformers library with my own custom script. The full code is attached below for reference:
```
import argparse
import base64
import os
import tempfile
import torch
from transformers import (
AutoFeatureExtractor,
AutoTokenizer,
WhisperForConditionalGeneration,
WhisperProcessor,
pipeline,
)
from utils.b64 import convert_to_base64
model_dir = "openai/whisper-base"
class Whisper:
def __init__(self):
self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
self.feature_extractor = AutoFeatureExtractor.from_pretrained(model_dir)
self.processor = WhisperProcessor.from_pretrained(model_dir)
self.device = "cuda" if torch.cuda.is_available() else "cpu"
# print([x for x in Path(model_dir).iterdir()])
self.model = WhisperForConditionalGeneration.from_pretrained(model_dir).to(self.device)
self.is_timestamp = False
def predict_raw(self, payload):
if payload is None:
return {"inputerror": "JSON expected"}
if "wav_base64" not in payload:
return {"inputerror": "Missing key wav_base64 in payload."}
afs = payload["wav_base64"]
if not isinstance(afs, str):
return {"inputerror": "Audio file to passed as input in base64 format"}
if "timestamps" not in payload:
timestamp = self.is_timestamp
elif "timestamps" in payload and type(payload["timestamps"]) != str:
return {"inputerror": "timestamps should be of string datatype"}
elif "timestamps" in payload and payload ["timestamps"] != "true":
return {"inputerror": "timestamps payload should be of Value True"}
else:
timestamp = True
lang = payload.get("language")
print(lang)
afs = base64.b64decode(afs)
dno = torch.cuda.current_device() if self.device == "cuda" else -1
with tempfile.NamedTemporaryFile() as a_file:
a_file.write(afs)
pipe = pipeline(
task="automatic-speech-recognition",
model=self.model,
tokenizer=self.tokenizer,
feature_extractor=self.feature_extractor,
framework="pt",
chunk_length_s=30,
generate_kwargs={"max_new_tokens": 1024},
device=dno,
return_timestamps=timestamp
)
if lang:
self.model.config.forced_decoder_ids = self.processor.get_decoder_prompt_ids(
task="transcribe", language=lang
)
if timestamp:
results = pipe(a_file.name)
timestamp_info = [{"text": x["text"], "start": x["timestamp"][0],"end": x["timestamp"][1]} for x in results["chunks"]]
return {"text": results["text"], "timestamps": timestamp_info}
else:
return pipe(a_file.name)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"audio_file",
type=str,
help="Input the path to audio file you want to transcribe",
)
parser.add_argument(
"--language",
type=str,
help="Input the language",
)
args = parser.parse_args()
kwargs = vars(args)
b64 = convert_to_base64(kwargs["audio_file"])
if kwargs["language"]:
payload = {"wav_base64": b64, "language": kwargs["language"]}
payload = {"wav_base64": b64, "timestamps": "true"}
model = Whisper()
print(model.predict_raw(payload))
payload = {"wav_base64": b64}
print(model.predict_raw(payload))
```
2. When I am using model weights of `base` and `large-v2`. The code is getting an error as below:
```
Traceback (most recent call last):
File "/root/whisper_project/microservice-ai-whisper/deploy/debug.py", line 102, in <module>
print(model.predict_raw(payload))
File "/root/whisper_project/microservice-ai-whisper/deploy/debug.py", line 75, in predict_raw
results = pipe(a_file.name)
File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 272, in __call__
return super().__call__(inputs, **kwargs)
File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1101, in __call__
return next(
File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
item = next(self.iterator)
File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/pt_utils.py", line 266, in __next__
processed = self.infer(next(self.iterator), **self.params)
File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1015, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/root/mambaforge/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 445, in _forward
tokens = self.model.generate(
File "/root/mambaforge/lib/python3.10/site-packages/transformers/models/whisper/modeling_whisper.py", line 1543, in generate
return super().generate(
File "/root/mambaforge/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/root/mambaforge/lib/python3.10/site-packages/transformers/generation/utils.py", line 1406, in generate
return self.greedy_search(
File "/root/mambaforge/lib/python3.10/site-packages/transformers/generation/utils.py", line 2211, in greedy_search
next_token_logits = outputs.logits[:, -1, :]
IndexError: index -1 is out of bounds for dimension 1 with size 0
```
3. I noticed this issue only with some model weights. The exact same code works with medium and large-v1 model weights
### Expected behavior
Get the transcribed output along with corresponding timestamps
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21925/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21924
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21924/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21924/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21924/events
|
https://github.com/huggingface/transformers/pull/21924
| 1,608,382,882
|
PR_kwDOCUB6oc5LNnAM
| 21,924
|
[WIP] [Flax] Improving Docs
|
{
"login": "Shubhamai",
"id": 51819922,
"node_id": "MDQ6VXNlcjUxODE5OTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/51819922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shubhamai",
"html_url": "https://github.com/Shubhamai",
"followers_url": "https://api.github.com/users/Shubhamai/followers",
"following_url": "https://api.github.com/users/Shubhamai/following{/other_user}",
"gists_url": "https://api.github.com/users/Shubhamai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shubhamai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shubhamai/subscriptions",
"organizations_url": "https://api.github.com/users/Shubhamai/orgs",
"repos_url": "https://api.github.com/users/Shubhamai/repos",
"events_url": "https://api.github.com/users/Shubhamai/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shubhamai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21924). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding relevant Jax/Flax code in `<frameworkcontent>` in Transformers Docs.
Issues:
- Since Flax has no `Trainer` class, no code exists for the `Train` section in the task guides ([example](https://huggingface.co/docs/transformers/tasks/token_classification#train)).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
*Not mentioning due to currently in WIP*
- Documentation: sgugger, stevhliu and MKhalusova
- Flax: sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21924/timeline
| null | true
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21924",
"html_url": "https://github.com/huggingface/transformers/pull/21924",
"diff_url": "https://github.com/huggingface/transformers/pull/21924.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21924.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21923
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21923/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21923/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21923/events
|
https://github.com/huggingface/transformers/pull/21923
| 1,608,317,817
|
PR_kwDOCUB6oc5LNYjx
| 21,923
|
Fix `AlignModelTest` tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
- Fix `AlignModelTest` torchscript tests. This model has a non-persistent buffer and needs a bit more care (same as in the common tests)
```python
self.register_buffer(
"token_type_ids", torch.zeros(self.position_ids.size(), dtype=torch.long), persistent=False
)
```
- Fix `AlignModelTest.test_multi_gpu_data_parallel_forward`: same as for CLIP, we need an even number for `batch_size`, as this model has `logits_per_image` and `logits_per_text` of shape `(batch_size, batch_size)`, and in order to gather across devices the second dim needs to be the same.
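To illustrate why an even `batch_size` matters, here is a minimal NumPy sketch (not the actual ALIGN/CLIP code; shapes and names are illustrative) of contrastive logits being sharded across devices:

```python
import numpy as np

# Hypothetical embedding sizes; not the real ALIGN/CLIP dimensions.
batch_size, dim, n_devices = 4, 8, 2

image_embeds = np.random.randn(batch_size, dim)
text_embeds = np.random.randn(batch_size, dim)

# Contrastive logits compare every image with every text:
logits_per_image = image_embeds @ text_embeds.T  # (batch_size, batch_size)

# DataParallel splits dim 0 across devices, leaving dim 1 untouched,
# so each shard is (batch_size / n_devices, batch_size) and gathering
# only works when every shard's second dim matches.
shards = np.array_split(logits_per_image, n_devices, axis=0)
gathered = np.concatenate(shards, axis=0)

assert all(s.shape == (batch_size // n_devices, batch_size) for s in shards)
assert gathered.shape == (batch_size, batch_size)
```

If `batch_size` were odd, the shards would have unequal first dimensions and the gathered tensor could no longer be stacked back into a square matrix, which is why the test requires an even batch.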
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21923/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21923",
"html_url": "https://github.com/huggingface/transformers/pull/21923",
"diff_url": "https://github.com/huggingface/transformers/pull/21923.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21923.patch",
"merged_at": 1677851230000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21922
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21922/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21922/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21922/events
|
https://github.com/huggingface/transformers/pull/21922
| 1,608,265,173
|
PR_kwDOCUB6oc5LNM9e
| 21,922
|
update model_split_percents
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
#21883 changed `WhisperModelTest`'s `model_split_percents` from `[0.5, 0.7, 0.9]` to `[0.8, 0.9]` to make `test_model_parallelism` work. But this broke `test_disk_offload`, which uses `self.model_split_percents[0]`.
This PR adds back `0.5`. With `model_split_percents = [0.5, 0.8, 0.9]`, all the relevant Whisper tests using `model_split_percents` pass.
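A rough sketch of how the two tests consume `model_split_percents` (a toy stand-in for the real tests in `tests/test_modeling_common.py`; the names and sizes below are illustrative, not the actual test code):

```python
# Toy model size in MB; the real tests compute actual memory footprints.
model_size_mb = 1000

def max_memory_for_disk_offload(percents, size_mb):
    # test_disk_offload caps device memory at the *first* (smallest)
    # percent, so most of the model must be offloaded to disk.
    return round(size_mb * percents[0])

def budgets_for_model_parallelism(percents, size_mb):
    # test_model_parallelism tries the remaining (larger) percents.
    return [round(size_mb * p) for p in percents[1:]]

old = [0.8, 0.9]        # after #21883: no small entry left
new = [0.5, 0.8, 0.9]   # this PR restores 0.5 as the first entry

assert max_memory_for_disk_offload(new, model_size_mb) == 500
assert max_memory_for_disk_offload(old, model_size_mb) == 800  # too lenient to force disk offload
assert budgets_for_model_parallelism(new, model_size_mb) == [800, 900]
```

This shows why simply dropping `0.5` fixed one test while breaking the other: the two tests read different positions of the same list.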
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21922/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21922",
"html_url": "https://github.com/huggingface/transformers/pull/21922",
"diff_url": "https://github.com/huggingface/transformers/pull/21922.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21922.patch",
"merged_at": 1677850509000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21921
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21921/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21921/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21921/events
|
https://github.com/huggingface/transformers/pull/21921
| 1,608,253,076
|
PR_kwDOCUB6oc5LNKVE
| 21,921
|
Fix gradient checkpointing megatron bert
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
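The usual pattern in `transformers` for this class of bug is to disable `use_cache` when gradient checkpointing is active during training; here is a self-contained sketch of that guard (simplified and stand-alone, not the actual model code):

```python
import warnings

def resolve_use_cache(use_cache=True, gradient_checkpointing=True, training=True):
    # Gradient checkpointing re-runs forward passes during backward, which
    # is incompatible with caching past key/values, so caching is turned
    # off (with a warning) rather than letting the backward pass fail.
    if gradient_checkpointing and training and use_cache:
        warnings.warn(
            "`use_cache=True` is incompatible with gradient checkpointing. "
            "Setting `use_cache=False`..."
        )
        use_cache = False
    return use_cache

assert resolve_use_cache(use_cache=True) is False          # disabled during training
assert resolve_use_cache(use_cache=True, training=False) is True  # untouched at inference
```

During `generate` the model runs in eval mode, so the cache stays enabled there; the guard only affects checkpointed training forward passes.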
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21921/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21921",
"html_url": "https://github.com/huggingface/transformers/pull/21921",
"diff_url": "https://github.com/huggingface/transformers/pull/21921.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21921.patch",
"merged_at": 1677844222000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21920
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21920/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21920/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21920/events
|
https://github.com/huggingface/transformers/pull/21920
| 1,608,252,928
|
PR_kwDOCUB6oc5LNKTA
| 21,920
|
Fix gradient checkpointing bug in mvp
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21920/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21920",
"html_url": "https://github.com/huggingface/transformers/pull/21920",
"diff_url": "https://github.com/huggingface/transformers/pull/21920.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21920.patch",
"merged_at": 1677844190000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21919
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21919/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21919/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21919/events
|
https://github.com/huggingface/transformers/pull/21919
| 1,608,077,381
|
PR_kwDOCUB6oc5LMkEX
| 21,919
|
Fix wrong documentation about DataCollator padding defaults
|
{
"login": "substanc3-dev",
"id": 6898693,
"node_id": "MDQ6VXNlcjY4OTg2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6898693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/substanc3-dev",
"html_url": "https://github.com/substanc3-dev",
"followers_url": "https://api.github.com/users/substanc3-dev/followers",
"following_url": "https://api.github.com/users/substanc3-dev/following{/other_user}",
"gists_url": "https://api.github.com/users/substanc3-dev/gists{/gist_id}",
"starred_url": "https://api.github.com/users/substanc3-dev/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/substanc3-dev/subscriptions",
"organizations_url": "https://api.github.com/users/substanc3-dev/orgs",
"repos_url": "https://api.github.com/users/substanc3-dev/repos",
"events_url": "https://api.github.com/users/substanc3-dev/events{/privacy}",
"received_events_url": "https://api.github.com/users/substanc3-dev/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes the documentation for `DataCollatorForTokenClassification` and `DataCollatorForSeq2Seq`, which previously stated that no padding is the default; in fact both default to `padding=True`.
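For illustration, a toy stand-in for the collator's default behavior (pure Python, not the real `transformers` implementation): with `padding=True` as the documented default, batches are padded to the longest sequence unless padding is explicitly disabled.

```python
def collate(batch, padding=True, pad_token_id=0):
    # Mirrors the documented default: padding=True pads every sequence to
    # the longest one in the batch; padding=False leaves sequences as-is.
    if not padding:
        return batch
    max_len = max(len(seq) for seq in batch)
    return [seq + [pad_token_id] * (max_len - len(seq)) for seq in batch]

batch = [[101, 7, 102], [101, 7, 8, 9, 102]]
padded = collate(batch)                    # default: pads to length 5
unpadded = collate(batch, padding=False)   # opt out explicitly

assert all(len(seq) == 5 for seq in padded)
assert unpadded == batch
```

The doc fix matters because users relying on the old wording might assume they must pass `padding=True` themselves, or might be surprised that unpadded batches come back padded.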
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR (@sgugger).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21919/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21919",
"html_url": "https://github.com/huggingface/transformers/pull/21919",
"diff_url": "https://github.com/huggingface/transformers/pull/21919.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21919.patch",
"merged_at": 1677862314000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21918
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21918/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21918/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21918/events
|
https://github.com/huggingface/transformers/pull/21918
| 1,607,975,616
|
PR_kwDOCUB6oc5LMOJz
| 21,918
|
Fix gradient checkpointing bug in MBart
|
{
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/followers",
"following_url": "https://api.github.com/users/KMFODA/following{/other_user}",
"gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions",
"organizations_url": "https://api.github.com/users/KMFODA/orgs",
"repos_url": "https://api.github.com/users/KMFODA/repos",
"events_url": "https://api.github.com/users/KMFODA/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMFODA/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
This PR fixes a bug that a user can encounter when using `generate` with models that use `gradient_checkpointing`.
Fixes Issue https://github.com/huggingface/transformers/issues/21737
cc @younesbelkada or @gante
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21918/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21918",
"html_url": "https://github.com/huggingface/transformers/pull/21918",
"diff_url": "https://github.com/huggingface/transformers/pull/21918.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21918.patch",
"merged_at": 1677844168000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21917
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21917/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21917/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21917/events
|
https://github.com/huggingface/transformers/issues/21917
| 1,607,951,523
|
I_kwDOCUB6oc5f12Sj
| 21,917
|
Add FLAN-UL2
|
{
"login": "shermansiu",
"id": 12627125,
"node_id": "MDQ6VXNlcjEyNjI3MTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shermansiu",
"html_url": "https://github.com/shermansiu",
"followers_url": "https://api.github.com/users/shermansiu/followers",
"following_url": "https://api.github.com/users/shermansiu/following{/other_user}",
"gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions",
"organizations_url": "https://api.github.com/users/shermansiu/orgs",
"repos_url": "https://api.github.com/users/shermansiu/repos",
"events_url": "https://api.github.com/users/shermansiu/events{/privacy}",
"received_events_url": "https://api.github.com/users/shermansiu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] |
[
"@DanielHesslow (since you ported the original UL2 weights). I would like to contribute, but I'm not too sure how to convert the weights from JAX to PyTorch.",
"I had a dirty dirty script which unfortunately lives on my old dev machine that I don't have with me at the moment.
\r\n\r\nI basically just loaded the t5 weights and went through and renamed everything to match the HF format. ",
"Hey! Thanks for opening, they will be available on the hub soon! We are converting them with @younesbelkada ",
"The model is already out! (https://huggingface.co/google/flan-ul2)\r\n@younesbelkada has a space comparing Flan-T5-XXL and Flan-UL2 here: https://huggingface.co/spaces/ybelkada/i-like-flan-ul2"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Model description
UL2 is a unified framework for pretraining models that are universally effective across datasets and setups. UL2 uses Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms. UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes.
FLAN-UL2 has the same configuration as the original UL2 20B model, except that it has been instruction tuned with Flan.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
The model architecture (UL2) is already in Huggingface Transformers.
The 20B model weights are here: https://github.com/google-research/google-research/tree/master/ul2#checkpoints
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21917/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21916
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21916/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21916/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21916/events
|
https://github.com/huggingface/transformers/pull/21916
| 1,607,853,726
|
PR_kwDOCUB6oc5LL0sH
| 21,916
|
[In progress] Add warning padding attention mask
|
{
"login": "anruijian",
"id": 115125339,
"node_id": "U_kgDOBtysWw",
"avatar_url": "https://avatars.githubusercontent.com/u/115125339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anruijian",
"html_url": "https://github.com/anruijian",
"followers_url": "https://api.github.com/users/anruijian/followers",
"following_url": "https://api.github.com/users/anruijian/following{/other_user}",
"gists_url": "https://api.github.com/users/anruijian/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anruijian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anruijian/subscriptions",
"organizations_url": "https://api.github.com/users/anruijian/orgs",
"repos_url": "https://api.github.com/users/anruijian/repos",
"events_url": "https://api.github.com/users/anruijian/events{/privacy}",
"received_events_url": "https://api.github.com/users/anruijian/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"cc @gante ",
"Thank you for the comment!\r\n\r\nBased on my understanding, [this line of code](https://github.com/huggingface/transformers/pull/21916/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaR1054) enables the checking process only once during the forward pass, so it should not significantly impact performance.\r\n\r\nThe current warning method only issue warnings when `attention_mask` is necessary (due to the presence of padding tokens in the input), but no `attention_mask` is provided. In other cases where `attention_mask` is not required, no warning is issued. The additional checking on special tokens allows a more detailed warning message. \r\n\r\nI agree that your suggested method is more concise and efficient, but it may generate warnings when `attention_mask` is not needed. \r\n\r\nSince it's my first time contributing to the community, I don't have a strong opinion towards either solution. The original work is by @ydshieh and @patrickvonplaten. Perhaps they have additional insights and can suggest a more effective solution.",
"> Why not a simple logger.warning_once()\r\n\r\nThis is recently introduced :-)",
"@anruijian It checks `input_ids` until there is a batch in which a `pad_token_id` exists. If a user is working on a problem where they have no `pad_token_id` on their data and they don't pass the `attention_mask`, there is a check made every forward pass. I'd strongly advocate for a simple warning when the `attention_mask` is not passed ๐ค \r\n\r\nAs a side note, we have related problems at other points in the code base. Getting into the habit of passing the `attention_mask` would really make everyone happier!",
"@gante Just to confirm before updating the PR, we are going to remove `warn_if_pad_token_in_input_ids_no_attention_mask` method and use `logger.warning_once` in `forward()`:\r\n```python\r\ndef forward(...):\r\n ...\r\n if not attention_mask:\r\n logger.warning_once(\r\n \"\\nWe strongly recommend passing an `attention_mask` to avoid possibly incorrectly computing the\"\r\n \" attention weights. \"\r\n )\r\n ...\r\n```",
"@anruijian correct :) I would add a short example in the warning, such as `(e.g. to correctly mask the pad tokens)`, but I'll leave that up to you!",
"@gante \r\n```python\r\ndef forward(...):\r\n ...\r\n if not attention_mask:\r\n logger.warning_once(\r\n \"\\nWe strongly recommend passing an `attention_mask` to avoid possibly incorrectly computing the\"\r\n \" attention weights. Example to correctly mask the pad tokens: model(input_ids, attention_mask=attention_mask).\"\r\n \" See https://huggingface.co/docs/transformers/v4.23.1/en/troubleshooting#incorrect-output-when-padding-tokens-arent-masked for more details.\"\r\n )\r\n ...\r\n```\r\nDoes this example look good to you? I also link the official doc on the issue. Not sure if it's too long. Let me know what you think about this. Thanks!\r\n",
"@anruijian sounds good to me! (A minor nit: the link is for v4.23 of the docs, should be `https://huggingface.co/docs/transformers/troubleshooting#incorrect-output-when-padding-tokens-arent-masked` instead)",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #16136
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21916/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21916",
"html_url": "https://github.com/huggingface/transformers/pull/21916",
"diff_url": "https://github.com/huggingface/transformers/pull/21916.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21916.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21915
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21915/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21915/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21915/events
|
https://github.com/huggingface/transformers/issues/21915
| 1,607,847,050
|
I_kwDOCUB6oc5f1cyK
| 21,915
|
Mask2Former ImageProcessor produces different results on Mac vs Windows.
|
{
"login": "nickponline",
"id": 590151,
"node_id": "MDQ6VXNlcjU5MDE1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/590151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nickponline",
"html_url": "https://github.com/nickponline",
"followers_url": "https://api.github.com/users/nickponline/followers",
"following_url": "https://api.github.com/users/nickponline/following{/other_user}",
"gists_url": "https://api.github.com/users/nickponline/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nickponline/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nickponline/subscriptions",
"organizations_url": "https://api.github.com/users/nickponline/orgs",
"repos_url": "https://api.github.com/users/nickponline/repos",
"events_url": "https://api.github.com/users/nickponline/events{/privacy}",
"received_events_url": "https://api.github.com/users/nickponline/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Here is the image I used.\r\n\r\n",
"Also cc @alaradirik ",
"Thanks for raising this issue @nickponline and for all the details! \r\n\r\nCould you give details on how you're reading in the image e.g. through torchvision and the format the image is saved in? If I download the image in the comment above I get different results than in the snippet.\r\n\r\n```\r\nimport torchvision\r\n\r\n# Load in downloaded image\r\nimage = torchvision.io.read_image('222617740-0088ded3-cd49-46df-aa23-0c2a30605729.jpg')\r\nimage = image.numpy()\r\nprint(image.dtype, image.shape, image.sum()) # uint8 (3, 1000, 1000) 443861838\r\n```",
"@amyeroberts @sgugger \r\n\r\nI'm reading the image with PIL\r\n\r\n```\r\nfrom PIL import Image\r\nimage = Image.open(filename)\r\nimage = image.convert('RGB')\r\nimage = np.array(image)\r\nimage = image.astype(np.float32)\r\nimage = image.transpose(2,0,1)\r\n```\r\n\r\nAt that point I have confirmed that the `image` is identical on both Windows and Mac. Also, after inference further in the code, the Mac result is worse than the Windows result, if that helps. But it's the image processor that is generating a different result for identical inputs. ",
"@amyeroberts @sgugger the means and stds of the input image are different on Windows and Mac after `ImageProcessor` forward call:\r\n\r\nWindows\r\n```\r\nmean = [-0.4228946 -0.17078026 0.25235963]\r\nstd = [0.81622934 0.699496 0.71027416]\r\n```\r\n\r\nMac\r\n```\r\nmean = [-1.229962 -1.1720737 -0.6407509]\r\nstd = [1.5912648 1.5453817 1.7506045]\r\n```",
"@amyeroberts @sgugger I updated the repro snippet above to make it easier to confirm.",
"@nickponline - thank you very much for the extra details! I'll dig into this and try to figure out what's happening.",
"@amyeroberts @sgugger I feel the issue is here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/image_transforms.py#L159\r\n\r\nThe image is already in the range `[0..255]` and after the rescale and then `image.astype(np.uint8)` the arrays are different on Windows and Mac. ",
"Calling in backup here: https://stackoverflow.com/questions/75632469/why-does-np-astypeuint8-give-different-results-on-windows-versus-mac",
"Confirming that this works with `Python 3.10.6+ (Mac) Numpy 1.24.2+`. ShruggingFace 🤷. It must be a bug or change of behavior in Numpy or Python. Can close. ",
"@nickponline Thanks for the updates and all the work digging into this! \r\n\r\nLooking at the line you highlighted and conversation on stackoverflow, it seems there's two things happening, resulting in this issue: \r\n* Rescaling the pixel values by multiplying by 255 if the input image is of type `float32`. Resulting in pixel values between 0 and 65,025. Then casting to `uint8` [here](https://github.com/huggingface/transformers/blob/fcf813417aa34f3a0ea7d283f7d4f6b0834cf098/src/transformers/image_transforms.py#L162)\r\n* Different overflow behaviour in numpy - as highlighted in [the stackoverflow comment](https://stackoverflow.com/a/75632979)\r\n\r\nIn this case, updating numpy will give consistent results between the OS's, however the resulting pixel_values from the image processor may not be sensible or produce good predictions from the model, depending on how the values are cast when overflow occurs. \r\n\r\nThe first issue is tricky to handle - the logic is partly there for backwards compatibility as resizing was handled by the PIL library and, when converting to PIL images, whether to rescale the pixel values was inferred by the type. The assumption is that raw pixel values are of an int type and between 0-255; unnormalized float type pixel values have values between 0-1. \r\n\r\nI think there's two possible things we can do to address these issues in the future: \r\n* Add an additional check on pixel values before rescaling\r\n* Raise a warning when casting to uint8 if overflow is going to occur \r\nI'll open a PR for these. \r\n\r\nAs a side note, you don't need to convert your images to float before feeding into the image processor. You can pass in the PIL images directly. \r\n\r\np.s. thanks for coining 'Shrugging Face' - I shall be using it in the future! \r\n"
] | 1,677
| 1,678
| 1,678
|
NONE
| null |
### System Info
>>> transformers.__version__
'4.27.0.dev0'
>>> Python 3.10.6
Windows vs Mac
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import numpy as np
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-instance", reduce_labels=False, ignore_index=255, do_resize=True, size=dict(width=500, height=500), do_normalize=True, image_mean=[0.485, 0.456, 0.406], image_std=[0.229, 0.224, 0.225])
device = torch.device("cpu")
image = Image.open(filename1)
image = image.convert('RGB')
image = np.array(image)
image = image.astype(np.float32)
image = image.transpose(2,0,1)
print(image.dtype, image.shape, image.mean((1, 2))) # float32 (3, 1000, 1000) [156.41327 149.47672 137.97989]
ret = processor([image], return_tensors="pt")
pixel_values = ret["pixel_values"].to(device)
print(pixel_values.dtype, pixel_values.shape, pixel_values[0].mean((1, 2)), pixel_values[0].std((1, 2)))
```
Windows
```
float32 (3, 1000, 1000) [156.41327 149.47672 137.97989]
mean = [-0.4228946 -0.17078026 0.25235963]
std = [0.81622934 0.699496 0.71027416]
```
Mac
```
float32 (3, 1000, 1000) [156.41327 149.47672 137.97989]
mean = [-1.229962 -1.1720737 -0.6407509]
std = [1.5912648 1.5453817 1.7506045]
```
### Expected behavior
Same result on Windows and Mac
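The discrepancy can be reproduced in isolation. A float32 image whose pixels are already in [0, 255] gets rescaled by 255 again and then cast to `uint8`, and casting an out-of-range float to `uint8` is platform- and version-dependent. A minimal sketch with illustrative values (not the processor's actual code path):

```python
import numpy as np

# Float32 pixels that are already in [0, 255], as in the report.
pixels = np.array([156.4, 149.5, 138.0], dtype=np.float32)

# The image processor infers that float inputs are in [0, 1] and rescales
# by 255, pushing values far outside the uint8 range.
rescaled = pixels * 255

# Casting out-of-range floats to uint8 is undefined behaviour in C, so the
# result differs across platforms and NumPy versions: the observed mismatch.
overflowed = rescaled.astype(np.uint8)

# A float image correctly scaled to [0, 1] casts deterministically:
safe = (np.array([0.5], dtype=np.float32) * 255).astype(np.uint8)
print(rescaled.max() > 255, safe[0])  # True 127
```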
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21915/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21914
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21914/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21914/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21914/events
|
https://github.com/huggingface/transformers/pull/21914
| 1,607,841,971
|
PR_kwDOCUB6oc5LLyUK
| 21,914
|
feat: filter try/except when looking at custom code
|
{
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks again!",
"Happy to help! Thanks for the great packages!\n\nOn Fri, Mar 3, 2023 at 8:44 AM Sylvain Gugger ***@***.***>\nwrote:\n\n> Thanks again!\n>\n> โ\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/transformers/pull/21914#issuecomment-1453553255>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AIBFIPJCABTGQJDB3PCJKPDW2HYTDANCNFSM6AAAAAAVOB6YRQ>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #21912
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21914/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21914",
"html_url": "https://github.com/huggingface/transformers/pull/21914",
"diff_url": "https://github.com/huggingface/transformers/pull/21914.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21914.patch",
"merged_at": 1677851040000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21913
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21913/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21913/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21913/events
|
https://github.com/huggingface/transformers/issues/21913
| 1,607,789,680
|
I_kwDOCUB6oc5f1Oxw
| 21,913
|
[performance] from_pretrained is still much slower than torch.load and seems to be initializing weights
|
{
"login": "moyix",
"id": 34380,
"node_id": "MDQ6VXNlcjM0Mzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/34380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moyix",
"html_url": "https://github.com/moyix",
"followers_url": "https://api.github.com/users/moyix/followers",
"following_url": "https://api.github.com/users/moyix/following{/other_user}",
"gists_url": "https://api.github.com/users/moyix/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moyix/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moyix/subscriptions",
"organizations_url": "https://api.github.com/users/moyix/orgs",
"repos_url": "https://api.github.com/users/moyix/repos",
"events_url": "https://api.github.com/users/moyix/events{/privacy}",
"received_events_url": "https://api.github.com/users/moyix/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Thank you for trying to analyse this, @moyix and for wanting to make things faster.\r\n\r\nI dug into it and here is what I have to share with you.\r\n\r\n# What's happening for real\r\n\r\nIt's pretty clear from your profiler report that the diff comes from weights init which as you said get overwritten with weights.\r\n\r\nIndeed this is what's happening here. Except you are mixing 2 things.\r\n\r\nAs you discovered lazy model init was implemented here https://github.com/huggingface/transformers/pull/11471 and it later was improved upon in multiple PRs. This was done only for `_init_weights` functions defined in the modeling code of `transformers`.\r\n\r\nNow you're forgetting about calls like \r\n\r\nhttps://github.com/huggingface/transformers/blob/37e0974afcbccdc85da59d51b44e1437b6b3caea/src/transformers/models/codegen/modeling_codegen.py#L117-L119\r\n\r\nwhich of course by default call their init functions:\r\n\r\n```\r\n File \"/mnt/nvme0/code/huggingface/transformers-master/src/transformers/models/codegen/modeling_codegen.py\", line 117, in __init__\r\n self.qkv_proj = nn.Linear(self.embed_dim, self.embed_dim * 3, bias=False)\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/linear.py\", line 101, in __init__\r\n self.reset_parameters()\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/modules/linear.py\", line 107, in reset_parameters\r\n init.kaiming_uniform_(self.weight, a=math.sqrt(5))\r\n File \"/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/torch/nn/init.py\", line 396, in kaiming_uniform_\r\n```\r\n\r\nSo that overhead all comes from pytorch `nn.Module` submodules and not `_init_weights` defined in the modeling code of `transformers`.\r\n\r\nYou're wanting to use a huge 14GB model and it surely adds some 30sec to init it.\r\n\r\nThe problem is that you're comparing loading the weights only with instantiating the model plus loading the weights, so of course 
they aren't the same thing. But we agree that it's a pointless waste of compute and time to init weights that are going to be overwritten moments later.\r\n\r\nTo test I changed pytorch's `kaiming_uniform_` to be:\r\n```\r\ndef kaiming_uniform_(\r\n tensor: Tensor, a: float = 0, mode: str = 'fan_in', nonlinearity: str = 'leaky_relu'\r\n):\r\n return tensor\r\n```\r\nand the same for `uniform_` and `from_pretrained` was as fast as you wanted it to be.\r\n\r\nhint: perhaps you can use it as a hack until a better solution is provided - simply monkey patch the init functions with a no-op (I hope I covered the ones that are used here).\r\n\r\n```\r\nfrom transformers import CodeGenForCausalLM\r\nimport torch.nn.init\r\ntorch.nn.init.kaiming_uniform_ = lambda x, *args, **kwargs: x\r\ntorch.nn.init.uniform_ = lambda x, *args, **kwargs: x\r\nCodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono')\r\n```\r\nof course, I assume you are either doing inference or you have all weights in the distributed file - so no important init is missed.\r\n\r\nthis I think should give you the speed closer to `torch.load`\r\n\r\n# What can be done\r\n\r\nBut why you'd say can't you skip those inits?\r\n\r\nWe actually are able to do so since pytorch-1.10 where special functionality was added.\r\n- https://pytorch.org/docs/stable/generated/torch.nn.utils.skip_init.html#torch.nn.utils.skip_init\r\n- https://pytorch.org/tutorials/prototype/skip_param_init.html\r\n\r\nLooking at the requirements it actually appears to be possible despite needing to support pytorch<1.10 as well.\r\n\r\nThe modules will have to be adapted to meet 2 requirements:\r\nhttps://pytorch.org/tutorials/prototype/skip_param_init.html#updating-modules-to-support-skipping-initialization\r\nI will repaste them here:\r\n\r\n1. The module must accept a device kwarg in its constructor that is passed to any parameters or buffers created during construction.\r\n2. 
The module must not perform any computation on parameters or buffers in its constructor except initialization (i.e. functions from torch.nn.init).\r\n\r\nThe first one is certainly possible since doing:\r\n\r\n```\r\n- def __init__(self, foo, bar):\r\n+ def __init__(self, foo, bar, device=None):\r\n```\r\nshould be backward compatible.\r\n\r\nI think the 2nd requirement should be somewhat possible, but I can't speak for the multitude of models we have.\r\n\r\nOnce this is done, the rest of the `from_pretrained` will need to be adapted to use the `device` argument as in the example of the tutorial,\r\n```\r\nm = nn.Linear(10, 5, device='meta')\r\n```\r\nbut of course it will be `m = ModelName(..., device='meta')`\r\n\r\nI think this needs to happen sooner than later as it'd greatly simplify the various juggling we have during the loading process (after updating all the models, e.g. like `low_cpu_mem_usage` functionality). But needing to support torch<1.10 might make this somewhat messy. I'm not sure.\r\n\r\nSo now let me bring here @sgugger and @patrickvonplaten to take over as I'm currently working on a different project, and they can decide on whether the project is ready for this major change or not quite yet and then you can use my hack ;)\r\n\r\np.s. BTW, while studying your report I have invalidated your suggestion that there was a general `from_pretrained` regression, but to do that I had to use a different class since `CodeGenForCausalLM` was added only recently. I went all the way back to `transformers==4.14` and `t5-large` loads with the same speed as the latest version.\r\n\r\n**edit** Additional solutions are added in:\r\n- https://github.com/huggingface/transformers/issues/21913#issuecomment-1453482689\r\n- https://github.com/huggingface/transformers/issues/21913#issuecomment-1453858274",
"I'm curious, are you doing inference or finetuning? Because for the latter usually the init overhead is usually irrelevant.\r\n\r\nFast loading is also important for debug.\r\n\r\nI think I'm going to propose to pytorch this new feature:\r\n```\r\nwith torch.inference:\r\n m = MyModel(...)\r\n```\r\nand it would just work and be really fast w/o the overhead of init'ing weights which will be overloaded from pretrained weights.\r\n",
"Thanks for the very comprehensive answer! That makes perfect sense :) I am indeed doing inference and trying to get the batch size correct โย so having to wait a long time for the model load each attempt (only to get a CUDA out of memory error) was a bit painful.\r\n\r\nThat hack helps a lot for now, thanks!",
"Using `low_cpu_mem_usage=True` will initialize the model on the meta device (requires Accelerate as an extra dep) and should speed up the initialization as a result. This will become the default mid-term but we need some more preparation work by making the tests more robust for `from_pretrained` to make sure we absolutely don't break anything.",
"Some additional solutions coming from pytorch-slack where I asked [this question](https://pytorch.slack.com/archives/C3PDTEV8E/p1677813090248699):\r\n\r\n1. install pytorch-nightly from instructions at https://pytorch.org/get-started/locally/ (or if you read this later when pytorch==2.0 is released any 2.0 and higher version will do).\r\n\r\nnow you can do:\r\n\r\n```\r\n with torch.device(\"cuda\"):\r\n model = CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono')\r\n```\r\n\r\nso it instantiates the model directly on your gpu and all the inits are run much faster. This solution is just a bit slower than cancelling out the init functions. plus your model will already be on gpu, so no copying overhead from cpu.\r\n\r\nInstead of using the context manager you can just set the default device like so:\r\n\r\n```\r\ntorch.set_default_device('cuda')\r\n```\r\nand you no longer need to indent your existing code.\r\n\r\n1b. Using materialization on the `meta` device will be really fast as it will cancel out the init functions and won't even waste time on allocating memory for the weights:\r\n```\r\n with torch.device(\"meta\"):\r\n model = CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono')\r\n```\r\nbut the resulting model isn't usable right away and requires additional manipulations to materialize it on the target device with the preloaded weights. This most likely have to be done by `transformers` unless pytorch comes up with a magical method a user could do themselves.\r\n\r\ncredits: @alband and @stephenroller\r\n\r\n2. Another solution comes from https://pytorch.org/torchdistx/latest/deferred_init.html, but it requires tweaking `from_pretrained` to support `from torchdistx.deferred_init import deferred_init, materialize_module` and this experimental package isn't easy to install since it requires CUDA extensions building (though not for this functionality), so we can't make `transformers` depend on it. 
It will have to be upstreamed into pytorch first.\r\n\r\ncredits: @cbalioglu",
"In extension of @stas00 's number one, one might enhance the context manager solution with a diversion of the `init` functions. I wrote up [a bit more detail on my blog](http://lernapparat.de/faster-model-init).\r\n",
"@stas00 your solution is great, tested it a bit. Is there any timeline for this feature and could one help with integration? Would be interested to know what are the team's thoughts on integrating this feature within the `Trainer` but also `pipelines`? Happy to help if I can!",
"For the timeline questions we need to ask @sgugger ",
"The `low_cpu_mem_usage=True` option is already there in Transformers and usable today. Changing the default will take more time to ensure backward compatibility.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I know this issue is closed but just some relevant feedback: I'm also facing extremely slow performance with the `from_pretrained` method, this time in a conda environment. I tried the `low_cpu_mem_usage=True` solution, but this requires a more recent version of `transformers` than is available in the conda repos so I can't. Reported already on [Stack Overflow](https://stackoverflow.com/questions/76059654/getting-a-error-when-running-gptneoxforcausallm-from-transformers-library-namee).\r\n\r\nTLDR: for a chunk of users (anyone who has to use a conda environment) the `low_cpu_mem_usage=True` parameter is not available or usable.",
"Hey @tomwagstaff-opml, thanks for reporting.\r\n\r\nI believe you're using the `transformers` version from the main channel of anaconda, but we don't (and none of the open-source project maintainers do) maintain this version. This is maintained by the anaconda team.\r\n\r\nIn our README we indicate that you should use the [huggingface channel in order to install the package](https://github.com/huggingface/transformers#with-conda).\r\n\r\nPlease install it as such:\r\n\r\n```\r\nconda install -c huggingface transformers\r\n```\r\n\r\nor, alternatively, use the conda-forge channel which is also the latest version:\r\n\r\n```\r\nconda install -c conda-forge transformers\r\n```",
"Thanks for your help @LysandreJik - installing `transformers` from the Hugging Face channel has worked and allowed me to try out the `low_cpu_mem_usage` parameter"
] | 1,677
| 1,683
| 1,681
|
NONE
| null |
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 2.0.0.dev20230224+cu118 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@stas00, @patrickvonplaten
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Loading a model with `from_pretrained` takes much longer than the underlying torch.load. For example, for the `Salesforce/codegen-6B-mono` model, `CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono')` takes ~38 seconds, whereas `torch.load()` on its `pytorch_model.bin` takes just ~5.4 seconds. This is very similar to #9205, but is happening with the latest transformers from pip (4.26.1), so possibly a regression?
Short repro:
```python
import time
import torch
from transformers import CodeGenForCausalLM
t1 = time.time()
CodeGenForCausalLM.from_pretrained('Salesforce/codegen-6B-mono')
t2 = time.time()
print("Load took", t2-t1, "seconds")
```
Prints **Load took 37.78910255432129 seconds**
```python
import time
import torch
from transformers.utils import cached_file

t1 = time.time()
torch.load(cached_file('Salesforce/codegen-6B-mono', 'pytorch_model.bin'))
t2 = time.time()
print("Load took", t2-t1, "seconds")
```
Prints **Load took 5.443041801452637 seconds**
Based on profiling the HF from_pretrained script, it seems like ~75% of the time is being spent doing random initialization of weights that are about to be overwritten. This is the same problem that was fixed in PR #11471 so I'm not sure what's going on here.
Here's the cProfile output and output from gprof2dot:
[loadmodel_profile.txt](https://github.com/huggingface/transformers/files/10877225/loadmodel_profile.txt)
[hf_loadmodel_new.pdf](https://github.com/huggingface/transformers/files/10877227/hf_loadmodel_new.pdf)
### Expected behavior
`from_pretrained` should skip weight initialization when loading a pretrained model.
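The report's core claim, that most of the time goes into weight init that is immediately overwritten, can be modeled with a toy loader. Everything below is a pure-Python stand-in, not transformers' actual loading code:

```python
import time

def expensive_init(n):
    # stand-in for torch.nn.init.kaiming_uniform_: computes every weight
    return [(i * 2654435761 % 1000) / 1000.0 for i in range(n)]

def load_pretrained(n, skip_init):
    # toy "model": weights are initialized, then immediately overwritten
    weights = [0.0] * n if skip_init else expensive_init(n)
    checkpoint = [1.0] * n   # stand-in for the torch.load'ed state dict
    weights[:] = checkpoint  # every initialized value is discarded here
    return weights

n = 1_000_000
t0 = time.time(); full = load_pretrained(n, skip_init=False); t_full = time.time() - t0
t0 = time.time(); fast = load_pretrained(n, skip_init=True); t_fast = time.time() - t0
assert full == fast  # identical models: the random init was pure waste
print(f"with init: {t_full:.3f}s  skipping init: {t_fast:.3f}s")
```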
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21913/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21912
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21912/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21912/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21912/events
|
https://github.com/huggingface/transformers/issues/21912
| 1,607,672,293
|
I_kwDOCUB6oc5f0yHl
| 21,912
|
Allow for try/except imports for custom code
|
{
"login": "zanussbaum",
"id": 33707069,
"node_id": "MDQ6VXNlcjMzNzA3MDY5",
"avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zanussbaum",
"html_url": "https://github.com/zanussbaum",
"followers_url": "https://api.github.com/users/zanussbaum/followers",
"following_url": "https://api.github.com/users/zanussbaum/following{/other_user}",
"gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions",
"organizations_url": "https://api.github.com/users/zanussbaum/orgs",
"repos_url": "https://api.github.com/users/zanussbaum/repos",
"events_url": "https://api.github.com/users/zanussbaum/events{/privacy}",
"received_events_url": "https://api.github.com/users/zanussbaum/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Indeed that sounds like a nice feature to have. I think the easiest way to deal with it would be to remove all try/except blocks from the content [here](https://github.com/huggingface/transformers/blob/37e0974afcbccdc85da59d51b44e1437b6b3caea/src/transformers/dynamic_module_utils.py#L117) before the tests of the imports. If you want to take a stab at it, happy to review a PR!",
"Yes I'll give it a go and send a PR out in a few!"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
### Feature request
When uploading a model with custom code, I wanted to try and use Flash Attention in one of the modules. However, to handle the case where people might use it on CPU, I added a `try/except` block around the import.
However, I get an error when downloading locally like `This modeling file requires the following packages that were not found in your environment` which seems to come from [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/dynamic_module_utils.py#L112).
### Motivation
I want to be able to write custom model code that allows for optional imports if possible (like FlashAttention)
### Your contribution
I could take a crack at a PR?
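A sketch of the optional-import pattern this issue wants custom modeling files to be allowed to use. `flash_attn` is the optional package from the issue's example; the fallback body here is a placeholder computation, not a real attention kernel:

```python
# Optional-dependency pattern: import guarded by try/except, with a flag.
try:
    import flash_attn  # only importable on machines with the package/CUDA
    HAS_FLASH_ATTN = True
except ImportError:
    flash_attn = None
    HAS_FLASH_ATTN = False

def attention(q, k, v):
    if HAS_FLASH_ATTN:
        # fast path: runs only when the optional import succeeded
        return flash_attn.flash_attn_func(q, k, v)
    # portable CPU fallback (placeholder computation for illustration)
    return [qi + ki + vi for qi, ki, vi in zip(q, k, v)]
```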
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21912/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21911
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21911/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21911/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21911/events
|
https://github.com/huggingface/transformers/pull/21911
| 1,607,395,048
|
PR_kwDOCUB6oc5LKP9k
| 21,911
|
Avoid modeling tests run in pipeline CI jobs
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21911). All of your documentation changes will be reflected on that endpoint."
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Avoid modeling tests run in pipeline CI jobs.
PR #21887 applied
```
@is_pipeline_test
class PipelineTesterMixin:
```
Together with the changes in #21516
```
class BertModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
```
these two changes make all modeling test methods count as pipeline tests, so the CircleCI pipeline jobs run all pipeline tests plus all model tests. This causes OOM, as pipeline jobs are run with `-n 8`.
This PR applies `@is_pipeline_test` to each test method instead of the test class to avoid this issue.
### Effect
Run
```bash
python -m pytest -m is_pipeline_test -v tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTest::test_model
```
#### On main
test pass
#### on this PR
test deselected
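The class-level vs. per-method marking difference can be sketched in plain Python (hypothetical helper names, simplified semantics; real selection is done by pytest's `-m` marker matching):

```python
# Sketch of why a class-level marker swept modeling tests into the pipeline
# jobs, while per-method marking does not.

def is_pipeline_test(obj):
    # marker: tags a test method or a whole test class
    obj._pipeline_marked = True
    return obj

class ModelTesterMixin:
    def test_model(self):           # a modeling test
        pass

@is_pipeline_test                   # previous approach: mark the whole mixin
class OldPipelineTesterMixin:
    def test_pipeline_fill_mask(self):
        pass

class PipelineTesterMixin:
    @is_pipeline_test               # this PR: mark each method individually
    def test_pipeline_fill_mask(self):
        pass

class OldBertModelTest(ModelTesterMixin, OldPipelineTesterMixin):
    pass

class BertModelTest(ModelTesterMixin, PipelineTesterMixin):
    pass

def selected(cls, name):
    # `pytest -m is_pipeline_test` semantics, simplified: a test matches if
    # the method or its (possibly inherited) class carries the marker
    method_marked = getattr(getattr(cls, name), "_pipeline_marked", False)
    class_marked = getattr(cls, "_pipeline_marked", False)
    return bool(method_marked or class_marked)

print(selected(OldBertModelTest, "test_model"))            # True: swept in
print(selected(BertModelTest, "test_model"))               # False: deselected
print(selected(BertModelTest, "test_pipeline_fill_mask"))  # True
```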
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21911/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21911",
"html_url": "https://github.com/huggingface/transformers/pull/21911",
"diff_url": "https://github.com/huggingface/transformers/pull/21911.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21911.patch",
"merged_at": 1677788586000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21910
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21910/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21910/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21910/events
|
https://github.com/huggingface/transformers/pull/21910
| 1,607,390,271
|
PR_kwDOCUB6oc5LKO8X
| 21,910
|
Fix doctests for TFVisionTextDualEncoder
|
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21910). All of your documentation changes will be reflected on that endpoint."
] | 1,677
| 1,677
| 1,677
|
MEMBER
| null |
So I might have just copy-pasted all the PyTorch doctests into the TF class and made the CI angry. But it's fixed now!
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21910/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21910",
"html_url": "https://github.com/huggingface/transformers/pull/21910",
"diff_url": "https://github.com/huggingface/transformers/pull/21910.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21910.patch",
"merged_at": 1677802691000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21909
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21909/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21909/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21909/events
|
https://github.com/huggingface/transformers/pull/21909
| 1,607,375,601
|
PR_kwDOCUB6oc5LKLzY
| 21,909
|
Cleanup more auto mapping names
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Just a follow-up PR of #21903.
Rebasing on main once #21911 (or a fix from your side) is merged will make this OK.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21909/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21909",
"html_url": "https://github.com/huggingface/transformers/pull/21909",
"diff_url": "https://github.com/huggingface/transformers/pull/21909.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21909.patch",
"merged_at": 1677851025000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21908
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21908/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21908/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21908/events
|
https://github.com/huggingface/transformers/pull/21908
| 1,607,260,282
|
PR_kwDOCUB6oc5LJzAv
| 21,908
|
Temporarily skip 3 tests in `BridgeTowerModelTest`
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
skip for now
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21908/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21908",
"html_url": "https://github.com/huggingface/transformers/pull/21908",
"diff_url": "https://github.com/huggingface/transformers/pull/21908.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21908.patch",
"merged_at": 1677780964000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21907
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21907/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21907/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21907/events
|
https://github.com/huggingface/transformers/pull/21907
| 1,607,204,738
|
PR_kwDOCUB6oc5LJnE-
| 21,907
|
Update modeling_funnel.py
|
{
"login": "robinbg",
"id": 28918243,
"node_id": "MDQ6VXNlcjI4OTE4MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/28918243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/robinbg",
"html_url": "https://github.com/robinbg",
"followers_url": "https://api.github.com/users/robinbg/followers",
"following_url": "https://api.github.com/users/robinbg/following{/other_user}",
"gists_url": "https://api.github.com/users/robinbg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/robinbg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/robinbg/subscriptions",
"organizations_url": "https://api.github.com/users/robinbg/orgs",
"repos_url": "https://api.github.com/users/robinbg/repos",
"events_url": "https://api.github.com/users/robinbg/events{/privacy}",
"received_events_url": "https://api.github.com/users/robinbg/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
Just for checking, don't merge.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21907/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21907",
"html_url": "https://github.com/huggingface/transformers/pull/21907",
"diff_url": "https://github.com/huggingface/transformers/pull/21907.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21907.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21906
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21906/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21906/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21906/events
|
https://github.com/huggingface/transformers/pull/21906
| 1,607,196,243
|
PR_kwDOCUB6oc5LJlQW
| 21,906
|
faster forward following what is done for images
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failure seems unrelated"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Follow up of #21897
Keeps `batch_size` active, but restricts it to the image part.
Calculates the `candidate_labels` features only once (there's no way with the current pipeline to send different `candidate_labels`, so we can take this optimization).
This is ultra-narrowed for CLIP, but the pipeline has existed for 4 months now with no new models. Given the popularity of CLIP for diffusion models, I think it's OK to overspecify; we can always relax it later when new models come in.
This allows downgrading from `ChunkPipeline` to a regular `Pipeline`.
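The "compute `candidate_labels` features only once" idea can be sketched like this (a pure-Python illustration with hypothetical helper names, not the actual pipeline code):

```python
# Illustrative sketch: since candidate_labels are fixed for the whole call,
# their text features can be computed once and reused for every image batch,
# instead of being recomputed per chunk. `encode_text` is a hypothetical
# callable mapping labels to feature vectors.
def classify_images(image_feature_batches, encode_text, candidate_labels):
    text_features = encode_text(candidate_labels)  # computed exactly once
    results = []
    for image_features in image_feature_batches:
        # Dot product stands in for CLIP's image/text similarity score.
        scores = [
            sum(i * t for i, t in zip(image_features, label_features))
            for label_features in text_features
        ]
        best = max(range(len(candidate_labels)), key=scores.__getitem__)
        results.append(candidate_labels[best])
    return results
```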
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21906/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21906",
"html_url": "https://github.com/huggingface/transformers/pull/21906",
"diff_url": "https://github.com/huggingface/transformers/pull/21906.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21906.patch",
"merged_at": 1677820699000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21905
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21905/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21905/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21905/events
|
https://github.com/huggingface/transformers/pull/21905
| 1,607,131,884
|
PR_kwDOCUB6oc5LJXOi
| 21,905
|
Adds "causal-lm-with-past" to codegen
|
{
"login": "corey-nm",
"id": 109536191,
"node_id": "U_kgDOBodjvw",
"avatar_url": "https://avatars.githubusercontent.com/u/109536191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/corey-nm",
"html_url": "https://github.com/corey-nm",
"followers_url": "https://api.github.com/users/corey-nm/followers",
"following_url": "https://api.github.com/users/corey-nm/following{/other_user}",
"gists_url": "https://api.github.com/users/corey-nm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/corey-nm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/corey-nm/subscriptions",
"organizations_url": "https://api.github.com/users/corey-nm/orgs",
"repos_url": "https://api.github.com/users/corey-nm/repos",
"events_url": "https://api.github.com/users/corey-nm/events{/privacy}",
"received_events_url": "https://api.github.com/users/corey-nm/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for your PR, but ONNX support is now in the optimum library and we don't accept new PRs in Transformers.",
"Okay, thanks, closing then!"
] | 1,677
| 1,677
| 1,677
|
NONE
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21905/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21905",
"html_url": "https://github.com/huggingface/transformers/pull/21905",
"diff_url": "https://github.com/huggingface/transformers/pull/21905.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21905.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21904
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21904/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21904/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21904/events
|
https://github.com/huggingface/transformers/pull/21904
| 1,607,049,289
|
PR_kwDOCUB6oc5LJFeQ
| 21,904
|
Add Blip and Blip2 for pipeline tests
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you ! !!!"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
A continuation of (not merged) #21802.
@NielsRogge I will add you as the contributor.
@Narsil Just in case you want to take a look too, as you reviewed #21904, and asked for adding the tests.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21904/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21904",
"html_url": "https://github.com/huggingface/transformers/pull/21904",
"diff_url": "https://github.com/huggingface/transformers/pull/21904.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21904.patch",
"merged_at": 1677777635000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21903
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21903/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21903/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21903/events
|
https://github.com/huggingface/transformers/pull/21903
| 1,606,812,003
|
PR_kwDOCUB6oc5LISNI
| 21,903
|
Clean up auto mapping names
|
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"so far on `main`\r\n```bash\r\n`layoutxlm` appears in the mapping `PROCESSOR_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`.\r\n`wav2vec2_with_lm` appears in the mapping `PROCESSOR_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`.\r\n`blip_2` appears in the mapping `MODEL_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`.\r\n`decision_transformer_gpt2` appears in the mapping `MODEL_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`.\r\n`nllb` appears in the mapping `MODEL_MAPPING_NAMES` but it is not defined in the keys of `CONFIG_MAPPING_NAMES`.\r\n\r\n```",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Clean up auto mapping names + add a check.
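The added check can be sketched roughly like this (function and variable names are illustrative, not the actual implementation; the message format mirrors the kind of output pasted above):

```python
# Sketch of the consistency check: every model type keyed in an auto mapping
# should also be a key of CONFIG_MAPPING_NAMES; anything else gets reported.
def find_unknown_model_types(mapping_names, config_mapping_names, mapping_label):
    errors = []
    for model_type in mapping_names:
        if model_type not in config_mapping_names:
            errors.append(
                f"`{model_type}` appears in the mapping `{mapping_label}` but it "
                f"is not defined in the keys of `CONFIG_MAPPING_NAMES`."
            )
    return errors
```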
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21903/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21903",
"html_url": "https://github.com/huggingface/transformers/pull/21903",
"diff_url": "https://github.com/huggingface/transformers/pull/21903.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21903.patch",
"merged_at": 1677773691000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21902
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21902/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21902/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21902/events
|
https://github.com/huggingface/transformers/issues/21902
| 1,606,792,420
|
I_kwDOCUB6oc5fxbTk
| 21,902
|
how to beam search
|
{
"login": "Mryangkaitong",
"id": 23132307,
"node_id": "MDQ6VXNlcjIzMTMyMzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/23132307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mryangkaitong",
"html_url": "https://github.com/Mryangkaitong",
"followers_url": "https://api.github.com/users/Mryangkaitong/followers",
"following_url": "https://api.github.com/users/Mryangkaitong/following{/other_user}",
"gists_url": "https://api.github.com/users/Mryangkaitong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mryangkaitong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mryangkaitong/subscriptions",
"organizations_url": "https://api.github.com/users/Mryangkaitong/orgs",
"repos_url": "https://api.github.com/users/Mryangkaitong/repos",
"events_url": "https://api.github.com/users/Mryangkaitong/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mryangkaitong/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"Please use the [forums](https://discuss.huggingface.co/) to ask such questions.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
NONE
| null |
The current interface is similar to
`output = model.generate(**inputs, num_beams=4, no_repeat_ngram_size=7)`
but decoding one sequence at a time is too slow. Does beam search support batched decoding?
If it is supported, how should it be written?
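For reference, a hedged sketch of what batched beam search could look like with `generate` (assuming `model` and `tokenizer` are an already-loaded checkpoint pair; not verified against a real checkpoint):

```python
# `model.generate` accepts batched inputs, so several prompts can be
# beam-searched in one call. Padding aligns the prompts so they can share
# one forward pass.
def batched_beam_search(model, tokenizer, prompts, num_beams=4):
    inputs = tokenizer(prompts, padding=True, return_tensors="pt")
    outputs = model.generate(**inputs, num_beams=num_beams, no_repeat_ngram_size=7)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```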
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21902/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21901
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21901/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21901/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21901/events
|
https://github.com/huggingface/transformers/pull/21901
| 1,606,788,221
|
PR_kwDOCUB6oc5LINCH
| 21,901
|
[WIP] Creating automated test with release candidate of safetensors.
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21901). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @sgugger
Integrations:
- deepspeed: HF Trainer: @stas00, Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
Documentation: @sgugger, @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: @sgugger
- TensorFlow: @Rocketknight1
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21901/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21901",
"html_url": "https://github.com/huggingface/transformers/pull/21901",
"diff_url": "https://github.com/huggingface/transformers/pull/21901.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21901.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21900
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21900/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21900/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21900/events
|
https://github.com/huggingface/transformers/pull/21900
| 1,606,753,450
|
PR_kwDOCUB6oc5LIFZL
| 21,900
|
Make TFForceTokensLogitsProcessor exportable as tf concrete function.
|
{
"login": "mfuntowicz",
"id": 2241520,
"node_id": "MDQ6VXNlcjIyNDE1MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2241520?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mfuntowicz",
"html_url": "https://github.com/mfuntowicz",
"followers_url": "https://api.github.com/users/mfuntowicz/followers",
"following_url": "https://api.github.com/users/mfuntowicz/following{/other_user}",
"gists_url": "https://api.github.com/users/mfuntowicz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mfuntowicz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mfuntowicz/subscriptions",
"organizations_url": "https://api.github.com/users/mfuntowicz/orgs",
"repos_url": "https://api.github.com/users/mfuntowicz/repos",
"events_url": "https://api.github.com/users/mfuntowicz/events{/privacy}",
"received_events_url": "https://api.github.com/users/mfuntowicz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21900). All of your documentation changes will be reflected on that endpoint.",
"Also, check CI haha",
"Will handle these ๐๐ป ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
MEMBER
| null |
Previously we were doing an inner conversion from `[[int]]` to `dict[int, int]`, but `dict` is not something concrete functions handle well.
This PR provides a fully compatible concrete-function export, removing the inner conversion to `dict` and using `tf.Tensor` only.
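A minimal sketch of the idea behind this change (not the actual `transformers` implementation): a force-token map of `[[generation_index, token_id], ...]` pairs can be flattened into a dense array that a traced function can index into, instead of a Python `dict`, which tracing cannot handle. The helper names below are illustrative only.

```python
def to_dense_force_map(pairs):
    """Convert [[generation_index, token_id], ...] into a dense list
    where entry i holds the forced token id for step i, or -1 if no
    token is forced at that step."""
    if not pairs:
        return []
    size = max(idx for idx, _ in pairs) + 1
    dense = [-1] * size
    for idx, token_id in pairs:
        dense[idx] = token_id
    return dense


def forced_token_at(dense, step):
    """Look up the forced token for a generation step; -1 means
    'no constraint at this step'."""
    return dense[step] if step < len(dense) else -1
```

For example, `to_dense_force_map([[1, 50259], [2, 50359]])` yields `[-1, 50259, 50359]`; in the TF version the dense list would be a `tf.Tensor` so the lookup stays inside the graph.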
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21900/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21900",
"html_url": "https://github.com/huggingface/transformers/pull/21900",
"diff_url": "https://github.com/huggingface/transformers/pull/21900.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21900.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21899
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21899/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21899/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21899/events
|
https://github.com/huggingface/transformers/issues/21899
| 1,606,637,031
|
I_kwDOCUB6oc5fw1Xn
| 21,899
|
Enable passing two or more pretrained models to the Trainer.
|
{
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,677
| 1,681
| 1,681
|
CONTRIBUTOR
| null |
### Feature request
I tried a compound architecture that uses two tower models, each initialized with `from_pretrained` methods.
Then I wrapped the custom model class with the PreTrainedModel class.
It works well on a single GPU, but fails in multi-GPU settings.
### Motivation
It would be useful to change models like DPR.
### Your contribution
I have tested a few methods and can help to improve.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21899/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21898
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21898/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21898/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21898/events
|
https://github.com/huggingface/transformers/pull/21898
| 1,606,611,803
|
PR_kwDOCUB6oc5LHmXM
| 21,898
|
fix typo in Bart's attention
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"thank you!"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Fix a typo in Bart's attention.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21898/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21898",
"html_url": "https://github.com/huggingface/transformers/pull/21898",
"diff_url": "https://github.com/huggingface/transformers/pull/21898.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21898.patch",
"merged_at": 1677764967000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21897
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21897/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21897/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21897/events
|
https://github.com/huggingface/transformers/pull/21897
| 1,606,533,351
|
PR_kwDOCUB6oc5LHVWT
| 21,897
|
Faster zero shot image
|
{
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/followers",
"following_url": "https://api.github.com/users/Narsil/following{/other_user}",
"gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Narsil/subscriptions",
"organizations_url": "https://api.github.com/users/Narsil/orgs",
"repos_url": "https://api.github.com/users/Narsil/repos",
"events_url": "https://api.github.com/users/Narsil/events{/privacy}",
"received_events_url": "https://api.github.com/users/Narsil/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"@yessenzhar",
"_The documentation is not available anymore as the PR was closed or merged._",
"Does it also do a cross product ? Then possibly yes.",
"I mostly copy pasted the whole sequence. Will open a PR ",
"> I mostly copy pasted the whole sequence. Will open a PR\r\n\r\nThe cross product is in the model itself (image_batch_size x text_batch_size).",
"CLAP has the same implementation for loss and etc. So we should see a similar behaviour \r\n",
"@Narsil Thank you for sorting this out this quick."
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Should supersede: https://github.com/huggingface/transformers/pull/21861#event-8644623666
- Keeps `batch_size` active, but restricts it to the `image` part.
- Calculates `candidate_labels` features only once (there's no way with the current pipeline to send different `candidate_labels`, so we can take the optimization).
- Ultra narrowed for `CLIP`, but the pipeline has existed for 4 months now with no new models. Given the popularity of CLIP for diffusion models, I think this is OK to overspecify. We can always relax later when new models come in.
This allows downgrading from `ChunkPipeline` to a regular `Pipeline`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21897/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21897",
"html_url": "https://github.com/huggingface/transformers/pull/21897",
"diff_url": "https://github.com/huggingface/transformers/pull/21897.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21897.patch",
"merged_at": 1677782782000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21896
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21896/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21896/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21896/events
|
https://github.com/huggingface/transformers/pull/21896
| 1,606,412,521
|
PR_kwDOCUB6oc5LG69O
| 21,896
|
[T5 doc] Fix confusing documentation about `d_kv`
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Fixes #21641. The documentation stated that `d_kv` must be equal to `d_model // num_heads`, but this does not hold in the code. The code only requires `d_kv = inner_dim // num_heads`, and the inner dim can be different from `d_model`.
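The corrected relationship can be checked with a one-liner. In T5's attention, the projection inner dimension is `num_heads * d_kv`, and nothing forces that product to equal `d_model` — for example, a T5 v1.1-small-style config with `d_model=512`, `num_heads=6`, `d_kv=64` gives an inner dim of 384, not 512 (config values cited as an illustration).

```python
def t5_inner_dim(num_heads, d_kv):
    """Inner projection dimension of T5 attention: num_heads * d_kv.
    Independent of d_model, so d_kv != d_model // num_heads in general."""
    return num_heads * d_kv


# d_model=512, num_heads=6, d_kv=64 -> inner_dim = 384 != 512
inner = t5_inner_dim(6, 64)
```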
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21896/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21896",
"html_url": "https://github.com/huggingface/transformers/pull/21896",
"diff_url": "https://github.com/huggingface/transformers/pull/21896.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21896.patch",
"merged_at": 1677762446000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21895
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21895/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21895/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21895/events
|
https://github.com/huggingface/transformers/pull/21895
| 1,606,312,157
|
PR_kwDOCUB6oc5LGk5r
| 21,895
|
Make error message more informative
|
{
"login": "Atomnp",
"id": 45496355,
"node_id": "MDQ6VXNlcjQ1NDk2MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/45496355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Atomnp",
"html_url": "https://github.com/Atomnp",
"followers_url": "https://api.github.com/users/Atomnp/followers",
"following_url": "https://api.github.com/users/Atomnp/following{/other_user}",
"gists_url": "https://api.github.com/users/Atomnp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Atomnp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Atomnp/subscriptions",
"organizations_url": "https://api.github.com/users/Atomnp/orgs",
"repos_url": "https://api.github.com/users/Atomnp/repos",
"events_url": "https://api.github.com/users/Atomnp/events{/privacy}",
"received_events_url": "https://api.github.com/users/Atomnp/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,677
| 1,677
| 1,677
|
CONTRIBUTOR
| null |
# What does this PR do?
Makes the error message more descriptive by adding one word: it changes the message `The hidden size ({config.hidden_size}) is not a multiple of the number of attention` to `The hidden size ({config.hidden_size}) is not a multiple of the number of attention heads`. In the previous error message, the word "heads" was missing.
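For context, the check that emits this message can be sketched like so (a simplified stand-in for the real config validation, with a hypothetical helper name):

```python
def check_head_size(hidden_size, num_attention_heads):
    """Validate that hidden_size divides evenly across attention heads
    and return the per-head size; raises the (fixed) error otherwise."""
    if hidden_size % num_attention_heads != 0:
        raise ValueError(
            f"The hidden size ({hidden_size}) is not a multiple of the "
            f"number of attention heads ({num_attention_heads})"
        )
    return hidden_size // num_attention_heads
```

For a BERT-base-style config, `check_head_size(768, 12)` returns a per-head size of 64.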
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21895/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21895",
"html_url": "https://github.com/huggingface/transformers/pull/21895",
"diff_url": "https://github.com/huggingface/transformers/pull/21895.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21895.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21894
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21894/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21894/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21894/events
|
https://github.com/huggingface/transformers/pull/21894
| 1,606,281,683
|
PR_kwDOCUB6oc5LGeUw
| 21,894
|
Add FlaxWhisperForAudioClassification model
|
{
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21894). All of your documentation changes will be reflected on that endpoint.",
"> Modelling code looks good @raghavanone! Nice one on getting this working so quickly ๐ Do you want to have a go at adding the encoder-only tests? See the PyTorch WhisperForAudioClassficiation PR for details, think you can also add these quite quickly :)\r\n\r\nI have added the Encoder tests, But some test are failing, The FlaxWhisperForAudioClassification class extends FlaxWhisperPreTrainedModel . Due to this inheritance, the call method expects decoder related params. \r\n\r\nShould the FlaxWhisperForAudioClassification not extend FlaxWhisperPreTrainedModel instead create a new pretrainedclass ? ",
"Hey @raghavanone! The PyTorch model has just been merged (https://github.com/huggingface/transformers/pull/21754), so you can rebase onto main to get the required config changes:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/main\r\n```\r\nThis will fix the failing Flax tests we're getting here: https://app.circleci.com/pipelines/github/huggingface/transformers/58972/workflows/2388bd70-553e-412f-9ee7-0599cace5639/jobs/719829\r\n\r\nThe only thing to make sure is that the first time you push after rebasing, you **force push** to origin:\r\n```\r\ngit add .\r\ngit commit -m \"Some new changes after rebase\"\r\ngit push -f origin fix_issue_21779\r\n```\r\n\r\nYou only have to force push once, the next time you can just regular push:\r\n```\r\ngit add .\r\ngit commit -m \"Some more changes\"\r\ngit push -u origin fix_issue_21779\r\n```",
"@sanchit-gandhi There are 2 test failing here, I am unable to get the same failure locally in my machine. Any pointers on how to replicate failing test and fix it ? ",
"Hey @raghavanone! Would you mind going through the previous review comments and marking them as resolved where you've addressed them? I'll then get you a final review asap! Thanks!",
"Hey @raghavanone - I think the commit history has been corrupted for this PR? Gentle reminder that one must force push after rebasing: https://github.com/huggingface/transformers/pull/21894#issuecomment-1458359220 Think this is probably the culprit for the 250 extra commits!\r\n\r\nIn this instance, it's probably best to close this PR in favour of a new one that only contains the new changes you with to merge. Sorry about that!",
"Closing in favour of #22883"
] | 1,677
| 1,683
| 1,683
|
CONTRIBUTOR
| null |
# What does this PR do?
Fixes #21779.
Please review and let me know of any needed changes @sanchit-gandhi
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21894/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21894",
"html_url": "https://github.com/huggingface/transformers/pull/21894",
"diff_url": "https://github.com/huggingface/transformers/pull/21894.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21894.patch",
"merged_at": null
}
|
https://api.github.com/repos/huggingface/transformers/issues/21893
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21893/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21893/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21893/events
|
https://github.com/huggingface/transformers/pull/21893
| 1,606,256,828
|
PR_kwDOCUB6oc5LGY_G
| 21,893
|
[ZAC] fix ci daily
|
{
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing test is unrelated"
] | 1,677
| 1,677
| 1,677
|
COLLABORATOR
| null |
# What does this PR do?
Fixes the loading of the zero-shot audio classification pipeline by providing a correct revision. The previous one was destroyed when the checkpoints were overwritten at some point (which is also when the model card disappeared).
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21893/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21893",
"html_url": "https://github.com/huggingface/transformers/pull/21893",
"diff_url": "https://github.com/huggingface/transformers/pull/21893.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21893.patch",
"merged_at": 1677750364000
}
|
https://api.github.com/repos/huggingface/transformers/issues/21892
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21892/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21892/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21892/events
|
https://github.com/huggingface/transformers/issues/21892
| 1,606,239,921
|
I_kwDOCUB6oc5fvUax
| 21,892
|
[i18n-<languageCode>] Translating docs to <languageName>
|
{
"login": "sachiweb",
"id": 102918669,
"node_id": "U_kgDOBiJqDQ",
"avatar_url": "https://avatars.githubusercontent.com/u/102918669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sachiweb",
"html_url": "https://github.com/sachiweb",
"followers_url": "https://api.github.com/users/sachiweb/followers",
"following_url": "https://api.github.com/users/sachiweb/following{/other_user}",
"gists_url": "https://api.github.com/users/sachiweb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sachiweb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sachiweb/subscriptions",
"organizations_url": "https://api.github.com/users/sachiweb/orgs",
"repos_url": "https://api.github.com/users/sachiweb/repos",
"events_url": "https://api.github.com/users/sachiweb/events{/privacy}",
"received_events_url": "https://api.github.com/users/sachiweb/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] |
closed
| false
| null |
[] |
[] | 1,677
| 1,677
| 1,677
|
NONE
| null |
<!--
Note: Please search to see if an issue already exists for the language you are trying to translate.
-->
Hi!
Let's bring the documentation to all the <languageName>-speaking community ๐ (currently 0 out of 267 complete)
Who would want to translate? Please follow the ๐ค [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers ๐ค).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `<languageCode>` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `<languageCode>/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger for review.
* ๐ If you'd like others to help you with the translation, you can also post in the ๐ค [forums](https://discuss.huggingface.co/).
## Get Started section
- [ ] [index.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/index.mdx) https://github.com/huggingface/transformers/pull/20180
- [ ] [quicktour.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/quicktour.mdx) (waiting for initial PR to go through)
- [ ] [installation.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/installation.mdx).
## Tutorial section
- [ ] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/pipeline_tutorial.mdx)
- [ ] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/autoclass_tutorial.mdx)
- [ ] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/preprocessing.mdx)
- [ ] [training.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/training.mdx)
- [ ] [accelerate.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/accelerate.mdx)
- [ ] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_sharing.mdx)
- [ ] [multilingual.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/multilingual.mdx)
<!--
Keep on adding more as you go ๐ฅ
-->
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21892/timeline
|
completed
| null | null |
https://api.github.com/repos/huggingface/transformers/issues/21891
|
https://api.github.com/repos/huggingface/transformers
|
https://api.github.com/repos/huggingface/transformers/issues/21891/labels{/name}
|
https://api.github.com/repos/huggingface/transformers/issues/21891/comments
|
https://api.github.com/repos/huggingface/transformers/issues/21891/events
|
https://github.com/huggingface/transformers/pull/21891
| 1,606,038,926
|
PR_kwDOCUB6oc5LFqHM
| 21,891
|
[Time-Series] Autoformer model
|
{
"login": "elisim",
"id": 17675462,
"node_id": "MDQ6VXNlcjE3Njc1NDYy",
"avatar_url": "https://avatars.githubusercontent.com/u/17675462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elisim",
"html_url": "https://github.com/elisim",
"followers_url": "https://api.github.com/users/elisim/followers",
"following_url": "https://api.github.com/users/elisim/following{/other_user}",
"gists_url": "https://api.github.com/users/elisim/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elisim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elisim/subscriptions",
"organizations_url": "https://api.github.com/users/elisim/orgs",
"repos_url": "https://api.github.com/users/elisim/repos",
"events_url": "https://api.github.com/users/elisim/events{/privacy}",
"received_events_url": "https://api.github.com/users/elisim/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] |
[
"One small open issue left, is adding the series decomposition to the decoder with the trend input. Will do after the initial review :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"some of the TF tests are failing and I believe they are unrelated",
"PR is green ",
"thank you @amyeroberts will get it fixed!",
"@amyeroberts, thank you for the comprehensive CR! I sincerely appreciate the effort and time you dedicated to thoroughly assessing this pull request. \r\n\r\nWill be fixed!",
"CR changes I did:\r\n\r\n* Added `layer_norm_eps`\r\n* Model layers now take the config, except for `AutoformerAttention`, which I wasn't sure about\r\n* Better variable names\r\n* Addressed the questions I could answer\r\n\r\n`fix-copies` is failing because of diffs with the time-series-transformer. We need to decide whether to change the time-series-transformer here or remove the \"Copied from...\" comments.\r\n\r\n@kashif @amyeroberts"
] | 1,677
| 1,685
| 1,685
|
CONTRIBUTOR
| null |
# What does this PR do?
Adding Time Series Autoformer model https://arxiv.org/abs/2106.13008
Related issue: #21890
@kashif :)
## Differences between the vanilla transformer
<img width="484" alt="image" src="https://user-images.githubusercontent.com/17675462/229438415-6a5bba78-c3bf-47b1-966a-f3664a0921e0.png">
* Introduced Series Decomposition in encoder & decoder --- done, waiting for review
* Replaced canonical self-attention with autocorrelation block --- done, waiting for review
* Added seasonal and trend inputs for the decoder --- added todo places in the code
* trend seasonal pseudo code:
```
mean_data = mean(enc_input)
zeros = zeros() # size: x_dec.size(0), prediction_length, x_dec.size(2)
seasonal_init, trend_init = decomp_layer(enc_input)
trend_init = concat(trend_init, mean_data)
seasonal_init = concat(seasonal_init, zeros)
```
|
{
"url": "https://api.github.com/repos/huggingface/transformers/issues/21891/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/transformers/issues/21891/timeline
| null | false
|
{
"url": "https://api.github.com/repos/huggingface/transformers/pulls/21891",
"html_url": "https://github.com/huggingface/transformers/pull/21891",
"diff_url": "https://github.com/huggingface/transformers/pull/21891.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/21891.patch",
"merged_at": 1685435013000
}
|